How to select the best Spatial Transcriptomics platform for your work!

In recent years, spatial transcriptomics has gone from being a long-shot futuristic technology that many were sceptical of, to one of the trendiest and most widely attempted omics methods on the market. Having once been the handiwork of a few isolated academic labs developing methodologies in-house, it is now commercially available from a wide range of competing companies, with kits, specialised pieces of equipment, and bespoke analysis pipelines at the ready.

Data in this image is adapted from https://rna-seqblog.com/identification-of-spatial-expression-trends-in-single-cell-gene-expression-data/
Image source: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bies.201900221

Assuming you like the idea of seeing thousands of genes spatially resolved across your cells or tissue samples (and who wouldn’t?), and assuming you’ve got some funding at your disposal, how can you decide which method to paint your molecular picture with?

You will need to take into account the usual considerations when planning your research, as well as some methodology-specific and sample-specific requirements:

-Usual considerations: cost, equipment needed, time etc.

-Sample type (live cells / fixed cells / frozen sections / FFPE)

-Sample characteristics (autofluorescence?)

-Targeted or transcriptome-wide?

-Spatial resolution: anatomical features? Subcellular?

-Has it been demonstrated outside of the originator’s lab?

Fortunately, the experts have already pulled together many of the method-specific requirements, which you can see summarised in the table below! Be sure to read the full paper (linked below the table), written by Michaela Asp, Joseph Bergenstråhle, and Joakim Lundeberg.

Table source: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bies.201900221

With such exciting methods already available, I look forward to further development in this area, specifically:

•cBioPortal/CANCERTOOL for spatial datasets?

•Larger cohorts – spatial equivalent of TCGA?

•Automated spatial transcriptomics?

•High content spatial transcriptomics?

•Combining with spatial proteomics?

•Combining with spatial metabolomics?

•Combining with spatial epigenomics?

This is an exciting area with rapidly accelerating development, so I’m sure it won’t be long before these developments become easily accessible. One preprint already seems to show improvement in several of these areas:

“Here, we advance the application of ST at scale, by presenting Spatial Multiomics (SM-Omics) as a fully automated high-throughput platform for combined and spatially resolved transcriptomics and antibody-based proteomics.”

https://www.biorxiv.org/content/10.1101/2020.10.14.338418v1.full

Are you using spatial transcriptomics yet? What is your method of choice?

Internet friends: help me answer a question!

I’m currently working on a new video for Cancer Research Demystified, where I’m going to attempt to answer this lofty question. What is the biggest challenge in cancer research today?

For the video, I’ll summarise a few different perspectives on this: the patients, the advocates, the funders, the institutions, the public, and the researchers ourselves. The most common answer so far is of course ‘there’s more than one!’ so I’ll cover as many as I can, and give my two cents on what could be considered the one single greatest challenge.

The NCRI cover their top priorities here (of which there are, of course, more than one!) and you can see similar lists from many other groups. But what is the biggest one?! I’ve been asking around on Twitter, Instagram and Facebook, and I’ve gotten 24 responses so far, mostly from other cancer researchers, but some from patients & funders too. Before I compile, compare & contrast these, I wanted to ask you too – what do you think is the single greatest challenge in cancer research today? I’ll give you a head start by saying that the answers I’m getting are falling into two common themes: biology & barriers.

Does one of these jump out at you as being a bigger challenge than the others? Do you have something to add? Comment below or DM me on Twitter/Facebook/Instagram/Reddit/LinkedIn and I’ll discuss your thoughts (anonymised if via DM) in our upcoming video!

Human tissue models for the replacement of mice in cancer research: Are we there yet?

I recently came across a review which asked if it’s time for peer reviewers to request ‘organ on a chip’ models instead of animal validation studies, and it got me thinking – are we there yet?

https://pubmed.ncbi.nlm.nih.gov/33240763/

As cancer researchers, if we submit an article for publication that contains only data from cell lines, we’re often asked by peer reviewers to carry out animal studies – usually in mice. This review suggests that it might be nearly time for reviewers to ask for human tissue work instead – maybe some of our newest human tissue models are good enough to replace these types of animal studies?

Personally, I’m a big advocate for human tissue work in cancer research. Anyone who collaborates with me knows that I favour ex vivo / 3D culture of human tumours over mouse models. Of course there are ethical considerations here around reducing the number of animals used for research, but my opinion stems mostly from the science – because of the very simple fact that mice are not humans. The differences between mouse biology and human biology are too wide-ranging, with far too many variables to feasibly take into account. Frankly, neither species has been characterised rigorously enough to pick apart their similarities and normalise for their differences.

Of course, to date, mouse xenografts (and more recently, patient-derived xenografts) are pretty much the best models we’ve got for testing new cancer drugs beyond cell lines, without the ethical risks of testing them in living humans too early.

As such, many scientists like me around the world have been developing a huge range of human tissue models, usually based on tissue removed from a cancer patient at biopsy or surgery and donated for research. The idea is that one day we’ll get these cells or tissues to survive outside the body while changing as little of their biology as possible, and treat them with experimental drugs for research – ultimately replacing animal models.

Roughly speaking, these types of models fall into three categories: explant cultures, organoids/tumouroids, and ‘organ on a chip’ models.

Explant cultures involve taking a small piece of donated human tissue and trying to keep it alive for a few days in an incubator, helped along by different nutrients and materials. One of the main benefits of explants is that the tissue stays whole, rather than the scientist isolating out particular cell types. The original architecture of the tissue, and the range of different cell types within it, can remain somewhat intact (this isn’t perfect, but it’s improving). I’ve been using a version of explants for the last five years, testing new drugs in prostate cancer, as part of my fellowship project ‘SCREEN’, kindly funded by Prostate Cancer UK.

Organoids, or specifically within our field of cancer research – ‘tumouroids’, represent human tumour cells that are grown in 3D outside of the human body, including multiple key cell types and environmental factors. Here the structure of the tissue does not remain intact as with explants, but key molecular signals added by scientists can induce the cells to organise themselves in the same way that the original tumour would have done in the human. These can be cultured for longer than explants generally, and offer more flexibility for the researcher to tweak particular aspects of their behaviour.

Organ on a chip models can be based on either of the above, but include additional extras like microfluidics (a system that allows nutrients to flow over and around the cells in the same way blood would in the body), which can encourage blood vessels to grow and feed the tumour, as they would in a human. These are getting ever closer to replicating human tumours outside of humans.

But are any of these good enough to replace mouse experiments yet? My gut says no – but we really are very very close.

One of the issues with this branch of cancer research is that there are just so many different types of models being investigated. Yes, they do fall roughly within three categories, but within each of these categories, there are dozens if not hundreds of iterations being researched around the world. In my view, to properly validate them, we need a consensus – not a new model every five minutes! This consensus will be difficult to achieve, as within the structure of academic research we are encouraged to generate new intellectual property (IP), and we’re generally taught that to get a model validated and used in the clinic, we need to either commercialise it ourselves, or license it to a company who will develop it for us. This is the approach that will get us the next grant, the next paper, the next promotion – i.e. more cred, and potentially personal financial gain. So why would we bother to further develop, independently validate and rigorously characterise someone else’s model, when we could be changing it slightly to add our own ‘unique selling point’ and branding it as our own?

My hope is to reject this way of thinking. Over the first few years of my new lab, I aim to compare and contrast the leading models from around the world in a fully independent setting, where I’m not backing any horse in the race – where I have no allegiance to one human tissue model over another – and purely try to see if the best one(s) reflect how humans actually respond to anti-cancer treatments. If we can pull off this unbiased validation and rigorous characterisation, then I truly believe the peer reviewer mentioned in the paper linked above should absolutely be asking researchers to validate their research in these human models rather than animal models.

It’s worth mentioning that I also tweeted this paper and got varying responses. While one person replied with a jokey ‘I wonder what reviewer 3 wrote in the report :)’, another expressed caution.

And I agree somewhat – we still don’t have strong enough validation, in my mind, to fully replace animal studies. But should reviewers be requesting more human work incrementally as our models get better and better? Yes, I think so. These experiments are certainly worth carrying out in addition to animal studies – just maybe not instead of them yet.

Dr Dania Movia from Trinity College Dublin commented on the frustration of human tissue researchers still being required to validate their findings in animals instead of humans – why do we think of mice as a gold standard for how human biology behaves? It makes no sense, and I couldn’t agree more! While mouse models bring some valuable extra data that human models can’t yet perfectly capture, they’re certainly imperfect in a lot of other ways, and not the right place to validate a human model.

Check out the review linked at the top of this blog if you’d like to read a more technical summary of where the field is at (though the review is not specific to cancer research). And let me know what you think! Are we ready to replace animal models with human models today? Will we be there in a year, in a decade, or ever?

Guidelines for reporting research

A quick blog this week as I’m in the midst of lots of teaching & grant writing! On this week’s teaching agenda I’ve got research reporting, research presentation skills, in vitro, in vivo, and in silico research, acute & chronic inflammation, image analysis and drug efficacy. I thought I’d share with you some of the resources we are using in one of these lessons (not compiled by me), as frankly – they’re quite useful!

Research reporting – something we all need to get right!

According to the Declaration of Helsinki, researchers and authors have a duty to make their results publicly available, following accepted guidelines for ethical reporting.

Naturally we’ll be teaching our students general tips on which types of content should be included in the different sections of a general research paper. We also discuss why it’s important to report our research fully, and what can go wrong when we don’t!

We also give the students a list of guidelines for specific types of research reports. Some of these are slightly peripheral to my own research interests, and I found them quite interesting, so I thought you might too! If you’re new to research reporting, perhaps a bit rusty, or trying to remember one of those many many reporting acronyms, then here’s an overview that might be helpful for you.

EQUATOR have also developed a wizard that can be useful when deciding how to report your research. This tool asks what type of research you are conducting, and identifies useful checklists to make sure you include the required information in your report: https://www.goodreports.org/

The list! (Courtesy of Prof Kurinchi Gurusamy):

•Consolidated Standards Of Reporting Trials (CONSORT)
www.consort-statement.org
–Design, analysis and interpretation of the RCT.


•Strengthening the Reporting of Observational studies in Epidemiology (STROBE)
www.strobe-statement.org
–Reporting of observational studies


•Standards for Reporting Studies of Diagnostic Accuracy (STARD)
www.stard-statement.org
–Reporting of diagnostic accuracy studies


•Quality assessment of diagnostic accuracy studies (QUADAS 2)
www.bris.ac.uk/quadas
–Quality assessment of diagnostic accuracy studies

•Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD)
http://annals.org/article.aspx?articleid=2088549
–Reporting of prediction models


•Consolidated Health Economic Evaluation Reporting Standards (CHEERS)
http://www.bmj.com/content/346/bmj.f1049
–Reporting practices for economic evaluations of interventional studies


•Consolidated criteria for reporting qualitative research (COREQ)
http://intqhc.oxfordjournals.org/content/19/6/349.long
–Reporting of qualitative data from interviews and focus groups


•Standards for reporting qualitative research: a synthesis of recommendations (SRQR)
http://www.ncbi.nlm.nih.gov/pubmed/24979285
–Reporting of qualitative data


•Consensus-based Clinical Case Reporting Guideline Development (CARE)
www.carestatement.org
–Reporting of case reports


•Standards for Quality Improvement Reporting Excellence (SQUIRE)
www.squire-statement.org
–Reporting of quality improvement in health care

•Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
www.prisma-statement.org
–Reporting systematic reviews and meta-analyses

•Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ)
http://www.biomedcentral.com/1471-2288/12/181
–Reporting of systematic reviews of qualitative research


•Animals in Research: Reporting In Vivo Experiments (ARRIVE)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2893951/
–Reporting of animal research


•Statistical Analyses and Methods in the Published Literature (SAMPL)
http://www.equator-network.org/wp-content/uploads/2013/03/SAMPL-Guidelines-3-13-13.pdf
–Reporting of statistical methods and analyses of all types of biomedical research

Hopefully you’ll find this list as handy as I did – many thanks to Prof Gurusamy for compiling it, and I hope you’ll forgive the short blog this week – I’m off to continue my grant!!

Peer reviewed videos: the way forward for methods papers?

Last year I published my first ‘paper’ with JoVE – the Journal of Visualized Experiments. JoVE is a video journal that I’d heard about from a collaborator, who suggested that our MRI-targeted prostate slicing method ‘PEOPLE’ might be a good fit. It sounded like a great idea!

I’m happy to report that there’s no twist coming in this blog – the experience was great, and I’d recommend them to others too!

‘Seal of Approval’ by Jaco Haasbroek. Image source: threadless.com

With JoVE, you submit an abstract & basic written paper of your method (or whatever research you’d like to publish as a video). The written submission is peer reviewed, edited as necessary, and once the reviewers are happy, you begin to plan a filming day. There are a few options here – I chose to go with the more expensive option of having JoVE arrange the script, filming & editing for me, rather than having to do it myself. The benefit here is you get to work with professionals, who know how to get the right shots, the right lighting, and edit everything in such a way that other scientists can see everything they need to see clearly, and learn the method so that they can carry it out themselves.

This was of particular benefit to me, as a (very!) amateur YouTuber with Cancer Research Demystified – I wanted to learn how the professionals do it!

Our videographer was Graham from https://www.sciphi.tv/. Working with him was a brilliant experience – he was an ex-researcher himself, and had extensive experience both carrying out and filming science. He made the day fun, quick and easy – if you ever need someone to film an academic video for you I highly recommend his company!

Filming day itself wouldn’t have been possible without the rest of our research team helping out (in particular Hayley and Aiman – thank you!) and of course a very generous prostate cancer patient, who was undergoing radical prostatectomy, kindly agreeing to take part in our research.

After a short wait we received a first draft of our video which we were really happy with – we had the opportunity to make a round of edits (there weren’t many), and then before long the video was up on JoVE’s website, as well as Pubmed and all the usual places you’d read scientific research in paper form!

Personally, I think videos make a whole lot more sense than written papers for sharing methodologies. I’ve used JoVE videos for training myself – notably for learning to build tissue microarrays (TMAs), and without those videos I’m not sure I could have learned this skill at all – as our resident experts had left the lab! A paper just wouldn’t be able to clearly explain how to use that equipment. With JoVE, there’s always a PDF that goes alongside the paper too, so once you’ve watched and understood the practical side, you have the written protocol to hand while you’re in the lab. The best of both worlds.

I’ve always been a fan of simple solutions (I’m a bit of a broken record on this) – and JoVE is a perfectly simple solution to providing training that will show you how to do something rather than just tell you.

One caveat – it’s not cheap. But your fellow scientists who want to learn your methods will thank you – you’re doing the rest of us a favour! Of course, there’s always YouTube for a free(ish) alternative. But in my view, the added layers of peer review and professional production are worth the extra cost.

Here’s our JoVE video & PDF publication – enjoy!

https://www.jove.com/t/60216/use-magnetic-resonance-imaging-biopsy-data-to-guide-sampling

And no, this blog was not sponsored by anyone – I’m just a fan & paying customer!

A tour of our lab!

A quick blog this week! I wanted to take a moment to introduce one of our favourite Cancer Research Demystified videos. Here, we give a tour of our lab so that cancer patients, carers, students and anyone with an interest can see what cancer research really looks like!

During our first couple of years meeting with cancer patients, Hayley and I noticed that for a lot of them, their main frame of reference for what a science lab looked like was ‘the telly’. Whether it was CSI, or even a particularly slick BBC News segment, it was clear that research labs were expected to be minimalist, futuristic, and full of coloured liquids.

The occasional person would describe the opposite picture – dark wooden cabinets filled with dusty glass specimen jars, stained benches, blackboards, worn-off labels on mystery chemicals, and that strong, ambiguous, smell.

Of course, neither is accurate. Real cancer research labs are somewhat modern, sure, but even the most expensive and ‘futuristic’ equipment typically looks more like a tumble dryer than an interactive hologram, and though much of our equipment does use lasers – they are hidden deep inside rather than scanning the lab for spies! Blackboards are long gone, replaced with whiteboards, and dusty unlabelled jars are disposed of due to strict health and safety protocols. Although stains on benches…? Well, some of those remain.

We did initially face some mild resistance when we first attempted to film this video. A senior member of staff advised us that patients want the comfort of knowing that the best brains in the world are working on a cure, using the best technology and most impressive workspaces. That’s why, we were told, we need to clear out so much lab mess before the camera crews come in for a news segment.

But frankly – those perfect, sterile, swish labs are out there – if someone wants to see a scientist in a never-before-worn white coat pipetting some pink liquid into a plate, all they need to do is turn on the news. We wanted to show something different – and frankly, more honest – warts and all!

The video we ended up with is a little on the nose perhaps, but we felt it needed to be. We show the reality of what it’s like to work in a lab (well, close to reality anyway – we filmed after hours to avoid getting in people’s way, so it is unusually quiet). Some of the differences between day-to-day lab work and office work are highlighted, such as not being able to eat, drink or touch up your make-up within the lab, and having to wear appropriate PPE.

I came back to this video during lockdown because I missed the lab. I still haven’t been back in there, and I’m not sure when I next will be. Other people are back there now though, under strict covid protocols, with significantly reduced capacity and masks. I hope to join them one day, but for now I’m minding my asthmatic lungs at home!

If you’re a cancer patient or carer – here’s a real look at where we’re carrying out the research to build better diagnostics and therapeutics. If you’re a student thinking about doing a medical/biology based research project – this is the sort of place you’ll find yourself working. Please enjoy!

For more Cancer Research Demystified content, here’s where you can find us:

YouTube: https://www.youtube.com/c/CancerResearchDemystified

Twitter: @CRDemystified

Instagram: cancer.research.demystified

These blogs come out every Monday at 11am GMT – so I’ll see you next week!

My love/hate relationship with impact metrics.

Academic impact metrics fascinate me. They always have. I’m the kind of person that loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.

Image source: SeekPNG

But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.

Journal impact factor is a great example of a problematic and overly relied-upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines & time, outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter – which was new to me!

There are problems.

They are noteworthy.

But we still use impact factor religiously regardless.
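
To make the outlier problem concrete: impact factor is essentially a mean citation count across a journal’s recent output, and a mean is easily dragged around by a handful of blockbuster papers. A toy illustration in Python (all numbers invented):

    # 99 papers cited twice each, plus one blockbuster cited 1000 times
    citations = [2] * 99 + [1000]

    mean = sum(citations) / len(citations)           # 11.98 - the "impact factor" view
    median = sorted(citations)[len(citations) // 2]  # 2 - the typical paper

    print(mean, median)

One viral paper makes the whole journal look six times more ‘impactful’ than its typical article.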

My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing a researcher’s publication record, as per the San Francisco Declaration: https://sfdora.org/read/. Naturally, these reminders would often be ignored.

There’s a bit of a false sense of security around ‘high impact’ journals. That feeling of: surely this has been so thoroughly and rigorously peer reviewed that it MUST be true. But sadly this is not the case. Some recent articles published in very high impact journals (New England Journal of Medicine, Nature, The Lancet) were retracted, having been found to include fabricated or unethical research. These can be read about at the following links:

1. “New England Journal of Medicine reviews controversial stent study”: https://www.bmj.com/content/368/bmj.m878

2. “Two retractions highlight long-standing issues of trust and sloppiness that must be addressed”: https://www.nature.com/news/stap-retracted-1.15488

3. “Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis”: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)31324-6/fulltext

Individual metrics such as the H-index also typically rely on citations. An author’s H-index is calculated as the number of papers (H) that have been cited at least H times. For example, a researcher who has at least 4 papers that have each been cited at least 4 times has an H-index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H-index remains 4. The way in which the H-index is calculated attempts to correct for unusually highly cited articles, such as the example given above, reducing the effect of outliers.
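
To make that calculation concrete, here’s a minimal Python sketch (my own illustration – not the implementation used by any particular database):

    def h_index(citations):
        # H-index: the largest h such that the author has h papers
        # with at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # this paper still supports a larger h
            else:
                break
        return h

    print(h_index([4, 4, 4, 4]))    # 4, as in the example above
    print(h_index([200, 4, 4, 4]))  # still 4 - the 200-citation outlier is damped

You can see the strength and the weakness in one place: outliers are damped, but so is any distinction between a 4-citation paper and a 200-citation one.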

The H-index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor – it is a metric based on citations, and citations do not necessarily imply quality or impact.

Another key limitation is that the H-index does not take into account authorship position. Depending on the field, the first author may have carried out the majority of the work and written the majority of the manuscript – but the seventeenth author on a fifty-author paper will get the same benefit from that paper to their own personal H-index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.

An individual’s H-index will also improve over time, given that it takes into account the quantity of papers they have written, and the citations on those papers – which will themselves accumulate over time. Therefore, the H-index correlates with age, making it difficult to compare researchers at different career stages using this metric.

Then of course there’s also the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority, such as ResearchGate. This is one of the most blatant, and openly gives significant extra weight to reads, downloads, recommendations and Q&A posts within its own website in the calculation of its impact metrics, the ‘RG Score’ and ‘Research Impact’ – a thinly veiled advertisement for ResearchGate itself.

If you’re looking for a bad metric rabbit hole to go down, please enjoy the wide range of controversy both highlighted by and surrounding Beall’s lists: https://beallslist.net/misleading-metrics/

Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators. These can include journal article indicators (page views, downloads, saves to social bookmarks), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact.

Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, ResearchGate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.

I can’t help it, I like to see numbers that go up!

In an effort to fix these issues, I made a somewhat naive attempt at designing my own personal research impact metric this summer. It took authorship position into account, as well as weighting different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom according to this attempted ‘metric’, and see which papers represented my most significant contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor etc. – so it was still very far from perfect.
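
For the curious, the general shape of what I tried looked something like the sketch below – the categories and weights here are illustrative stand-ins, not my actual numbers:

    # A toy per-paper score: citations scaled by article type and author position.
    # All weights are invented for illustration - not a validated metric!
    TYPE_WEIGHTS = {"original": 1.0, "review": 0.6, "proceedings": 0.3, "editorial": 0.2}

    def author_weight(position, n_authors):
        # Full credit for first or last author; middle authors share what's left.
        if position in (1, n_authors):
            return 1.0
        return 0.5 / max(n_authors - 2, 1)

    def paper_score(citations, article_type, position, n_authors):
        return citations * TYPE_WEIGHTS.get(article_type, 0.5) * author_weight(position, n_authors)

    # The 17th author of 50 on a highly cited conference paper scores far less
    # than the first author of a modestly cited original article:
    print(paper_score(300, "proceedings", 17, 50))  # ~0.94
    print(paper_score(40, "original", 1, 6))        # 40.0

And there’s the pitfall I mentioned, right in the formula: the whole thing is still anchored to citation counts.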

If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!

Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.

Shifting the bench/desk balance: the impact of COVID-19.

During the last few years I’ve noticed one topic coming up again and again over coffee/drinks with other researchers: our collective gradual shift from the bench to the desk.

Of course, none of us were expecting the wet lab to actually go off limits for six months!

Image source: labmanager.com

Pre-covid:

During my PhD, most of the day would be spent hanging out in the lab, with two or three ‘wet’ experiments on the go at a time, and minimal time during incubations for analysis/writing. During my postdoc years, this balance began to shift for me, and I think this is the same for a lot of us. We had all noticed a massive increase in wet lab data being generated, with virtually every technique gradually being made obsolete by increasingly affordable multiplexed or genome-wide versions. With more and more data being generated quicker and quicker, we all had a bit more time to sit at the desk, and a lot more data to play with there.

This manifested itself quite clearly in the perpetual fight for space in academic departments shifting from fighting over bench spaces in the labs, to desk spaces in the offices!

With my generation of researchers not always having in-depth bioinformatics or statistical knowledge as a given, there has been an element of trying to play catch-up at the desk. Most of us know one or two computer whizzes in our departments who we can ask for help, but they of course are swamped with ‘quick’ questions from everyone, and just can’t train everyone from first principles. So we’ve been collectively trying to self-teach large-scale data analysis while still producing wet lab data at the same time. It’s been a lot.

The covid months:

So how has seven months at home affected this? Well for me, it’s safe to say I’m beginning to run out of data to analyse for the first time in a very long time. I didn’t anticipate ‘running out’ of my own wet lab data ever – so it’s quite an odd feeling. I’m simultaneously making the transition to life as a faculty member, taking over modules and preparing new ways of teaching online, so it probably took me a bit longer than the average researcher to run out of research data – I imagine many wet lab PhD students hit this stage a good few months before I did.

For others, from what I’ve seen and heard, there has been a lot of upskilling happening to fill that lab-gap, and not a moment too soon. Many have been learning R or Python for the first time, or brushing off old half-attempted databases. Many have been learning to conduct systematic reviews and meta-analyses for the first time too, with our Division’s online modules on these topics having recently been made available to staff as well as students – and with an enthusiastic uptake.

On a wider scale, for the first time in what feels like a long time, my field is starting to catch up with itself. People are stepping back, taking a breath, and appreciating the enormous volume of data around us. What’s more, we’re taking the time to not only read more of each other’s papers, but critically analyse them, validate what we can from home, and publish these findings too. This is something we’ve all previously lamented at those coffee/drinks chats that we wish we had the time to do!

This is much-needed, and well overdue.  

Post-lockdown:

I can only hope we continue to take this approach to research, as we gradually transition back to life in the lab. I now fully believe that one or two days of the week at the bench, with three or four at home or in the office could honestly achieve more overall than my previous habit of 5 days minimum in the lab.

For this academic year, although our labs have partially reopened, I’ve designed four student research projects that are all fully desk based. This means that whether lockdowns happen or not, research can continue. If you’d asked me this time last year, I wouldn’t have thought I could supervise four non wet lab students, but the collective ‘we will figure this out’ attitude has rubbed off on me! If all four go to plan, they’ll really help to get my lab off the ground while I’m recruiting my new team, and I’m really glad that this is possible from home.

It’s hard to find silver linings from 2020, but I honestly think our collective shift in focus from creation of data to critical analysis of data could be transformative. Let’s hope we all learn from this and continue to improve our practice as time goes on!

Myth busting the fake news about cancer research

When Hayley and I began our YouTube channel, Cancer Research Demystified, we had a clear aim in mind: to give patients & their loved ones answers to their questions about cancer research. We began with tackling the science of common treatments like chemotherapy and radiotherapy, explaining the latest hot topics in research like immunotherapy, and showing footage of what happens to a patient’s donated blood or tissue sample when we receive it in a research lab.

But over time, we noticed that these weren’t necessarily the most common questions we were actually getting from patients. Whether we were discussing latest advances in a support group meeting, consenting a patient to take part in a research study, or even just chatting to a taxi driver or barman who mentioned they had a family member with cancer – one question type was emerging as a very common trend.

Cancer conspiracies.

Now and then, patients & their loved ones would ask us if it was true that big pharma is keeping the cure to cancer a secret. Or indeed, politely inform us that this was happening, and with certainty – to them it was a fact.

While getting an Uber to my lab one day at Cold Spring Harbor Laboratory, USA, my driver told me that what I was doing was a waste of my time – that his cousin was importing the cure from China and selling it at a very reasonable price, and that the US regulators refuse to approve it because they make too much money from chemotherapy.

In trying to engage with the online cancer patient support community, I joined a wide range of Facebook cancer support groups early on in the Cancer Research Demystified days. I was baffled at the sheer volume of misinformation being shared there. It seemed every time I logged in I came across someone trying to make money off desperate cancer patients – whether it was essential oils, CBD products or alkaline water, the list goes on.

It enraged me to see people trying to make a quick buck off vulnerable people. A cancer diagnosis is an extremely overwhelming thing, with patients getting a huge amount of technical jargon thrown at them during a time of great emotional challenge. You can’t be expected to get a PhD or MD overnight, in order to tell apart the clinicians from the scam artists, and you shouldn’t have to.

Of course, the moment you bring up this topic in an office full of cancer researchers – you get a response. Everyone had their story to tell, whether it was a vulnerable relative being led to believe they could avoid surgery for their cancer and just get acupuncture instead, or a set of memes or viral tweets convincing people that cancer researchers like us are keeping a cure a secret in order to line our own pockets.

It didn’t take long for us to decide to make a small series about this for YouTube. We roped in a colleague, Ben Simpson, who had a penchant for schooling those who were attempting to spread misinformation online. And so far, we’ve produced three episodes under our series ‘Spam Filter’. The aim is to address these sorts of questions by reviewing the peer reviewed literature on each topic, explaining the facts, and discussing why some of these rumours or myths might have managed to take hold.

Is cannabis a cure for cancer?

This topic is persistent online, and it’s easy to understand how it has grown legs, given some of the chemicals found in cannabis can genuinely help to relieve some symptoms/side effects of cancer or cancer treatment. It is not, however, a cure.

Are big pharma covering up the cure to cancer?

This one is a bit irritating to us to say the least, given we have all dedicated our lives to researching cancer. It’s also hard to provide peer reviewed data on something that isn’t real, but we’ve done our best to explain the reality of just how hard it would be to cover up a cure, given the numbers involved – as well as why nobody would bother, given they’d become rich beyond their wildest dreams by just marketing the cure instead!

Finally, the alkaline diet

This is a persistent myth online: that making your body more alkaline by eating alkaline foods (which in some cases are actually acidic) could prevent or cure cancer. It’s a trendy diet that really doesn’t make much sense at all. However, it’s very easy to see why people might think it’s working, given they can test differences in their urine’s pH that make it seem like something is changing. For this video we did some urine and blood tests on Ben before, during and after a day of eating this diet, and discussed the facts and myths involved.

Which cancer myth do you think we should bust next? Or better yet, is there a rumour, trend or theory going around that you’ve seen, and you can’t tell whether it’s legit or not? Let us know and we’ll try our best to get to the bottom of it!