In recent years, spatial transcriptomics has gone from being a long-shot futuristic technology that many were sceptical of, to one of the trendiest and most widely attempted omics methods on the market. Having once been the handiwork of a few isolated academic labs developing methodologies in-house, it is now commercially available from a wide range of competing companies, with kits, specialised pieces of equipment, and bespoke analysis pipelines at the ready.
Assuming you like the idea of seeing thousands of genes spatially resolved across your cells or tissue samples (and who wouldn’t?), and assuming you’ve got some funding at your disposal, how can you decide which method to paint your molecular picture with?
You will need to take into account the usual considerations when planning your research, as well as some methodology-specific and sample-specific requirements:
- Usual considerations: cost, equipment needed, time, etc.
- Has it been demonstrated outside of the originator's lab?
Fortunately, the experts have already pulled together many of the method-specific requirements, which you can see summarised in the table below! Be sure to read the full paper (linked below the table), written by Michaela Asp, Joseph Bergenstråhle, and Joakim Lundeberg.
With such exciting methods already available, I look forward to further development in this area, specifically:
•cBioportal/Cancertool for spatial datasets?
•Larger cohorts – spatial equivalent of TCGA?
•Automated spatial transcriptomics?
•High content spatial transcriptomics?
•Combining with spatial proteomics?
•Combining with spatial metabolomics?
•Combining with spatial epigenomics?
This is an exciting area with rapidly accelerating development, so I’m sure it won’t be long before these developments become easily accessible. One preprint already seems to show improvement in several of these areas:
“Here, we advance the application of ST at scale, by presenting Spatial Multiomics (SM-Omics) as a fully automated high-throughput platform for combined and spatially resolved transcriptomics and antibody-based proteomics.”
Our latest YouTube video for Cancer Research Demystified came out yesterday, and it attempts to answer this very tough question: what is the single greatest challenge in cancer research?
Here’s a little behind the scenes look at how the video came to be!
I’m still not back to working in the real world post COVID19, so Hayley and I are mostly making separate videos this year, but finally we are both on screen ‘together’ again, as she recorded a clip for this one from her house! To pull the video together, I combined mine & Hayley’s thoughts on the topic with those of our internet friends, along with the key strategies of some of the leading funding bodies. The response to this lofty question across Twitter, Instagram, Reddit, Facebook, this blog, LinkedIn and my various DMs was fantastic and really enjoyable to read – everyone had their two cents, from researchers, students and clinicians to patients, advocates and the funders themselves. So many opinions were expressed that one thing became immediately clear: this is not something we all agree on! I attempted to pull together some common themes, which in my mind fell into a few subdivisions of either biological challenges, or research barriers.
How long does it take to make a CRD video?
I’ve been asked this question a few times, so I thought I’d use this video as an example and go through each of the tasks: The question I was asking for this video was open for answers for approximately one month across our different online platforms. Input from me asking this question in different places throughout this month was probably a combined total of 30 mins – nothing huge. Once all the answers were in, I spent about three hours one evening collating them (i.e. lots of screen shotting!) and trying to find common themes, as well as making the summary PowerPoint slide which I would use as an anchor throughout the video. Filming took about two hours one Saturday, followed by approx. five hours of editing – including re-recording some bits that didn’t make sense.
The rough cut, which contained all the bits I wanted to include initially, was about 75 minutes long – I clearly have spent too much time lecturing this term and was enjoying the sound of my own voice too much!!
The final edit was about 20 mins long – much more palatable I hope!
Export, upload and writing social media descriptions took a couple of hours that Saturday evening. Release the next day and sharing everywhere took about an hour. All in all this adds up to about 13.5 hours of my personal input for this video, give or take.
I would say this is on the light end of average for a CRD video. Some of our videos are miraculously conceived, edited and uploaded within one evening session of 3 hours after work on a Tuesday (6 human hours, since there would usually be two of us), but this is extremely rare! Generally we spend one evening planning, one evening filming and starting to edit, and a third evening finishing editing and uploading, so more like 18 human hours. Our first few videos back in 2016/2017 needed to be re-recorded several times, as we were awkward on camera, unpractised at getting everything we needed, and not working particularly efficiently yet. I’d say the longest was one of our early videos about blood samples – which must have been over 50 human hours, or at least it felt like it…
My favourite part about making this video was reading through all of the answers we received, particularly on Twitter, Reddit and Instagram. This turned into a whole conversation, and it was great to see so many researchers, patients and advocates discussing their views on cancer research. This is exactly what we have always been hoping to achieve with CRD.
My least favourite part was when I saw that the rough cut was 75 minutes long… that is just too long for a YouTube video, too detailed, too rambling, and I knew I’d have to work hard to cut it down to an acceptable length. I ended up cutting out my description of each of the 9 grand challenges that CRUK are currently trying to fund, which was detailed and took a fair amount of effort to pull together. It’s never fun to leave science on the cutting room floor! I think it was worth it in the end though.
If I could change one thing about this video, it would of course be that I wish it was filmed in the lab with Hayley. Maybe that will happen again one day, if I can get my hands on a vaccine….
I think that covers all the ‘behind the scenes’ for this video. Please watch it if you get a chance to, and share with any patients, carers, advocates or students you know who might like to find out more about cancer research!
I’m currently working on a new video for Cancer Research Demystified, where I’m going to attempt to answer this lofty question. What is the biggest challenge in cancer research today?
For the video, I’ll summarise a few different perspectives on this: the patients, the advocates, the funders, the institutions, the public, and the researchers ourselves. The most common answer so far is of course ‘there’s more than one!’ so I’ll cover as many as I can, and give my two cents on what could be considered the one single greatest challenge.
The NCRI cover their top priorities here – (of which there are of course more than one!) and you can see similar lists from many other groups. But what is the biggest one?! I’ve been asking around on Twitter, Instagram and Facebook, and I’ve gotten 24 responses so far, mostly from other cancer researchers, but some from patients & funders too. Before I compile, compare & contrast these, I wanted to ask you too – what do you think is the single greatest challenge in cancer research today? I’ll give you a head start by saying that the answers I’m getting are falling into two common themes: biology & barriers.
Does one of these jump out at you as being a bigger challenge than the others? Do you have something to add? Comment below or DM me on Twitter/Facebook/Instagram/Reddit/LinkedIn and I’ll discuss your thoughts (anonymised if via DM) in our upcoming video!
I recently came across a review which asked if it’s time for peer reviewers to request ‘organ on a chip’ models instead of animal validation studies, and it got me thinking – are we there yet?
As cancer researchers, if we submit an article for publication that contains only data from cell lines, we’re often asked by peer reviewers to carry out animal studies – usually in mice. This review suggests that it might be nearly time for reviewers to ask for human tissue work instead – maybe some of our newest human tissue models are good enough to replace these types of animal studies?
Personally, I’m a big advocate for human tissue work in cancer research. Anyone who collaborates with me knows that I favour ex vivo / 3D culture of human tumours over mouse models. Of course there are ethical considerations here around reducing the number of animals used for research, but my opinion stems mostly from the science – because of the very simple fact that mice are not humans. The differences between mouse biology and human biology are too wide ranging, with far too many variables to feasibly take into account. Frankly, neither have been characterized rigorously enough to pick apart their similarities and normalize for their differences.
Of course, to date, mouse xenografts (and more recently, patient derived xenografts) are pretty much the best models we’ve got for testing new cancer drugs beyond cell lines, without the ethical risks of testing them in living humans too early.
As such, many scientists like me around the world have been developing a huge range of human tissue models, usually removed from a cancer patient at biopsy or surgery, and donated for research. The idea is that one day we’ll get these cells or tissues to survive outside the body while changing as little of their biology as possible, and treat them with experimental drugs for research – ultimately replacing animal models.
Roughly speaking, these types of models fall into three categories: explant cultures, organoids/tumouroids, and ‘organ on a chip’ models.
Explant cultures involve taking a small piece of donated human tissue, and trying to keep it alive for a few days in an incubator, helped along by different nutrients and materials. One of the main benefits of explants is that the tissue stays whole, rather than the scientist isolating out particular cell types. The original architecture of the tissue, and the range of different cell types within it, can remain somewhat intact (this isn’t perfect, but it’s improving). I’ve been using a version of explants for the last five years, testing new drugs in prostate cancer, as part of my fellowship project ‘SCREEN’, kindly funded by Prostate Cancer UK.
Organoids, or specifically within our field of cancer research – ‘tumouroids’, represent human tumour cells that are grown in 3D outside of the human body, including multiple key cell types and environmental factors. Here the structure of the tissue does not remain intact as with explants, but key molecular signals added by scientists can induce the cells to organise themselves in the same way that the original tumour would have done in the human. These can be cultured for longer than explants generally, and offer more flexibility for the researcher to tweak particular aspects of their behaviour.
Organ on a chip models can be based on either of the above, but include additional extras like microfluidics (a system that allows for nutrients to flow over and around the cells in the same way blood would in the body), which can encourage blood vessels to grow and feed the tumour, as they would in a human. These are getting ever closer to replicating human tumours outside of humans.
But are any of these good enough to replace mouse experiments yet? My gut says no – but we really are very very close.
One of the issues with this branch of cancer research is that there are just so many different types of models being investigated. Yes, they do fall roughly within three categories, but within each of these categories, there are dozens if not hundreds of iterations being researched around the world. In my view, to properly validate them, we need a consensus – not a new model every five minutes! This consensus will be difficult to achieve, as within the structure of academic research we are encouraged to generate new intellectual property (IP), and we’re generally taught that to get a model validated and used in the clinic, we need to either commercialize it ourselves, or licence it to a company who will develop it for us. This is the approach that will get us the next grant, the next paper, the next promotion – i.e. more cred, and potentially personal financial gain. So why would we bother to further develop, independently validate and rigorously characterize someone else’s model, when we could be changing it slightly to add our own ‘unique selling point’ and branding it as our own?
My hope is to reject this way of thinking. Over the first few years of my new lab, I aim to compare and contrast the leading models from around the world in a fully independent setting, where I’m not backing any horse in the race – where I have no allegiance to one human tissue model over another – and just purely try to see if the best one(s) reflect how humans actually respond to anti-cancer treatments. If we can pull this unbiased validation and rigorous characterization off, then I truly believe the peer reviewer mentioned in the paper linked above should absolutely be asking researchers to validate their research in these human models rather than animal models.
It’s worth mentioning that I also tweeted this paper and got varying responses. While one person replied with a jokey ‘I wonder what reviewer 3 wrote in the report :)’, another expressed caution:
And I agree somewhat – we still don’t have strong enough validation in my mind to fully replace animal studies. But should reviewers be requesting more human work incrementally as our models get better and better? Yes, I think so. They’re certainly worth carrying out in addition to animal studies – just maybe not instead of animal studies just yet.
Dr Dania Movia from Trinity College Dublin commented on the frustration of human tissue researchers still being required to validate their findings in animals instead of humans – why do we think of mice as a gold standard for how human biology behaves? It makes no sense, and I couldn’t agree more! While mouse models bring some valuable extra data that human models can’t yet capture perfectly, they’re certainly imperfect in a lot of other ways, and not the right place to validate a human model.
Check out the review linked at the top of this blog if you’d like to read a more technical summary of where the field is at (though the review is not specific to cancer research). And let me know what you think! Are we ready to replace animal models with human models today? Will we be there in a year, in a decade, or ever?
A quick blog this week as I’m in the midst of lots of teaching & grant writing! On this week’s teaching agenda I’ve got research reporting, research presentation skills, in vitro, in vivo, and in silico research, acute & chronic inflammation, image analysis and drug efficacy. I thought I’d share with you some of the resources we are using in one of these lessons (not compiled by me), as frankly – they’re quite useful!
Research reporting – something we all need to get right!
According to the Declaration of Helsinki, researchers and authors have a duty to make their results available publicly using accepted guidelines for ethical reporting.
Naturally we’ll be teaching our students general tips on which types of content should be included in the different sections of a general research paper. We also discuss why it’s important to report our research fully, and what can go wrong when we don’t!
We also give the students a list of guidelines for specific types of research reports. Some of these are slightly peripheral to my own research interests, and I found them quite interesting, so I thought you might too! If you’re new to research reporting, perhaps a bit rusty, or trying to remember one of those many many reporting acronyms, then here’s an overview that might be helpful for you.
EQUATOR have also developed a wizard that can be useful to help decide on how to report your research. This tool asks what type of research you are conducting, and identifies useful checklists to make sure you include the required information in your report: https://www.goodreports.org/
The list! (Courtesy of Prof Kurinchi Gurusamy):
•Consolidated Standards Of Reporting Trials (CONSORT) – www.consort-statement.org – Design, analysis and interpretation of RCTs
•Strengthening the Reporting of Observational studies in Epidemiology (STROBE) – www.strobe-statement.org – Reporting of observational studies
•Standards for Reporting Studies of Diagnostic Accuracy (STARD) – www.stard-statement.org – Reporting of diagnostic accuracy studies
•Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) – www.bris.ac.uk/quadas – Quality assessment of diagnostic accuracy studies
Last year I published my first ‘paper’ with JoVE – the Journal of Visualized Experiments. JoVE is a video journal that I had heard about from a collaborator, who suggested that our MRI-targeted prostate slicing method ‘PEOPLE’ might be a good fit. It sounded like a great idea!
I’m happy to report that there’s no twist coming in this blog – the experience was great, and I’d recommend them to others too!
With JoVE, you submit an abstract & basic written paper of your method (or whatever research you’d like to publish as a video). The written submission is peer reviewed, edited as necessary, and once the reviewers are happy, you begin to plan a filming day. There are a few options here – I chose to go with the more expensive option of having JoVE arrange the script, filming & editing for me, rather than having to do it myself. The benefit here is you get to work with professionals, who know how to get the right shots, the right lighting, and edit everything in such a way that other scientists can see everything they need to see clearly, and learn the method so that they can carry it out themselves.
This was of particular benefit to me, as a (very!) amateur YouTuber with Cancer Research Demystified – I wanted to learn how the professionals do it!
Our videographer was Graham from https://www.sciphi.tv/. Working with him was a brilliant experience – he was an ex-researcher himself, and had extensive experience both carrying out and filming science. He made the day fun, quick and easy – if you ever need someone to film an academic video for you I highly recommend his company!
Filming day itself wouldn’t have been possible without the rest of our research team helping out (in particular Hayley and Aiman – thank you!) and of course a very generous prostate cancer patient, who was undergoing radical prostatectomy, kindly agreeing to take part in our research.
After a short wait we received a first draft of our video which we were really happy with – we had the opportunity to make a round of edits (there weren’t many), and then before long the video was up on JoVE’s website, as well as Pubmed and all the usual places you’d read scientific research in paper form!
Personally, I think videos make a whole lot more sense than written papers for sharing methodologies. I’ve used JoVE videos for training myself – notably for learning to build tissue microarrays (TMAs), and without those videos I’m not sure I could have learned this skill at all – as our resident experts had left the lab! A paper just wouldn’t be able to clearly explain how to use that equipment. With JoVE, there’s always a PDF that goes alongside the paper too, so once you’ve watched and understood the practical side, you have the written protocol to hand while you’re in the lab. The best of both worlds.
I’ve always been a fan of simple solutions (I’m a bit of a broken record on this) – and JoVE is a perfectly simple solution to providing training that will show you how to do something rather than just tell you.
One caveat – it’s not cheap. But your fellow scientists who want to learn your methods will thank you – you’re doing the rest of us a favour! Of course, there’s always YouTube for a free (ish) alternative. But in my view, the added layers of peer review and professional production are worth the extra cost.
A quick blog this week! I wanted to take a moment to introduce one of our favourite Cancer Research Demystified videos. Here, we give a tour of our lab so that cancer patients, carers, students and anyone with an interest can see what cancer research really looks like!
During our first couple of years meeting with cancer patients, Hayley and I noticed that for a lot of them, their main frame of reference for what a science lab looked like was ‘the telly’. Whether it was CSI, or even a particularly slick BBC News segment, it was clear that research labs were expected to be minimalist, futuristic, and full of coloured liquids.
The occasional person would describe the opposite picture – dark wooden cabinets filled with dusty glass specimen jars, stained benches, blackboards, worn-off labels on mystery chemicals, and that strong, ambiguous smell.
Of course, neither are accurate. Real cancer research labs are somewhat modern, sure, but even the most expensive and ‘futuristic’ equipment typically looks more like a tumble dryer than an interactive hologram, and though much of our equipment does use lasers – they are hidden deep inside rather than scanning the lab for spies! Blackboards are long gone, replaced with white boards, dusty unlabeled jars are disposed of due to strict health and safety protocols, although stains on benches….? Well, some of those remain.
We did initially face some mild resistance when we first attempted to film this video. A senior member of staff advised us that patients want the comfort of knowing that the best brains in the world are working on a cure, using the best technology and most impressive workspaces. That’s why, we were told, we need to clear out so much lab mess before the camera crews come in for a news segment.
But frankly – those perfect, sterile, swish labs are out there – if someone wants to see a scientist in a never-before-worn white coat pipetting some pink liquid into a plate, all they need to do is turn on the news. We wanted to show something different – and frankly, more honest – warts and all!
The video we ended up with is a little on the nose perhaps, but we felt it needed to be. We show the reality of what it’s like to work in a lab (well, close to reality anyway – we filmed after hours to avoid getting in people’s way, so it is unusually quiet). Some of the differences between day-to-day lab work and office work are highlighted, such as not being able to eat, drink or touch up your make up within the lab, and having to wear appropriate PPE.
I came back to this video during lockdown because I missed the lab. I still haven’t been back in there, and I’m not sure when I next will be. Other people are back there now though, under strict covid protocols, with significantly reduced capacity and masks. I hope to join them one day, but for now I’m minding my asthmatic lungs at home!
If you’re a cancer patient or carer – here’s a real look at where we’re carrying out the research to build better diagnostics and therapeutics. If you’re a student thinking about doing a medical/biology based research project – this is the sort of place you’ll find yourself working. Please enjoy!
For more Cancer Research Demystified content, here’s where you can find us:
Academic impact metrics fascinate me. They always have. I’m the kind of person that loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.
But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.
Journal impact factor is a great example of a problematic and overly relied upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines & time, to outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter – which was new to me!
There are problems.
They are noteworthy.
But we still use impact factor religiously regardless.
My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing publication record in relation to researcher track record, as per the San Francisco declaration https://sfdora.org/read/. Naturally, these reminders would often be ignored.
There’s a bit of a false sense of security around ‘high impact’ journals. That feeling of surely this has been so thoroughly and rigorously peer reviewed that it MUST be true. But sadly this is not the case. Some recent articles published in very high impact journals (New England Journal of Medicine, Nature, Lancet) were retracted, having been found to include fabricated research or unethical research. These can be read about at the following links:
Individual metrics such as H-index also typically rely on citations. An author’s H index is calculated as the number of papers (H) that have been cited at least H times. For example, a researcher who has at least 4 papers that have each been cited at least 4 times has an H index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H index remains 4. The way in which the H index is calculated attempts to correct for unusually highly cited articles, such as the example given above, reducing the effects of outliers.
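For anyone who likes to see the calculation written out, the definition above can be sketched in a few lines of Python (a toy illustration of the standard H index formula, not any particular website’s implementation):

```python
def h_index(citations):
    """Return the H index: the largest H such that at least H papers
    have been cited at least H times each."""
    # Sort citation counts from highest to lowest, then find the last
    # position where the count is still >= its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# The example from the text: four papers cited at least 4 times each,
# plus an outlier cited 200 times, still gives an H index of 4.
print(h_index([200, 5, 4, 4, 1]))  # → 4
```

Note how the 200-citation paper contributes no more to the result than any other paper cited at least 4 times – exactly the outlier-dampening behaviour described above.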
The H index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor – it is a metric based on citations, and citations do not necessarily imply quality or impact.
Another key limitation is that H index does not take into account authorship position. Depending on the field, the first author may have carried out the majority of the work, and written the majority of the manuscript – but the seventeenth author on a fifty author paper will get the same benefit from that paper to their own personal H index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.
An individual’s H index will also improve over time, given it takes into account the quantity of papers they have written, and the citations on those papers – which will themselves accumulate over time. Therefore, H index correlates with age, making it difficult to compare researchers at different career stages using this metric.
Then of course there’s also the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority, such as Research Gate. This is one of the most blatant, and openly gives significant extra weight to reads, downloads, recommendations and Q&A posts within its own website in the calculation of its impact metrics, ‘RG Score’, and ‘Research Impact’ – a thinly veiled advertisement for Research Gate itself.
Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators. This can include journal article indicators (page views, downloads, saves to social bookmarks), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact.
Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, Research Gate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.
I can’t help it, I like to see numbers that go up!
In an effort to fix the issues, I did make a somewhat naive attempt at designing my own personal research impact metric this summer. It took into account authorship position, as well as weighting different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom according to this attempted ‘metric’, and see which of my personal contributions to each paper represented my most significant contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor etc. – so it was still very far from perfect.
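To give a flavour of what I mean, here’s a rough sketch of that kind of position- and type-weighted score. To be clear, every weight and name below is an illustrative assumption of mine for this post – it is not a validated metric, and as noted above it still leans on citations:

```python
# Illustrative weights: original articles count fully, conference
# proceedings and editorials much less. These numbers are arbitrary.
ARTICLE_TYPE_WEIGHTS = {
    "original": 1.0,
    "review": 0.7,
    "editorial": 0.3,
    "conference": 0.2,
}

def authorship_weight(position, n_authors):
    """First and last authors score fully; middle authors share a
    smaller pool, so a 17th-of-50 author gets very little."""
    if position == 1 or position == n_authors:
        return 1.0
    return 0.5 / max(n_authors - 2, 1)

def paper_score(article_type, position, n_authors, citations):
    # Cap the citation contribution so one runaway paper can't dominate.
    citation_factor = min(citations, 50) / 50
    return (ARTICLE_TYPE_WEIGHTS[article_type]
            * authorship_weight(position, n_authors)
            * (0.5 + 0.5 * citation_factor))

# Rank a small hypothetical publication list by this score.
papers = [
    ("original", 1, 6, 40),     # first-author original article
    ("editorial", 2, 2, 5),     # last-author editorial
    ("conference", 3, 10, 60),  # middle-author conference proceeding
]
for p in sorted(papers, key=lambda p: paper_score(*p), reverse=True):
    print(p, round(paper_score(*p), 3))
```

Even in this toy version you can see the appeal – a first-author original article easily outranks a middle-author conference proceeding with more citations – and also the pitfall: citations are still baked into the formula.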
If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!
Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.
During the last few years I’ve noticed one topic coming up again and again over coffee/drinks with other researchers: our collective gradual shift from the bench to the desk.
Of course, none of us were expecting the wet lab to actually go off limits for six months!
During my PhD, most of the day would be spent hanging out in the lab, with two or three ‘wet’ experiments on the go at a time, and minimal time during incubations for analysis/writing. During my postdoc years, this balance began to shift for me, and I think this is the same for a lot of us. We had all noticed a massive increase in wet lab data being generated, with virtually every technique gradually being made obsolete by increasingly affordable multiplexed or genome-wide versions. With more and more data being generated quicker and quicker, we all had a bit more time to sit at the desk, and a lot more data to play with there.
This manifested itself quite clearly in the perpetual fight for space in academic departments shifting from fighting over bench spaces in the labs, to desk spaces in the offices!
With my generation of researchers not always having in depth bioinformatics or statistical knowledge as a given, there has been an element of trying to play catch-up at the desk. Most of us know one or two computer whizzes who we can ask for help in our departments, but they of course are swamped with ‘quick’ questions from everyone, and just can’t train everyone from first principles. So we’ve been collectively trying to self-learn large scale data analysis while still producing wet lab data at the same time. It’s been a lot.
The covid months:
So how has seven months at home affected this? Well for me, it’s safe to say I’m beginning to run out of data to analyse for the first time in a very long time. I didn’t anticipate ‘running out’ of my own wet lab data ever – so it’s quite an odd feeling. I’m simultaneously making the transition to life as a faculty member, taking over modules and preparing new ways of teaching online, so it probably took me a bit longer than the average researcher to run out of research data – I imagine many wet lab PhD students hit this stage a good few months before I did.
For others, from what I’ve seen and heard, there has been a lot of upskilling happening to fill that lab-gap, and not a moment too soon. Many have been learning R or Python for the first time, or brushing off old half-attempted databases. Many have been learning to conduct systematic reviews and meta-analyses for the first time too, with our Division’s online modules on these topics having recently been made available to staff as well as students – and with an enthusiastic uptake.
On a wider scale, for the first time in what feels like a long time, my field is starting to catch up with itself. People are stepping back, taking a breath, and appreciating the enormous volume of data around us. What’s more, we’re taking the time to not only read more of each other’s papers, but critically analyse them, validate what we can from home, and publish these findings too. This is something we’ve all lamented not having the time to do, at many a coffee/drinks chat!
This is much-needed, and well overdue.
I can only hope we continue to take this approach to research as we gradually transition back to life in the lab. I now fully believe that one or two days a week at the bench, with three or four at home or in the office, could honestly achieve more overall than my previous habit of five days minimum in the lab.
For this academic year, although our labs have partially reopened, I’ve designed four student research projects that are all fully desk-based. This means that whether lockdowns happen or not, research can continue. If you’d asked me this time last year, I wouldn’t have thought I could supervise four non-wet-lab students, but the collective ‘we will figure this out’ attitude has rubbed off on me! If all four go to plan, they’ll really help to get my lab off the ground while I’m recruiting my new team, and I’m really glad that this is possible from home.
It’s hard to find silver linings from 2020, but I honestly think our collective shift in focus from creation of data to critical analysis of data could be transformative. Let’s hope we all learn from this and continue to improve our practice as time goes on!
When Hayley and I began our YouTube channel, Cancer Research Demystified, we had a clear aim in mind: to give patients & their loved ones answers to their questions about cancer research. We began with tackling the science of common treatments like chemotherapy and radiotherapy, explaining the latest hot topics in research like immunotherapy, and showing footage of what happens to a patient’s donated blood or tissue sample when we receive it in a research lab.
But over time, we noticed that these weren’t necessarily the most common questions we were actually getting from patients. Whether we were discussing the latest advances in a support group meeting, consenting a patient to take part in a research study, or even just chatting to a taxi driver or barman who mentioned they had a family member with cancer – one question type was emerging as a very common trend.
Now and then, patients & their loved ones would ask us if it was true that big pharma is keeping the cure to cancer a secret. Or indeed, politely inform us that this was happening, and with certainty – to them it was a fact.
While getting an Uber to my lab at Cold Spring Harbor Laboratory, USA, one day, my driver told me that what I was doing was a waste of my time – that his cousin was importing the cure from China and selling it at a very reasonable price, and that the US regulators refuse to approve it, because they make too much money from chemotherapy.
In trying to engage with the online cancer patient support community, I joined a wide range of Facebook cancer support groups early on in the Cancer Research Demystified days. I was baffled at the sheer volume of misinformation being shared there. It seemed every time I logged in I came across someone trying to make money off desperate cancer patients – whether it was essential oils, CBD products or alkaline water, the list goes on.
It enraged me to see people trying to make a quick buck off vulnerable people. A cancer diagnosis is an extremely overwhelming thing, with patients getting a huge amount of technical jargon thrown at them during a time of great emotional challenge. You can’t be expected to get a PhD or MD overnight, in order to tell apart the clinicians from the scam artists, and you shouldn’t have to.
Of course, the moment you bring up this topic in an office full of cancer researchers – you get a response. Everyone had a story to tell, whether it was a vulnerable relative being led to believe they could avoid surgery for their cancer and just get acupuncture instead, or a set of memes or viral tweets convincing people that cancer researchers like us are keeping a cure secret in order to line our own pockets.
It didn’t take long for us to decide to make a small series about this for YouTube. We roped in a colleague, Ben Simpson, who had a penchant for schooling those attempting to spread misinformation online. So far, we’ve produced three episodes under our series ‘Spam Filter’. The aim is to address these sorts of questions by reviewing the peer-reviewed literature on each topic, explaining the facts, and discussing why some of these rumours or myths might have managed to take hold.
Cannabis as a cure:
This topic is persistent online, and it’s easy to understand how it has grown legs, given that some of the chemicals found in cannabis can genuinely help to relieve some symptoms and side effects of cancer or cancer treatment. It is not, however, a cure.
Big pharma hiding the cure:
This one is a bit irritating to us, to say the least, given we have all dedicated our lives to researching cancer. It’s also hard to provide peer-reviewed data on something that isn’t real, but we’ve done our best to explain the reality of just how hard it would be to cover up a cure, given the numbers involved – as well as why nobody would bother, given they’d become rich beyond their wildest dreams by just marketing the cure instead!
The alkaline diet:
This is a persistent myth online: that making your body more alkaline by eating alkaline foods (which in some cases are actually acidic) could prevent or cure cancer. It’s a trendy diet that really doesn’t make much sense at all. However, it’s easy to see why people might think it is working, given they can test differences in their urine’s pH, which make it seem like something is changing. For this video we did some urine and blood tests on Ben before, during and after a day of eating this diet, and discussed the facts and myths involved.
Which cancer myth do you think we should bust next? Or better yet, is there a rumour, trend or theory going around that you’ve seen, and you can’t tell whether it’s legit or not? Let us know and we’ll try our best to get to the bottom of it!