Peer-reviewed videos: the way forward for methods papers?

Last year I published my first ‘paper’ with JoVE – the Journal of Visualized Experiments. JoVE is a video journal that I’d heard about from a collaborator, who suggested that our MRI-targeted prostate slicing method ‘PEOPLE’ might be a good fit. It sounded like a great idea!

I’m happy to report that there’s no twist coming in this blog – the experience was great, and I’d recommend them to others too!

Image: ‘Seal of Approval’ by Jaco Haasbroek, via threadless.com

With JoVE, you submit an abstract & basic written paper describing your method (or whatever research you’d like to publish as a video). The written submission is peer reviewed and edited as necessary, and once the reviewers are happy, you begin to plan a filming day. There are a few options here – I chose the more expensive route of having JoVE arrange the script, filming & editing for me, rather than doing it all myself. The benefit is that you get to work with professionals who know how to get the right shots and the right lighting, and how to edit everything so that other scientists can clearly see what they need to see and learn the method well enough to carry it out themselves.

This was of particular benefit to me, as a (very!) amateur YouTuber with Cancer Research Demystified – I wanted to learn how the professionals do it!

Our videographer was Graham from https://www.sciphi.tv/. Working with him was a brilliant experience – he’s an ex-researcher himself, with extensive experience both carrying out and filming science. He made the day fun, quick and easy – if you ever need someone to film an academic video for you, I highly recommend his company!

Filming day itself wouldn’t have been possible without the rest of our research team helping out (in particular Hayley and Aiman – thank you!) and, of course, without a very generous prostate cancer patient, undergoing radical prostatectomy, who kindly agreed to take part in our research.

After a short wait we received a first draft of our video, which we were really happy with – we had the opportunity to make a round of edits (there weren’t many), and before long the video was up on JoVE’s website, as well as on PubMed and all the usual places you’d read scientific research in paper form!

Personally, I think videos make a whole lot more sense than written papers for sharing methodologies. I’ve used JoVE videos for training myself – notably for learning to build tissue microarrays (TMAs), and without those videos I’m not sure I could have learned this skill at all – as our resident experts had left the lab! A paper just wouldn’t be able to clearly explain how to use that equipment. With JoVE, there’s always a PDF that goes alongside the paper too, so once you’ve watched and understood the practical side, you have the written protocol to hand while you’re in the lab. The best of both worlds.

I’ve always been a fan of simple solutions (I’m a bit of a broken record on this) – and JoVE is a perfectly simple solution to providing training that will show you how to do something rather than just tell you.

One caveat – it’s not cheap. But your fellow scientists who want to learn your methods will thank you – you’re doing the rest of us a favour! Of course, there’s always YouTube as a free(ish) alternative. But in my view, the added layers of peer review and professional production are worth the extra cost.

Here’s our JoVE video & PDF publication – enjoy!

https://www.jove.com/t/60216/use-magnetic-resonance-imaging-biopsy-data-to-guide-sampling

And no, this blog was not sponsored by anyone – I’m just a fan & paying customer!

My love/hate relationship with impact metrics.

Academic impact metrics fascinate me. They always have. I’m the kind of person that loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.

Image: a graph-going-up icon, from SeekPNG

But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.

Journal impact factor is a great example of a problematic and overly relied-upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines & time, outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter – which was new to me!

There are problems.

They are noteworthy.

But we still use impact factor religiously regardless.
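For anyone who hasn’t looked at how that headline number is actually produced, the standard two-year impact factor is just a ratio: citations received this year to a journal’s articles from the previous two years, divided by the number of citable items it published in those two years. Here’s a minimal sketch in Python – the example numbers are made up purely for illustration:

```python
# Minimal sketch of a two-year journal impact factor calculation.
# The example numbers below are invented purely for illustration.

def two_year_impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Citations received this year to articles published in the previous
    two years, divided by the number of citable items from those two years."""
    return citations_this_year / citable_items_prev_two_years

# e.g. a journal whose 2019-2020 output (500 citable items) picked up
# 1,500 citations during 2021 would report an impact factor of 3.0:
print(two_year_impact_factor(1_500, 500))  # -> 3.0
```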

My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing publication record in relation to researcher track record, as per the San Francisco Declaration on Research Assessment (DORA): https://sfdora.org/read/. Naturally, these reminders would often be ignored.

There’s a bit of a false sense of security around ‘high impact’ journals – that feeling that surely this has been so thoroughly and rigorously peer reviewed that it MUST be true. Sadly, this is not always the case. Some recent articles published in very high impact journals (New England Journal of Medicine, Nature, The Lancet) were retracted after being found to include fabricated or unethical research. These can be read about at the following links:

1. “New England Journal of Medicine reviews controversial stent study”: https://www.bmj.com/content/368/bmj.m878

2. “Two retractions highlight long-standing issues of trust and sloppiness that must be addressed”: https://www.nature.com/news/stap-retracted-1.15488

3. “Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis”: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)31324-6/fulltext

Individual metrics such as the H-index also typically rely on citations. An author’s H-index is calculated as the number of papers (H) that have been cited at least H times. For example, a researcher who has at least 4 papers that have each been cited at least 4 times has an H-index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H-index remains 4. The way in which the H-index is calculated attempts to correct for unusually highly cited articles, such as the one in the example above, reducing the effect of outliers.
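If it helps to see that definition written out, here is a minimal sketch of the calculation in Python, using a made-up citation list that mirrors the example above:

```python
# Minimal sketch of an H-index calculation. The citation counts are made up.

def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Four papers cited at least 4 times, one of them a 200-citation outlier,
# plus a few barely cited papers - the outlier doesn't lift the H-index above 4.
print(h_index([200, 6, 5, 4, 2, 1, 0]))  # -> 4
```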

The H-index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor, it is a metric based on citations – and citations do not necessarily imply quality or impact.

Another key limitation is that the H-index does not take authorship position into account. Depending on the field, the first author may have carried out the majority of the work and written the majority of the manuscript – but the seventeenth author on a fifty-author paper will get the same boost to their own H-index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.

An individual’s H-index will also improve over time, given that it takes into account both the quantity of papers they have written and the citations on those papers – which themselves accumulate over time. The H-index therefore correlates with age, making it difficult to compare researchers at different career stages using this metric.

Then of course there’s also the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority, such as ResearchGate. It is one of the most blatant, openly giving significant extra weight to reads, downloads, recommendations and Q&A posts within its own website when calculating its impact metrics, ‘RG Score’ and ‘Research Impact’ – a thinly veiled advertisement for ResearchGate itself.

If you’re looking for a bad metric rabbit hole to go down, please enjoy the wide range of controversy both highlighted by and surrounding Beall’s lists: https://beallslist.net/misleading-metrics/

Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators: journal article indicators (page views, downloads, saves to social bookmarking services), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact.

Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, ResearchGate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.

I can’t help it, I like to see numbers that go up!

In an effort to fix the issues, I made a somewhat naive attempt at designing my own personal research impact metric this summer. It took authorship position into account, as well as weighting different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom according to this attempted ‘metric’, to see which papers represented my most significant personal contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor etc. – so it was still very far from perfect.
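In the spirit of showing rather than telling, the sketch below captures the rough shape of that attempt. The weights, article types and helper names are entirely hypothetical – just enough to illustrate discounting by authorship position and article type (and, unlike my real attempt, it deliberately leaves citations and impact factor out):

```python
# Hypothetical sketch of a personal contribution score weighted by authorship
# position and article type. All weights and example papers are invented.

AUTHOR_POSITION_WEIGHT = {"first": 1.0, "joint-first": 0.9, "last": 0.8, "middle": 0.3}
ARTICLE_TYPE_WEIGHT = {"original": 1.0, "review": 0.6, "conference": 0.2, "editorial": 0.2}

def contribution_score(paper: dict) -> float:
    """Weight a paper by my authorship position and by article type."""
    return AUTHOR_POSITION_WEIGHT[paper["position"]] * ARTICLE_TYPE_WEIGHT[paper["type"]]

papers = [
    {"title": "Original article (first author)", "position": "first", "type": "original"},
    {"title": "Review (last author)", "position": "last", "type": "review"},
    {"title": "Editorial (middle author)", "position": "middle", "type": "editorial"},
]

# Rank papers from the most to the least significant personal contribution.
for paper in sorted(papers, key=contribution_score, reverse=True):
    print(f"{paper['title']}: {contribution_score(paper):.2f}")
```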

If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!

Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.

Research integrity: good practice for new PIs!

Everyone loves a fresh start. Founding a research group is an exciting time in anyone’s career, and it offers a great opportunity to start with a clean slate and to embed good practice within the team right from the get-go!

For me, this is my first year as a member of faculty, and I’m hoping to recruit the first members of my research team as soon as Covid settles down a bit. I’ve also been lucky enough to get involved in co-leading a postgraduate module on research methodologies this year, for which I am developing content on research integrity alongside a Professor of evidence-based medicine. He has a wealth of knowledge on these topics, and has highlighted a range of evidence-based resources that we’ve been able to incorporate into our teaching. It’s great timing, so I also plan to incorporate these into the training that I provide for my research team, as we hopefully lay the foundations for a happy, productive and impactful few decades of ‘Heavey lab’.

Here are six examples of good practice that I plan to incorporate, along with some links if you’d like to use them in your own teaching/research.

  1. Research integrity: this is key to ensuring that our work is of the utmost quality, that it can be replicated, validated and that it can ultimately drive change in the world. While this is something researchers often discuss ad hoc over coffee, there are also formal guidelines available, and these remove some of the ambiguity around individual versus institutional responsibilities related to this topic. Below you’ll find a link to the UK concordat to support research integrity. It is a detailed summary of the agreements signed by UK funding bodies, higher education institutes and relevant government departments, setting out the specific responsibilities we all have with regard to the integrity of our research. I intend to go through this with my team so they are clear on their own responsibilities as well as mine, and those of our funding bodies and institutes. https://www.universitiesuk.ac.uk/policy-and-analysis/reports/Documents/2019/the-concordat-to-support-research-integrity.pdf
  2. Prevention of research waste: research waste should be actively avoided. This figure is a clear summary, and I’ll keep it visible to my team so that we can all work together to prevent wasting our own time and resources, and maximise the impact of our work. Some of these points force us to really raise the game, and I’m excited to get stuck in.

Figure ref: Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101-104. doi:10.1016/S0140-6736(13)62329-6

3. Prevention of misconduct: The word ‘misconduct’ may strike fear in the heart – but it describes a whole range of things, not just the extreme cases. Misconduct is not always intentional, and should be actively and consciously avoided rather than assuming ‘we’re good people, I’m sure we’re not doing anything wrong’. Here’s a quick checklist that you can use as a code of practice, to keep track of your research integrity and prevent research waste or misconduct. It’s not as detailed as the last link, and I plan to use it with each member of my team before, during and after our projects, to help us to consciously avoid misconduct. https://ukrio.org/wp-content/uploads/UKRIO-Code-of-Practice-for-Research.pdf

4. Prevention of ‘questionable research practices’: The figure below, from another blog, does a great job of highlighting many of the ‘grey areas’ in research that border on misconduct. Sadly, we’ve all seen some of these – from data secrecy (often due to laziness or lack of understanding rather than malice) to p-hacking (where someone runs as many statistical tests as they need until they find/force a ‘significant’ result), manipulating authorships for political gain, or playing games with peer review to win a perceived race. The ethical questions around these practices are often brushed aside as we try to ‘pick our battles’ and avoid conflict, but they can only be stopped if we’re open about them and discuss the ramifications for the field and the wider world. I plan to display this figure and share anecdotes of bad past experiences with my team, so that they can learn from others’ bad practice in the same way I have. Unfortunately, some lessons are best learned as ‘how not to do it’.

https://blogs.lse.ac.uk/impactofsocialsciences/2015/07/03/data-secrecy-bad-science-or-scientific-misconduct/

5. Making documentation visible: To adhere to our own personal responsibilities around research integrity, we need to be clear on which rules and regulations we are each beholden to. I will keep ethics procedure documents, protocols, patient information sheets and consent forms visible and easily accessible to those who are authorized. I want my staff and students to know exactly what they can and can’t do in their research practice. I will also ensure they are familiar with the intricacies of each project’s approval, which can vary significantly. This sounds like a no-brainer – but ask yourself, have you ever worked on a project where you couldn’t access the latest full version of the ethics approval? Where maybe you had laid eyes on a draft or an approval letter, but not the full application? This happens far more often than it should, and leaves researchers unable to adequately adhere to their own personal responsibilities under the concordat linked above. It’s required, it’s an easy win, and I will make sure it’s the case for my team.

6. Safe space: I believe it’s crucial to create a safe environment where team members can ‘speak up’ about any of the above. This requires extra effort in the world of academia, which often discourages it. The life of an early career researcher is fragile: you bounce from contract to contract, always worrying about stability and fighting for the next grant, the next authorship. The slightest knock to your reputation can seriously affect your future career, and this conscious fear can lead to team members not feeling safe to call out questionable practice. It won’t be easy to foster an environment where the whole team feels comfortable speaking up about questionable practice without it leading to conflict, but I’m going to try my best to achieve this. I aim to make it abundantly clear to my team that they will not face any retaliation for calling out others’ questionable practice or identifying their own – no matter the consequence. Even if it ultimately means we have to scrap a massive project, I will thank them: I would much rather know that something has gone wrong so I can correct it, retract it or edit it, than continue on not knowing. Anyone who comes to me with an honest concern will be treated with gratitude.

These six measures are of course not exhaustive, and I aim to continue to appraise the literature on good research practices, so that as well as starting on a solid foundation, we can also build better and better practice as we go.

Onwards and upwards!

Particular thanks to Prof Kurinchi Gurusamy for pointing me towards some of these great resources!

How much does cancer research cost?

Times are strange due to #Covid19 – so we’re coming to you not from our lab, but on a virtual blackboard instead, from home! This video aims to give a whistle-stop tour of the costs involved in carrying out cancer research. We get asked about this a lot – so we’re here to show you where those valuable funds raised in pub quizzes, sponsored walks & raffles all go! Do you have a guess at how much it costs to carry out a full PhD? Watch the video to find out!

Hello world!

After adamantly refusing to blog for a very long time… it’s time to give in.

Let me introduce myself. I’m Susan. I’m a cancer researcher. My passion is understanding how to exploit vulnerabilities within tumours so that we can find better ways to treat the disease.

Over the last 13 years I’ve been developing my skills, learning more and more about cancer, and working towards the ultimate goal of starting my own research lab.

Now, it is finally happening!

As I work towards building ‘Heavey Lab’ at University College London, where I’ve recently been appointed as a Lecturer in Translational Medicine, I’ll endeavour to pop in now and then, chronicling each of the ‘firsts’ that come along with being a brand new member of faculty.

I’ve enjoyed communicating my research over the years, both online and in the real world, so that cancer patients, advocates, carers and students alike can get a taste of what the world of cancer research is really like. A lot of this #scicomm activity has been through Cancer Research Demystified, which I co-founded and run. I’ll share some of the material that we created for CRD here too, with brief introductions on why we wanted to share these aspects of our work with the world.

I’ll also share our publications, along with plain English explanations of what we found, why it was interesting to us, and with the benefit of hindsight – what happened next.

That’s all for now.

Stay curious!