My love/hate relationship with impact metrics.

Academic impact metrics fascinate me. They always have. I’m the kind of person who loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.

Image: ‘graph going up’ icon, from SeekPNG

But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.

Journal impact factor is a great example of a problematic and overly relied-upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines and over time, outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter, which was new to me! (The impact factor is essentially a mean: the citations a journal’s items from the previous two years receive in a given year, divided by the number of citable items – so a handful of blockbuster papers can drag the whole average up.)

There are problems.

They are noteworthy.

But we still use impact factor religiously regardless.

My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing publication record as part of researcher track record, as per the San Francisco Declaration on Research Assessment (DORA): https://sfdora.org/read/. Naturally, these reminders would often be ignored.

There’s a bit of a false sense of security around ‘high impact’ journals. That feeling of ‘surely this has been so thoroughly and rigorously peer reviewed that it MUST be true’. But sadly this is not the case. Some recent articles published in very high impact journals (New England Journal of Medicine, Nature, The Lancet) were retracted, having been found to include fabricated or unethical research. These can be read about at the following links:

1. “New England Journal of Medicine reviews controversial stent study”: https://www.bmj.com/content/368/bmj.m878

2. “Two retractions highlight long-standing issues of trust and sloppiness that must be addressed”: https://www.nature.com/news/stap-retracted-1.15488

3. “Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis”: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)31324-6/fulltext

Individual metrics such as the H-index also typically rely on citations. An author’s H-index is the largest number H such that they have H papers which have each been cited at least H times. For example, a researcher with 4 papers that have each been cited at least 4 times (but not 5 papers cited at least 5 times) has an H-index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H-index remains 4. The way the H-index is calculated limits the influence of unusually highly cited articles, such as the example just given, reducing the effect of outliers.
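If it helps to see the arithmetic, here’s a minimal sketch in Python of that definition – the citation counts are made up purely for illustration, including the 200-citation outlier, which barely matters:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still 'supports' an H-index of rank
        else:
            break
    return h

# Made-up citation counts per paper: one 200-citation outlier plus a modest tail.
print(h_index([200, 6, 5, 4, 3, 1, 0]))  # -> 4: the outlier doesn't raise it
```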

The H-index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor, it is a metric based on citations, and citations do not necessarily imply quality or impact.

Another key limitation is that the H-index does not take authorship position into account. Depending on the field, the first author may have carried out the majority of the work and written the majority of the manuscript – but the seventeenth author on a fifty-author paper will get the same benefit from that paper to their own personal H-index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.

An individual’s H-index will also improve over time, given that it takes into account the quantity of papers they have written and the citations on those papers – which themselves accumulate over time. The H-index therefore correlates with age, making it difficult to compare researchers at different career stages using this metric.

Then of course there’s also the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority. ResearchGate is one of the most blatant: it openly gives significant extra weight to reads, downloads, recommendations and Q&A posts within its own website when calculating its impact metrics, the ‘RG Score’ and ‘Research Impact’ – a thinly veiled advertisement for ResearchGate itself.

If you’re looking for a bad metric rabbit hole to go down, please enjoy the wide range of controversy both highlighted by and surrounding Beall’s lists: https://beallslist.net/misleading-metrics/

Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators. These can include journal article indicators (page views, downloads, saves to social bookmarks), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulty translating the evidence into specific types of impact.

Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, ResearchGate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.

I can’t help it, I like to see numbers that go up!

In an effort to address these issues, I made a somewhat naive attempt at designing my own personal research impact metric this summer. It took authorship position into account, as well as weighting different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom, to see which papers represented my most significant personal contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor etc. – so it was still very far from perfect.
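For anyone curious what that looked like, here’s a minimal sketch of the general idea – the article-type weights, the first/last-author rule and the square-root damping of citations below are illustrative assumptions for this post, not my actual formula:

```python
# Hypothetical personal-impact score: all weights and rules are illustrative only.
ARTICLE_WEIGHT = {"original": 1.0, "review": 0.8, "conference": 0.4, "editorial": 0.2}

def position_weight(position, n_authors):
    """Full credit for first or last author, a smaller share for middle authors."""
    return 1.0 if position in (1, n_authors) else 0.3

def personal_score(article_type, position, n_authors, citations):
    # Still leans on citation counts (the very pitfall noted above),
    # just damped with a square root so outliers dominate less.
    damped_citations = (1 + citations) ** 0.5
    return ARTICLE_WEIGHT.get(article_type, 0.5) * position_weight(position, n_authors) * damped_citations

# e.g. an original article, third author of twenty, with 40 citations:
print(round(personal_score("original", 3, 20, 40), 2))  # -> 1.92
```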

If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!

Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.

Research integrity: good practice for new PIs!

Everyone loves a fresh start. Founding a research group is an exciting time in anyone’s career, and offers a great opportunity for a clean slate – and a chance to embed good practice within our team right from the get-go!

For me, this is my first year as a member of faculty, and I’m hoping to recruit the first members of my research team as soon as COVID settles down a bit. I’ve also been lucky enough to get involved in co-leading a postgraduate module on research methodologies this year, for which I am developing content on research integrity alongside a Professor of evidence-based medicine. He has a wealth of knowledge on these topics, and has highlighted a range of evidence-based resources that we’ve been able to incorporate into our teaching. It’s great timing, so I also plan to incorporate these into the training that I provide for my research team, as we hopefully lay the foundations for a happy, productive and impactful few decades of the ‘Heavey lab’.

Here are six examples of good practice that I plan to incorporate, along with some links if you’d like to use them in your own teaching/research.

  1. Research integrity: this is key to ensuring that our work is of the utmost quality, that it can be replicated and validated, and that it can ultimately drive change in the world. While this is something researchers often discuss ad hoc over coffee, there are also formal guidelines available, and these remove some of the ambiguity around individual versus institutional responsibilities on this topic. Below you’ll find a link to the UK Concordat to Support Research Integrity. It is a detailed summary of the agreements signed by UK funding bodies, higher education institutions and relevant government departments, setting out the specific responsibilities we all have with regard to the integrity of our research. I intend to go through this with my team so they are clear on their own responsibilities as well as mine, and those of our funding bodies and institutions. https://www.universitiesuk.ac.uk/policy-and-analysis/reports/Documents/2019/the-concordat-to-support-research-integrity.pdf
  2. Prevention of research waste: research waste should be actively avoided. This figure is a clear summary, and I’ll keep it visible to my team so that we can all work together to prevent wasting our own time and resources, and maximise the impact of our work. Some of these points force us to really raise the game, and I’m excited to get stuck in.

Figure ref: Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101-104. doi:10.1016/S0140-6736(13)62329-6

3. Prevention of misconduct: The word ‘misconduct’ may strike fear into the heart – but it describes a whole range of things, not just the extreme cases. Misconduct is not always intentional, and should be actively and consciously avoided rather than assuming ‘we’re good people, I’m sure we’re not doing anything wrong’. Here’s a quick checklist that you can use as a code of practice, to keep track of your research integrity and prevent research waste or misconduct. It’s not as detailed as the last link, and I plan to use it with each member of my team before, during and after our projects, to help us consciously avoid misconduct. https://ukrio.org/wp-content/uploads/UKRIO-Code-of-Practice-for-Research.pdf

4. Prevention of ‘questionable research practices’: The figure below, from another blog, does a great job of highlighting many of the ‘grey areas’ in research that border on misconduct. Sadly, we’ve all seen some of these – from data secrecy (often due to laziness or lack of understanding rather than malice) to p-hacking (where someone runs as many statistical tests as they need until they find/force a ‘significant’ result), manipulating authorships for political gain, or playing games with peer review to win a perceived race. The ethical questions around these practices are often brushed aside as we try to ‘pick our battles’ and avoid conflict, but they can only be stopped if we’re open about them, and discuss the ramifications for the field and the wider world. I plan to display this figure and share anecdotes of bad past experiences with my team, so that they can learn from others’ bad practice in the same way I have. Unfortunately some lessons are best learned as ‘how not to do it’.

https://blogs.lse.ac.uk/impactofsocialsciences/2015/07/03/data-secrecy-bad-science-or-scientific-misconduct/

5. Making documentation visible: To adhere to our own personal responsibilities around research integrity, we need to be clear on which rules and regulations we are each beholden to. I will keep ethics procedure documents, protocols, patient information sheets and consent forms visible and easily accessible to those who are authorised. I want my staff and students to know exactly what they can and can’t do in their research practice. I will also ensure they are familiar with the intricacies of each project’s approval, which can vary significantly. This sounds like a no-brainer – but ask yourself, have you ever worked on a project where you couldn’t access the latest full version of the ethics approval? Where maybe you had laid eyes on a draft or an approval letter, but not the full application? This happens far more often than it should, and leaves researchers unable to adequately adhere to their own personal responsibilities under the concordat linked above. It’s required, it’s an easy win, and I will make sure it’s the case for my team.

6. Safe space: I believe it’s crucial to encourage a safe environment where team members can ‘speak up’ about any of the above. This requires extra effort in the world of academia, which often discourages it. The life of an early career researcher is fragile, as you bounce from contract to contract, always worrying about stability and fighting for the next grant, the next authorship. The slightest knock to your reputation can seriously affect your future career, and this conscious fear can lead to team members not feeling safe to call out questionable practice. It won’t be easy to foster an environment where the whole team feels comfortable speaking up about questionable practice without it leading to conflict, but I’m going to try my best to achieve this. I aim to make it abundantly clear to my team that they will not face any retaliation for calling out others’ questionable practice or identifying their own – no matter the consequences. Even if it ultimately means we have to scrap a massive project, I will thank them. I would much rather know that something has gone wrong so I can correct it, retract it or edit it, than carry on not knowing. Anyone who comes to me with an honest concern will be treated with gratitude.

These six measures are of course not exhaustive, and I aim to continue to appraise the literature on good research practices, so that as well as starting on a solid foundation, we can also build better and better practice as we go.

Onwards and upwards!

Particular thanks to Prof Kurinchi Gurusamy for pointing me towards some of these great resources!

Can cancer research be done from home?

Naturally, when the COVID-19 lockdowns began, our laboratory-based research had to take a pause, and we had to stay at home.

Is it possible to work from home as a scientist?

Yes!

I made this video a couple of weeks into lockdown, in which I explained that there is still plenty of science that can be done without a lab. I also promised to check in later with how things went, so I’ll do that here now!

It’s now about five months later, and things have largely stayed the same…

Pubs and restaurants have reopened but I haven’t ventured into one just yet. I’m still going out for walks, and almost always wearing a mask, even in open spaces (except during the occasional isolated picnic!)

A few weeks ago, our labs began to reopen, but at very limited capacity. I haven’t been back yet – I am leaving the space to those who need it most: the final year PhD students!

I have repeatedly found myself thanking my lucky stars that I am not trying to finish a PhD this year. For those of you that are, I am thinking of you, and if there is any way that I can help you, please let me know!

I have been busy preparing for the upcoming semester, when I’ll be delivering teaching online to our undergraduate and postgraduate students. Being a module lead is a new experience for me, so leading not one, not two, but THREE modules and adapting them for online learning is going to be quite a challenge! I am so lucky that the rest of our teaching staff have been so accommodating and helpful in showing me the ropes. I hope the students enjoy my modules…

Research still ticks along, with some data getting analysed, some thesis projects getting written up, and some papers getting published, but still no laboratory work.

My current plan is to focus on honing my teaching skills, writing and project planning this semester, and then if all goes well, get stuck back into some lab work in the new year, hopefully with some new students alongside me!

Time will tell whether this goes to plan or not!