I’m currently working on a new video for Cancer Research Demystified, where I’m going to attempt to answer a lofty question: what is the biggest challenge in cancer research today?
For the video, I’ll summarise a few different perspectives on this: those of patients, advocates, funders, institutions, the public, and us researchers ourselves. The most common answer so far is of course ‘there’s more than one!’, so I’ll cover as many as I can, and give my two cents on what could be considered the one single greatest challenge.
The NCRI cover their top priorities here (of which there are, of course, more than one!), and you can see similar lists from many other groups. But what is the biggest one?! I’ve been asking around on Twitter, Instagram and Facebook, and I’ve gotten 24 responses so far, mostly from other cancer researchers, but some from patients & funders too. Before I compile, compare & contrast these, I wanted to ask you too – what do you think is the single greatest challenge in cancer research today? I’ll give you a head start by saying that the answers I’m getting fall into two common themes: biology & barriers.
Does one of these jump out at you as being a bigger challenge than the others? Do you have something to add? Comment below or DM me on Twitter/Facebook/Instagram/Reddit/LinkedIn and I’ll discuss your thoughts (anonymised if via DM) in our upcoming video!
A quick blog this week as I’m in the midst of lots of teaching & grant writing! On this week’s teaching agenda I’ve got research reporting, research presentation skills, in vitro, in vivo, and in silico research, acute & chronic inflammation, image analysis and drug efficacy. I thought I’d share with you some of the resources we are using in one of these lessons (not compiled by me), as frankly – they’re quite useful!
Research reporting – something we all need to get right!
According to the Declaration of Helsinki, researchers and authors have a duty to make their results publicly available, following accepted guidelines for ethical reporting.
Naturally we’ll be teaching our students general tips on which types of content should be included in the different sections of a general research paper. We also discuss why it’s important to report our research fully, and what can go wrong when we don’t!
We also give the students a list of guidelines for specific types of research reports. Some of these are slightly peripheral to my own research interests, and I found them quite interesting, so I thought you might too! If you’re new to research reporting, perhaps a bit rusty, or trying to remember one of those many many reporting acronyms, then here’s an overview that might be helpful for you.
EQUATOR have also developed a wizard that can help you decide how to report your research. This tool asks what type of research you are conducting, and identifies useful checklists to make sure you include the required information in your report: https://www.goodreports.org/
The list! (Courtesy of Prof Kurinchi Gurusamy):
• Consolidated Standards of Reporting Trials (CONSORT) – www.consort-statement.org – Reporting of randomised controlled trials (RCTs)
• Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) – www.strobe-statement.org – Reporting of observational studies
• Standards for Reporting Studies of Diagnostic Accuracy (STARD) – www.stard-statement.org – Reporting of diagnostic accuracy studies
• Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) – www.bris.ac.uk/quadas – Quality assessment of diagnostic accuracy studies
A quick blog this week! I wanted to take a moment to introduce one of our favourite Cancer Research Demystified videos. Here, we give a tour of our lab so that cancer patients, carers, students and anyone with an interest can see what cancer research really looks like!
During our first couple of years meeting with cancer patients, Hayley and I noticed that for a lot of them, the main frame of reference for what a science lab looks like was ‘the telly’. Whether it was CSI, or even a particularly slick BBC News segment, it was clear that research labs were expected to be minimalist, futuristic, and full of coloured liquids.
The occasional person would describe the opposite picture – dark wooden cabinets filled with dusty glass specimen jars, stained benches, blackboards, worn-off labels on mystery chemicals, and that strong, ambiguous smell.
Of course, neither is accurate. Real cancer research labs are somewhat modern, sure, but even the most expensive and ‘futuristic’ equipment typically looks more like a tumble dryer than an interactive hologram, and though much of our equipment does use lasers – they are hidden deep inside rather than scanning the lab for spies! Blackboards are long gone, replaced with whiteboards; dusty unlabelled jars have been disposed of under strict health and safety protocols; although stains on benches…? Well, some of those remain.
We did face some mild resistance when we first attempted to film this video. A senior member of staff advised us that patients want the comfort of knowing that the best brains in the world are working on a cure, using the best technology and most impressive workspaces. That’s why, we were told, we needed to clear out so much lab mess before the camera crews came in for a news segment.
But frankly – those perfect, sterile, swish labs are out there – if someone wants to see a scientist in a never-before-worn white coat pipetting some pink liquid into a plate, all they need to do is turn on the news. We wanted to show something different – and frankly, more honest – warts and all!
The video we ended up with is a little on the nose perhaps, but we felt it needed to be. We show the reality of what it’s like to work in a lab (well, close to reality anyway – we filmed after hours to avoid getting in people’s way, so it is unusually quiet). Some of the differences between day-to-day lab work and office work are highlighted, such as not being able to eat, drink or touch up your make-up within the lab, and having to wear appropriate PPE.
I came back to this video during lockdown because I missed the lab. I still haven’t been back in there, and I’m not sure when I next will be. Other people are back there now though, under strict covid protocols, with significantly reduced capacity and masks. I hope to join them one day, but for now I’m minding my asthmatic lungs at home!
If you’re a cancer patient or carer – here’s a real look at where we’re carrying out the research to build better diagnostics and therapeutics. If you’re a student thinking about doing a medical/biology based research project – this is the sort of place you’ll find yourself working. Please enjoy!
For more Cancer Research Demystified content, here’s where you can find us:
Academic impact metrics fascinate me. They always have. I’m the kind of person that loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.
But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.
Journal impact factor is a great example of a problematic and overly relied-upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines & time, outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter – which was new to me!
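For anyone who hasn’t met the calculation itself, the standard two-year impact factor is just a mean, which is exactly why outliers can skew it so badly. Here’s a minimal sketch with entirely made-up numbers:

```python
# Journal impact factor for year Y (the standard two-year definition):
# citations received in Y by items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.
# All numbers below are hypothetical, purely for illustration.
citations_2020_to_2018_2019_items = 1500
citable_items_2018_2019 = 300

impact_factor_2020 = citations_2020_to_2018_2019_items / citable_items_2018_2019
print(impact_factor_2020)  # 5.0, a single mean that a handful of blockbuster papers can easily inflate
```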
There are problems.
They are noteworthy.
But we still use impact factor religiously regardless.
My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing publication record in relation to researcher track record, as per the San Francisco declaration https://sfdora.org/read/. Naturally, these reminders would often be ignored.
There’s a bit of a false sense of security around ‘high impact’ journals. That feeling of surely this has been so thoroughly and rigorously peer reviewed that it MUST be true. But sadly this is not the case. Some recent articles published in very high impact journals (New England Journal of Medicine, Nature, Lancet) were retracted, having been found to include fabricated research or unethical research. These can be read about at the following links:
Individual metrics such as the H-index also typically rely on citations. An author’s H-index is calculated as the number of papers (H) that have been cited at least H times. For example, a researcher who has at least 4 papers that have each been cited at least 4 times has an H-index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H-index remains 4. By construction, the H-index caps the influence of unusually highly cited articles, such as the example given above, reducing the effect of outliers.
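If you like to see definitions as code, here’s a minimal sketch of that calculation (my own illustrative function, not any official implementation):

```python
def h_index(citations):
    """Return the largest h such that at least h papers have at least h citations each."""
    h = 0
    # Walk through the citation counts from most to least cited.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The example from the text: one 200-citation outlier alongside a few
# modestly cited papers still gives an H-index of just 4.
print(h_index([200, 6, 5, 4, 1]))  # -> 4
```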
The H-index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor – it is a metric based on citations, and citations do not necessarily imply quality or impact.
Another key limitation is that the H-index does not take authorship position into account. Depending on the field, the first author may have carried out the majority of the work and written the majority of the manuscript – but the seventeenth author on a fifty-author paper will get the same benefit from that paper to their own personal H-index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.
An individual’s H-index will also improve over time, given that it takes into account the quantity of papers they have written and the citations on those papers – which themselves accumulate over time. Therefore, the H-index correlates with age, making it difficult to compare researchers at different career stages using this metric.
Then of course there’s the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority. ResearchGate is one of the most blatant: in calculating its impact metrics, the ‘RG Score’ and ‘Research Impact’, it openly gives significant extra weight to reads, downloads, recommendations and Q&A posts within its own website – a thinly veiled advertisement for ResearchGate itself.
Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators: journal article indicators (page views, downloads, saves to social bookmarks), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulty translating the evidence into specific types of impact.
Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, ResearchGate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.
I can’t help it, I like to see numbers that go up!
In an effort to fix these issues, I made a somewhat naive attempt at designing my own personal research impact metric this summer. It took authorship position into account, and weighted different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom, to see which of my contributions represented my most significant contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor etc. – so it was still very far from perfect.
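To give a flavour of what I mean, here’s a toy sketch with completely made-up weights (illustrative only, not my actual formula):

```python
# Hypothetical weights, purely for illustration.
ARTICLE_WEIGHTS = {"original": 1.0, "review": 0.6, "editorial": 0.2, "proceedings": 0.2}

def author_weight(position, n_authors):
    """Full credit for first or last authorship, half credit for middle positions."""
    return 1.0 if position in (1, n_authors) else 0.5

def paper_score(article_type, position, n_authors, citations):
    # Citations still sneak in here, which is the very pitfall described above.
    return ARTICLE_WEIGHTS.get(article_type, 0.5) * author_weight(position, n_authors) * citations

# A middle-author editorial scores far below a first-author original
# article with the same citation count:
print(paper_score("editorial", 5, 12, 30))  # 3.0
print(paper_score("original", 1, 12, 30))   # 30.0
```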
If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!
Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.
Everyone loves a fresh start. Founding a research group is an exciting time in anyone’s career, and a great opportunity to start with a clean slate and embed good practice within the team right from the get-go!
For me, this is my first year as a member of faculty, and I’m hoping to recruit the first members of my research team as soon as covid settles down a bit. I’ve also been lucky enough to get involved in co-leading a postgraduate module on research methodologies this year, for which I am developing content on research integrity alongside a Professor of evidence-based medicine. He has a wealth of knowledge on these topics, and has highlighted a range of evidence-based resources that we’ve been able to incorporate into our teaching. It’s great timing, so I also plan to incorporate these into the training that I provide for my research team, as we hopefully lay the foundations for a happy, productive and impactful few decades of ‘Heavey lab’.
Here are six examples of good practice that I plan to incorporate, along with some links if you’d like to use them in your own teaching/research.
1. Research integrity: This is key to ensuring that our work is of the utmost quality, that it can be replicated and validated, and that it can ultimately drive change in the world. While this is something researchers often discuss ad hoc over coffee, there are also formal guidelines available, and these remove some of the ambiguity around individual versus institutional responsibilities related to this topic. Below you’ll find a link to the UK concordat to support research integrity. It is a detailed summary of the agreements signed by UK funding bodies, higher education institutes and relevant government departments, setting out the specific responsibilities we all have with regard to the integrity of our research. I intend to go through this with my team so they are clear on their own responsibilities as well as mine, and those of our funding bodies and institutes. https://www.universitiesuk.ac.uk/policy-and-analysis/reports/Documents/2019/the-concordat-to-support-research-integrity.pdf
2. Prevention of research waste: Research waste should be actively avoided. This figure is a clear summary, and I’ll keep it visible to my team so that we can all work together to prevent wasting our own time and resources, and maximise the impact of our work. Some of these points force us to really raise our game, and I’m excited to get stuck in.
Figure ref: Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101-104. doi:10.1016/S0140-6736(13)62329-6
3. Prevention of misconduct: The word ‘misconduct’ may strike fear in the heart – but it describes a whole range of things, not just the extreme cases. Misconduct is not always intentional, and should be actively and consciously avoided rather than assuming ‘we’re good people, I’m sure we’re not doing anything wrong’. Here’s a quick checklist that you can use as a code of practice, to keep track of your research integrity and prevent research waste or misconduct. It’s not as detailed as the last link, and I plan to use it with each member of my team before, during and after our projects, to help us to consciously avoid misconduct. https://ukrio.org/wp-content/uploads/UKRIO-Code-of-Practice-for-Research.pdf
4. Prevention of ‘questionable research practices’: The figure below, from another blog, does a great job of highlighting many of the ‘grey areas’ in research that border on misconduct. Sadly, we’ve all seen some of these – from data secrecy (often due to laziness or lack of understanding rather than malice), to p-hacking (where someone runs as many statistical tests as they need until they find/force a ‘significant’ result), to manipulating authorships for political gain, or playing games with peer review to win a perceived race. The ethical questions around these practices are often brushed aside as we try to ‘pick our battles’ and avoid conflict, but they can only be stopped if we’re open about them and discuss the ramifications for the field and the wider world. I plan to display this figure and share anecdotes of bad past experiences with my team, so that they can learn from others’ bad practice in the same way I have. Unfortunately, some lessons are best learned as ‘how not to do it’.
5. Making documentation visible: To adhere to our own personal responsibilities around research integrity, we need to be clear on which rules and regulations we are each beholden to. I will keep ethics procedure documents, protocols, patient information sheets and consent forms visible and easily accessible to those who are authorized. I want my staff and students to know exactly what they can and can’t do in their research practice. I will also ensure they are familiar with the intricacies of each project’s approval, which can vary significantly. This sounds like a no-brainer – but ask yourself, have you ever worked on a project where you couldn’t access the latest full version of the ethics approval? Where maybe you had laid eyes on a draft or an approval letter, but not the full application? This happens far more often than it should, and leaves researchers unable to adequately adhere to their own personal responsibilities under the concordat linked above. It’s required, it’s an easy win, and I will make sure it’s the case for my team.
6. Safe space: I believe it’s crucial to encourage a safe environment where team members can ‘speak up’ about any of the above. This requires extra effort in the world of academia, which often discourages it. The life of an early career researcher is fragile, as you bounce from contract to contract, always worrying about stability and fighting for the next grant, the next authorship. The slightest knock to your reputation can seriously affect your future career, and this conscious fear can leave team members not feeling safe to call out questionable practice. It’s not going to be easy to foster an environment where the whole team feels comfortable speaking up about questionable practice without it leading to conflict, but I’m going to try my best to achieve this. I aim to make it abundantly clear to my team that they will not face any retaliation for calling out others’ questionable practice or identifying their own – no matter the consequence. Even if it ultimately means we have to scrap a massive project, I will thank them. I would much rather know that something has gone wrong so I can correct it, retract it or edit it, than continue on not knowing. Anyone who comes to me with an honest concern will be treated with gratitude.
These six measures are of course not exhaustive, and I aim to continue to appraise the literature on good research practices, so that as well as starting on a solid foundation, we can also build better and better practice as we go.
Onwards and upwards!
Particular thanks to Prof Kurinchi Gurusamy for pointing me towards some of these great resources!
I’ve always been a fan of writing ‘To Do’ lists – they’re great for keeping track of small bits of work that could slip between the cracks during a busy day or week, and they’re also great for a little dopamine burst when you tick off an item.
Of course the drawback is the list always grows longer, and never gets completed!
Recently, as part of my transition into life as a member of faculty, I’ve started occasionally writing the opposite version, which I’ve taken to calling my ‘To Did’ list. Yes, I realize some people go with ‘To Done’ – but it’s on my ear now and I’m sticking to it!
The list consists of things that I have taken care of in a given day or week, and forces me to take a few minutes to acknowledge the work that I have managed to get done, rather than always focusing on the mountain ahead.
It also allows me to visualise the spread of different types of work that I’ve done, to see if it aligns roughly with how I intended to balance my time between research, teaching, and other tasks.
This is useful, because I’ve received warnings from quite a few academics that in my first year as a lecturer I would likely end up doing virtually all teaching, and virtually no research, and that I should try to make sure my research isn’t neglected if at all possible.
I always wondered whether this early research-teaching imbalance is real, or whether we academics just convince ourselves that the balance is shifted farther towards teaching than it really is. I suspect this could happen because we have a tendency to feel perpetually behind on our research, and teaching ‘To Do’ jobs usually have harder deadlines than research ones, so we often feel like we’re being forced to spend time on teaching tasks instead of research ones… Maybe it’s just a trick of the mind, and we are actually doing a bit more research than we think? Or maybe it’s true, and my research will take a huge hit in year one unless I actively work to prevent it?
Of course, with covid-era teaching requiring significant extra hours from teaching staff, and preventing new research experiments from being carried out within the lab during lockdown, I suspected that I might fall victim to this potential research-teaching imbalance even more than your average first year PI.
And given I am a scientist, the urge to collect data to answer this question was strong.
Hence the ‘To Did’ list.
Did it identify a huge imbalance toward teaching?
No, not really!
I’m writing this in the evening, having just written out my ‘To Did’ list for today. It seems nicely varied, with eight items that I spent roughly equal time on. The two most time consuming items (by only a small margin) were pure teaching, one item sat nicely on the teaching-research border, four items were pure research, and the smallest one was ‘other’.
Over the summer, before I brought in the ‘To Did’ list, I started going through old ‘To Do’ lists and highlighting research items yellow, teaching items green, and everything else blue, to try to collect similar data on how I was balancing these types of work. I found that yellow and green were almost perfectly equal, with blue less common. Which, to me, seems ideal – between the results of the ‘To Do’ & ‘To Did’ lists, I am reassured that things seem to be relatively well balanced so far!
An unexpected positive was that the ‘To Did’ list also highlighted for me how international my work has become, which hadn’t really clicked for me. Increasing my international network will (I hope) help my research career, and so it was exciting to notice items related to collaborations with Ireland, Finland, India and the US all in there alongside my main work in the UK.
Aside from the broad overview the ‘To Did’ list gives me of the variety of work I’m doing, it does also provide the same sort of dopamine release that ticking off a ‘To Do’ list does, only in this case, for me at least – it’s even better! Everything on my ‘To Did’ list is complete, even if it’s just a small step in a bigger picture. It’s something I’ve done that day, something I’ve accomplished, and something that is not hanging over me anymore.
One rule of my ‘To Did’ list is that I do not allow myself to write ‘wrote/read emails’ as an item on the list. This is because I’ve had a bad habit in the past of putting myself down by saying ‘all I did all day was emails’, when in actual fact I may have been troubleshooting research problems, liaising with collaborators, submitting proposals, planning projects or reviewing papers – email was purely the vehicle. Calling those items ‘emails’ is a bit like spending three days on a wet-lab experiment and saying ‘all I did the last few days was move stuff with my hands’, or teaching all day and saying ‘all I did today was speak!’ Writing these kinds of items on the list with verbs like liaised/reviewed/edited has made me acknowledge the reality of the work being done, and also helped me to feel better about my previously perceived lack of productivity during lockdown, while I was really missing the lab!
So whether you’re trying to collect data on how you break up your time, or just looking for reassurance that you’re still getting s#!t done during the pandemic, I wholeheartedly recommend writing a ‘To Did’ list.
I guess I can now add a 9th item to today’s list – writing this blog!
It was the final year of my PhD, and I was presenting a poster at a conference, alongside my supervisor Dr Kathy Gately. We were showing off our new panel of PI3K inhibitor resistant lung cancer cell lines, which we had developed and begun to characterize. We were excited to tease out which signalling pathways might be playing a role in resistance to these drugs.
Along came Dr Michael O’Neill, the co-founder of Inflection Bioscience, who had recently licensed a drug that targeted the PIM kinases. At the time, I had never heard of PIM. He saw our poster and suggested we test their drug in our cell lines. It seemed straightforward enough.
After a couple of quick ‘look see’ experiments, we ended up submitting a grant.
Then some student projects.
Before we knew it, this ‘quick win’ was becoming a driving interest for Kathy, and she was gathering researchers along the way (notably Dr Gillian Moore). I had left Kathy’s lab at this stage, but as a wider team we were beginning to build up a picture of how best we could potentially develop these drugs in the lung cancer space.
PIM research didn’t stop for Kathy, and it didn’t stop for me either.
When interviewing for a postdoc position in University College London with Dr Hayley Whitaker, I was asked ‘if you had access to human prostate cancer specimens, what would you do with them?’ On a whim, and with interview pressure weighing down on me, I responded ‘well there’s this really exciting drug target called PIM in lung cancer, I think it looks like it might be promising in prostate cancer too, so I’d probably run some experiments on that’.
I arrived home to Dublin that night, exhausted after a long day of travel & interviewing, and found out immediately that I’d been invited to a second round interview. This was great – but it would be in London again, in just a few days! I purchased a second pair of flights, cried over my bank balance for a moment, and then hunkered down in our basement office for the weekend, trying to pull together a presentation that had been assigned for the second round. The challenge that had been set was of course ‘if you had access to human prostate cancer specimens, what would you do with them?’ How could I present on anything other than PIM after suggesting it in my previous interview?!
I rushed a project pitch, which by chance turned out to be quite promising. There were a good few papers looking at PIM in prostate cancer, but not many looking at drug treatments, and none looking at the same co-targets that we were working on in lung cancer. I checked with Kathy whether it was ok for me to present this while rushing out of the building to get to the airport – but our conversation got slightly side-tracked when she told me she was expecting a baby! Safe to say PIM got a bit overlooked that lovely day.
The presentation went well, I got the job, and to my delight I was offered the chance to actually work on the project that I had pitched in the interview. What a wonderful opportunity for a postdoc to be given that level of freedom!
In order to differentiate my new prostate cancer project from the work Kathy was leading on, I set out to investigate a wider panel of drugs, including the PIM inhibitors but also quite a few others. The aim was to test promising late stage pre-clinical drugs in human prostate cancer tissue, using ex vivo culture and new omics technologies. I gathered some preliminary data and submitted it as a fellowship proposal, trying to position myself as someone who worked on drug development in general. Thankfully, I was successful.
It wasn’t meant to be a ‘PIM project’. But as luck would have it, PIM wasn’t going away.
One by one, the other drugs dropped off for one reason or another. Some couldn’t be investigated in an ex vivo model because they needed to be metabolised within the body, some needed to build up for a few weeks before an effect would be seen, some failed during concurrent animal testing, and some just showed disappointingly little activity in my model. By the time the work was close to publication, we were down to just 4 different treatments, and they were a very similar panel to what Kathy was leading on in lung cancer. I hope she forgives me!
Now, years later, we’ve just had our first original article come out on PIM in prostate cancer [1]. This is our first ‘flag in the sand’, where we put forward the idea of co-targeting PIM with the PI3K pathway. There are bigger and more detailed works to come from this in the future. If you’d like to read about the paper itself, I wrote a tweetorial that you can read unfurled here: https://threadreaderapp.com/thread/1300721602854871040
This paper came off the back of a couple of reviews on PIM as a drug target [3,4], and there is of course more on the way.
Now, plans are brewing for wider PIM collaborations, and who knows, maybe PIM will stick around in my world even longer.
Did I ever set out to become a PIM researcher? No, not particularly.
But I suppose the lessons learned here are to say yes to opportunities, and to follow the data – if something isn’t your ‘plan A’ but it might make a difference to cancer patients in the future, then why wouldn’t you follow it?
Extra credit to my friend AJ (@AyoksAJ) for his very inspiring ‘Say Yes’ presentation to our postdoc networking group a few years ago, which still sticks around in my mind, and led me to say YES to an opportunity that came my way this morning – let’s see where this one goes!
Thank you to Kathy, and to all the PIM friends I’ve made over the years.
Last week I spotted two tweets about opportunities. One said something along the lines of ‘stop telling PhD students you are giving us opportunities when really you’re dumping extra work on us’, and another implied that ‘opportunity’ is a euphemism for unpaid labour.
I have to admit, both of these led me to take a good hard look in the mirror!
Genuine opportunities for early career researchers are something I have always considered to be critically important in academia. Because of this, over the last few years I have consciously tried to offer PhD, MSc and BSc students what I saw as ‘opportunities’, in particular the chance to get their names on my papers.
This is something that is really important to me. I wasn’t lucky enough to get my name on other people’s papers when I was a student (except for one review!) and I would have killed for the chance to run a few PCRs or analyse some data to get an authorship. For me, this didn’t happen until I was 3+ years post PhD, at which point I had enough of my own first author papers that a co-authorship was less of a boost to my CV than it would have been earlier on.
As an undergrad or postgrad, if you’re hoping to become a PI one day, you already know that publications are key. But without the resources, funding, political sway or niche expertise that a more experienced researcher might have, you’re somewhat reliant upon others going out of their way to include you. Or at least that’s how I felt when it was me.
This is the reason I have made sure to include the students around me over the last few years, and as far as I was concerned – it was working! Recently, three of my project students have gotten their names on two papers each, and a couple of nearby PhD students have gotten their names on almost everything I publish. I must admit I’ve been giving myself a pat on the back for this… Sharing my modest success and building the CVs of the talented future PIs around me had felt like a privilege, and a rewarding endeavour.
But was I really just exploiting them?
Looking back, some of those students probably had no interest in building their CV towards a career in academia, and could probably take or leave the ‘opportunity’ to get their name on a paper. They of course each told me that they really wanted the authorships and were really grateful for them. But that’s exactly what I would have said too, regardless of how I felt – maybe they were just being polite!
With one student in particular, I remember getting a sense that perhaps they weren’t that interested in the ‘opportunity’ I had offered. It was someone I knew well, and I was comfortable being candid with them about this. I recall saying directly that there was no pressure to take it on, that I was happy to do it myself and the only reason I was offering it to them was to give them a chance at authorship. They insisted that they were keen to be involved, and went ahead and did the work, which amounted to a full figure and prominent authorship in the resulting paper. It was only afterwards that they admitted they actually had no interest in that particular topic, it was just extra work, and they had only done it as a favour for me.
I won’t lie, it felt like a bit of a gut punch. My ‘good deed’ had been perceived as unpaid labour.
Outside of academia, unpaid labour can be a huge problem, particularly now that social media has become so key in growing people’s businesses & careers. Stories of professional photographers or bakers getting asked to do weddings in exchange for Instagram posts, or artists being asked to create commissions for nothing but the offer of ‘great exposure’ are rampant, with some notable and entertaining examples on the ‘choosing beggars’ subreddit if you want a laugh: https://www.reddit.com/r/ChoosingBeggars/comments/gfkhqv/background_dancer_gets_an_offer_from_a_music/
Could this be the case with students in academia? Is authorship enough of a payment, or should we only offer to get students involved if we can actually pay them cold hard cash for the experiments they run or analyse? This would certainly vastly reduce the frequency with which I could give students authorships on my papers, as funds are generally not available to pay them.
Is it enough to frankly say to a student ‘I have no funding to pay you to run X experiment, and you don’t have to do it, but if you do, I will put your name on the paper’, or does that run the risk of them going along with it out of politeness, as happened to me recently?
If I were to stop offering these ‘opportunities’, would keen students who want to be PIs one day end up missing out? Students who would have been just as excited as me to get their name on a paper?
How do we determine whether we’re offering someone an opportunity or purely exploiting them?
I wish I had the answer to this, but as with many other things in my first year of being a member of faculty – I have no idea.
Naturally, when the COVID-19 lockdowns began, our laboratory based research had to take a pause, and we had to stay at home.
Is it possible to work from home as a scientist?
A couple of weeks into lockdown, I made this video, in which I explained that there is still plenty of science that can be done without a lab. I also promised to check in later with how things went, so I’ll do that here now!
It’s now about five months later, and things have largely stayed the same…
Pubs and restaurants have reopened but I haven’t ventured into one just yet. I’m still going out for walks, and almost always wearing a mask, even in open spaces (except during the occasional isolated picnic!)
A few weeks ago, our labs began to reopen, but at very limited capacity. I haven’t been back yet – I am leaving the space to those that need it most – the final year PhD students!
I have repeatedly found myself thanking my lucky stars that I am not trying to finish a PhD this year. For those of you that are, I am thinking of you, and if there is any way that I can help you, please let me know!
I have been busy preparing for the upcoming semester, when I’ll be delivering teaching online to our undergraduate and postgraduate students. Being a module lead is a new experience for me, so leading not one, not two, but THREE modules and adapting them for online learning is going to be quite a challenge! I am so lucky that the rest of our teaching staff have been so accommodating and helpful in showing me the ropes. I hope the students enjoy my modules…
Research still ticks along, with some data getting analysed, some thesis projects getting written up, and some papers getting published, but still no laboratory work.
My current plan is to focus on honing my teaching skills, writing and project planning this semester, and then if all goes well, get stuck back into some lab work in the new year, hopefully with some new students alongside me!
Times are strange due to #Covid19 – so we’re coming to you not from our lab, but on a virtual blackboard instead, from home! This video aims to give a whistle-stop tour of the costs involved in carrying out cancer research. We get asked about this a lot – so we’re here to show you where those valuable funds raised in pub quizzes, sponsored walks & raffles all go! Do you have a guess at how much it costs to carry out a full PhD? Watch the video to find out!