
We need to reset the balance between the quality of research and impact storytelling

by Dr Alex Freeman

Problems outlined by a recent parliamentary committee report could be tackled by changing one element of research assessment.


In May, the House of Commons Science, Innovation and Technology Committee published its report into reproducibility and research integrity.

For anyone working in academic research or scholarly publishing, it would have held few surprises. This is a shame, because I think that behind the superficially unconnected issues it discusses lie root causes that need to be recognised and tackled.

Five problems

Take, for example, five of the problems the report identifies as undermining research integrity and culture:

  1. Inadequate professionalisation of skills, such as statistical and software skills, due to a lack of career pathways and recognition.
  2. Current research outputs are essentially narrative summaries of the work and do not include the underlying research itself (such as data and code), which means the integrity of the work can’t be assured and the work can’t easily be reproduced.
  3. There are not enough outlets for ‘negative’ or ‘confirmatory’ findings.
  4. Peer review is too often treated as a gatekeeping function, and therefore as a stamp of reliability, yet the process itself is not transparent.
  5. Problems with publications – either accidental or deliberate – are not easy or fast to correct, threatening the integrity of the research record.

One root cause

The report outlines a superficial solution to each of these problems, such as ‘researchers should share their research data’, without considering the incentive structure that makes such changes difficult.

To my mind, these problems are symptomatic of a single, deeper issue that, if diagnosed and treated, would solve many problems.

Researchers are assessed not on how good they are at research, but almost exclusively on one measure: the number and popularity of the academic papers they have written.

But popularity does not measure the quality of research, and incentivising it causes the very problems the report discusses. Consider each of the five issues above:

1. Inadequate professionalisation of skills

Researchers who specialise in statistical, software or other technical skills lack career pathways and recognition because they write little of any resulting academic paper.

The writing is usually done by people who have specialised in writing well, leaving the technical specialists without recognition or the career advancement that follows from it.

2. Good stories lose nuance – and data

The drive for ‘popular’ papers inevitably results in streamlining the story and sidelining detail that casual readers don’t want (such as raw data, detailed methods, analytical code).

Since only the paper itself (and only its popularity) is being assessed by potential funders or employers, researchers have no incentive to prepare data or methodology fit for sharing.

3. Popularity ratings favour drama

In a similar way, ‘negative’ or ‘confirmatory’ findings are not popular with casual readers, so under current metrics they offer researchers little benefit for the considerable effort of writing them up.

4. Increasing demand for free peer review is unsustainable

In a world where publishing papers is seen as virtually the only way to a career, the volume of submissions is increasing continuously.

Since journals mostly charge their authors, their readers or both, an ever-increasing number of journals has appeared to serve this demand (for a fee). But to meet the requirement that a paper be peer reviewed, journals need to find one, two or even three other academics willing to read and review each submission.

The quality of that review is highly variable: I get requests most weeks to review articles on which I have absolutely no expertise. A system where demand for reviewers outstrips supply, with no regulation or transparency over who is reviewing what, encourages poor practice. Making all peer reviews open will be revealing, but it is not a full solution.

5. Mistakes take years to be corrected

Scholarly publishing developed in the age of the printing press, when problems with a printed article could not really be corrected.

In the digital age, they can be – but this requires a whole new editorial infrastructure.

In the world of popularity-driven metrics, setting up a committee to review complaints and act on them is not incentivised. Corrections, if they are enacted at all, typically take years.

Fix this one thing

To get to the root of these symptoms of a system in trouble, we need to change how researchers’ work is assessed.

Rather than measuring the popularity of an article reporting on work done, can we design a system that assesses the quality of that work itself?

What we need is a complete reset of what ‘good’ means in research: what matters is intrinsic quality, not traditional bibliometric-style measures of impact or readability.

Impactful and, crucially, robust findings will emerge from our research base if we incentivise rigour and reproducibility.

The committee's report lays the groundwork for this approach by recommending a ‘registered report’ model, in which experimental work is sent for peer review at the protocol-design stage.

This means review feedback becomes a constructive part of the research process and can be taken on board before data is collected. Registered reports also guarantee publication of the work once the methods have been peer reviewed, breaking the link between ‘significant findings’ and publication.

However, I think we can go much further.

Institutions and funders – those in charge of researchers’ funding and careers – need to take notice and take control of the incentives that can drive both good and poor practice.

To read how the Octopus platform is aiming to tackle these problems and improve the quality of research across the world, visit the Octopus website.

About the author

Dr Alex Freeman
Director of the Octopus publishing platform