Half of Biomedical Research Studies Don't Stand Up to Scrutiny

So, what do we need to do about that?


What if I told you that half of the studies published in scientific journals today – the ones upon which news coverage of medical advances is often based – won’t hold up under scrutiny? You might say I had gone mad. No one would ever tolerate that kind of waste in a field as important – and expensive, to the tune of roughly US$30 billion in federal spending per year – as biomedical research, right? After all, this is the crucial work that hunts for explanations of diseases so they can be better treated or even cured.

Wrong. The rate of what is referred to as “irreproducible research” – more on what that means in a moment – exceeds 50%, according to a recent paper. Some estimates are even higher. In one analysis, just 11% of preclinical cancer research studies could be confirmed. That means that an awful lot of “promising” results aren’t very promising at all, and that a lot of researchers who could be solving critical problems based on previously published work end up just spinning their wheels.

So what gives? And how can we fix this problem?

What worms tell us about reproducibility

Although definitions of reproducibility and replication vary somewhat, for a study to be reproducible, another researcher needs to be able to replicate it – that is, use the same data and analysis to reach the same conclusions. There are lots of reasons why a study may not pass the replication test, from flat-out errors to a failure to adequately describe the methodology. A researcher may have forgotten a step when writing up the methodology, for example, counted data in the wrong category, or made a mistake in the code for a statistics program.

Faking results is another reason, but it’s not nearly as common as the others. Out-and-out fraud like that, or suspected fraud, accounts for a bit less than half of the 400-plus retractions per year. But with something like two million papers published annually, the vast majority of studies containing irreproducible data are never retracted. And most scientists would agree that they shouldn’t be; after all, most science is overturned one way or another over time. Retraction should be reserved for the most severe cases. That doesn’t mean irreproducible papers shouldn’t be somehow flagged, though.

Here’s a fresh example of a study that turned out not to be reproducible, because the results couldn’t be replicated: as Ben Goldacre relates in BuzzFeed, two economists published a massive study in 2004 claiming that a “deworm everyone” approach in Kenya “improved children’s health, school performance, and school attendance,” even among children several miles away who didn’t get deworming pills. Endorsed by the World Health Organization, it helped set policy that affects hundreds of millions of children annually in the developing world.

But now researchers have published papers describing two failures to replicate the original findings. Many of those findings just didn’t hold up, although some did.

That, as Goldacre explains, “is definitely problematic.” But the reanalyses were possible only because the original authors “had the decency, generosity, strength of character, and intellectual confidence to let someone else peer under the bonnet” – a rare situation indeed.

The fixes

Researchers are aware of the reproducibility problem, and some are trying to fix it. In response to alarming findings about the reproducibility of basic cancer research, a program called the Reproducibility Initiative has started providing “both a mechanism for scientists to independently replicate their findings and a reward for doing so.” It’s chosen 50 studies for independent validation – or not, since there’s certainly a chance the initial results won’t be reproducible. Those working on the project will perform the same kind of analyses that researchers did in the worm study replications. A similar effort has been ongoing in psychology, and other projects are under way in the social sciences.

All of these efforts will require scientists to share data, as the authors of the deworming study did. Many funders have required data sharing in human studies for some years now, and many journal editors encourage it. And while the requirement isn’t met 100% of the time, compliance is growing. Some basic science journals are moving to require it, too.

Perhaps more important, however, is that researchers – and the public that funds many of them – realize that science is a process, and that all knowledge is provisional. “It’s not just naive to expect that all research will be perfectly free from errors,” writes Goldacre, “it’s actively harmful.” Journalists, take note.

Translated into policy, that means valuing replication efforts, which right now are essentially unfunded and hardly ever published. If we want scientists to validate others’ work, we’ll need to create grants to do that. That means digging up additional funding, but replicating a study costs a tiny fraction of what the original work does. Funding new studies based on those that turn out to be irreproducible…well, now that’s expensive.

This article is republished from The Conversation under a Creative Commons license. Read the original article here: https://theconversation.com/half-of-biomedical-research-studies-dont-stand-up-to-scrutiny-and-what-we-need-to-do-about-that-45149.
