
Stunning Rejection of Scientific Values of Transparency and Skepticism at New England Journal of Medicine

Outside researchers might "even use the data to try to disprove what the original investigators had posited."

(Illustration: Jason Keisling)

The headline is taken from a dismayed tweet by the prominent open science advocate Brian Nosek. Nosek, a University of Virginia psychologist, is the co-founder of the Center for Open Science. He also led the massive project that sought to replicate 100 psychological studies drawn from leading journals. The researchers reported in Science that only about one-third of the selected studies' results could be reproduced.

Nosek is one of the sources I rely upon in my Reason feature article, "Broken Science," in which I analyze the crisis of scientific irreproducibility and some of the solutions that are even now being implemented to address it.

So what provoked Nosek's tweet? The prestigious New England Journal of Medicine has just published an editorial on "Data Sharing" that actually argues for limiting data sharing. Why? The editorial's most distressing claim is that requiring investigators to share their data might mean that other researchers could "even use the data to try to disprove what the original investigators had posited." The bastards!

From the editorial:


The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick.

However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have concerns about the details. The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters. Special problems arise if data are to be combined from independent studies and considered comparable. How heterogeneous were the study populations? Were the eligibility criteria the same? Can it be assumed that the differences in study populations, data collection and analysis, and treatments, both protocol-specified and unspecified, can be ignored?

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group's data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as "research parasites."

The concept of data sharing is not just beautiful; it is an essential aspect of doing science.

As I report in "Broken Science," this is not the first time that NEJM editor Jeffrey Drazen has discounted the issue of scientific replicability. He essentially rejected the findings of Stanford University statistician John Ioannidis' seminal 2005 article, "Why Most Published Research Findings Are False." As I noted:

Initially, some researchers argued that Ioannidis' claims were significantly overstated. "We don't think the system is broken and needs to be overhauled," declared New England Journal of Medicine editor Jeffrey Drazen in The Boston Globe in 2005.

Evidently, Drazen hasn't changed his mind, even as evidence has piled up over the past 10 years that researchers are producing and publishing false positives on a massive scale.

If NEJM's editors are afraid that outside researchers won't understand the choices the original researchers made in defining their procedures and parameters, Nosek's Center for Open Science has a solution: use its Open Science Framework.

The Open Science Framework (OSF) is a free, open-source Web platform that supports research project management, collaboration, and the permanent archiving of scientific workflow and output. The OSF makes studies easier to replicate because outside researchers can see and retrace the steps taken by the original scientific team.

It should go without saying that "research parasitism" is wrong: researchers who use the results produced by other investigators should fully acknowledge that fact and give them credit when they publish their additional findings. But the lack of replication and the proliferation of false results in the scientific literature are far bigger problems than "research parasitism."

Bottom line: Nosek is right to be dismayed by NEJM's rejection of scientific transparency and skepticism. Let's hope that the editors will rethink this step backward toward research secrecy sooner rather than later.