
Academic Publishers Retract More Than 120 Papers After Learning They Were 'Computer-Generated Nonsense'

Dozens of secret Sokals


Nature has the lede of the week:

The publishers Springer and IEEE are removing more than 120 papers from their subscription services after a French researcher discovered that the works were computer-generated nonsense.

The researcher is Cyril Labbé, a computer scientist who "has catalogued computer-generated papers that made it into more than 30 published conference proceedings between 2008 and 2013." According to Nature, he

developed a way to automatically detect manuscripts composed by a piece of software called SCIgen, which randomly combines strings of words to produce fake computer-science papers. SCIgen was invented in 2005 by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge to prove that conferences would accept meaningless papers—and, as they put it, "to maximize amusement"…. SCIgen is free to download and use, and it is unclear how many people have done so, or for what purposes. SCIgen's output has occasionally popped up at conferences, when researchers have submitted nonsense papers and then revealed the trick.

Labbé does not know why the papers were submitted—or even if the authors were aware of them. Most of the conferences took place in China, and most of the fake papers have authors with Chinese affiliations. Labbé has emailed editors and authors named in many of the papers and related conferences but received scant replies; one editor said that he did not work as a program chair at a particular conference, even though he was named as doing so, and another author claimed his paper was submitted on purpose to test out a conference, but did not respond on follow-up.

Retraction Watch notes that this story undercuts some of the conclusions people have drawn from a feature that Science published last fall. In that report, a researcher, John Bohannon, posing as a scholar at an imaginary African institute, submitted a nonsense paper to 304 open-access journals, and more than half of them accepted it, a result touted in some quarters as showing a link between the open-access world and crappy quality control. (Open-access publications allow anyone to read their papers online, while conventional academic outlets charge readers high fees for access.) But Bohannon didn't submit his faux study to any traditional journals, so his results don't really allow you to compare the old and new models.

All of the outlets that Labbé exposed, by contrast, were subscription-based. The problem is evidently more widespread than some people thought.

[Hat tip: Bryan Alexander.]