The NY Times made ethics in science a focus this week, after Science retracted a celebrated article on how canvassers could sway opinions on gay marriage: Scientists Who Cheat, by the NY Times Editorial Board. The editorial starts with this quote:
“Cheating in scientific and academic papers is a longstanding problem, but it is hard to read recent headlines and not conclude that it has gotten worse.”
The NY Times also published a list of major, newsworthy scientific retractions from the past couple of decades: Retracted Scientific Studies: A Growing List, by Michael Roston.
But I sensed a logical fallacy: more retractions could mean more liars, or it could mean better policing, or it could simply mean that there is more science in general. I asked Ivan Oransky, founder and editor of Retraction Watch, about this question after he tweeted out the NY Times editorial:
@ivanoransky Are there data to support this: “it is hard to read recent headlines & not conclude that [cheating in sci] has gotten worse”?
— Katie L. Burke (@_klburke) June 1, 2015
And he answered, citing a recent article he wrote for The Conversation:
— Ivan Oransky (@ivanoransky) June 2, 2015
In short, according to Oransky, an expert on retractions in science: no, there are no clear data showing that more retractions mean more lying. We do know that the increasing retractions are not just a function of increasing science studies. As Oransky notes in the article above, science studies have increased by 50%, but there are 10 times the number of retractions. Still, we do not know how much that is due to more fraud versus better detection of bad science. In fact, more retractions could actually mean better policing of the bad science that does happen, which could mean the process of science is becoming more trustworthy as it becomes more transparent. For the sake of public trust in science, we journalists need to make sure the public understands this distinction, and the NYT did not really do so this week. The question of whether retractions reflect more lying or more policing of bad science is absolutely worth exploring, as is the question of what incentives built into the process of science encourage or enable untruthful or sloppy science.
[Edit 6/8/15, to clarify: Added the phrase, “We do know that the increasing retractions are not just a function of increasing science studies. As Oransky notes in the article above, science studies have increased by 50%, but there are 10 times the number of retractions. Still, we do not know how much that is due to more fraud versus better detection of bad science.”]