A couple of days ago, Christie Aschwanden blogged on The Last Word on Nothing about the recent misreported rape story in Rolling Stone. She makes some good points about the case, most notably that the reporters seem to have approached their sources with a story in mind that they never fully questioned. I agree with that point, but I was disturbed by the main bent of Aschwanden’s post. It is titled “Journalists Should Act More Like Scientists,” and her thesis seems to rest on this statement: “When a scientific theory comes face to face with new facts, scientists adjust the theory accordingly, and journalists should do the same. It’s OK to go into reporting with a hypothesis, but like a good scientist, a rigorous journalist should work hard to disprove it. (If you fail, that’s evidence that your hunch might be true.)”
As someone who has been a scientist and then a journalist, I immediately had two reactions: (1) there is a great deal of evidence that scientists’ conclusions and publications are often swayed by the story they think is true, too; and (2) scientists are good at data-driven research such as hypothesis testing, but when it comes to assessing social problems or history, we can just as easily jump to conclusions without understanding the process a journalist, social scientist, or historian might go through to parse out truths.

I wasn’t the only one with this reaction, it turns out. As I wrote this blog post, Ivan Oransky was plugging away at this one, which adds some good context about the origin of Aschwanden’s thesis, as well as stats backing up the idea that scientists are equally fallible. It is especially apt in light of the ongoing debate about how to deal with apparent p-hacking in science–the bias toward p-values just under the 0.05 cut-off for statistical significance–which flared up most recently after the publication of a PLOS Biology study that showed a spike in reported p-values just under 0.05. That spike demonstrates that, like it or not, the bias exists, even when a nice hypothesis is tested with a well-designed experiment. American Scientist recently published a column summarizing the problem of p-hacking and potential solutions.
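To see why p-hacking produces that spike, here is a minimal simulation of my own (not from the PLOS Biology study or Aschwanden’s post): every effect tested below is truly zero, yet an analyst who measures ten outcomes and reports whichever clears p < 0.05 will find a “significant” result far more often than the nominal 5% of the time.

```python
# Illustrative sketch: inflated false-positive rates from testing many
# outcomes under a true null. All numbers and sample sizes are arbitrary
# choices for the demo, not values from any real study.
import math
import random

random.seed(1)

def null_p_value(n=30):
    """Two-sided p-value from a z-test comparing two samples drawn
    from the SAME standard normal distribution (true effect = 0)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Difference of sample means has standard deviation sqrt(2/n).
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Normal CDF via erf; p = 2 * (1 - Phi(|z|)).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2000
# Honest analysis: one pre-specified test per experiment.
single = sum(null_p_value() < 0.05 for _ in range(trials)) / trials
# "Hacked" analysis: measure 10 outcomes, report the smallest p.
hacked = sum(min(null_p_value() for _ in range(10)) < 0.05
             for _ in range(trials)) / trials

print(f"false-positive rate, single test: {single:.2f}")  # near 0.05
print(f"false-positive rate, best of 10: {hacked:.2f}")   # near 0.40
```

The second rate lands near 1 − 0.95¹⁰ ≈ 0.40: with no real effect anywhere, flexible analysis alone manufactures a pile of results just under the 0.05 threshold, which is exactly the kind of spike the published literature shows.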
Scientists and journalists are some of the smartest, most inquisitive people I know–people remarkably willing to question their own assumptions and biases and to admit when they got something wrong. Although I don’t want to excuse the authors of the Rolling Stone story, let’s also not forget that we are all fallible and have our blind spots, especially when those blind spots are incentivized. I caution against presenting scientists as authority figures whom journalists should seek out to learn about their own trade. Scientists know about science, not journalism, and both fields have important processes for truth-seeking.