The book, while scary and disheartening, is truth-seeking and ultimately optimistic. Ritchie doesn’t come to bury science; he comes to fix it. “The ideals of the scientific process aren’t the problem,” he writes on the last page, “the problem is the betrayal of those ideals by the way we do research in practice.”
In a year where scientists seemed to have gotten everything wrong, a book attempting to explain why is bizarrely relevant. Of course, science was in deep trouble long before the pandemic began and Stuart Ritchie’s excellent Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth had been long in the making. Much welcomed, nonetheless, and very important.
For a contrarian like me, reading Ritchie is good for my sanity – but bad for my intellectual integrity. It fuels my priors that a lot of people, even experts, delude themselves into thinking they know things they actually don’t. Fantastic scientific results, whether the kind blasted across headlines or the kind that gradually seeps into public awareness, are often so poorly constructed that they don’t hold up; they capture nothing real about the world. The book is a wake-up call for a scientific establishment too often blinded by its own erudite proclamations.
Filled with examples and accessible explanations, Ritchie expertly leads the reader on a journey through science’s many troubles. He categorizes them under the four headings of the book’s subtitle: fraud, bias, negligence, and hype. Together they undermine the search for truth that is science’s raison d’être. It’s not that scientists willfully lie, cheat, or deceive – even though that happens uncomfortably often, even in the best of journals – but that poorly designed experiments, underpowered studies, spreadsheet errors, and intentionally or unintentionally manipulated p-values yield results that are too good to be true. Since academics’ careers depend on publishing novel, fascinating, and significant results, most of them don’t look a gift horse in the mouth. If the statistical software says “significant,” they confidently write up the study and persuasively argue their amazing case to a top-ranked journal, its editors, and the slacking peers in the field who are supposed to police their mistakes.
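The mechanics of “unintentionally manipulated p-values” can be made concrete. Here is a minimal simulation sketch – my own illustration, not from the book; the study count, the twenty-outcome setup, and the sample sizes are arbitrary assumptions – showing how measuring many outcomes and reporting whichever one clears p < 0.05 manufactures “significant” findings out of pure noise:

```python
import math
import random

random.seed(1)

def two_sided_p(sample):
    """Two-sided z-test p-value against mean 0, assuming known sigma = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

STUDIES, OUTCOMES, N = 1000, 20, 30

# Every measurement below is pure noise: there is no real effect anywhere.
lucky = 0
for _ in range(STUDIES):
    pvals = [two_sided_p([random.gauss(0, 1) for _ in range(N)])
             for _ in range(OUTCOMES)]
    if min(pvals) < 0.05:   # write up only the "best" outcome
        lucky += 1

print(lucky / STUDIES)  # roughly 1 - 0.95**20, i.e. about 0.64
```

Under the null hypothesis each p-value is uniform, so a study testing twenty independent outcomes has about a 64 percent chance that at least one of them looks “significant” – which is exactly the gift horse a career-minded researcher is tempted not to inspect.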
Ritchie isn’t some crackpot science denier or conspiracy theorist working out of his mom’s basement; he’s a celebrated psychologist at King’s College London with lots of experience in debunking poorly made research, particularly in his own field of psychology. For the last decade or more, this discipline has been the unfortunate poster child for the “Replication Crisis,” the discovery that – to use Stanford’s John Ioannidis’ well-known article title – “Most Published Research Findings Are False.”
Take the example of former Cornell psychology professor Daryl Bem and his infamous “psychic pornography” experiment that opens Ritchie’s book. On screens, a thousand undergraduates were shown two curtains, only one of which hid an image that the students were supposed to find. The choice was a coin toss, as they had no other information to go on. As expected, for most kinds of images they picked the right curtain about 50 percent of the time. But – and here was Bem’s claim to fame – when pornographic images hid behind the curtains, students chose the right one 53 percent of the time, enough to pass for statistical significance in his sample. The road to a top-ranked publication was wide open.
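For intuition on why 53 percent of a thousand guesses clears the significance bar while 50 percent does not, here is a small sketch of an exact one-sided binomial test against chance – my own illustration of the arithmetic, not Bem’s actual analysis, which was more involved:

```python
from math import comb

def binom_p_one_sided(hits, n, p0=0.5):
    """Exact one-sided p-value: P(X >= hits) when each of n guesses
    succeeds with probability p0 (chance level for two curtains)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(hits, n + 1))

print(binom_p_one_sided(530, 1000))  # about 0.03 -- below the 0.05 threshold
print(binom_p_one_sided(500, 1000))  # about 0.51 -- indistinguishable from chance
```

A three-point bump on a large enough sample is all it takes to cross p < 0.05 – which is precisely why a significance threshold alone certifies so little.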
When the article came out after passing peer review, the world was stunned to learn that undergrads could see the future – at least when images of a sexual nature were involved. Here was a result proven by science, certified by The Scientific Method™, and the psychology world was thrown into chaos. The study was done properly, passed peer review, and was published in a top field journal, using the same methods that underlie all the other well-known results in the field. Still, the result was totally bonkers. What had gone wrong?
Or take the don of behavioral economics, Daniel Kahneman, whose many quirky experiments convinced an entire economics profession of individual irrationality and ultimately earned him the Nobel Prize. The psychological literature on so-called ‘priming,’ part of which is used by behavioral economists, suggested that tiny changes in settings can produce remarkably large impacts on behavior. For instance, subtly reminding people of money – through symbols or the clinking noise of coins – makes them behave more individualistically and care less about others. “Disbelief is not an option,” wrote Kahneman in his famous best-seller Thinking, Fast and Slow; “you have no choice but to accept that the major conclusions of these [priming] studies are true.”
Beginning in the 2010s, psychologists tried to replicate these famous results and more. When tried elsewhere, with other students, better equipment, or larger samples – or sometimes with the exact same data – the same results wouldn’t emerge. How odd. Lab teams tried to replicate many established findings and came up way short: “The replication crisis seems,” writes Ritchie, “with a snap of its fingers, to have wiped about half of all psychology research off the map.” There was something structurally wrong with the way psychology produced and presented knowledge. Some research.