THE RESEARCH CRISIS IS REAL
Science is in a replicability crisis – that is, when other labs repeat a study, they reproduce the same results only about half the time at best. And even the results that do hold up tend to show weaker effects than originally reported. If you don’t think this affects you, hold on.
Remember Amy Cuddy’s TED talk on power poses? Her study failed to replicate, and her rebuttal still does not separate power poses from neutral poses. That is, while expansive body poses beat contractive ones (e.g., slouching), it may be that simply sitting up straight works just as well. Damn.
Similarly, Malcolm Gladwell drew conclusions about dyslexia from a single study claiming that math word problems were easier to solve when printed in hard-to-read fonts – the idea being that fuzzy fonts cued the reader to pay closer attention. But no one has reproduced that result in 16 attempts.
Gretchen Rubin’s happiness project isn’t immune either – the claim that simply smiling makes us happier just can’t be confirmed.
From book deals to speaking gigs, single-study research is a profitable venture. Replicability isn’t sexy, but it matters because consumers, understandably, are moved by the results and buy it up. Literally.
Researchers across the globe have teamed up to repeat these studies, and many more, exactly the same way – with the blessing of the original authors and with much larger sample sizes. So far, the replication rate is no better than a coin flip.
How does this happen?
In recent years, funding agencies and universities have been pushing for new and innovative research while, at the same time, journals have been publishing only statistically significant results. To keep up with that pressure, some researchers have turned to techniques like these:
—> p-hacking: re-running statistical analyses until a significant result turns up, then selectively reporting it (the short simulation after this list shows how quickly this inflates false positives)
—> HARKing (Hypothesizing After the Results are Known): presenting a hypothesis invented after seeing the data as if it had been predicted in advance, which increases the chance of a false positive
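To make the p-hacking point concrete, here’s a minimal simulation sketch in Python (my own illustration, not taken from any of the studies above, assuming the numpy and scipy libraries). Every simulated experiment has no real effect; the only difference is whether the analyst runs one planned test or keeps re-running analyses and reports the best of five.

```python
# Minimal p-hacking sketch (illustrative only): simulate experiments where
# the null hypothesis is TRUE, then compare an honest analyst (one planned
# test) with one who runs five analyses and keeps any p < .05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N_EXPERIMENTS = 2000   # simulated studies
N_PER_GROUP = 30       # participants per condition
N_ANALYSES = 5         # analyses the p-hacker is willing to try

false_pos_honest = 0   # significant on the single pre-specified test
false_pos_hacked = 0   # "significant" if ANY of the five tests hits p < .05

for _ in range(N_EXPERIMENTS):
    p_values = []
    for _ in range(N_ANALYSES):
        # Both groups come from the same distribution: no real effect exists.
        control = rng.normal(0, 1, N_PER_GROUP)
        treatment = rng.normal(0, 1, N_PER_GROUP)
        p_values.append(ttest_ind(control, treatment).pvalue)

    if p_values[0] < 0.05:
        false_pos_honest += 1
    if min(p_values) < 0.05:
        false_pos_hacked += 1

print(f"False-positive rate, one planned test: {false_pos_honest / N_EXPERIMENTS:.1%}")
print(f"False-positive rate, best of {N_ANALYSES} analyses: {false_pos_hacked / N_EXPERIMENTS:.1%}")
```

With a true null, the single planned test comes out “significant” about 5% of the time, as it should; cherry-picking the best of five independent analyses pushes that past 20% – same data, very different chance of a publishable “finding.”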
In response, there is a debate about making the threshold for statistical significance more stringent, but in the meantime some journals are moving to registered reports, where hypotheses and analysis plans are submitted before the data are collected. This allows non-significant results to be published too, which is important for science.
So what can we do as consumers?
—> Consider new research interesting, but don’t base your life or work on it just yet. Be curious but think critically, please.
—> Expect the opposite result in the future. When my kids were small, giving peanut butter to children under the age of 1 was ill-advised; now the paediatric guidelines say the opposite. Likewise, the recent shift to urging non-medical face masks for the public is a total inversion of the recommendations from just months ago.
—> The self-help industry profits from your placebo effect, because just doing generally healthy activities – even ones you know are a placebo – improves your functioning. That *new* book or system or class isn’t what helped you – you helped you.
Replicability has been an issue for a decade or more. Here’s an introduction to the issue, starting from a study on pre-cognition, and a look at how statistical significance is used in other fields: CBC Ideas podcast: Psychologists confront impossible finding, triggering a revolution in the field