Every week we get at least one, but usually more, hype filled press releases & news items about how certain foods, medicines or lifestyle choices will either kill or save you. The vast majority of these weekly claims don't turn out to hold water.
As examples lifted from this actual week, I offer:
- Fish oil supplements linked to prostate cancer
Not quite: this was an observational finding in people with existing prostate cancer, in all likelihood influenced by their earlier diagnosis. More analysis here.
- Diet soda drinkers might get fatter and unhealthier than their 'regular' soda counterparts
Not quite: this was an 'opinion' piece full of theoretical ways this might be the case, not backed up by actual research. More analysis here.
If you actually spend time on the press releases and underlying papers (if they even exist!), you often discover that:
- there is no actual (new) research to back up the claims, or
- that the claims bear scant relation to what is in the paper, or
- that the data has been massaged heavily until some correlation popped out (and a heavily massaged, weak correlation is pretty far from causation; more often than not it is evidence that there is none).
These days, the discerning internet user can find sites that take the time to debunk over-hyped claims, but the brave souls dissecting the research behind the headlines will always be 'late', and, worse, they don't make Fox News or the New York Post.
So, the average person worried or interested in her health is bombarded by multiple confusing and conflicting headlines per week. This does nothing to improve our actual health, and in all likelihood worsens it ("forget that, the story changes every month").
What is behind this avalanche of weak or even bogus results in the news? It goes like this. Scientists perform expensive research, and very often, nothing spectacular comes out. Healthy people turn out to be healthier, people who exercise have lower blood pressure, folks who do things in moderation do lots better, etc.
Scientists are people too, and they have to justify their work, so they start the first round of trawling the data. And if you've measured enough, some interesting correlation always pops up! To counter this, a Bonferroni correction (dividing the significance threshold by the number of comparisons made) should be applied to the statistics, but not doing so is a common but helpful oversight. I mean, the research was expensive enough, something should come out!
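The 'measure enough and something pops up' effect is easy to demonstrate. Here is a sketch in plain Python (with made-up numbers and a simple two-sample z-test): run 200 comparisons where, by construction, there is no effect at all, and count how many come out 'significant' with and without the Bonferroni correction.

```python
import math
import random

random.seed(0)

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def null_experiment(n=100):
    """Compare two groups drawn from the SAME distribution: any 'effect' is noise."""
    a = sum(random.gauss(0, 1) for _ in range(n)) / n
    b = sum(random.gauss(0, 1) for _ in range(n)) / n
    z = (a - b) / math.sqrt(2 / n)   # difference of means, in standard errors
    return 2 * (1 - phi(abs(z)))     # two-sided p-value

n_tests = 200                        # trawl the data: 200 separate comparisons
pvals = [null_experiment() for _ in range(n_tests)]

naive = sum(p < 0.05 for p in pvals)                 # uncorrected: expect ~5% 'hits'
bonferroni = sum(p < 0.05 / n_tests for p in pvals)  # corrected threshold: 0.05 / 200

print(f"{naive} 'significant' findings out of {n_tests}, with zero real effects")
print(f"{bonferroni} survive the Bonferroni correction")
```

Roughly one in twenty of these pure-noise comparisons clears p < 0.05 on its own; after dividing the threshold by the number of tests, essentially none do.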
So we have a claim, for example: 'Overweight post-menopausal women with pre-diabetes who eat fifth-quintile amounts of avocados have lower insulin resistance'. This is typically what you'll find in a research paper, and such a claim (had it survived a Bonferroni correction, which it likely would not have) might actually be worth reporting. Meanwhile, the claim is flagged with 'p < 0.05', which means the result is statistically significant; the actual effect, however, can still be clinically insignificant (and often is).
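To make the 'statistically significant but clinically insignificant' distinction concrete, here is another sketch (invented numbers, same simple z-test as before): a difference of one hundredth of a standard deviation in some hypothetical insulin-resistance score is clinically nothing, yet with a big enough sample it sails under p < 0.05.

```python
import math
import random

random.seed(1)

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n = 1_000_000  # with a big enough sample, any trivial difference becomes 'significant'
# Hypothetical insulin-resistance scores; the 'avocado' group averages 0.01 sd lower
control = sum(random.gauss(0.0, 1) for _ in range(n)) / n
avocado = sum(random.gauss(-0.01, 1) for _ in range(n)) / n

diff = control - avocado           # ~0.01 standard deviations: negligible in practice
z = diff / math.sqrt(2 / n)
p = 2 * (1 - phi(abs(z)))

print(f"difference between groups: {diff:.4f} sd")
print(f"p-value: {p:.3g}")         # comfortably below 0.05 all the same
```

A p-value measures how surprising the data would be if there were no effect at all; it says nothing about whether the effect is big enough to matter to a patient.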
Next, the research institute also wants to look good, so its PR department takes the paper, speaks with the scientists and writes a press release: "Benefit of eating avocados on insulin resistance, preventing diabetes". Note that they lopped off all the qualifications, plus extrapolated the claim into preventing disease.
Finally, journalists fed this press release are eager for clicks on their articles, so they liven up the press release with some further human interest quotes and headline the piece: 'Scientists say: Eat avocados to ward off diabetes'.
And there we go - from an investigation with no really significant results, we end up with a pretty stonking headline with incorrect advice.
So what do we do?
Here's an odd idea. Zappos, an online shoe store, has a 'quit now' bonus for new hires. If after training you decide to leave, the company pays you $3000. The net effect of this is that people have an incentive to leave if they feel Zappos is not going to be a great place for them.
And, although I don't know how it works in practice, in theory this should be a big win - anyone who would have stuck around against their better judgment but is thus enticed to leave will 1) not be a drag on Zappos and 2) be able to move on to greener pastures all the quicker.
The relevance to our scientists feeling pressured to publish should be obvious. Launch a fund, perhaps at department or institute level, or make it a national prize, for researchers honest enough to claim 'no significant results' from their research if there were none.
Compare the (at best misleading) headline 'Eat avocados to ward off diabetes' with 'Different levels of fruit consumption did not meaningfully change levels of diabetes among 3500 randomly selected staff of healthcare institutes'.
The latter headline would admittedly not make the evening news. But it would allow investigators to move on to new research, and not further confuse the public. And very importantly, it would also make sure that even negative or null results make it to (the academic) press.
As Ben Goldacre of www.alltrials.net often points out, not reporting unwelcome results leads to a statistical excess of positive results, thus "proving" that ineffective treatments actually work!
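This point, too, can be simulated (a sketch with made-up numbers): run a thousand trials of a treatment that does nothing, 'publish' only the positive and significant ones, and then look at what a reader of the published literature alone would conclude.

```python
import math
import random

random.seed(3)

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def trial(n=100):
    """One trial of a treatment with ZERO true effect; returns (observed effect, p)."""
    effect = sum(random.gauss(0, 1) for _ in range(n)) / n
    z = effect * math.sqrt(n)
    return effect, 2 * (1 - phi(abs(z)))

trials = [trial() for _ in range(1000)]
# Publication bias: only positive, 'significant' results get written up
published = [(e, p) for e, p in trials if e > 0 and p < 0.05]

mean_all = sum(e for e, _ in trials) / len(trials)
mean_pub = sum(e for e, _ in published) / len(published)

print(f"mean effect over ALL {len(trials)} trials: {mean_all:+.3f}")
print(f"mean effect over the {len(published)} PUBLISHED trials: {mean_pub:+.3f}")
```

Across all trials the average effect is indistinguishable from zero, as it should be; across the published subset it looks solidly positive. A meta-analysis of the published record would 'prove' the useless treatment works.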
Now, I admit the details of this 'Zappos prize' would be daunting, and it would also require a significant fund to have any impact. It would need prestige too - scientists (who, as noted above, are people too) are less swayed by money than most.
But something has to change. Today, mediocre research grabs the headlines while researchers honest with themselves struggle to get their voices heard!
Your thoughts are more than welcome.