A while ago I posted a pretty funny take on the Science News Cycle, courtesy of PHD Comics. Every now and then I link to it when posting about some “recent study,” because it is helpful to remember the distortions that can take place as information works its way from the research lab to the evening news.
I was reminded of that comic again when I read this bit from a story in New Scientist:
“To find out if behaviour in a virtual world can translate to the physical world, Ahn randomly assigned 47 people either to inhabit a lumberjack avatar and cut down virtual trees with a chainsaw, or to simply imagine doing so while reading a story. Those who did the former used fewer napkins (five instead of six, on average) to clean up a spill 40 minutes later, showing that the task had made them more concerned about the environment.”
Surely something has been lost in translation from data to conclusion, no? The author notes that this is from an “unpublished” study, which gives me renewed confidence in the peer review process.
Well, that is, until I read a somewhat alarming post by neuroscientist Daniel Bor, “The dilemma of weak neuroimaging papers,” which contains this summation and query:
“Okay, so we’re stuck with a series of flawed publications, imperfect education about methods, and a culture that knows it can usually get away with sloppy stats or other tricks, in order to boost publications. What can help solve some of these problems?”
All in all, we do well to proceed with healthy skepticism.