Wednesday, 3 October 2012

Fraud in Science

Shocking headline alert! Tenfold increase in scientific research papers retracted for fraud! A recent report shows that the percentage of scientific papers retracted due to fraud has increased tenfold since 1976. Sounds shocking, right!?

Of course, dig a little deeper and you'll find that the retraction rate was 0.00097% in 1976 and 0.0096% in 2007. So that's a rate of roughly 1 paper per 10,000 that might be fraudulent, which is pretty good going in my opinion. I have 800 papers in my 'Papers2' library, so I'd have to read another 9,000 or so before I had an odds-on chance of having read a fraudulent one.
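
To make that back-of-the-envelope arithmetic concrete, here's a quick Python sketch (mine, not anything from the report); it assumes, purely for illustration, that every paper independently has the 2007 retracted-for-fraud rate of being fraudulent:

    # Rough check of the "read another 9,000 papers" claim, assuming each paper
    # independently has the 2007 retracted-for-fraud rate of being fraudulent.
    rate = 0.0096 / 100   # 0.0096%, roughly 1 in 10,000
    for n_read in (800, 800 + 9000):
        p_at_least_one = 1 - (1 - rate) ** n_read
        print(f"{n_read:5d} papers read -> P(at least one fraudulent) = {p_at_least_one:.0%}")

That comes out at about 7% for my current 800 papers and about 61% once I've read 9,800, so 'odds-on' is about right.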

Shall we compare these statistics with the number of fraudulent builders or mechanics? How do people reckon odds of 1 in 10,000 would stack up? How about our elected leaders? I don't know the exact numbers, but I'd guess at least 30 MPs were caught up in the parliamentary expenses scandal. We have about 600 MPs, so that's a rate of 5% (or 1 in 20). That makes academics roughly 500 times less fraudulent than our elected leaders. And, what's more, unlike the majority of our leaders, most of us can understand probabilities, which is always helpful when developing evidence-based policies.
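
The comparison itself is just the ratio of the two rates; in the same back-of-the-envelope spirit (using my guessed MP figures above, not any official count):

    # Comparing the two rates quoted above; the MP figures are guesses.
    paper_rate = 0.0096 / 100   # retracted-for-fraud rate, 2007
    mp_rate = 30 / 600.0        # guessed MPs implicated / total MPs
    print(f"papers: about 1 in {1 / paper_rate:.0f}")    # roughly 1 in 10,000
    print(f"MPs:    1 in {1 / mp_rate:.0f}")             # 1 in 20
    print(f"ratio:  about {mp_rate / paper_rate:.0f}x")  # about 520, i.e. roughly 500 times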

So, underneath the awful-sounding headline, I'd argue that the numbers are actually pretty reassuring.

However, these numbers of course only count the papers where fraud has been discovered and the paper retracted. I have no idea how many fraudulent papers go undetected. I'd imagine it could be pretty easy to massage the numbers in a paper to give a more favourable result, and it'd be pretty hard to detect. There's often no real oversight in much of academia. I don't have anyone double-checking my figures when I do my research. Even when I write papers with co-authors, they generally assume that my numbers are right and move forward from there. Similarly, when I've been given results by colleagues, I've never doubted their veracity.

Interestingly, there are various statistical methods that can pick up whether data are likely to have been made up. I particularly like this example. However, they are not widely applied - most scientists I know barely have time to fully read every paper they are supposed to, let alone perform statistical re-analysis on all of them.
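
I don't know exactly which example the link above points to, but one well-known family of such checks looks at the distribution of leading digits, which in many naturally occurring datasets follows Benford's law; fabricated or over-tidied numbers often don't. Here's a minimal, self-contained sketch of that kind of first-digit test (the function names and the simulated data below are mine, purely for illustration):

    import math
    import random
    from collections import Counter

    def leading_digit(x):
        """First significant digit of a non-zero number."""
        return int(10 ** (math.log10(abs(x)) % 1))

    def benford_chi_squared(values):
        """Chi-squared statistic comparing the leading digits of `values` with
        Benford's law (8 degrees of freedom; ~15.5 is the 5% cut-off)."""
        values = [v for v in values if v != 0]
        counts = Counter(leading_digit(v) for v in values)
        stat = 0.0
        for d in range(1, 10):
            expected = len(values) * math.log10(1 + 1.0 / d)  # Benford proportion for digit d
            stat += (counts.get(d, 0) - expected) ** 2 / expected
        return stat

    random.seed(1)
    natural = [10 ** random.uniform(0, 3) for _ in range(1000)]  # log-uniform over three decades: close to Benford
    made_up = [random.uniform(100, 999) for _ in range(1000)]    # uniform 'plausible-looking' values: a poor fit
    print(f"log-uniform data: chi2 = {benford_chi_squared(natural):.1f}")
    print(f"uniform data:     chi2 = {benford_chi_squared(made_up):.1f}")

A failed digit test is only a flag for a closer look, of course, not proof of anything on its own.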

The pressures to commit scientific fraud are obvious to anyone in academia. Your next job or promotion depends on a continuing output of top-quality papers in high-impact journals. Let's say you spend a couple of years developing a new theory or method. If, at the end of it all, it doesn't really work and the results are inconclusive, all that work could end up as one paper in a low-quality journal that never gets read, leaving you stuck on the same rung of the career ladder. However, if you could massage a few numbers to get a statistically significant result, that amazing new paper gets published in Nature and you jump onto the next rung.

Perhaps the best defence against fraud is the attitude of the academic community. Unlike our MPs, whose attitude seemed to be 'everyone else is doing it, so I might as well', the response of the academic community to fraud is still one of disgust - anyone caught is disgraced and will probably never work in academia again.

Most people in academia are there because they want to find out more about the way the world works. Fraudulent data clouds that understanding, so most wouldn't dream of 'cooking the books'. For those who are tempted, I can only hope that the knowledge of the ruin they'd suffer if found out would be enough to deter them. Certainly, the headline numbers (the 1 in 10,000) appear reassuring. But as academics, I think we must remain eternally vigilant, because fraud could become a huge problem if we don't maintain that pressure to always do the right thing for the sake of the community.




1 comment:

  1. "How much remains undetected"

    Answer

    At least everything I do.........
