In the context of the commentary about the recent bushfires and their link with climate change, a common remark is that politicians just need to rely on The Science. I use capital letters on purpose because there is generally an implication that The Science is always correct and that the policy actions that follow from it are obvious.
It is ironic that in another sphere of discussion, the talk is all about the replication crisis that is overwhelming scientific research to the point that many widely accepted findings are being called into question.
A key figure in this dialogue is John Ioannidis, who is based at Stanford University in the US. According to Ioannidis, “There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. It can be proven that most claimed research findings are false.”
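The claim sounds extravagant, but it rests on simple arithmetic that Ioannidis sets out in his 2005 paper. A minimal sketch (the illustrative numbers below are mine, not his): if $R$ is the pre-study odds that a tested hypothesis is true, $1-\beta$ the statistical power of the study and $\alpha$ the significance threshold, the probability that a claimed positive finding is actually true is

$$\mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}.$$

With conventional power of 0.8 and $\alpha = 0.05$, a field where only one in 50 tested hypotheses is true ($R = 0.02$) gives a PPV of $0.016/0.066 \approx 0.24$: roughly three in four “discoveries” would be false, before any bias is even factored in.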
There are a variety of reasons for this, including scientific fraud. But most are subtle and relate to the incentives that scientists face — in particular, to churn out refereed publications to secure grant funding.
Trawling through data for results, and cutting the analysis short as soon as a statistically significant result turns up, is a common flaw of many scientific investigations. Tellingly, the authors of papers reporting statistically significant results are the least likely to offer up their data and code for replication.
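To see why stopping early is a problem, consider a small simulation (a sketch in Python; the sample sizes and peeking schedule are illustrative assumptions, not drawn from any particular study). Even when no real effect exists, testing repeatedly and stopping at the first p-value below 0.05 pushes the false-positive rate well above the nominal 5 per cent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_trials, max_n, check_every = 10_000, 100, 10
false_positives = 0

for _ in range(n_trials):
    # Generate data under the null hypothesis: there is no real effect.
    data = rng.normal(loc=0.0, scale=1.0, size=max_n)
    # Peek after every block of 10 observations; stop at the first p < 0.05.
    for n in range(check_every, max_n + 1, check_every):
        if stats.ttest_1samp(data[:n], popmean=0.0).pvalue < 0.05:
            false_positives += 1
            break

print(f"false-positive rate with optional stopping: {false_positives / n_trials:.3f}")
# A single test at n = 100 would reject about 5% of the time; with ten peeks
# the rate climbs to roughly 15-20%, even though the null is true throughout.
```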
Rejigging the hypothesis to fit the observations is also common, as is the perennial issue of small sample sizes. Sloppiness and/or bias in the refereeing process also contribute to the publication of flawed findings. Some observers quip that much research is not so much peer-reviewed as pal-reviewed.
The replication crisis has hit certain disciplines particularly hard, including psychology and medical sciences. But no discipline has been immune, including climate science.
Australia’s own Chief Scientist, Alan Finkel, has spoken about bad science in Australia. While he asserts that “Australia produces high-quality research that is rigorous and reproducible”, he notes that “there are a significant number of papers that are of poor quality and should never have made it through to publication”.
He identifies several problems: selective publication of results to support a hypothesis; HARKing, or hypothesising after results are known; and manipulating data and research methods to achieve statistical significance.
Some 800 papers that lacked credibility, whether because of incorrect results or outright plagiarism, were recently withdrawn from Russian academic publications. And a number of journals that have failed to ensure high standards of checking and assessment are at risk of losing their external accreditation.
And just this year a winner of the Nobel prize in chemistry, Frances Arnold, was forced to retract a paper published in the prestigious journal Science.
She declared: “I am totally bummed to announce that we have retracted last year’s paper on enzymatic synthesis of beta-lactams. The work has not been reproducible.”
It turned out that contemporaneous entries and raw data for key experiments were missing from one of the lab notebooks.
The fact she owned up quickly is refreshing. But it does raise the question of how many papers in influential journals rest on conclusions drawn from non-replicable experiments. That there is so little funding for scientists to attempt to reproduce important studies only adds to the problem. As noted, the crisis has caused most damage in psychology and medical studies, cancer biology in particular, but climate science has not escaped.
(Because many climate science studies do not involve setting hypotheses and testing them against experimental observations, large chunks of the literature have been untouched by this debate. That said, predictions of future events using large-scale computer models are contentious in their own right.)
A paper published in the influential journal Nature has outlined the results of an attempt to replicate previous studies on the impact on fish behaviour of seawater acidification caused by rising CO2. The authors hail from Australia, Canada, Norway and Sweden, with Timothy Clark of Deakin University as lead author.
The original eight studies were conducted by staff at James Cook University’s Coral Reef Centre. The key findings were that seawater acidification caused small reef fish to lose their ability to smell predators, to become hyperactive, to lose their tendency to favour turning either left or right, and to suffer impaired vision.
This new paper fails to replicate any of the findings of the original studies. While it is true that the researchers did not observe clownfish, the subject of the JCU studies, they released videos of all the new experiments conducted on other reef fish, which is regarded as best practice in this field; no videos of the original studies were released. Some failures to replicate are to be expected, but a 100 per cent failure rate is extraordinary. Clark has encouraged others to take up the challenge of finding out “what has caused the stark differences between our findings and theirs”.
So how is this replication crisis to be handled? Finkel has a number of ideas, including removing the incentives for scientists to publish as much as they do. He believes scientists should be judged, for the purpose of grant giving, for example, on a handful of outstanding papers across a five-year period. He also recommends accredited integrity training, and points to the poor quality of some journals, particularly in the open-access space, where authors effectively pay to be published and assessment processes are weak or non-existent. He proposes a publication quality assurance process that journals would be encouraged to meet.
It’s hard to be optimistic about any short-term benefits, particularly in the context of the incentives scientists face to secure funding and the lack of funding for replication studies.
So let’s regard all scientific findings as tentative. The scientific process is primarily based on the ongoing testing of refutable hypotheses — it’s a journey, not a destination. Policymakers need to use caution in applying science-based findings.