Listen up, Ted Cruz: Here's what you don't understand about climate change and science

The Texas senator says the facts don't back global warming. He doesn't understand how science assesses facts

Published September 5, 2015 12:00PM (EDT)

(AP/Andrew Harnik)

In 2004, science historian Naomi Oreskes did a survey of abstracts published in the previous 10 years matching the search term “global climate change,” and found that none of them rejected the consensus view that humans were at least partly responsible.

In the years since, climate scientists have conducted more extensive surveys. While some objections were raised to the original study, the broader investigations have uncovered only a small sliver of such papers — just 2-3% — and not only are they few in number, their authors tend to publish less and to be less influential, and their work has had little impact on the rest of the field.

Contrarians like to claim that's because they're like Galileo, introducing new ideas that the scientific establishment simply can't grasp or appreciate. But that's exactly the opposite of what Norwegian climate scientist Rasmus Benestad saw when he examined one such paper in 2011: its authors' “new” techniques (Fourier transforms and wavelet analysis) were anything but new, and the authors did not understand the limitations and proper use of those techniques.

The paper's authors simply ignored thousands of years of data which didn't fit with their “explanation” of natural cycles. Now a team Benestad assembled has just published a study, "Learning from mistakes in climate research," which casts a much wider net, looking not just for individual mistakes in stand-alone studies, but for patterns of mistakes across contrarian studies, uncovered by systematically trying to replicate their results. Not surprisingly, it found quite a number of them.

As explained in its abstract:

Our replication reveals a number of methodological flaws, and a pattern of common mistakes emerges that is not visible when looking at single isolated cases. Thus, real-life scientific disputes in some cases can be resolved, and we can learn from mistakes. A common denominator seems to be missing contextual information or ignoring information that does not fit the conclusions, be it other relevant work or related geophysical data.

A more detailed description of some of the failings found is given in the paper's results section. These included starting with false assumptions; ignoring relevant physical interdependencies and consistencies; insufficient model evaluation, including “over-fitting” or “curve-fitting” — “where a model involves enough tunable parameters to provide a good fit regardless of the model skill;” false dichotomy — “for example, when it is claimed that the sun is the cause of global warming, leaving no room for GHGs even though in reality the two forcings may coexist;” and “cherry picking” — “ignoring tests with negative outcomes.”
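The over-fitting point is easy to demonstrate concretely. The toy sketch below (illustrative Python, not code from the study) fits polynomials with more and more tunable parameters to pure random noise; the fit keeps "improving" even though there is no signal at all to explain:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(42)

# Fifty data points of pure random noise: there is no real signal here.
t = np.linspace(0.0, 10.0, 50)
y = rng.normal(size=t.size)

# Give the model more and more tunable parameters and watch the fit improve.
for degree in (1, 5, 10, 20):
    fit = Polynomial.fit(t, y, degree)    # least-squares polynomial fit
    residuals = y - fit(t)
    r2 = 1.0 - residuals.var() / y.var()  # fraction of variance "explained"
    print(f"{degree + 1:2d} tunable parameters: R^2 = {r2:.2f}")

# R^2 climbs toward 1 even though the data are noise: a good fit,
# by itself, is no evidence of model skill.
```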

There were also a set of problems specific to explanations based on cycles:

One common factor of contrarian papers included speculations about cycles, and the papers reviewed here reported a wide range of periodicities. Spectral methods tend to find cycles, whether they are real or not, and it is no surprise that a number of periodicities appear when carrying out such analyses. Several papers presented implausible or incomplete physics, and some studies claimed celestial influences but suffered from a lack of clear physical reasoning: in particular, papers claiming to report climate dependence to the solar cycle length (SCL). Conclusions with weak physics basis must still be regarded as speculative.
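That warning about spectral methods is just as easy to illustrate. In the sketch below (again illustrative Python, not taken from the paper), a periodogram of pure white noise, which by construction contains no cycles at all, still throws up prominent-looking "periodicities":

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hundred "years" of white noise: by construction, no real cycles.
n = 200
record = rng.normal(size=n)

# Periodogram via the FFT: spectral power at each frequency.
power = np.abs(np.fft.rfft(record)) ** 2 / n
freqs = np.fft.rfftfreq(n, d=1.0)  # frequencies in cycles per "year"

# The strongest peaks look like real periodicities, but every one of
# them is a chance fluctuation in the noise.
strongest = np.argsort(power[1:])[-3:][::-1] + 1  # skip the zero frequency
for k in strongest:
    print(f"apparent cycle of ~{1.0 / freqs[k]:.1f} years (power {power[k]:.1f})")
```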

In short, examining a body of contrarian papers as a whole and subjecting them to replication testing did not produce any sort of strong challenge to the existing consensus, as the “we're just like Galileo” crowd would have it. Rather, the group examination made their underlying weaknesses dramatically more visible.

When Salon asked Benestad what led him to take the leap from examining the problems with a single paper to taking a much broader approach, he said it was “curiosity,” adding: “The paper was inspired by an article and a book called 'Agnotology' (knowing why we do not know things). I was wondering whether there was a common trait. Also, I thought that since the selected papers were at odds with the mainstream, I probably would find some mistakes somewhere, and we could probably learn from those.”

The term "agnotology" is not that well-known, but it was neatly explained in a 2010 paper that applied it to climate change, “Agnotology as a Teaching Tool: Learning Climate Science by Studying Misinformation,” which put it like this: "While epistemology is the study of knowledge, how and why we know things, agnotology is the study of how and why we do not know things."

That paper concentrated on teaching college students about well-documented strategies used to confuse the public about the existence of the overwhelming global warming consensus. It focused on issues such as the confusion of peer-reviewed research with expressions of opinion, or of settled areas of science at the core of global warming research with more peripheral areas that are still in doubt, “such as the extent to which hurricanes have already strengthened due to anthropogenic climate change.” In contrast with such lay-oriented issues, Benestad's concern was with practicing scientists, and the forms agnotology might take in that case had not been documented in advance, except on a case-by-case basis.

But Benestad also traced the study back to his aforementioned 2011 criticism of a single deeply flawed paper: “That paper was the seed of this story, and what prompted a plea from an educator for me to write a formal rebuttal of the paper,” he said. “My criticism was that they had removed a large chunk of the data that did not fit their conclusions, and only kept the record that described the last 4,000 years. They furthermore used a very naive approach called 'curve-fitting' to find some cycles which matched the wiggles in the data curve and attributed those to the gravitational pull of celestial objects. But the cyclic character in their data only started 4,000 years ago, and was absent in the 6,000 preceding years after the end of the last Ice Age.”
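What Benestad describes can be mimicked on synthetic data. In the illustrative sketch below (the 4,000/6,000-year split echoes his description, but the data are invented for the demonstration), a cycle fitted to the kept portion of a record explains nothing in the portion that was thrown away:

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic 10,000-"year" record: a 1,000-year cycle present only in
# the last 4,000 years, with noise throughout. (Invented data.)
t = np.arange(10_000, dtype=float)
y = np.where(t >= 6_000, np.sin(2 * np.pi * t / 1_000), 0.0)
y += 0.3 * rng.normal(size=t.size)

def cycle_basis(ts, period):
    """Design matrix for a fixed-period sinusoid plus a constant offset."""
    return np.column_stack([np.sin(2 * np.pi * ts / period),
                            np.cos(2 * np.pi * ts / period),
                            np.ones_like(ts)])

kept = t >= 6_000  # the 4,000 years the curve is actually fitted to

# Fit the "cycle" to the kept segment only, as in the criticized procedure.
coeffs, *_ = np.linalg.lstsq(cycle_basis(t[kept], 1_000), y[kept], rcond=None)

def r_squared(mask):
    resid = y[mask] - cycle_basis(t[mask], 1_000) @ coeffs
    return 1.0 - np.sum(resid**2) / np.sum((y[mask] - y[mask].mean())**2)

print(f"R^2 on the kept 4,000 years:      {r_squared(kept):.2f}")   # good fit
print(f"R^2 on the discarded 6,000 years: {r_squared(~kept):.2f}")  # worse than useless
```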

Summing up the results of the new study, Benestad said the most common issue was that “they exclude important and relevant aspects,” effectively biasing the whole process from the beginning:

Typically, they do not discuss other work on the same problem, other than those supporting their own conclusions. One reason for this may be that most papers were written by 'newcomers' in climate research, lacking the comprehensive overview and tacit knowledge about models and methods. Often there is knowledge that has not been written down. In a sense, our paper is about this kind of tacit knowledge - 'everybody' knows that the selected papers are questionable - and represents an effort to put it on the record.

In theory, of course, it's entirely possible that newcomers would see important things that had previously been overlooked. Outsiders do revolutionize stagnating fields from time to time; it's not unheard of. So this sort of detailed replication work is precisely what's needed to see whether that theory holds up ... which it does not.

Critics might ask: Why only look at contrarian papers? “There are probably a number of papers supporting the consensus view with flaws, but the most important of these look watertight,” Benestad responded. “Many of these have already been subject to scrutiny (e.g. Mann), and are still standing.”

That's not a call for complacency, however. “Science is a field with moving goal posts,” he added, “and studies that were high quality a few decades ago are maybe now obsolete because we now have far better observations and models. We know that the early work by Arrhenius and Rossby was very simplistic and contained some errors, but the big picture they provided was nevertheless correct.”

There were two distinctive aspects to this study: First, it analyzed a set of potentially related papers, and second, it attempted to actively replicate their findings, rather than just passively criticize their results.

Given that science is based on the notion of reproducible results, one might think that such a study would be an obvious step forward, so why hadn't it been done before? “Good question,” Benestad said. “That's the strange thing. One would think that this would be the first thing in all sciences. Maybe it has been done in other disciplines like medicine ('alternative medicine'), but I don't know.”

Co-author Katharine Hayhoe, a Texas climatologist I previously interviewed here, had some thoughts about that, rooted in the demands for novelty: “In today’s world of sound bites it is becoming increasingly difficult to find the time and resources to reproduce research. I’ve never seen a research grant application to replicate even the most controversial research approved for funding,” she said, though that doesn't mean it never happens. “In my own field, I rarely see papers aimed at replicating previous research. And when we wrote this paper — which replicates the results of 38 different studies contradicting thousands of other peer-reviewed analyses of climate change — we had a very challenging time finding anyone who was willing to publish it. It just wasn’t new or hot enough.”

This led her to the next question: “So if people don’t value replication, should we even be doing it?” she asked, and then answered enthusiastically, “Yes, absolutely! Replication is at the heart of the scientific method. Physical science operates on the principle that nature obeys rules — rules that can be measured and verified independently. If something can’t be replicated, why should it be believed?

“Climate change is quite possibly the greatest challenge confronting us today,” Hayhoe added. “To minimize or even reject the role of humans in causing climate change based on unsound research could lead us badly astray. Science can’t tell us what we should do; that’s a values choice. But good science is essential input to making good decisions. And that’s why we need replication.”

In addition to the replication issue, there are also concerns related to peer review. “Peer review is a spam filter that works well, but imperfectly, in differentiating between ideas that are supported by evidence and those that are not,” said Stephan Lewandowsky, another co-author, whose work Salon has covered (here, here, and here). “It is therefore not surprising that the occasional flawed paper is published, even in reputable journals. Science is hard and mistakes are not always easy to spot.”

Which is another reminder of why replication matters.

“Vice versa, it can happen that a valuable idea is rejected by peer review because it is too controversial or it differs too much from established wisdom,” Lewandowsky added. “This is what happened to our paper along the way, as a few journals did not want to publish it because it reported a very novel and potentially controversial way of scrutinizing dissenting work.”

“I think that the long-winded process is typical of science being conservative, which is a kind of quality control mechanism. The paper improved through all the steps of the reviewing process,” Benestad said. “But I also think that this conservative world view may be a problem for the scientific community as well, since it needs to consider its relationship with the wider society as well. Chris Mooney has written some books about this issue, which make some interesting points.”

So, the issue of communicating vital, useful truths to the wider society remains in its infancy. But this paper may signal a more marked development within science, as Lewandowsky suggested with his final thoughts: “The important thing to realize is that peer review is only a first step in creating new scientific knowledge: Once an article is published, it also has to withstand the test of time and it has to attract the attention of other researchers who find it worthwhile to cite,” he said. “It is pretty clear that the dissenting papers we looked at have not withstood the test of time, and it remains to be seen whether our work will pass that test — I am very confident that it will because most scientists are keen to learn from mistakes.”

The prospect of learning in much bigger chunks — as this study promises — may not easily apply in every field. But where it does, it should prove very attractive, indeed.


By Paul Rosenberg

Paul Rosenberg is a California-based writer/activist, senior editor for Random Lengths News, and a columnist for Al Jazeera English. Follow him on Twitter at @PaulHRosenberg.


