In the run-up to Earth Day this year, two major reports were released by the UN's Intergovernmental Panel on Climate Change, the largest such body in the world. On March 31, Working Group II released its report, Climate Change 2014: Impacts, Adaptation, and Vulnerability, and on April 13, Working Group III released its report, Climate Change 2014: Mitigation of Climate Change. Both reports cited substantially more evidence of substantially more global warming and related impacts than past reports have, and they presented it more lucidly than past iterations did.
As climate scientist and communicator Katharine Hayhoe told Salon, “This time around, to its credit, the IPCC has gotten a lot more serious about improving its ability to communicate the report’s message, through graphics and other ancillary products.” There was also a greater sophistication in how to conceptualize, measure and compare things, even where substantial uncertainties are involved. And there was a substantial list of more than 90 major impacts already recorded on every part of the planet.
Yet, one of the most disturbing stories to emerge around the reports was the New York Times report that language about the need for $100 billion in crisis funds to aid poor nations was removed from the Working Group III executive summary for policymakers during the final round of editing. The action neatly encapsulated the yawning gap between the growing danger of climate change -- and growing maturity of climate scientists -- on the one hand, and the utter lack of political will on the other.
But the growing sophistication of the scientific community is a cause for continued hope — if they can accelerate their learning curve, and follow the right path. They no longer mistakenly assume that the facts can “speak for themselves,” and they've gotten much better at developing ways to communicate lucidly about complex challenges and uncertainty. But the entrenched denialist, do-nothing opposition is still winning when it comes to writing the checks. If that's to change, Australian psychologist Stephan Lewandowsky will almost certainly be part of the reason why.
One reason opponents of climate action still have the upper hand is basic confusion over the nature and significance of uncertainty. “There are numerous instances in which politicians and opinion makers stated that 'there is still so much uncertainty, we shouldn’t invest money to solve the climate problem,'” Lewandowsky explained to Salon. He has just co-authored two related articles on scientific uncertainty and climate change -- “Part I. Uncertainty and unabated emissions” and “Part II. Uncertainty and mitigation” -- which show this thinking is completely backwards. “This is shown to be wrong by our analysis, because uncertainty can never be too great for action. On the contrary, uncertainty implies that the problem is more likely to be worse than expected in the absence of that uncertainty.”
It's a simple fact that your typical scientist already knows intuitively: Risk, exposure and potential loss grow with uncertainty, especially in complex nonlinear systems like the global climate system. In fact, it's not even possible to calculate how much damage could come from worst-case climate scenarios, as Working Group II co-chair Christopher Field pointed out at the press conference for that group's report. The relationship between greater uncertainty and risk is both obvious to those in the know and invisible to those who aren't. So it's never been properly talked about — or even rigorously analyzed — until now.
“Basically, we tried a new mathematical approach that is called 'ordinal,'” Lewandowsky said. “An ordinal method allows us to address questions such as: 'What would the consequences be if uncertainty is even greater than we think it is?' That is, ordinal questions refer to the order of things, such as 'greater than' or 'lesser than,' but don’t address absolute questions such as 'how much.'”
So it doesn't tell you how much worse things will get — which would certainly be nice to know — but it does tell you that they will get worse the more uncertain things are. In short, it gets you oriented in the right direction — 180 degrees away from where so-called “common sense” would take you. It puts you on the right path, asking the right kinds of questions, taking the right kinds of first steps, and avoiding getting lost in the confusion, mistakenly thinking that uncertainty means less to worry about. It's hard to imagine a more basic finding.
“Using that approach we showed that as uncertainty in the temperature increase expected with a doubling of CO2 from pre-industrial levels rises, so do the economic damages of increased climate change,” Lewandowsky continued. “Greater uncertainty also increases the likelihood of exceeding 'safe' temperature limits and the probability of failing to reach mitigation targets. Likewise, in the context of sea level rise, larger uncertainty requires greater precautionary action to manage flood risk."
As for the impact on policy, Lewandowsky said, “We show that the adverse effects of uncertainty are 'leveraged' and hence amplified by more emissions. It follows that to reduce the adverse effects of uncertainty, we should curtail emissions. This is a pretty strong imperative, but our papers don’t prescribe an exact target for emissions. As I noted above, we cannot answer 'how much' questions, we can only say 'less (pollution) is better.'”
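The logic behind the "greater uncertainty means greater expected damage" result can be illustrated with a toy simulation. The quadratic damage function and the lognormal distribution parameters below are my own illustrative assumptions, not values taken from Lewandowsky's papers; the point is only that when damages grow faster than linearly with warming, widening the spread of possible temperature outcomes — while keeping the same "best estimate" — drives expected damages up, not down.

```python
import math
import random
import statistics

random.seed(0)

def damages(temp_rise):
    # Convex damage function: losses grow faster than linearly with warming.
    # The quadratic form is purely illustrative, not from the papers.
    return temp_rise ** 2

def expected_damages(median_sensitivity, spread, n=100_000):
    # Treat climate sensitivity as lognormal: fixed median, varying spread,
    # mirroring the right-skewed shape of published sensitivity estimates.
    mu = math.log(median_sensitivity)
    return statistics.fmean(
        damages(random.lognormvariate(mu, spread)) for _ in range(n)
    )

# Same best estimate (median 3 degrees C per doubling of CO2),
# but two different levels of uncertainty about it.
narrow = expected_damages(3.0, spread=0.2)
wide = expected_damages(3.0, spread=0.6)
# wide comes out larger than narrow: more uncertainty, more expected damage.
```

Because the distribution is right-skewed and the damage curve is convex, the extra weight uncertainty puts on high-warming outcomes outweighs the extra weight on low-warming ones, which is the ordinal direction the papers establish.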
Acceptance and Rejection of Science
But these two recent papers are only one side of the story of what Lewandowsky has been up to. He's also written a series of papers dealing with how people process information, either rejecting or accepting science. When I asked about the relationship, Lewandowsky replied, “The underlying 'theme' — if there is one — is how people respond to uncertainty or ambiguity. What the uncertainty papers analyze is how people should respond to uncertainty if they responded in a mathematically optimal way. What my rejection-and-acceptance work shows is how people actually respond to things they perceive to be uncertain. So the two streams of research are two sides of the same coin — what reality actually means and how people interpret it.”
There's also quite a bit more controversy surrounding this work — some of it intellectual, some anti-intellectual. In a 2012 paper, “The pivotal role of perceived scientific consensus in acceptance of science,” Lewandowsky and his co-authors reported on two studies. The first showed that acceptance of several scientific propositions — that HIV causes AIDS, that smoking causes lung cancer, and that human CO2 emissions cause global warming — all reflected a common factor, which in turn is correlated with a factor reflecting perceived scientific consensus. In short, the more that people perceived scientific consensus, the more they accepted scientific findings.
The second study demonstrated a causal relationship, showing that acceptance of human-caused (anthropogenic) global warming (AGW) increases when the scientific consensus is highlighted. Subjects were divided into two groups: the “control” group “greatly underestimated the AGW consensus,” placing it at 67 percent, while the second group, exposed to a graph and text highlighting the 97 percent consensus among climate scientists, still underestimated the consensus, but by much less — placing it at 88 percent. Not only did acceptance of global warming increase, the most dramatic finding was the neutralization of the effect of worldview, which otherwise had a significant impact. While there were significant differences due to worldview in the control group, the differences were "effectively nonexistent" for those exposed to consensus information, even though that information was only partially absorbed.
The impact of worldview can informally be seen in partisan trend polling data on global warming evidence perception from Pew (graph here), as well as snapshot data showing Tea Party Republicans as significant outliers, with views significantly different from other Republicans, whose views are surprisingly close to average. Academics take a more rigorous approach toward worldviews, though their specific methods may vary. In this case, worldview was measured via five items expressing attitudes about the free market.
Dan Kahan is a leading researcher who's skeptical of the implications of this paper. He's done extensive research into the resilience of perceptions, regardless of contrary information, and finds strong evidence of “identity-protective cognition” among people of all different worldviews. (He uses a two-dimensional model, as opposed to a more common one-dimensional left/right one.) Kahan was the subject of Ezra Klein's first cover story at Vox.com, and I first wrote about his work in a three-part series at Open Left in January 2010. Not only has Kahan shown that people are resistant to information that challenges their identities and the worldviews that support them, he's shown that more information tends to drive people apart in their views, rather than lead to convergence. People use more information to rationalize what they already believe, rather than to question and reformulate it. Kahan doesn't doubt the results Lewandowsky reported, he just thinks they won't stand up in the real world, in part because they haven't so far. (Kahan and Lewandowsky debated each other on the Inquiring Minds podcast co-hosted by Salon contributor Chris Mooney — write-up and podcast here.)
“The reason 'consensus' has not appeared to work in society at large to date isn’t because it’s ineffective — it’s because there is a well-funded countermovement out there that takes every opportunity to mislead the public into thinking that there isn’t a consensus,” Lewandowsky told me, in response to this argument.
But Kahan stresses it as an inescapable fact. “The real world has counter-messaging in it. That's one reason the experimental studies aren't externally valid, in my view,” he said. “Unless those counseling 'broadcast 97 percent' have a plan for stifling all the culturally grounded cues that real people will be exposed to that motivate them to discount the '97 percent' message — as they've been doing for 10 years — it is bad advice, in my view, to tell communicators that they can expect '97 percent' to work in the world as it does in the lab."
This is clearly the challenge — how to take Lewandowsky's results out of the lab. He doesn't have a plan, per se, but he does have a sense of direction, and a track record, and a growing community of like-minded collaborators to work with. Kahan and Lewandowsky differ most on how to reach conservatives and libertarians, but there's potentially more achievable progress to be made within the ranks of the reality-based community itself. To get the full picture of what this means, we first need to develop two historical threads — the growing understanding of the scientific process and its interactions with the wider world, which involves a widening circle of other experts, and the growing understanding of what stands in its way, which will bring us back to another emerging aspect of Lewandowsky's work.
To begin the first thread, we need to go back to 2004. That's when science historian Naomi Oreskes first published an initial survey of global warming literature, "Beyond The Ivory Tower: The Scientific Consensus on Climate Change." Climate scientists had long known that there was an overwhelming consensus on anthropogenic global warming, but Oreskes was the first to take a scientific approach to studying that consensus — just as Lewandowsky, more recently, has been the first to formalize what scientists informally knew about uncertainty.
Oreskes analyzed “928 abstracts, published in refereed scientific journals between 1993 and 2003, and listed in the ISI database with the keywords 'climate change.'” She found that 75 percent of papers accepted the consensus view “either explicitly or implicitly,” while “25 percent dealt with methods or paleoclimate,” and took no position on AGW. “Remarkably, none of the papers disagreed with the consensus position,” she found. Later studies have found a small sliver of dissenting views, but the more the consensus has been studied, the sturdier it appears, while the dissenting literature is dogged with repeated problems.
For example, a 2010 paper, “Expert credibility in climate change,” reconfirmed the 97 percent consensus figure, and found that “the relative climate expertise and scientific prominence of the researchers unconvinced of ACC [or AGW] are substantially below that of the convinced researchers." A 2013 paper, “Quantifying the consensus on anthropogenic global warming in the scientific literature,” examined “11,944 climate abstracts from 1991–2011” and found that “97.1 percent endorsed the consensus position,” while a parallel self-rating survey found that “97.2 percent endorsed the consensus.”
One of the co-authors of the 2013 study, Dana Nuccitelli, co-writes the Guardian's "Climate Consensus -- The 97%" blog. In an April 14 entry, he showed how far the study of climate science itself can take us in understanding the workings of anti-science as well. He detailed "4 ways contrarian papers get published: (1) Flawed paper gets into credible journal, (2) Flawed paper gets into off-topic journal, (3) Contrarian editor gets into journal, (4) Vanity journal created by contrarians." But that wasn't the primary topic of his column. His primary topic was the climate contrarian backlash against Lewandowsky's work on the second thread mentioned above — the growing understanding of what stands in the way of science, which began with worldviews and branched out from there.
Conspiracist Ideation — Better than Science at Playing Its Own Game
Taking up that second thread, in 2013, Lewandowsky co-authored two papers probing more deeply into the role of what's known as "motivated reasoning" as an obstacle to scientific knowledge. In "The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science," he approached the influence of worldviews and conspiratorial thinking (AKA "conspiracist ideation") as related types of motivated reasoning which interfere with scientific truth-seeking. In this instance, worldviews included both liberals vs. conservatives and free market orientation. Conspiracist ideation can be found across the ideological spectrum — even in the center, much to the chagrin of some — but it functions in similar patterns to resist unwanted information, regardless of subject matter.
Lewandowsky and his co-authors took a closer look at conspiracism specifically in the other paper, "NASA Faked the Moon Landing — Therefore, (Climate) Science Is a Hoax: An Anatomy of the Motivated Rejection of Science." The second paper touched a real nerve, producing a great deal of conspiracy theorizing about Lewandowsky himself, his co-authors and others. So, naturally, being a good scientist, Lewandowsky decided to study that as well. The result was a third paper, "Recursive Fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation," which was subsequently retracted by the publisher, following sharp attacks from climate contrarians — even though the publisher found nothing scientifically or ethically wrong with the paper. Britain's notoriously lax libel law (changed just this year) was supposedly the reason. Following a further retreat by the publisher, three editors with the journal resigned. Nuccitelli provides a good account in his column (as does Lewandowsky himself, here), where he notes that this is just the latest example of a pattern that's played out before. But first we need to go back and unpack the insights into conspiracist ideation that lie at the core of this work.
In the paper on conspiracist ideation and worldviews, the authors wrote, “The prominence of conspiracist ideation in science rejection is not unexpected in light of its cognitive attributes.” For one thing, it provides an out for people who don't like what the consensus says. “If you are faced with agreement among scientists, you have two choices,” Lewandowsky told me. “You either accept that they are on to something or… You think they all conspire to create a hoax for some nefarious reason. There aren’t too many other options, are there?"
“When you look at the history of science denial, there is plenty of evidence that a scientific consensus drives deniers into postulating such a conspiracy — from tobacco to AIDS to climate.”
A second reason conspiracist ideation crops up in resisting science is that it has greater explanatory reach than science, because it's not constrained by “the criteria of consistency and coherence that characterize scientific reasoning.”
“In the case of climate, this is — humorously — known as the 'Quantum theory of denial,'” Lewandowsky told me. “Deniers will claim in the same breath (or within a few minutes) that (a) temperatures cannot be measured reliably, (b) there is definitely no warming, (c) the warming isn’t caused by humans, and (d) we are doing ourselves a favor by warming the planet. The four propositions are incoherent because they cannot all be simultaneously true — and yet deniers will utter all those in close succession all the time.”
Finally, conspiracist ideation is also typically immune to falsification, “because contradictory evidence (e.g., climate scientists being exonerated of accusations) can be accommodated by broadening the scope of the conspiracy (exonerations are a whitewash), often with considerable creativity.”
“One good example for this is Jim Sensenbrenner, a Republican congressman who called the exonerations of climate scientists after ‘climategate’ a ‘whitewash,’” Lewandowsky said. “This happens all the time, and sometimes takes on rather baroque forms, e.g., when the United Nations is invoked.” Lewandowsky sometimes refers to this as the “self-sealing” property of conspiracist ideation. It can be absolutely maddening to try to argue against.
When I asked about other aspects of conspiracist ideation, I questioned whether it didn't reflect a quest for meaning, at the expense of information, along the lines of the mythos/logos distinction drawn by Karen Armstrong in "The Battle For God." Lewandowsky agreed.
“One of the aspects of conspiratorial thinking is — paradoxically — that it gives people a sense of control because it gives meaning to apparent randomness. It may be more comforting to some people to think that 9/11 was an “inside job” than accepting that it was a fairly random event triggered by a few fanatics.” Even more in line with Armstrong's thinking, he added, “I also think that there is a lot of identity politics in this, e.g., if Republicans generally think that climate change is a hoax, then it becomes a ‘tribal totem’ for others to pick up on this.”
As a further refinement, I noted that conspiracist ideation thrives on creating specific malicious others as a particularly powerful form of meaning-making. “Yes, absolutely,” Lewandowsky responded. “There is this tension between ‘victim’ and ‘hero’ within the conspiracist worldview that leads to those contradictory positions. On the one hand (the ‘hero’ frame) it is permissible to accuse scientists of fraud and harass them, but by the same token (‘victim’ frame) scientists must do nothing to cast aspersions on the accusers or to defend themselves. Arthur Koestler has referred to those people as ‘mimophants.’ It is crucial for the public to understand this.”
With these insights in mind, the experience of what happened with “Recursive Fury” becomes a tremendous learning opportunity — though, as Nuccitelli points out, it's an opportunity that's been repeatedly missed in the past.
Lessons to Learn
But there's a pattern here on the positive side as well. In 2004, Oreskes reframed the informal experience of climate consensus as a subject for formal knowledge — and that has helped climate scientists gain a much better self-understanding of what they're doing, through a succession of further studies. Likewise, just this past month, Lewandowsky's uncertainty papers have taken the informal knowledge that uncertainty means more unknown risk, and reframed uncertainty as the subject for scientific study. It's too soon to know what will happen as a result, but the potential is obvious if we can put an end to knee-jerk do-nothing arguments based on uncertainty.
Likewise, the dysfunctional impact of conspiracist ideation is just one example of how the scientific community itself has an underdeveloped immune system — as does the much broader “reality-based community,” which takes science seriously as a primary source of information. There would be great potential for any project that would take this informally recognized (by some) reality and make it the subject for rigorous scrutiny, just as Oreskes did in 2004 or as Lewandowsky did this year. That's a tall order, but it's possible that the two of them just might do something like that. They are currently working together on a paper on the effects of denial on the scientific community. It's obviously way too soon to say what it will look like, much less what impact it will have. But we do know that bringing “peripheral,” informally known subject matter to light can be an incredibly powerful way of moving human understanding forward. And there's no field of human understanding that needs that more than the science of saving our planet — and, not incidentally, ourselves.