In a recent article for Quillette, the “Intellectual Dark Web’s” online safe space, Harvard psychologist Steven Pinker offers some reflections on his most recent book, "Enlightenment Now," one year after it was published. Pinker notes that EN (as I will abbreviate it), a Pollyannaish paean to Enlightenment "progressionism," has been the target of “irate attacks from critics on both the right and the left.”
Some have pointed out that modern racism more or less originated in the Enlightenment — contra Pinker — while others have accused the professor of various forms of scholarly malpractice by offering readers a skewed presentation of the facts. As the Princeton historian David Bell puts it, EN “makes use of selective data, dubious history, and, when all else fails, a contempt for ‘intellectuals’ straight out of Breitbart.”
Since my primary area of academic research is “existential threats,” and since Pinker has an entire chapter dedicated to this issue — aptly titled “Existential Threats” — I decided to pull out my exegetical microscope and take a closer look at what Pinker had to say. What I found was startling, but not altogether surprising in light of Bell’s observations: Mined quotes, cherry-picked data, false dichotomies, misrepresented research, misleading statements and outright false assertions on nearly every page.
The problems were so pervasive that I ended up writing an extended critique of just a few pages of the chapter. (Examining the entire book would have been simply overwhelming.) In running a fine-tooth comb through single sentences and individual citations, I discovered a veritable treasure-trove of infractions, ranging from the trivial — for example, easily avoidable spelling mistakes — to the egregious — for example, attacking straw men and then declaring intellectual victory. In short, my reading of EN revealed not merely that the author (or his researchers) is fallible, but that EN consists, at least in places, of blatantly bad scholarship.
Let’s focus on just a few of these epistemic sins, beginning with the more amusing. In a passage seemingly intended to embarrass those who worry about “The End of the World,” Pinker writes the following: “As the engineer Eric Zencey has observed, ‘There is seduction in apocalyptic thinking. If one lives in the Last Days, one’s actions, one’s very life, take on historical meaning and no small measure of poignance.’”
I was intrigued by this quote, so I decided to track down its origin. The reference that Pinker provides is to none other than a Reason article written by Ronald Bailey, a libertarian and advocate of free-market solutions to climate change. Since Pinker strives not only to reach a wide audience with his books but also to have those books exemplify good scholarly work — he is, after all, a professor at arguably the most prestigious institution of higher learning in the world — I was surprised that he didn’t cite the original source, as normal scholarly practice would dictate.
Much more suspicious, though, is that Bailey doesn’t provide a citation for the Zencey quote in his article. After some digital sleuthing, I tracked down Zencey’s contact information and sent him an email about the quote. First, it turns out that Zencey isn’t an engineer, as Pinker claims, but a political economist. That’s a small mistake, but in a highly influential bestseller by one of the most visible brainiacs in the world, one should expect better. Furthermore, Zencey was quite exasperated by the misleading way that Pinker employs his quote. As he said to me via email:
I appreciate your effort to nail down the source, and I especially appreciate the opportunity to set the record a great deal straighter than it has been. That quotation has bedeviled me. It is accurate but taken completely out of context. … You’d be doing me a service if you set the record straight.
The original source is a quite contemplative article titled “Apocalypse and Ecology” that Zencey published in 1988. It describes how Zencey once anticipated a “coming transcendence of industrial society,” a sort of “apocalyptic redemption” that would bring about a new era marked by “the freedoms we would enjoy if only political power were decentralized and our economy given over to sustainable enterprises using renewable fuels and minimizing resources.” This view actually expressed a hopeful form of apocalypticism, which is why Zencey describes it in the quote that Pinker uses as “seductive.” As Zencey writes, “we were optimists, filled with confidence in the power of education.” Thus, as Zencey told me,
too many people use that quotation [about “apocalyptic thinking”] to make it seem that I line up against the idea that we face an ecological apocalypse. If on reading ["Apocalypse and Ecology"] you think I wasn’t sufficiently apocalyptic about the damage humans are doing to the ecosystems that are their life-support system, I can only plead that in 1988 we knew far less than we know now about how rapidly our ecological problems would foreclose upon us, and I wanted the ecology movement to reach an audience, not leave itself vulnerable to being apparently disproven in the short run.
In brief, Pinker borrowed a quote from Bailey, who didn’t cite the original source and who lifted the quote from its original context to mean the opposite of what Zencey had intended. This led Zencey to confess to me, “how this guy [i.e., Pinker] managed to become a public intellectual in fields so far removed from his expertise is something to wonder at.”
If this were a single misdeed, one could perhaps forgive it. But it’s not the only error of this sort within a single page of EN. Pinker also misuses a quote from a New York Times book review written by Kai Bird. Pinker, who observes that “humanity has a finite budget of resources, brainpower, and anxiety,” quotes Bird as saying, “these grim facts [about different doomsday scenarios] should lead any reasonable person to conclude that humanity is screwed.” To which Pinker adds that “if humanity is screwed, why sacrifice anything to reduce potential risks?”
But in context, Bird’s quote doesn’t mean that there’s no hope, nor does it apply to risks in general — as Pinker implies in EN — but rather to the specific threat posed by nuclear conflict. Indeed, I sent Bird an email in which I summarized what I thought he actually meant, versus how Pinker uses his quote. He responded: “I think you are accurately reflecting my views — I didn’t mean to say Armageddon is inevitable.” (Bird adds that he also believes “that the use of [nuclear] weapons is more likely than not.”)
A similar problem arises when Pinker writes that “as long as we are entertaining hypothetical disasters far in the future, we must also ponder hypothetical advances that would allow us to survive them, such as growing food under lights powered with nuclear fusion.” This piqued my curiosity because I know of several “existential risk” scholars working on precisely these issues — and sure enough, Pinker here cites a book written by my colleagues David Denkenberger and Joshua Pearce, titled "Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe."
The problem is that, when one looks at the citation, Pinker seems to be implying that Denkenberger and Pearce endorse growing food under lights powered by nuclear fusion. But this is not the case, since, as Denkenberger and Pearce note, growing food under lights with currently available electricity would be too inefficient and pricey. When I contacted Denkenberger about Pinker’s statement, he responded that “we never said nuclear fusion because we were only talking about currently available tech (e.g., nuclear fission), so you can definitely correct Pinker on that.”
Even worse is Pinker’s discussion of advanced artificial intelligence. In making his case that worries about superintelligence are unfounded, he describes a computer scientist at Berkeley, Stuart Russell, as one of the “AI experts who are publicly skeptical” that “high-level AI pose[s] the threat of ‘an existential catastrophe.’” But, in reality, Russell is actually one of the most prominent AI experts sounding the alarm that poorly designed AI systems could bring about the total annihilation of humanity! This is a bit like calling Darwin a creationist, or saying that Einstein believed that “God really does play dice.” It’s baffling that such a flagrant mistake made it to print — indeed, Russell himself told me via email that he’d “seen this and I agree it’s an incorrect characterization.”
Furthermore, Pinker seriously misunderstands the arguments for why creating superintelligent machines could have disastrous consequences. First, he somewhat disparagingly refers to this scenario as the “Robopocalypse,” which is misleading, since no serious AI experts are worried about an uprising of robots. Rather, they are worried about whether we could maintain control of what would almost certainly be the most powerful technology ever invented: Smarter-than-human algorithms.
Second, Pinker’s criticisms of AI-doomsday scenarios completely miss the mark. In the wake of EN, some AI risk scholars have even started talking about “AI denialism,” on the model of “climate denialism,” in peer-reviewed academic papers, explicitly linking this phenomenon to Pinker’s wrongheaded conception of the subject in EN and elsewhere. As the "riskologist" Seth Baum writes, “Pinker recently articulated a superintelligence skepticism that some observers have likened to politicized climate skepticism.”
In a separate paper titled “Countering Superintelligence Misinformation,” Baum states that Pinker’s remarks “will occasionally be used [in Baum’s paper] as an example of superintelligence misinformation because they are so clearly false.” Later on, Baum notes that “some statements about superintelligence are clearly false. For example, this statement from Steven Pinker: ‘As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious, but also because the concept is barely coherent.’” Here, “AGI” means “artificial general intelligence,” and it’s just plain wrong that there are no projects to build AGI: There are at least 45, including one at Pinker’s own university. Once again, Pinker showed up to class without having done his homework.
The computer scientist Roman Yampolskiy echoes this sentiment in a message to me, saying that
denialism is very common, but unlike other types, AI risk denialism is not limited to fringe elements and conspiracy theorists. In fact, a large number of accomplished and brilliant public intellectuals suffer from it. Dr. Pinker is no exception, and he uses his deep knowledge of psychology to make pseudo-profound statements about AI risk such as: “It’s a projection of evolved alpha-male psychology onto the concept of intelligence …” If only he could explain what would prevent an advanced artificial intelligence from making catastrophic mistakes or being explicitly programmed to cause maximum damage, all of us in the AI risk community would be able to sleep much better.
Finally, and perhaps most significantly, Pinker stumbles on the very first step of his “Existential Threats” chapter by framing the debate as being between the optimists and the pessimists, where the latter category includes scientists worried about human extinction or civilizational collapse. But this completely mischaracterizes the situation. In reality, many of the secular scholars who are the most worried about existential threats are also “techno-utopians” who believe that, if only we survive, our descendants could colonize the known universe, eliminate all disease, reverse aging, upload our minds to computers, radically enhance our cognition and so on. The result could be the realization of astronomical amounts of value — and this is why studying existential threats, even improbable ones, with an eye toward ensuring that they never happen matters so very much.
In fact, Nick Bostrom, the Oxford philosopher who coined the term “existential risk” in 2002, is also an outspoken “transhumanist” who once published an article titled “Letter From Utopia.” In this missive, a future post-human being describes its life of extraordinary happiness and “surpassing bliss.” But as the fictional character declares, “to reach Utopia, you must first discover the means to three fundamental transformations.” One is to “elevate well-being” and another to “upgrade cognition,” but the most important is to “secure life!” This gets at the crucial point: Many optimists — people who see a possible techno-utopian state in our future — are also worried about the insecurity of life today, in the 21st century, since an existential disaster would by definition prevent techno-utopia from becoming techno-reality.
So the truth is exactly the opposite of what Pinker would have you believe: There aren’t two distinct schools of thought, two antithetical traditions battling it out over whether “we are at the most dangerous moment in the development of humanity,” to quote Stephen Hawking in 2016. Rather, some people are worried about existential threats precisely because they believe that the future could be unimaginably marvelous.
In a tepid review of Malcolm Gladwell’s book "Outliers: The Story of Success" published in 2009, Pinker writes that, “the reasoning in 'Outliers,' which consists of cherry-picked anecdotes, post-hoc sophistry, and false dichotomies, had me gnawing on my Kindle.” Pinker then limns Gladwell as a “minor genius who unwittingly demonstrates the hazards of statistical reasoning and who occasionally blunders into spectacular failures.” I can relate to these frustrations, because I feel them about EN’s “minor-genius-level” discussion of existential threats. Indeed, the examples above are just a few that I could have mentioned.
Let me end with a call to action: Don’t assume that Pinker’s scholarship is reliable. Comb through particular sentences and citations for other hidden — or perhaps intentionally concealed — errors in "Enlightenment Now." Doing so could be, well, enlightening.