Nate Silver: "Mostly I was getting credit for having pointed out the obvious -- and most of the rest was luck"

My “perfect” 2012 forecast was fortuitous -- but contributed to the perception that statisticians are soothsayers

Published February 22, 2015 7:00PM (EST)

Nate Silver (MSNBC)

Excerpted from “The Signal and the Noise”

At about the time “The Signal and the Noise” was first published in September 2012, “Big Data” was on its way to becoming a Big Idea. Google searches for the term doubled over the course of a year, as did mentions of it in the news media. Hundreds of books were published on the subject. If you picked up any business periodical in 2013, advertisements for Big Data were as ubiquitous as cigarettes in an episode of Mad Men.

But by late 2014, there was evidence that the trend had reached its apex. The frequency with which “Big Data” was mentioned in corporate press releases had slowed and possibly begun to decline. The technology research firm Gartner even declared that “Big Data” had passed the peak of its “hype cycle.”

I hope that Gartner is right.

Coming to a better understanding of data and statistics is essential to help us navigate our lives. But as with most emerging technologies, the widespread benefits to science, industry and human welfare will come only after the hype has died down.

I worry that certain events in my life have contributed to the hype cycle. On November 6, 2012, the statistical model at my website FiveThirtyEight “called” the winner of the American presidential election correctly in all 50 states. I received a congratulatory phone call from the White House. I was hailed as “lord and god of the algorithm” by The Daily Show’s Jon Stewart. My name briefly received more Google search traffic than the Vice President of the United States.

I enjoyed some of the attention, but I felt like an outlier -- even a fluke. Mostly I was getting credit for having pointed out the obvious -- and most of the rest was luck.

To be sure, it was reasonably clear by Election Day that President Obama was poised to win reelection. When voters went to the polls on election morning, FiveThirtyEight’s statistical model put his chances of winning the Electoral College at about 90 percent. A 90 percent chance is not quite a sure thing: Would you board a plane if the pilot told you it had a 90 percent chance of landing successfully? But when there’s only reputation rather than life or limb on the line, it’s a good bet. Obama needed to win only a handful of the swing states where he was tied or ahead in the polls; Mitt Romney would have had to win almost all of them.

But getting every state right was a stroke of luck. In our Election Day forecast, Obama’s chance of winning Florida was just 50.3 percent -- the outcome was as random as a coin flip. Considering other states like Virginia, Ohio, Colorado and North Carolina, our chances of going 50-for-50 were only about 20 percent. FiveThirtyEight’s “perfect” forecast was fortuitous but contributed to the perception that statisticians are soothsayers -- only with computers rather than crystal balls.
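The arithmetic behind that roughly 20 percent figure can be sketched in a few lines. The per-state probabilities below are illustrative assumptions, not FiveThirtyEight's actual published numbers (only Florida's 50.3 percent comes from the text), and treating the calls as independent is a simplification -- real polling errors are correlated across states:

```python
from math import prod

# Hypothetical per-state probabilities of calling each competitive
# state correctly. Only Florida's 50.3 percent is from the text;
# the rest are illustrative stand-ins.
call_probabilities = {
    "Florida": 0.503,
    "Virginia": 0.79,
    "Ohio": 0.91,
    "Colorado": 0.80,
    "North Carolina": 0.74,
}

# Assuming the calls are independent (a simplification), the chance
# of getting every one right is the product of the individual
# probabilities -- each near-certain state still shaves the total down.
p_all_correct = prod(call_probabilities.values())
print(f"Chance of a perfect slate: {p_all_correct:.1%}")
```

With these stand-in numbers the product lands near one in five, which is the point of the passage: a string of individually good bets still makes a perfect slate unlikely.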

This is a wrongheaded and rather dangerous idea. American presidential elections are the exception to the rule -- one of the few examples of a complex system in which outcomes are usually more certain than the conventional wisdom implies. (There are a number of reasons for this, not least that the conventional wisdom is often not very wise when it comes to politics.) Far more often, as this book will explain, we overrate our ability to predict the world around us. With some regularity, events that are said to be certain fail to come to fruition -- or those that are deemed impossible turn out to occur.

If all of this is so simple, why did so many pundits get the 2012 election wrong? It wasn’t just on the fringe of the blogosphere that conservatives insisted that the polls were “skewed” toward President Obama. Thoughtful conservatives like George F. Will and Michael Barone also predicted a Romney win, sometimes by near-landslide proportions.

One part of the answer is obvious: the pundits didn’t have much incentive to make the right “call”. You can get invited back on television with a far worse track record than Barone’s or Will’s -- provided you speak with some conviction and have a viewpoint that matches the producer’s goals.

An alternative interpretation is slightly less cynical but potentially harder to swallow: human judgment is intrinsically fallible. It’s hard for any of us (myself included) to recognize how much our relatively narrow range of experience can color our interpretation of the evidence. There’s so much information out there today that none of us can plausibly consume all of it. We’re constantly making decisions about what website to read, which television channel to watch and where to focus our attention.

Having a better understanding of statistics almost certainly helps. Over the past decade, the number of people employed as statisticians in the United States has increased by 35 percent even as the overall job market has stagnated. But it’s a necessary rather than sufficient part of the solution.

From “The Signal and the Noise” by Nate Silver, published on February 3, 2015 by Penguin, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright by Nate Silver, 2015.

By Nate Silver
