Nate Silver has a Donald Trump problem: Where does data journalism go now?

The press either abetted Trump's rise or got it all wrong. But the stakes seem higher for one-time oracles at 538

Published May 15, 2016 3:59PM (EDT)

Donald Trump, Nate Silver (Reuters/L.E. Baskow/MSNBC/Photo montage by Salon)

In 2012, when I first saw FiveThirtyEight founder Nate Silver making the talk show rounds to tout his site, I was excited. He talked about bringing critical thought to data, striving for better polling analysis, and renewing our collective faith in statistics. He spoke with confidence about the power of pure data analysis as a predictive tool, and I bought every line. FiveThirtyEight has lived up to some of that early promise, but it has also begun to make disastrous missteps that are leading some to ask whether there is even a place for data journalism going forward.

Listen to FiveThirtyEight’s Elections podcast and the message you’ll hear Silver and his crew repeat most often is a warning against overconfidence in polls. Polls, he explains, are of varying quality and must be gathered together with other variables to make predictions. This data aggregation, however, has led FiveThirtyEight to commit its own deadliest sin: in abstracting real information into a predictive figure, the site has built a model in which it has far too much faith.
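
What “aggregation” means in practice is worth making concrete. The sketch below is a toy weighted polling average only, not FiveThirtyEight’s actual model (which also accounts for pollster ratings, recency, trend lines and demographics); every poll figure, sample size and quality weight in it is hypothetical.

```python
# Toy illustration of a weighted polling average.
# NOT FiveThirtyEight's model: the poll shares, sample sizes and quality
# weights below are hypothetical, chosen only to show the mechanics.

polls = [
    # (candidate's share in %, sample size, quality weight for the pollster)
    (38.0, 1200, 0.9),
    (35.0,  800, 0.6),
    (41.0,  500, 0.4),
]

def weighted_average(polls):
    """Combine polls, giving more influence to larger samples and higher-rated pollsters."""
    total_weight = sum(n * q for _, n, q in polls)
    return sum(share * n * q for share, n, q in polls) / total_weight

print(f"Aggregate estimate: {weighted_average(polls):.1f}%")
```

The point of the sketch is simply this: the output is a single confident-looking number, while every weighting choice that produced it is a judgment call.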

Wikipedia refers to FiveThirtyEight as a “polling aggregation website,” but FiveThirtyEight has become a cultural phenomenon. Its articles aren’t just graphs and charts, but intelligent political analyses with confident conclusions backed by in-depth statistics. As big data transforms the business and technical worlds, we are more than ready for it to transform our cultural conversations. As the cultural landscape becomes more chaotic, we are desperately seeking order. The problem is that the same chaos makes the handful of historical elections for which we have good polling data less predictive of the future.

Perhaps the data renaissance emerged at the wrong time. It might have established more credibility by now if it had emerged in the more predictable '90s. But in truth, any statistician could tell you that the sample of elections from the past 50 to 100 years is not large enough to draw meaningful conclusions, and FiveThirtyEight’s writers must have known that. Set the math aside and the problem is even more obvious. Could any predictive algorithm take into account the major events that occur over the course of a campaign? Could any model built on data from pre-Civil War elections have retained predictive power in the period directly after?

This is a major problem with the data approach to politics: there are so many invisible variables, feedback loops and dependencies that even a model you can adjust over time will inevitably fail spectacularly. Does this mean there isn’t a place for data journalism, or even for predictive algorithms? Absolutely not. Just as we aren’t going to stop taking polls, we shouldn’t stop trying to use them to assess the chances of different outcomes. But we do need to drastically change the way data is presented for it to be a useful part of journalism.

How data journalism can survive after Trump

In many ways the 2016 presidential campaign was set to be the perfect moment for data journalism. Just as the brilliant political data journalists at FiveThirtyEight were gaining attention, a perplexing field of sixteen GOP candidates presented itself, and an oddly uncompetitive blowout seemed to be brewing in the four-person Democratic field. Traditional journalists had been humiliated in the last presidential election cycle by repeatedly touting the front-runner status of one GOP candidate after another, only to see each of them disappear. They were ready to start following FiveThirtyEight and taking its predictions seriously. FiveThirtyEight, for its part, was well aware of the mishaps of 2012 and determined to improve its predictive model. So when Donald Trump started leading in the polls, the site looked at those numbers and his high unfavorability ratings and determined that this was a moment, and a candidate, that would be forgotten by the end of the campaign. The problem was, he didn’t fade into the background like Herman Cain and Rick Santorum. He kept winning, even as they kept predicting his demise. State-by-state predictions of Sanders’ losses proved similarly misguided.

Traditional journalists, meanwhile, were caught off guard. Determined not to play the fool, they didn’t treat Trump like a serious candidate, perhaps in part because the smart folks at FiveThirtyEight claimed, again and again, that he was about to hit a ceiling on his support and that the possibility of his ultimate success was historically unprecedented on every level. In light of these continued predictive failures, FiveThirtyEight’s writers have been contrite and introspective, but they may have learned the wrong lessons.

The New York Times’ Jim Rutenberg recently published a biting defense of “shoe-leather reporting” and a broadside against FiveThirtyEight’s ilk. In it he goes so far as to imply that data journalism’s skepticism of Trump led other media to underestimate his candidacy. In response, the site offered its rebuttal on its Elections podcast. Silver explained that Rutenberg’s self-righteousness and FiveThirtyEight’s commitment to self-improvement are why data journalism will win in the end. He and colleague Harry Enten highlighted the small sample sizes they had to work with and committed themselves to a more concrete algorithm that could be improved, and held accountable, going forward. But small sample sizes are a problem they already knew about, and should have made clear when writing under headlines like “Dear Media, Stop Freaking Out About Donald Trump’s Polls” (in which they led with Trump’s chances on the betting markets being 20 percent). The commitment should not just be to better methods, but to presenting their conclusions with more transparency and less certainty even as their predictive power improves.

Things may look dire for data journalism now, and Silver’s tone in his impassioned rebuttal certainly seemed duly concerned, but the truth is there is enough blame to go around. Journalists should depend on more than one source of data analysis (including in-house data journalism), and should take early predictions lightly. In fact, there is a good argument to be made that traditional journalists should pass on data without trying to interpret its significance for their readers. Anyone who makes it on stage for a major party’s presidential debate deserves serious and careful consideration. Further, while FiveThirtyEight is vilified in Rutenberg’s attack, it was in fact a somewhat prescient voice of reason regarding Trump as early as last December. Still, FiveThirtyEight repeatedly sent mixed messages about Trump’s chances: it warned against trying to predict the outcome early on while simultaneously talking about a ceiling on his support based on favorability polls (which he blew past) and pointing out how little Trump resembled previous successful candidates. The GOP leadership, the media, and much of the public all underestimated the possibility of Trump securing the nomination, and all must reconsider how much they can rely on historical precedent.

I, for one, want to see data journalism increasingly integrated into more traditional reporting, because there is a need to present both the big picture and the less quantifiable variables that come from examining the multifaceted individuals involved in the political process. Both of these aspects need to be presented together, because when they are separated they can paint a misleading picture. I want to see FiveThirtyEight continue to be a part of this, and I believe its tone will change going forward. Here’s to a future full of knowledge, and devoid of oracles.


By Justin Taylor

Justin Taylor is the co-president of MIT-based Academics of the Future of Science
