Another election, another round of poll bashing. Is that fair?

Forecasts once again substantially underestimated the extent of support for Trump. But does that mean they failed?

Published November 7, 2020 7:45AM (EST)

Donald Trump, Joe Biden, and the NYT poll dials (Photo illustration by Salon/Getty Images/NYT)

This article originally appeared on Undark.

Four years ago, polls indicated that then-Democratic presidential nominee Hillary Clinton would handily beat her Republican opponent, Donald J. Trump. Based on those polls, one prominent election forecaster, Princeton University neuroscience professor Sam Wang, even called the race for Clinton several weeks before Election Day, promising to eat an insect if he was wrong.

Wang ate a cricket on CNN, and in May of 2017, a committee at the American Association for Public Opinion Research, or AAPOR, released a post-mortem of the polls' performance. The report acknowledged shortcomings and suggested reforms to "reduce the likelihood of another black eye for the profession."

Many pollsters and forecasters did make changes before the 2020 election. Once again, though, polls pointed toward big Democratic wins in key states, fueling optimism among progressives. And on Tuesday night, as it became clear that polls and forecasts had once again substantially underestimated the extent of support for Trump, the backlash was quick.

"We should never again put as much stock in public opinion polls, and those who interpret them, as we've grown accustomed to doing," Washington Post media critic Margaret Sullivan wrote on the morning after Election Day. The discipline, she continued, "seems to be irrevocably broken, or at least our understanding of how seriously to take it is." A headline in The Atlantic's Ideas section declared a "polling crisis." A disgruntled columnist for The Daily Beast joked that it was time "to kill all the pollsters."

Election forecasters did predict that Biden would win the Electoral College, which, as of this writing, appears to be correct, though disputes over that outcome may well take weeks or months to settle. But as in 2016, the odds of such a close race were again considered small, and polling errors were particularly pronounced in Florida and parts of the Midwest. For example, polling averages at The New York Times had indicated that Biden was up by 10 percentage points in Wisconsin and 8 in Michigan, and that Ohio would narrowly go to Trump. Complex models at The Economist and FiveThirtyEight produced similar projections. Instead, the Trump campaign has vowed to ask for a recount in Wisconsin, Biden appears to have won Michigan only narrowly, and Trump won Ohio by around 8 points. (All of these tallies are still provisional.)

Does this mean that something is fundamentally broken about political polling? Some experts say that's not entirely fair. "I think it's clear that there were some problems with polling this year, but a lot of this reaction strikes me as very premature," said Courtney Kennedy, the director of survey research at the Pew Research Center and the chair of AAPOR's 2016 post-mortem committee, in a Thursday afternoon interview.

"It's like, my goodness, let's pump the brakes," she added. "The election is not even over, there are still millions of votes to be counted."

* * *

Kennedy and other experts acknowledge that the 2020 election polls raise questions for pollsters about some of their methods. But many pollsters also argue that the backlash reflects misguided conceptions about what polls actually do — and that some blame lies with the wider ecosystem of pollsters, poll aggregators, and forecasters that has blossomed in the past 15 years.

At the base of this informational food chain are the pollsters, who use a range of information-gathering methods — including interviewer phone calls, online questionnaires, automated calls, and sometimes text messages — to reach samples of voters and to gauge their feelings on a variety of issues. Poll aggregators then take large numbers of those surveys and average them together, in the hopes of getting more reliable figures than any single poll. Many aggregators are also forecasters, feeding their figures into complex computer models that attempt to actually predict election outcomes. This is the methodology that would eventually make Nate Silver's FiveThirtyEight operation famous, and it is what many large media companies, from The New York Times to CNN, now emulate as a matter of election-year routine.
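
In its simplest form, poll aggregation is little more than a weighted average. The sketch below, which uses invented polls and a weighting scheme (sample size plus a recency decay) chosen purely for illustration, shows the basic idea; real aggregators also adjust for pollster quality, house effects, and trendlines.

    # A minimal poll-averaging sketch with invented numbers. Real aggregators
    # also adjust for pollster quality, house effects, and trendlines.
    from dataclasses import dataclass

    @dataclass
    class Poll:
        dem: float        # Democratic candidate's share, in percent
        rep: float        # Republican candidate's share, in percent
        sample_size: int  # number of respondents
        days_old: int     # days since the poll left the field

    def aggregate(polls, half_life_days=14.0):
        """Average the polls' margins, weighting by sample size and recency."""
        total_weight = 0.0
        weighted_margin = 0.0
        for p in polls:
            # Bigger samples count more; older polls decay exponentially.
            weight = p.sample_size * 0.5 ** (p.days_old / half_life_days)
            weighted_margin += weight * (p.dem - p.rep)
            total_weight += weight
        return weighted_margin / total_weight

    # Three hypothetical state polls (figures are illustrative only).
    polls = [Poll(52, 44, 800, 2), Poll(49, 45, 1200, 6), Poll(51, 42, 600, 10)]
    print(f"Aggregated margin: D +{aggregate(polls):.1f}")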

Andreas Graefe, an election forecasting researcher, said that these forecasting models have become more sophisticated, and they've improved to include and account for a wide array of potential errors. But, he added, "I wouldn't say that that really helped accuracy." Graefe has helped run PollyVote, an academic election forecasting project, since 2007. Over that time, he said, he has seen election forecasting become a big business. "What has changed, definitely, is forecasts as a media product," he said.

Silver worked as a baseball analyst before he began forecasting elections. The two roles, he wrote in 2008, "are really quite similar," and he has risen to prominence at a time when politics coverage has come to more closely resemble sports media. (CNN president Jeff Zucker, who has pushed the network to model elements of its election coverage on ESPN, told The New York Times in 2017 that "the idea that politics is sport is undeniable, and we understood that and approached it that way.")

Remarkably on-target predictions in the 2008 and 2012 elections helped propel poll aggregators and forecasters like Silver to new national prominence. Then, in the 2016 presidential election, most forecasting models consistently and substantially underestimated Trump's success. While a few were more cautious, some popular forecasters had projected that Clinton's odds of winning were close to 100 percent, feeding confidence among progressives.

The subsequent AAPOR report analyzed why some of the polls underlying these forecasts had been so wrong. Part of the issue, the committee concluded, was that pollsters had sampled too few White voters without a college education, who had turned out heavily for Trump. In addition, the AAPOR committee found, many voters had made up their minds shortly before the election, perhaps after responding to polls — and most of those went for Trump.

The committee also examined the argument that some people simply lied to pollsters about their intention to vote for Clinton, perhaps because they were embarrassed about their decision to support Trump instead. (This has sometimes been called the "shy Trumper hypothesis.") The polling association found little evidence for this, but some experts continue to argue that the effect may play a role in what one scholar described as Trump's "unpollability."

Whatever the constellation of reasons, many pollsters made adjustments in 2020 in an attempt to correct for potential blind spots. But, again, Trump outperformed the polls in many closely watched swing states, and some experts say it's now apparent that some aspects of the electorate's mood and tendencies are not being captured by existing survey methods. "It's pretty clear that some people in the country" (perhaps especially Republicans and people without college degrees) "are less likely to take surveys," said Kennedy. "We knew that. But the feeling was, that's okay, we can overcome that as long as the pollster is really responsible and skilled and weights their data properly."

Essentially, these pollsters, drawing on other sources of data, fine-tune their samples to reflect the estimated number of self-identified Republicans in a given area. That way, even if Republicans are, say, systematically less likely to answer the phone than Democrats, they would be adequately represented in the sample. But, Kennedy said, it's now unclear if weighting methods like this worked. It's also possible, she added, that the Republicans who agree to be interviewed for polls are not very good proxies for Republicans who don't take surveys. "That's kind of an alarming possibility," she said.
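
A bare-bones version of the party-identification weighting Kennedy describes might look like the following sketch. The target shares and the survey responses are invented for illustration; in practice pollsters rake on several variables at once (education, age, region, and so on), not a single one.

    # Minimal post-stratification weighting on a single variable (party ID).
    # Target shares and responses are invented; real surveys weight on many
    # variables at once, not just one.
    from collections import Counter

    responses = ["R"] * 30 + ["D"] * 50 + ["I"] * 20   # who actually answered
    targets = {"R": 0.40, "D": 0.40, "I": 0.20}        # assumed electorate shares

    counts = Counter(responses)
    n = len(responses)

    # Each respondent's weight is (target share) / (observed share) for their
    # group, so underrepresented groups count more and overrepresented ones less.
    weights = {group: targets[group] / (counts[group] / n) for group in targets}

    for group, w in weights.items():
        print(f"{group}: observed {counts[group] / n:.0%}, "
              f"target {targets[group]:.0%}, weight {w:.2f}")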

* * *

Still, Kennedy and other pollsters point out that even the best polls are just estimates. So are forecasts. After the 2016 election, prominent poll aggregators tried to do more to convey the fuzziness in their models. FiveThirtyEight, for example, started showing a range of possible outcomes at the top of its forecast, instead of leading with a specific number.

Whether those changes were effective at preventing unwarranted confidence among election watchers is unclear. "The forecasters can do great work on communication in their own right, but once they've put it out there, they've lost control of it," said Natalie Jackson, the director of research at the Public Religion Research Institute, a nonprofit that conducts public opinion polls on a range of topics, with a particular emphasis on religion. Even people who are well aware of the limits of election forecasts, she said, seem to check them constantly, perhaps in search, she thinks, of comfort or clarity.

"Humans are just really bad with uncertainty," she said.

Indeed, Jackson and other experts argue that the buzz around predicting elections — along with some of the backlash when those predictions seem to fall short — misses the actual function of opinion polling. "The purpose of polls is not to predict an election," said Ashley Kirzinger, associate director of public opinion and survey research at the Kaiser Family Foundation.

"What we're trying to do," she added, "is provide some insight into what voters are thinking about in the months and weeks leading up to an election."

Pollsters ask voters about their opinions on key issues, not just their vote. But, said Kirzinger, "I think the public, unfortunately, has taken polls to mean just 'things that feed aggregators,' and not actually dig into what the polls are actually telling us."

Aggregators and forecasters, meanwhile, suggest that the public needs to better learn what aggregation and forecasting actually do, which is to model and then give odds to a variety of potential outcomes, not to predict a winner. When FiveThirtyEight gave Trump, in the waning days of the 2020 campaign, a 10 percent chance of victory, for example, the site very prominently contextualized those odds: "A 10 percent chance of winning is not a zero percent chance," the site reminded visitors. "In fact, that is roughly the same odds that it's raining in downtown Los Angeles. And it does rain there."
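
One way to make the point that 10 percent is not zero concrete is a toy simulation: draw the final margin from a distribution centered on the polling average, with an error term sized to reflect historical misses, and count how often the trailing candidate comes out ahead. The margin and error figures below are invented for illustration and are not any forecaster's actual inputs.

    # Toy forecast: how often does the trailing candidate still win?
    # The polling margin and error size are invented, not any real model's inputs.
    import random

    random.seed(0)
    poll_margin = 8.0   # leader's polling advantage, in percentage points
    error_sd = 6.0      # assumed standard deviation of the polling error
    runs = 100_000

    # An "upset" is a simulated outcome where the final margin flips negative.
    upsets = sum(1 for _ in range(runs) if random.gauss(poll_margin, error_sd) < 0)
    print(f"Trailing candidate wins in {upsets / runs:.0%} of simulated elections")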

Whether or not that's a comfort to voters who feel that the forecasts failed them again, Graefe says there's a place for this sort of pre-election survey data crunching. "I'm an advocate of trying to inform voters, and decision makers in general, and giving them the best information possible," he said. The models in 2020, he pointed out, suggested, seemingly correctly, that Biden could win even if there was a large polling error — information that, he said, has value.
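
A crude way to ask the question Graefe raises (could Biden win even with a large polling error?) is to shift every battleground's polling margin toward Trump by a fixed amount and re-tally the electoral votes, as in the sketch below. The states, margins, and safe-state totals are placeholders loosely based on the figures cited above; actual models treat the error as correlated random draws across all states rather than a single uniform shift.

    # Crude stress test: shift every battleground's margin toward Trump by a
    # fixed amount and re-tally electoral votes. Margins are placeholders.
    battlegrounds = {   # state: (polled Biden margin in points, electoral votes)
        "WI": (10.0, 10),
        "MI": (8.0, 16),
        "PA": (5.0, 20),
        "FL": (2.0, 29),
        "AZ": (3.0, 11),
    }
    SAFE_BIDEN_EV = 232   # assumed electoral votes from states not listed above
    NEEDED = 270

    def biden_electoral_votes(uniform_error):
        """Biden's electoral votes if every poll overstates him by `uniform_error` points."""
        ev = SAFE_BIDEN_EV
        for margin, votes in battlegrounds.values():
            if margin - uniform_error > 0:
                ev += votes
        return ev

    for err in (0, 2, 4, 6):
        ev = biden_electoral_votes(err)
        print(f"Uniform polling error of {err} points: Biden {ev} EV "
              f"({'win' if ev >= NEEDED else 'loss'})")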

Not everyone agrees. Jackson, the PRRI researcher, was the senior polling editor at HuffPost during the 2016 election. That year, the site's popular aggregation and forecasting tool gave Clinton a 98 percent chance of victory — exceedingly high odds that Nate Silver and then-HuffPost Washington bureau chief Ryan Grim had sparred over prior to the election. Grim later publicly admitted that HuffPost's models had failed.

This experience, Jackson said, pushed her to think more critically about how she was using data. After the 2008 and 2012 elections, the tools had seemed reliable, she recalled. "I wasn't thinking about it necessarily in public-good terms. I was thinking about it in terms of what an interesting use of the data, and what an interesting statistical modeling exercise this is," she said. After 2016, she began to reevaluate the reasons for the work. Now, she said, "I don't particularly see that forecasting serves the public good in a very convincing way."

Speaking on Thursday afternoon, Jackson acknowledged looming issues with polling methods in the 2020 election. But the culture around forecasting, she suggested, has created impossible expectations for the field. "We now expect that there are a lot of polls," she said. "We put them together and make pretty aggregates. Forecasters use them to project what's going to happen. All of those things combined create an expectation that polls will be predictive in a way that they're just not ever going to be able to do perfectly."

Like Kirzinger, Jackson thinks that process misses the real benefit of polling, which is "to tell the story of what people think." But the demand for predictions remains strong. "We've gotten to a point in society where everything is at our fingertips, 24 hours a day," Jackson said. "We think the answer to who's going to win an election should be, too."

This article was originally published on Undark. Read the original article.


By Michael Schulson
