"Ready for dinner"
“May I be the first to say ‘Mr. President’?” Bob Shrum, one of John Kerry’s chief campaign advisors, beamed to the senator shortly before the East Coast polls closed on Election Day 2004. Shrum’s excitement, if premature, was understandable. Kerry’s aides realized the race would be close, but during much of Election Day they were buoyed by positive exit poll results flowing out of key states — Ohio, critically — showing their man headed for a win. It wasn’t only Kerry’s people who were excited by the exits. The poll numbers, which weren’t officially released during the vote, but which floated around online as freely as Paris Hilton’s sex video, seduced just about everybody on the left into thinking a new day had dawned.
In the end, of course, Ohio went red and liberals were blue. But even before Kerry offered his concession, some on the left began pointing to the exit polls as proof that George W. Bush stole the election. To this day, they claim that the exit polls — which are compiled through interviews with voters just after they’ve cast their ballots — tell us that most Americans attempted to vote for John Kerry. What is off, they say, is the official vote count, corrupted by paperless electronic machines and other methods of chicanery.
Exit poll results were just one item in a long bill of election-fraud particulars that folks began passing around in the aftermath of the election. But over the past seven months, the exits have proved more enduring for the election-was-stolen movement than many of the other early indicators of fraud. Lefty bastions like Democratic Underground are aflame with discussions purporting to show that the exits prove Bush didn’t really win.
But a clear consensus among experienced pollsters is finally emerging on what happened with the exits. Last month, at an annual conference of opinion pollsters in Miami Beach, Warren Mitofsky, the veteran pollster who conducted the exit poll for the networks, offered a detailed and convincing explanation of what went wrong with the polls. The reason the exits were off, Mitofsky said, is that interviewers assigned to talk to voters as they left the polls appeared to be slightly more inclined to seek out Kerry voters than Bush voters. Kerry voters were overrepresented in the poll by a small margin, which is why everyone thought that Kerry was going to win. The underlying error, Mitofsky’s firm said in a report this January, is “likely due to Kerry voters participating in the exit polls at a higher rate than Bush voters.”
There’s another interesting wrinkle in the exit poll discussion. During the past several months, some of the early “fraudsters” — an initially derogatory term that some in the election-was-stolen camp have embraced — who once suspected that the exit polls pointed to election fraud, have begun to change their minds. One of these is Bruce O’Dell, a computer engineer in Minneapolis and one of the founders of US Count Votes, the group that has been leading the charge to show that exit polls prove Kerry won. After initially signing on with this view, O’Dell now thinks it’s impossible to say whether the exit polls suggest that Bush stole the election. O’Dell also thinks Mitofsky’s explanation — that Kerry voters were overrepresented in the poll — is plausible.
“In my opinion, we’ve been sidetracked,” O’Dell says of the fraudsters’ months-long focus on exit polls. He adds that the kind of exit poll analysis that he and others have been working on is a distraction from their pursuit of real election reform, such as making sure that electronic voting machines get paper trails, and that voters in Democratic precincts aren’t forced to wait in line for hours and hours in order to cast their ballots. O’Dell is critical of his compatriots, some of whom routinely suggest that a “corrupted vote count” is the only explanation for the odd exit poll results. “It’s impossible that they have actual evidence that vote fraud must have occurred,” he says. “They’re overstating their data — I think it’s crying wolf or chicken little big time to proclaim you have evidence of vote fraud when actually you don’t.”
Before reviewing the problem with the exit polls, let’s look at how wrong they were. According to a report released by pollsters on Jan. 19, the exit polls tended to predict Kerry doing better than he ultimately did, both nationally and in many states. In 26 states, the exit poll overstated Kerry’s share of the vote by a significant amount, more than what statisticians call “one standard error.” There were only four states in which the exit polls overstated Bush’s share of the vote by more than one standard error.
Whether you think these errors are important depends on how you were using the exit polls on Election Day. If you were a TV news producer, Mitofsky points out, the exit polls didn’t throw you off. None of the news networks — the main consumers of poll data — used the poll to make an incorrect projection in any political race, which, as we all know, is something of a major improvement over what happened in 2000. Mitofsky stresses that the poll wasn’t intended for mass distribution. He suggests that people who began planning their victory parties on the basis of leaked polling data simply didn’t understand the subtleties of polling. The polls may have pointed to a Kerry win, but they also showed a close race. Only in four states that Bush won — Ohio, Iowa, Nevada and New Mexico — did the polls show Kerry ahead. In Ohio, the exit polls’ final estimate had Kerry getting 53.2 percent of the vote, while the official count gave Bush 50.8 percent to Kerry’s 48.7 percent.
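To get a feel for how large the Ohio discrepancy was in margin terms, here is a quick back-of-the-envelope sketch in Python. The poll’s Bush figure (46.8) is an assumption for illustration — only Kerry’s 53.2 percent estimate is quoted above:

```python
# Ohio, 2004 -- exit poll estimate vs. certified count, in margin terms.
# The poll's Bush share (46.8) is assumed: only Kerry's 53.2 is reported.
poll_kerry, poll_bush = 53.2, 46.8            # exit poll estimate (assumed two-way split)
official_kerry, official_bush = 48.7, 50.8    # certified count

poll_margin = poll_kerry - poll_bush              # Kerry +6.4 in the poll
official_margin = official_kerry - official_bush  # Bush +2.1 in the count
error = official_margin - poll_margin             # overall swing between poll and count

print(round(poll_margin, 1), round(official_margin, 1), round(error, 1))  # → 6.4 -2.1 -8.5
```

Under that assumption, the poll and the count disagreed by roughly 8.5 points of margin — large, but consistent with a close race read through a biased sample.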
Mitofsky says it’s impossible to say precisely why more Kerry voters than Bush voters participated in exit polls. Were Kerry voters simply more willing to speak to pollsters? Were pollsters more willing to speak to Kerry voters? Or, conversely, were Bush voters less willing to talk? Were pollsters less willing to seek out Bush voters? It’s likely that some mix of such “motivational factors” contributed to the biased exit poll, Mitofsky says, but at this point it’s not possible to determine why some voters were willing to be interviewed, why some were not, and what the interviewers were thinking at the time.
But Mitofsky has some clues. Exit polls are conducted by an army of interviewers — usually people just looking for a good short-term job, including many college students; 35 percent of the interviewers were between the ages of 18 and 24, and most were women — who fan out to more than 1,000 pre-selected precincts across the country. Because any poll depends on its respondents being selected randomly, the interviewers are each assigned a number, from 1 to 10, that represents the “rate” at which they’re supposed to approach voters. An interviewer given a rate of 1 should attempt to interview every single voter who leaves a voting precinct; an interviewer with a rate of 10, reserved for large precincts with many voters, need approach only every 10th voter for an interview.
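The selection rule Mitofsky describes is plain systematic sampling. A minimal sketch of the idea — illustrative only, not the pollsters’ actual field procedure or software:

```python
def voters_to_approach(num_voters, rate):
    """Positions (1-indexed) of exiting voters an interviewer should approach.

    rate=1 means approach every voter; rate=10 means every 10th voter.
    Illustrative sketch of systematic sampling, not the actual exit poll software.
    """
    return [i for i in range(1, num_voters + 1) if i % rate == 0]

# An interviewer with rate 5 watching 20 voters leave approaches voters 5, 10, 15, 20.
print(voters_to_approach(20, 5))  # → [5, 10, 15, 20]
```

The count is fixed in advance by the rate; any deviation — skipping a surly-looking voter and substituting a friendlier one, say — is exactly the loss of randomness Mitofsky blames for the bias.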
What Mitofsky finds most striking about the polling data is that the precincts with the largest “error” tended to be those with the largest interviewing rate. In precincts where pollsters were asked to interview every voter, the error — the difference between the exit poll prediction and the actual vote count — was slight. But as interviewers were given more leeway — if they were asked to interview every fifth voter, say — the error grew larger. “What that means to me was the interviewers were selecting people not in accordance with our instructions but according to their own judgment,” Mitofsky says. “What they’re told is not to deviate from this number, not to make any exceptions.” But as Mitofsky sees it, the interviewers did make exceptions — and because of this, the poll lost some of its randomness. By a slight margin, it included more Kerry voters than Bush voters.
Other factors support Mitofsky’s theory that interviewers may have been using their own judgment in selecting voters to include in the poll. According to Mitofsky’s report, the polling error tended to be larger in precincts where interviewers had been recently hired or reported being insufficiently trained; where precinct officials, lawyers or other vote observers interfered with pollsters’ opportunity to approach the voters as they left the precinct; where pollsters were made to stand far away from the precinct; and where the weather wasn’t great (remember the rain in Ohio?).
Mark Blumenthal, the Democratic pollster who runs the blog Mystery Pollster, says that all of these factors come together to tell a coherent story about what happened on Election Day. “As interviewers had a harder time keeping track of the ‘nth’ voter to approach, they may have been more likely to consciously or unconsciously skip the ‘nth’ voter and substitute someone else who looked more cooperative, or to allow an occasional volunteer that had not been selected to leak through,” Blumenthal has written. “Challenges like distance from the polling place or even poor weather would also make it easier for reluctant voters to avoid the interviewer altogether.”
It’s important to remember that many voters were reluctant to talk to the pollsters. Indeed, about half of all voters the pollsters approached on Election Day refused to talk. In that case, the pollsters were asked to jot down only cursory observations — approximate age, race, sex. Of course, because the voters wouldn’t talk, it’s impossible to know whether they represented Bush and Kerry voters in equal parts, or whether most of them supported a certain candidate.
There is some evidence, though, that Republican voters are more reluctant to talk to pollsters than are Democratic voters. At the polling conference in Miami Beach, Kathy Frankovic, CBS’s polling director, talked about an interesting 1997 study of exit polls, in which pollsters were asked to hold folders imprinted with the logos of national media organizations. What the study found was that the logos increased the Democratic bias of the polls; Republicans, wary of national media companies, didn’t want to talk to people holding those firms’ logos. Pollsters weren’t holding such folders during this past election. But the finding suggests that Bush voters who knew the polls were being conducted by the media may have declined to talk simply because they dislike the media.
Frankovic also pointed out that younger interviewers, in particular, have a harder time getting voters to talk to them. Voters refused to talk to interviewers under the age of 25 more often than they refused to speak to interviewers older than 60, she noted. And when interviewers have difficulty getting voters to talk to them, it’s natural to expect that instead of looking for a purely random sample of voters, they may seek out voters who “looked” more like themselves — that is, Democrats, which would explain the Kerry bias in the polls.
When Mitofsky first released this theory of what went wrong with the exit polls back in January, the fraudsters immediately called his ideas into question. The biggest problem, says Ron Baiman, a researcher at the University of Illinois-Chicago, is that Mitofsky should have divulged more data than he did. He ought to have released raw polling data — the actual exit poll results for every precinct in which the poll was conducted — so that independent researchers could substantiate his claims. (While Mitofsky is releasing a host of data to researchers, exit pollsters have never released the actual survey results for each precinct in an exit poll. The questionnaires people filled out could violate the privacy of voters who participated in the poll, especially in small precincts where such voters might be identifiable through their demographic characteristics.)
In the absence of precinct-level exit poll results, Baiman says, Mitofsky’s theory is just that, a theory, and an “implausible” theory at that. In an official response to Mitofsky’s report that the group released in March, Baiman and 11 other political scientists, statisticians and mathematicians at US Count Votes (USCV) said Mitofsky “did not come close to justifying” the position that Bush voters were underrepresented in the exit polls, and therefore that the exit poll was wrong. This leaves only one explanation for the discrepancy between the official vote count and the exit poll results, they said — that the exits were right, and the official count was wrong. The USCV researchers listed several possible ways in which fraud might have occurred, the main one being rigged voting machines “developed, provided, and maintained primarily by a handful of private vendors with partisan ties.”
USCV found one key bit of data in Mitofsky’s report to support its own position. According to Mitofsky’s data, the highest “error” — the difference between the exit poll prediction and the official results — occurred in precincts where Bush did well. What this meant, essentially, was that the exit poll was most wrong in Bush precincts. According to USCV, this didn’t fit with Mitofsky’s theory. Mitofsky, after all, said that the poll was off because it underrepresented Bush voters; yet his own data seemed to show that the poll was most off in places with the most Bush voters.
USCV plugged Mitofsky’s data into a complex algebraic equation and determined that for Mitofsky to be right, Kerry voters would have had to be extremely enthusiastic about talking to pollsters in precincts where there were lots of Bush voters. For instance, USCV concluded that in precincts where 80 percent of the people voted for Bush, more than 80 percent of the Kerry voters approached by exit pollsters would have had to agree to talk about their votes to fit Mitofsky’s numbers — an unreasonably high response rate.
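The algebra behind that kind of claim can be sketched. If Kerry voters complete interviews at rate k and Bush voters at rate b, then a precinct where Bush’s true share is p yields an expected exit poll Kerry share of s = k(1−p) / (k(1−p) + b·p); solving for k gives the Kerry response rate a given poll result would require. The numbers below are illustrative assumptions, not USCV’s actual inputs:

```python
def implied_kerry_rate(bush_share, bush_rate, poll_kerry_share):
    """Kerry completion rate needed to produce poll_kerry_share in the exit
    poll, given the precinct's true Bush share and the Bush voters'
    completion rate. Derived from s = k(1-p) / (k(1-p) + b*p), solved for k.
    Illustrative sketch; not USCV's actual model or figures."""
    p, b, s = bush_share, bush_rate, poll_kerry_share
    return (s * b * p) / ((1 - p) * (1 - s))

# Assumed example: an 80 percent Bush precinct, Bush voters responding at
# 50 percent, and a poll overstating Kerry's share as 25 percent instead of 20.
print(round(implied_kerry_rate(0.8, 0.5, 0.25), 3))  # → 0.667
```

The steeper the lopsidedness of the precinct and the bigger the overstatement, the higher the Kerry response rate the arithmetic demands — which is how USCV arrived at figures it considered implausible.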
But, USCV said, there was a simpler explanation for why the discrepancy between the official count and the exit poll prediction was highest in Bush strongholds — that’s where the corruption occurred! Stuffing the ballot box in your own stronghold is a traditional way to steal an election (think of Kennedy in Chicago, 1960). According to USCV, corruption in pro-Bush precincts may be one reason that the exit polls taken there failed to predict the vote count in those areas.
USCV’s charge against Mitofsky’s data isn’t easy to refute. Its analysis did seem to make sense, and some who’d been inclined to believe Mitofsky did see some truth in USCV’s analysis. Then, seemingly out of nowhere, Elizabeth Liddle arrived on the scene. Liddle is a 53-year-old professional musician and graduate student in psychology who lives in the U.K. She is a self-confessed fraudster and says stories she heard about voting problems in many states, especially in Ohio, led her to place very little trust in the outcome of the U.S. election. But as she began to study USCV’s analysis, she soon discovered something that undermined the fraudsters’ position: a math error.
This wasn’t a simple math error, nor was it an especially easy error to explain. But through a series of intriguing calculations, Liddle revealed that the measurement USCV was using to prove that polls were more off in Bush precincts than in Kerry precincts was something of a “bent yardstick.” It turns out, as Liddle demonstrated, that due to a mathematical artifact, the error only appeared to be higher in Bush precincts than in Kerry precincts; actually, the average error across precincts was more or less the same. And that means that USCV’s “implausible” pattern — one that required more Kerry exit poll respondents in Bush precincts — disappeared.
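One way to see how a yardstick like this can bend — a hedged illustration of the general phenomenon, not Liddle’s actual derivation — is to hold the differential response constant, say Kerry voters responding 1.12 times as often as Bush voters in every precinct, and compute the expected error in the poll’s margin across precinct types. Even with a perfectly uniform bias, the raw error is not uniform: it peaks in evenly split precincts and shrinks toward both extremes, so per-precinct errors can’t be compared across precinct types without correction:

```python
def margin_error(kerry_share, response_ratio=1.12):
    """Expected exit-poll overstatement of Kerry's margin (in share points)
    in a precinct where Kerry's true share is kerry_share, assuming Kerry
    voters respond response_ratio times as often as Bush voters.
    The ratio is an illustrative assumption, not the actual 2004 parameter."""
    q, a = kerry_share, response_ratio
    poll_kerry = a * q / (a * q + (1 - q))   # expected Kerry share in the poll
    return 2 * (poll_kerry - q)              # margin = Kerry - Bush, so the error doubles

for q in (0.2, 0.5, 0.8):  # strong-Bush, evenly split, strong-Kerry precincts
    print(q, round(margin_error(q), 4))
```

With this uniform ratio the overstatement is about 5.7 points of margin in a 50/50 precinct but only 3.5 to 3.8 points in 80 percent strongholds on either side — the same underlying bias, read differently by the raw yardstick.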
The discovery surprised everyone involved in the debate. Mark Blumenthal, of Mystery Pollster, says that Liddle’s work basically stuck the final nail in the coffin of any theories purporting to show that the exit polls proved the election was stolen. O’Dell, who’d signed on to the original USCV report, was swayed by Liddle’s work into his position that exit poll analysis can’t prove that the election was rigged. At the polling conference in Florida, Mitofsky praised Liddle for her contribution, and he presented new data based on her work that shows there’s no pattern of increasing error in strong Bush precincts.
Not everyone, of course, is convinced. Baiman of USCV says that Liddle and O’Dell are simply wrong, and that he doesn’t see how Liddle’s mathematical proof affects his analysis showing vote fraud. But Liddle says that her work should serve as a guide for the election reform movement. She agrees with O’Dell that focusing on the exits has distracted people from trying to fix how elections are run in the United States. “Why aren’t people looking at voter suppression?” she asks. “Why aren’t people investigating the long lines at the polls?” She adds: “I think the exit polls might have seduced people” into thinking that Kerry could have won. But Kerry didn’t win, and it’s time to fight about the future.
Farhad Manjoo is a Salon staff writer and the author of True Enough: Learning to Live in a Post-Fact Society.