Because Robert F. Kennedy Jr. based much of the discussion in his Rolling Stone article on interviews with me and on a close reading of my new book, coauthored with Joel Bleifuss, "Was the 2004 Presidential Election Stolen? Exit Polls, Election Fraud, and the Official Count," and because Kennedy cites in his thorough footnotes many of the same key sources we worked from, I feel compelled to address directly several statements that Farhad Manjoo makes about the exit polls, both in his original Salon article and in his rebuttal to Kennedy's response to that article -- statements that are either incorrect or based on misunderstandings about exit polls and the 2004 results.
We regret that Manjoo did not request an advance copy of our book before writing his article. Had he done so, I'm confident that many of the basic errors he made could have been avoided.
Are exit polls usually accurate?
Yes, they are. On Nov. 2, 2004, Manjoo's source Mark Blumenthal, the Mystery Pollster, had this to say: "I have always been a fan of exit polls. Despite the occasional controversies, exit polls remain among the most sophisticated and reliable political surveys available." Properly done exit polls are highly accurate. Given the large sample size in U.S. exit polls, they ought to be accurate within 1 to 2 percentage points of the official count.
The 2004 Election Day exit poll was a well-funded effort conducted by the most experienced pollsters in the business, and it represented a broad spectrum of media interests, from Fox to CBS. The sample included 114,559 respondents in the 50 state exit polls, conducted at 1,480 precincts throughout the nation. A subsample of these was selected to provide a sample representative of the U.S. electorate for the national exit poll: 11,719 Election Day voters and 500 absentee and early voters. The National Election Pool (NEP), a consortium of six news organizations (ABC, AP, CBS, CNN, Fox and NBC), pooled resources to conduct a thorough survey of each state and the nation. NEP in turn contracted with two respected firms, Joe Lenski's Edison Media Research and Warren Mitofsky's Mitofsky International, to conduct the polls.
Prior to 2000, no one even debated the accuracy of exit polls. Scholars, practitioners and critics all agreed. In 1987, Washington Post columnist David Broder wrote that exit polls "are the most useful analytic tool developed in my working life." Political scientists George Edwards and Stephen Wayne, in their book "Presidential Leadership: Politics and Policy," put it this way: "The problems with exit polls lie in their accuracy (rather than inaccuracy). They give the press access to predict the outcome before the elections have been concluded."
An exit pollster himself for more than 20 years, St. Louis University professor of political science Ken Warren has never had an error greater than 2 percent, except one time -- in a 1982 St. Louis primary. In that election, massive voter fraud was subsequently uncovered.
Do the exit polls indicate a Kerry electoral victory?
Yes, as Kennedy reported, they do. Manjoo references a report I had written shortly after the election to refute Kennedy's claim that exit poll data indicated a Kerry victory in Nevada, New Mexico and Ohio.
At that time, the only data available (and these were hard to come by!) were screen shots preserved from the CNN Web site on Election Night (before the data were "corrected" so as to conform to the count). Whether these data indicate a Kerry victory was a matter of debate, but as any of Manjoo's experts should have known, these data have been superseded by the more detailed data released later by the National Election Pool exit pollsters. The detailed 77-page report was released on Jan. 19, 2005, Bush's Inauguration Eve. Reporters who filed stories on it that night had no time to review it properly; they could only summarize the report's conclusion. Their stories appeared under misleading headlines such as MSNBC's "Exit Polls Prove That Bush Won." In fact, the report makes no such claim.
Manjoo -- though not his triumvirate of expert sources -- may be partly excused for his ignorance on this matter. The National Election Pool unnecessarily complicates the data through secretive processes and misleading terminology. Despite requests from U.S. Congress members and faculty at leading research universities, the National Election Pool has refused to release or even permit independent inspection of these data that would allow an investigation of suspected fraud. We only had access to "uncorrected" "early" exit poll data because of blogger leaks and a computer glitch. The National Election Pool intended to, and eventually did, replace these CNN.com numbers with data "corrected" so as to conform to the official count, and implied that the Election Night CNN numbers were merely "early" results, rather than what they really were: end-of-day data reflecting the entire surveyed population.
Could the discrepancy between the exit poll results and the official count have been due to chance or random error?
No, the discrepancy could not have occurred by chance.
The likelihood of the three most significant anomalies -- the dramatic differences between the official count and the exit-poll projections posted on Election Night in Ohio, Florida and Pennsylvania, the three critical swing states -- occurring together and all favoring the incumbent, Bush, is about one in 660,000. These odds are calculated by multiplying the individual likelihoods from each state, which I have calculated from the exit poll data and which we explain much more thoroughly in the book. This is quite relevant, because it means that there must be an explanation for these irrefutable differences between the vote count and the exit polls.
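The multiplication behind that figure can be sketched in a few lines. The article does not reproduce the individual per-state probabilities, so the values below are hypothetical placeholders chosen only to illustrate the method; the book derives the actual figures from the exit poll data.

```python
# Sketch of the combined-probability calculation described above.
# The per-state probabilities are HYPOTHETICAL placeholders -- the book
# derives each one from the exit poll data; they are not given here.
p_ohio, p_florida, p_pennsylvania = 0.008, 0.011, 0.017

# Treating the three state outcomes as independent events, the joint
# probability of all three anomalies occurring together is the product.
p_combined = p_ohio * p_florida * p_pennsylvania
odds = 1 / p_combined
print(f"about 1 in {odds:,.0f}")
```

With probabilities of roughly this size, the product lands in the neighborhood of one in several hundred thousand, which is why the joint event is so implausible even though no single state's discrepancy looks impossible on its own.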
Are we saying that this means that Kerry must have really won the election?
No. The evidence that Kennedy cites to cast doubts on the election results comes from diverse sources. The exit polls have never been cited as primary evidence of fraud, but only as a reason to take that primary evidence to heart. The title of our book is posed as a question: "Was the 2004 Presidential Election Stolen?" In the book, we treat the exit poll discrepancy as, in the words of Rep. John Conyers, "but one indicia or warning that something may have gone wrong -- either with the polling or with the election." We agree with Conyers that the election results should bear greater scrutiny. The discrepancy is an undisputed fact. The question is, What caused it?
There are only two possible explanations for the discrepancy: 1) far more Kerry voters than Bush voters agreed to fill out the questionnaires offered by pollsters, or 2) the votes were counted incorrectly. In our book, we examine these two possible scenarios as thoroughly as possible.
How significant is the discrepancy?
Manjoo, like Blumenthal and Mitofsky, consistently understates the magnitude and improbability of the discrepancy. A close look at the Ohio results illustrates this. The official count in the 2004 Ohio election credited Kerry with 48.7 percent of the vote. The 10.9 percentage point disparity between the official count and the exit poll results in those same precincts indicates that Bush's exit poll result was 5.45 percentage points lower than his official number and that Kerry's exit poll result was 5.45 percentage points higher, or 54.2 percent. A layman's intuition may tell you that the difference between 48.7 percent and 54.2 percent is not large, and you might be tempted to write it off to chance.
But bell-curve mathematics tells us that the expected range, the polling margin of error, should have been within 47.1 percent to 50.3 percent; 95 percent of the area under the bell curve -- 95 percent of the possible results -- is within this range. And 99 percent of the time the result would fall between 46.6 percent and 50.8 percent. If, in fact, 48.7 percent of the voters in the surveyed Ohio precincts had cast their ballots for Kerry, there should be an even probability of his receiving 48.7 percent or less in the exit poll survey.
Yet the exit poll result falls at the 54.2 percent mark. This is well outside the range where virtually all of the probability lies. In fact there is virtually no chance that such a survey would produce a result higher than around 51.9 percent. And this is just one state. All told, 26 states had similar anomalous results. The odds are astronomical that the exit poll results could have been so far off in the same direction in so many states.
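The Ohio arithmetic above can be checked with a normal ("bell curve") approximation. The standard error below is back-solved from the 95 percent interval stated in the text (47.1 to 50.3 percent), which is an assumption of this sketch rather than a figure reported by the pollsters.

```python
import math

p_official = 48.7   # Kerry's official share in the surveyed Ohio precincts (%)
p_exit     = 54.2   # Kerry's exit poll share in those precincts (%)

# Standard error back-solved from the article's 95% interval:
# half-width of 1.6 points divided by 1.96.
se = 1.6 / 1.96     # roughly 0.8 percentage points

# Reconstruct the two intervals quoted in the text.
lo95, hi95 = p_official - 1.96 * se, p_official + 1.96 * se    # 47.1 .. 50.3
lo99, hi99 = p_official - 2.576 * se, p_official + 2.576 * se  # 46.6 .. 50.8

# How many standard errors above the official count the exit poll sits.
z = (p_exit - p_official) / se   # well over 6 standard errors

# Upper-tail probability under the normal approximation.
p_tail = 0.5 * (1 - math.erf(z / math.sqrt(2)))
```

A result more than six standard errors from its expected value has a tail probability far below one in a billion, which is the quantitative content of "virtually no chance."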
We reiterate that this does not prove that the official vote count was fraudulent. What it does say is that the discrepancy between the official count and the exit polls can't be just a statistical fluke, but commands some kind of systematic explanation: Either the exit poll was deeply flawed or else the vote count was corrupted.
How do we measure the discrepancy?
This is the most technical part of the analysis, and it is explained at some length in our book. The Edison/Mitofsky report includes a particularly useful statistic, what the pollsters called "Within Precinct Error" and what we called "Within Precinct Disparity," as "error" implies "mistake" rather than "difference." In order to understand the discrepancy between the exit poll results and the official count, the best measure is the discrepancy computed within the precinct itself.
In the book, we compare 1) the exit poll results by state for Bush and Kerry, 2) each state's official vote tally for Bush and Kerry, and 3) their differential. For example, in Nevada, the official count for Bush is 50.5 percent. The official count for Kerry is 47.9 percent. The difference between the two, the official margin of victory for Bush, is 2.6 percent of the vote.
In Nevada, the exit poll result calculated for Bush was 45.4 percent. The exit poll result calculated for Kerry was 52.9 percent. The difference in these exit poll results is a 7.5 percent margin of victory for Kerry. The Nevada differential -- the shift between the official count result, a 2.6 percentage point win for Bush, and the exit poll result, a 7.5 percentage point win for Kerry, was a huge 10.1 percentage points, as reported by the pollsters.
Because it is based on the precinct-level exit poll results, we call this the "Within Precinct Disparity." This is the difference between how people said they voted as they walked out of the voting booth, and the way those votes were officially recorded.
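The Nevada numbers above make the calculation concrete. Using only the figures quoted in the text:

```python
# Nevada figures from the text (all in percent of the vote).
official_bush, official_kerry = 50.5, 47.9
exit_bush, exit_kerry = 45.4, 52.9

# Margins, signed so that positive means a Kerry lead.
official_margin = official_kerry - official_bush   # -2.6 (a Bush win)
exit_margin = exit_kerry - exit_bush               # +7.5 (a Kerry win)

# Within Precinct Disparity: the shift between the two margins.
wpd = exit_margin - official_margin                # 10.1 percentage points
```

The 10.1-point figure is simply the distance between a 2.6-point Bush win and a 7.5-point Kerry win.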
In New Mexico, there was a 7.8 percentage point disparity; in Ohio, a 10.9 percentage point disparity. Given official victory margins of 2.6 percentage points in Nevada, 0.8 in New Mexico, and 2.1 in Ohio, we can say with a very high degree of certainty that exit poll results indicate a Kerry victory. Had Kerry won these states (or even just Ohio), he would have won the presidency.
Have the exit pollsters provided a "clear and convincing explanation" for the exit poll discrepancy?
No, they have not. Manjoo relies on a "hypothetical completion rate of 50 percent for Bush voters and 56 percent for Kerry voters" mentioned in the Edison/Mitofsky report to "explain" the discrepancy. Unfortunately, what I said to Kennedy is absolutely true: "The data presented to support the claim not only fails to substantiate it, but actually contradicts it." All independent indicators on poll participation suggest not lower, but higher response rates among Bush voters. One of these is that response rates are higher, not lower, in precincts where Bush voters predominated as compared to precincts where Kerry voters predominated. In precincts where Bush got 80 percent or more of the vote, an average of 56 percent of people who were approached volunteered to take part in the poll, while in precincts where Kerry got 80 percent or more of the vote, a lower average of 53 percent of people were willing to be surveyed.
Manjoo and the pollsters feel justified in ignoring these indicators based on fanciful possibilities put forward by an aggressive defender of the election, political scientist Mark Lindeman. Manjoo writes:
"For instance, in the Bush strongholds -- where the average completion rate was 56 percent -- it's possible that only 53 percent of those who voted for Bush were willing to be polled, while people who voted for Kerry participated at a higher 59 percent rate. Meanwhile, in the Kerry strongholds, where Mitofsky found a 53 percent average completion rate, it's possible that Bush voters participated 50 percent of the time, while Kerry voters were willing to be interviewed 56 percent of the time. In this scenario, the averages work out to the same ones Kennedy cited."
Unfortunately, even beyond the fact that there is no evidence at all to support the suppositions, Lindeman is flat-out wrong in his calculations. Claiming that the average of a 59 percent response rate for Democratic voters and a 53 percent response rate for Republican voters is 56 percent (59 plus 53, divided by two) neglects the fact that we know that there are at least four times as many Republican voters as Democratic voters in this sample -- because it comes from the set of precincts identified by the pollsters as precincts where 80 percent or more of the voters voted for Bush.
The correct calculation would be that the response rate among Kerry voters had to be at least 68 percent to balance out four times as many Bush voters responding at a 53 percent rate.
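The weighted-average arithmetic behind that correction is short. Taking the precinct composition at its minimum (80 percent Bush voters, 20 percent Kerry voters) and Lindeman's supposed 53 percent Bush-voter response rate:

```python
# Bush strongholds: at least 80% Bush voters, overall completion rate of 56%.
bush_share, kerry_share = 0.80, 0.20
overall_rate = 56.0
bush_rate = 53.0   # Lindeman's supposed response rate among Bush voters

# The overall rate is a weighted average, weighted by voter shares:
#   overall_rate = bush_share * bush_rate + kerry_share * kerry_rate
# Solving for the Kerry-voter rate:
kerry_rate = (overall_rate - bush_share * bush_rate) / kerry_share  # about 68
```

A simple unweighted average of 53 and 59 gives 56, which is Lindeman's mistake; weighting by the four-to-one voter ratio forces the Kerry-voter rate up to roughly 68 percent.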
Data in the Edison/Mitofsky report inform us that the WPD in these Bush strongholds is a whopping 10.0 percentage points (as compared to virtually zero in the Kerry strongholds). US CountVotes analysts reconciled these two sets of numbers (the math is not difficult, but more than I'll take on here, although I do explore it in the book) and calculated that the response rate among Kerry supporters would have to be about 84 percent in Bush strongholds to reconcile the numbers.
All of which might leave you wondering why so many Democrats would be willing to stick out their necks when they're in enemy territory, surrounded by Republicans, but not willing to respond to the poll in friendlier territory, where their response rate is only 56 percent. Of course, the converse dilemma presents itself in Kerry strongholds.
What about the historic overrepresentation of Democrats in the exit polls?
Democratic overrepresentation, or overstatement, in the exit polls is the same thing as Democratic undercount in the vote tallies. And, as we point out in the book, a Democratic undercount is historically established. The undercount is the votes that are discarded, such as overvotes, undervotes and uncounted provisional ballots. In each presidential election a documented 2 to 3 percent of total votes are discarded.
What about flaws in the exit polls?
The pollsters do say in their report that the exit poll results were not due to "sampling error," which means that they did choose the right representative precincts for the state and national surveys. Manjoo cites the "interviewer characteristics" the report examines as another source of exit poll error. The report sorts and evaluates poll results by examining interviewer characteristics of the poll-takers: completion rates, age, gender, level of education, date of hire, amount of training, and interactions between poll-takers. The pollsters conclude that the disparity is greater under certain of these conditions.
Now, in no way can we rule out the possibility of interviewer effects, but we do point out, first, that this explanation is at best unlikely to provide a complete explanation for the discrepancy. It is significant that discrepancies were high for all interviewer characteristics (for example, the disparity is higher when the interviewer is farthest away, but even when the interviewer was inside the polling place there was a 5.3 percentage point disparity). So even if it is right to attribute polling error to interviewer characteristics, it is unlikely that such error could account for all of the discrepancy.
But none of these correlations explain the disparity between the exit polls and the official count. It's understandable that there might be more errors when the interviewer is farther away from the polls, but these errors should balance out, sometimes favoring Kerry, sometimes Bush.
The exit pollsters assume that groups with lower mean Within Precinct Disparities (WPDs) are most accurate. But the data belie that assumption. In fact, interviewers with advanced degrees had the lowest miss rates and the lowest refusal rates, suggesting that their results are likely the most accurate. And those with the least education had the highest absolute error, meaning that their results were all over the place. Their results were the least accurate.
The flip side to this lack of a "clear and convincing" polling explanation is that the exit pollsters have failed to explain or even consider many indicators highly suggestive of fraud: The 10.0 percentage point WPD in Bush strongholds is an astounding number in and of itself. It means that in precincts where according to the official count Bush received 90 percent and Kerry 10 percent, exit polls indicated that, on average, Bush would get 85 percent and Kerry 15 percent. In other words, in Bush strongholds across the country, Kerry, on average, received only about two-thirds of the votes that exit polls predicted. In contrast, in Kerry strongholds, exit polls matched the official count almost exactly.
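The "two-thirds" figure follows directly from the 90/10 example above:

```python
# Bush-stronghold example from the text (percent of the vote).
official_bush, official_kerry = 90.0, 10.0

# A 10.0-point WPD shifts 5 points off each candidate:
# the exit polls indicated 85/15 where the count showed 90/10.
exit_bush, exit_kerry = official_bush - 5.0, official_kerry + 5.0

# Kerry's official share as a fraction of his exit poll share.
ratio = official_kerry / exit_kerry   # 10 / 15, about two-thirds
```

In such precincts Kerry's counted vote is only 10/15ths of what the poll predicted; in Kerry strongholds, where the WPD was near zero, the corresponding ratio is essentially one.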
And this is just one in a series of indicators of fraud. An analysis of state-by-state differentials in WPD indicates that discrepancies are higher in battleground states, higher where there were Republican governors, higher in states with greater proportions of African-American communities, and higher in states where there were the most Election Day complaints.
- - - - - - - - - - - -
I appreciate the efforts of Rolling Stone and Salon to bring this issue to public attention. Given the many transgressions and statistical improbabilities in the 2004 presidential election, we have an obligation to question it. And those responsible have an obligation to investigate.
Absence of scrutiny does not make a democracy function; democratic processes do. In the case of the 2004 presidential election, the absence of reporting on the election controversy has left the public highly suspicious. A Zogby Interactive online poll one month after the election revealed that 28.5 percent of respondents thought that questions about the accuracy of the official count in the election were "very valid," and another 14 percent thought that concerns were "somewhat valid." In other words, more than 42 percent of all Americans had immediate concerns about what had happened on Nov. 2, 2004. So long as the suspicions are left to fester, elections cannot perform their role of conferring legitimacy on elected officials.