Does even Nate Silver have limits? What Big Data can -- and can't -- tell us

Our election guru gets how groups are most likely to act over the long run. What else can we determine that way?

Published April 14, 2014 4:50PM (EDT)

Nate Silver (AP/Nam Y. Huh)

Excerpted from "A Sense of the Enemy," in which historian Zachary Shore argues that we can know our enemies best, not from their pattern of past behavior, but from their behavior during pattern breaks. Pattern breaks are those times when the routine norms are completely overturned. Shore demonstrates that this is when people reveal their underlying motives. Here is what he writes about Nate Silver's notions of predicting what people will do."

The rush is on to quantify as much as possible and let the algorithms tell us what the future holds. While this method offers obvious advantages, it is not without serious pitfalls. In many realms of prediction, we often go astray when we focus on the facts and figures that scarcely matter, as Nate Silver has shown in his thoughtful, wide-ranging study, “The Signal and the Noise.” Silver is America’s election guru. He has rocketed to prominence for his successful forecasts of U.S. primary and general election results. In his book, Silver concentrates on those predictions reliant on large, sometimes massive, data sets—so-called “big data.” Silver himself dwells mainly in the realm of number crunchers. He quantifies every bit of data he can capture, from baseball players’ batting averages to centuries of seismological records, from poker hands to chessboard arrangements, and from cyclone cycles to election cycles. In short, if you can assign a number to it, Silver can surely crunch it.

After four years of intensive analysis, Silver concludes that big data predictions are not actually going very well. Whether the field is economics or finance, medical science or political science, most predictions are either entirely wrong or else sufficiently wrong as to be of minimal value. Worse still, the wrongness of so many predictions, Silver says, tends to proliferate throughout academic journals, blogs and media reports, further misdirecting our attention and thwarting good science. Silver contends that these problems mainly result from our tendency to mistake noise for signals. The human brain is wired to detect patterns amidst an abundance of information. From an evolutionary perspective, the brain developed ways of quickly generalizing about both potential dangers and promising food sources. Yet our brain’s wiring for survival, the argument goes, is less well suited to the information age, when we are inundated with information every day. We cannot see the signal in the noise, or, more accurately put, we often fail to connect the relevant dots in the right way.

Silver urges us to accept the fallibility of our judgment but also to enhance our judgment by thinking probabilistically. In short, he wants us to think like a “quant.” A quant—someone who seeks to quantify most of the problems in life—holds an exceedingly enthusiastic belief in the value of mathematical analysis. I use the term quant with respect, not simply because mathematical agility has never been my own strength and I admire this ability in others, but also because I recognize the tremendous value that mathematics brings to our daily lives.

Naturally, not everything is quantifiable, and assigning probabilities to nonquantifiable behaviors can easily cause disaster. Part of what makes Silver’s book so sensible is that he freely admits the value of combining mathematical analysis with human observation. In his chapter on weather forecasts, he observes that meteorologists can often eyeball a weather map and detect issues that their own algorithms would be likely to miss. And when discussing baseball players’ future fortunes, Silver shows that the best predictions come when quants and scouts both provide their insights. Software programs as well as human observers can easily go awry, and errors are most likely to occur when either the computer or the person is focused on the wrong data. If the software is designed to project a minor league pitcher’s future strikeouts but fails to include information on the weakness of the batters that pitcher faced, then the pitcher will be in for a rough ride when he reaches the major leagues. By the same token, scouts who assess a player’s promise by the athlete’s imposing physique might overlook some underlying flaws. Though he does not state it directly, Silver finds that scouts do better when they focus on pattern breaks. “I like to see a hitter, when he flails at a pitch, when he takes a big swing and to the fans it looks ridiculous,” one successful scout told Silver. “I like to look down and see a smile on his face. And then the next time—bam—four hundred feet!” There’s no substitute for resilience, and it can best be seen at those times when things don’t go as planned.

While prudent, thoughtful quantification can serve us well in many areas, it cannot be applied in every area. As a case in point, toward the close of his book, Silver turns to intelligence assessments, drawing specifically on the failures to predict the attacks on Pearl Harbor and on 9/11. On the one hand, he argues that intelligence analysts must remain open to all possibilities, particularly by assigning probabilities to all imaginable scenarios, no matter how remote they might seem. On the other hand, he assumes that analyzing individuals is a less profitable endeavor. Silver writes: “At a microscopic level, then, at the level of individual terrorists or individual terror schemes, there are unlikely to be any magic bullet solutions to predicting attacks. Instead, intelligence requires sorting through the spaghetti strands of signals . . .” Of course it is true that we have no magic bullets. Statesmen do, however, possess ways of improving their odds. Rather than mining the trove of big data for patterns in their enemies’ behavior, or sorting through a sticky web of conflicting signals, statesmen can focus instead on the moments of pattern breaks. Again, it is obvious that this will not guarantee successful predictions, but it can help illuminate what the enemy truly seeks.

As a quant, Silver is understandably less comfortable analyzing how individuals behave. His forte is calculating how groups of individuals are likely to behave over the long run, most of the time. Here, then, is a crucial difference between the type of predictions made by Silver and his fellow quants and those predictions made by statesmen at times of conflict. Quantitative assessments work best with iterative, not singular, events. The financial investor, for example, can come out ahead after years of profits and losses, as long as his overall portfolio of investments is profitable most of the time. Depending on the arena, a good strategy could even be one that makes money just 60 percent of the time. The same is true of the poker player, baseball batter, or chess master. When the game is iterative, played over and over, a winning strategy just has to be marginally, though consistently, better than a coin flip. But leaders, in painful contrast, have to get it right this one time, before lives are lost. In the dangerous realm of international conflict, statesmen must be 100 percent right when it matters most. They cannot afford to repeat again and again the Nazi invasion of Russia or the American escalation in Vietnam. Unlike in competitive poker, the stakes in this setting are simply too high.
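To make that contrast concrete, here is a minimal simulation sketch (an illustration added here, not drawn from Silver’s or Shore’s analysis): a hypothetical strategy with a modest 60 percent edge almost always finishes ahead over a long run of repeated bets, yet any single bet still fails four times out of ten.

import random

random.seed(42)

def run_bets(win_prob, n_bets):
    """Cumulative profit after n_bets independent one-unit wagers."""
    return sum(1 if random.random() < win_prob else -1 for _ in range(n_bets))

# Iterative play: a 60 percent edge, repeated 1,000 times, almost always ends ahead.
careers = [run_bets(0.6, 1000) for _ in range(10_000)]
print("share of 1,000-bet careers ending in profit:",
      sum(p > 0 for p in careers) / len(careers))

# Single-shot play: the same edge still loses about 40 percent of the time --
# the statesman's problem of having to be right this one time.
single = [run_bets(0.6, 1) for _ in range(10_000)]
print("share of single bets that lose:",
      sum(p < 0 for p in single) / len(single))

The point is simply that a thin, consistent edge compounds only when the game repeats; it offers little comfort when the outcome must be right the first time.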

The political scientist Bruce Bueno de Mesquita is arguably the king of quants when it comes to predicting foreign affairs. Frequently funded by the Defense Department, Bueno de Mesquita insists that foreign affairs can be predicted with 90 percent accuracy using his own secret formula. Of course, most of his 90 percent accuracy likely comes from predictions that present trends will continue—which typically they do.

The crux of Bueno de Mesquita’s model rests largely on the inputs to his algorithm. He says that in order to predict what people are likely to do, we must first approximate what they believe about a situation and what outcomes they desire. He insists that most of the information we need to assess their motives is already available through open sources. Classified data, he contends, are rarely necessary. On at least this score, he is probably correct. Though skillful intelligence work can garner some true gems about enemy intentions, most of the time neither the quantity nor the secrecy of information is what matters most to predicting individual behavior. What matters is having the relevant information and the capacity to analyze it.

The crucial problem with Bueno de Mesquita’s approach is its reliance on consistently accurate, quantifiable assessments of individuals. A model will be as weak as its inputs. If the inputs are off, the output must be off—and sometimes dramatically so, as Bueno de Mesquita is quick to note on his own website: “Garbage in, garbage out.” Yet this awareness does not dissuade him from some remarkable assertions. Take, for example, the assessments of Adolf Hitler before he came to power. Bueno de Mesquita spends one section of his book, “The Predictioneer’s Game,” explaining how, if politicians in 1930s Germany had had access to his mathematical model, the Socialists and Communists would have seen the necessity of cooperating with each other and with the Catholic Center Party as the only means of preventing Hitler’s accession to the chancellorship. He assumes that Hitler’s opponents could easily have recognized Hitler’s intentions. He further assumes that the Catholic Center Party could have been persuaded to align against the Nazis, an assumption that looks much more plausible in a post–World War II world. In 1932, the various party leaders were surely not envisioning the future as it actually unfolded. Their actions at the time no doubt seemed the best choice in a bad situation. No mathematical model of the future would likely have convinced them otherwise. Assessments are only as good as the assessors, and quantifying bad assessments will yield useless, if not disastrous, results.

None of this means that all efforts at prediction are pure folly. Bueno de Mesquita’s larger aim is worthy: to devise more rigorous methods of foreseeing behavior. An alternative approach to his quantitative metrics is to develop our sense for how the enemy behaves. Though less scientific, it could be far more profitable, and it is clearly very much in need.

Quants are skilled at harnessing algorithms for pattern recognition and for spotting pattern breaks. But their methods work best when their algorithms can scan big data sets of iterative events, focusing on the numbers that truly count. Anyone who has ever received a call from a credit card company alerting her to unusual activity on her account knows that MasterCard and Visa employ sophisticated algorithms to identify purchasing patterns and sudden deviations, as sketched below. This is a realm in which computers provide enormous added value. But in the realms where human behavior is less amenable to quantification, we must supplement number crunching with an old-fashioned people sense. It is here that meaningful pattern breaks can contain some clues. Perhaps surprisingly, within the heart of America’s defense establishment, one man and his modest staff have spent decades refining their strategic empathy. Their successes, as well as their failures, offer useful tips for those who would predict their enemies’ behavior.
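As a toy sketch of that pattern-and-deviation idea (an illustration added here, not the card networks’ actual systems), one could flag a purchase that falls far outside a customer’s historical spending:

from statistics import mean, stdev

def is_unusual(history, new_amount, threshold=3.0):
    """Flag a purchase that deviates sharply from the established spending pattern."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

past_purchases = [42.0, 18.5, 55.0, 23.75, 61.2, 37.9, 48.0, 29.5]
print(is_unusual(past_purchases, 45.0))    # routine purchase: False
print(is_unusual(past_purchases, 2400.0))  # sudden deviation: True

Real systems are vastly more sophisticated, but the structure is the same: learn the pattern from iterative data, then pay attention to the break.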

Excerpted from "A Sense of the Enemy: The High-stakes History of Reading Your Rival's Mind" by Zachary Shore. Published by Oxford University Books. Copyright 2014 by Zachary Shore. Reprinted with permission of the author and publisher. All rights reserved.


By Zachary Shore
