You don't like what you think you like: Bad taste, manipulated choices and the new science of decision-making

It's not about merit: In politics and in our own lives, what we know about choices others make can drive our own

Published January 19, 2015 9:30PM (EST)

St. Vincent, Taylor Swift (AP/Owen Sweeney/Reuters/Carlo Allegri)

Excerpted from "Wiser: Getting Beyond Groupthink to Make Groups Smarter"

Researchers have long known that errors in groups can be amplified if their members influence one another. Of course, the human animal is essentially sociable, and human language may be the most subtle and engaging social mechanism in the animal kingdom. The brain is wired to help us naturally synchronize with and imitate other human beings from birth. Emotions are contagious in our species; obesity appears to be contagious too, and the same may well be true for happiness itself. (We know a behavioral economist who offers what he calls a law of human life: “You cannot be happier than your spouse.”) It is no exaggeration to say that herding is the fundamental behavior of human groups.

If you are doubtful, consider a brilliant study of music downloads. Sociologist Matthew Salganik and his coauthors conclude that there is a lot of serendipity in which songs succeed and which fail, and that small differences in early popularity make a major difference in ultimate success. In business, many people are aware of this point— but not nearly aware enough. They underrate the extent to which success or failure depends on what happens shortly after launch and frequently overrate the contributions of intrinsic merit.

Here’s how Salganik’s study worked. The researchers created a control group in which people could hear and download one or more of forty-eight songs by new bands. In the control group, intrinsic merit and personal tastes drove the choices. Individuals were not told anything about what anyone else had downloaded or liked. They were left to make their own independent judgments about which songs they liked. To test the effect of social influences, Salganik and his coauthors also created eight other subgroups. In each of these subgroups, each member could see how many people had previously downloaded individual songs in his or her particular subgroup.

In short, Salganik and his colleagues were testing the relationship between social influences and consumer choices. What do you think happened? Would seeing what others had downloaded change which songs ultimately prevailed?

The answer is that it made a huge difference. While the worst songs (as established by the control group) never ended up at the very top and the best songs never ended up at the very bottom, essentially anything else could happen. If a song benefited from a burst of early downloads, it could do really well. If it did not get that benefit, almost any song could be a failure. As Salganik and Duncan Watts later demonstrated, you can manipulate outcomes pretty easily, because popularity is a self-fulfilling prophecy. This means that if a site shows (falsely) that a song is being downloaded a lot, that song can get a tremendous boost and eventually become a hit. John F. Kennedy’s father, Joe Kennedy, was said to have purchased tens of thousands of early copies of his son’s book, "Profiles in Courage." The book became a best seller. Smart dad.
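The dynamic is easy to see in a toy simulation. The sketch below is ours, not the study's actual design: the quality scores and the popularity-feedback rule are assumptions for illustration. It releases the same forty-eight songs into two separate "social" worlds, where each new listener's choice is weighted by a song's intrinsic quality plus its running download count.

```python
import random

def simulate_market(quality, n_listeners=5000, seed=0):
    """Toy cumulative-advantage market: a song's chance of being
    downloaded grows with its running download count."""
    rng = random.Random(seed)
    downloads = [0] * len(quality)
    for _ in range(n_listeners):
        # Weight = intrinsic quality plus current popularity,
        # so an early burst of downloads compounds.
        weights = [q + d for q, d in zip(quality, downloads)]
        song = rng.choices(range(len(quality)), weights=weights)[0]
        downloads[song] += 1
    return downloads

# Forty-eight songs with fixed (made-up) intrinsic quality scores.
rng = random.Random(42)
quality = [rng.random() for _ in range(48)]

# Release the very same songs into two otherwise identical social worlds.
world_a = simulate_market(quality, seed=1)
world_b = simulate_market(quality, seed=2)
print("Hit song in world A:", max(range(48), key=world_a.__getitem__))
print("Hit song in world B:", max(range(48), key=world_b.__getitem__))
```

Run it with different seeds and the "hit" keeps changing, while the very worst songs rarely win: quality constrains the outcome without determining it, which is roughly the pattern Salganik reported.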

There’s a lesson here both for businesses that seek to market products and for foolish groups whose leaders often announce a preference for a proposed course of action before the groups have gathered adequate information or aired possible outcomes. The lesson seems obvious, but it now has a solid empirical foundation: if a project, business, politician, or cause gets a lot of early support, it can turn out to be the group’s final preference, even if it would fail on its intrinsic merits without that support. Both small and large groups can be moved in this way. If the initial speakers in a group favor a particular course of action, the group may well end up favoring that position, even if it would not have done so if the initial speakers had been different.

When products succeed, we often think that their success was inevitable. Wasn’t the Mona Lisa bound to be one of the most famous and admired paintings in the world? Isn’t her portrait uniquely mysterious and irresistible? Weren’t the Beatles destined for success? The Harry Potter series is one of the most popular in the history of publishing. The books are great; how could it be otherwise? Beware of this way of thinking, because inevitability is often an illusion. We can’t prove it here, but it’s true: with a few twists of fate, you would never have heard of the Mona Lisa, the Beatles, or even Harry Potter. (In fact, each of these now iconic works had inauspicious beginnings but was bumped into the limelight by unpredictable bursts of popularity.)

For their part, many groups end up with a feeling of inevitability, thinking that they were bound to converge on what ultimately became their shared view. Beware of that feeling too, because it is often an illusion. The group’s conclusion might well be an accident of who spoke first— and hence what we might call an incidental side effect of the group’s discussions. An agenda that says “bosses go first” might produce very different outcomes from one that says “subordinates go first.”

Savvy managers are often entirely aware of this point and organize the discussion so that certain people speak at certain times. Within the federal government, some of the most effective leaders are masters of this process. They know that if, at a crucial juncture, they call on the person with whom they agree, they can sway the outcome. Lesson for managers: devote some thought to the speakers with whom you agree, and get them to speak early and often. Another lesson for managers: don’t do that if you don’t know what the right answer is.

Up-Votes and Down-Votes

Other research supports our central point (and also helps to dispel the illusion of inevitability). Lev Muchnik, a professor at Hebrew University of Jerusalem, and his colleagues carried out an ingenious experiment on a website that displays a diverse array of stories and allows people to post comments, which can in turn be voted up or down. With respect to the posted comments, the website compiles an aggregate score, which comes from subtracting the number of down-votes from the number of up-votes. To put a metric on the effects of social influences, the researchers explored three conditions: (1) “up-treated,” in which a comment, when it appeared, was automatically and artificially given an immediate up-vote; (2) “down-treated,” in which a comment, when it appeared, was automatically and artificially given an immediate down-vote; and (3) “control,” in which comments did not receive any artificial initial signal. Millions of site visitors were randomly assigned to one of the three conditions. The question: What would be the ultimate effect of an initial up-vote or down-vote?

You might well think that after so many visitors (and hundreds of thousands of ratings), a single initial vote could not possibly matter. Some comments are good, and some comments are bad, and in the end, quality will win out. It’s a sensible thought, but if you thought it, you would be wrong. After seeing an initial up-vote (and recall that it was entirely artificial), the next viewer became 32 percent more likely also to give an up-vote. What’s more, this effect persisted over time. After five months, a single positive initial vote artificially increased the mean rating of comments by a whopping 25 percent! It also significantly increased turnout (the total number of ratings).

With respect to negative votes, the picture was not symmetrical— an intriguing finding. True, the initial down-vote did increase the likelihood that the next viewer would also give a down-vote. But that effect was rapidly corrected. After the same period of five months, the artificial down-vote had zero effect on median ratings (though it did increase turnout). Muchnik and his colleagues conclude that “whereas positive social influence accumulates, creating a tendency toward ratings bubbles, negative social influence is neutralized by crowd correction.” They think that their findings have implications for product recommendations, stock-market predictions, and electoral polling. Maybe an initial positive reaction, or just a few such reactions, can have major effects on ultimate outcomes— a conclusion very much in line with Salganik’s study of popular music.
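The asymmetry can be captured in a small sketch. In the simulation below (an illustration with assumed probabilities, not the paper's estimates), each voter sees a comment's running score: a positive score tempts some voters to up-vote reflexively, while a negative score triggers extra scrutiny that tends to correct the down-vote.

```python
import random

def final_score(initial_vote, true_quality=0.5, n_voters=2000, seed=0):
    """Final score of one comment under herding with asymmetric correction.
    initial_vote is +1 (up-treated), -1 (down-treated), or 0 (control)."""
    rng = random.Random(seed)
    score = initial_vote
    for _ in range(n_voters):
        if score > 0 and rng.random() < 0.15:
            vote = 1  # herding: some voters simply copy a positive signal
        elif score < 0 and rng.random() < 0.3:
            # skepticism: a visible negative score prompts a fresh look at
            # the comment itself, which tends to undo the down-vote
            vote = 1 if rng.random() < 0.7 else -1
        else:
            vote = 1 if rng.random() < true_quality else -1  # judge on quality
        score += vote
    return score

for label, init in [("control", 0), ("up-treated", 1), ("down-treated", -1)]:
    runs = [final_score(init, seed=s) for s in range(300)]
    print(f"{label:12s} mean final score: {sum(runs) / len(runs):8.1f}")
```

In runs like these, up-treated comments finish with a noticeably higher mean score than the control, while down-treated comments climb back to roughly the control's level: positive influence accumulates, and negative influence gets crowd-corrected.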

We should be careful before drawing large lessons from one or two studies, particularly when no money was on the line. But there is no question that when groups move in the direction of some products, people, political initiatives, and ideas, the movement may not be because of their intrinsic merits, but because of the functional equivalent of early up-votes. There are lessons here about the extraordinary unpredictability of groups— and about their frequent lack of wisdom. Of course, Muchnik’s study involved very large groups, but the same thing can happen in small ones. In fact, the effect can be even more dramatic in small groups, as an initial up-vote— in favor of some plan, product, or verdict— has a large effect on other votes.

How Many Murders?

Here’s a clean test of group wisdom and social influences. The median estimate of a large group is often amazingly accurate. But what happens if people in the group know what others are saying? You might think that knowledge of this kind will help, but the picture is a lot more complicated.

Jan Lorenz, a researcher in Zurich, worked with several colleagues to learn what happens when people are asked to estimate certain values, such as the number of assaults, rapes, and murders in Switzerland. The researchers found that when people are informed about the estimates of others, there is a significant reduction in the diversity of opinions— a result that tends to make the crowd less wise. (Note, however, that even with diminished diversity, the crowd is still somewhat more accurate than a typical individual.) Lorenz and his coauthors found another problem: because people hear about others’ estimates, they also become more confident. Notably, people received monetary payments for getting the right answer, so their mistakes were consequential— not just an effort to curry favor with others. The authors conclude that for decision makers, the advice given by a group “may be thoroughly misleading,” at least when group members are interacting with one another.
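A compact way to see the diversity problem is to simulate it. In the sketch below (our illustration; the noise model and the revision weight are assumptions, not the study's procedure), a hundred people first guess a quantity independently, then repeatedly nudge their estimates toward the group average, the way people do after hearing one another's numbers.

```python
import random
import statistics

TRUTH = 1000  # hypothetical true value of the quantity being estimated
rng = random.Random(7)

# Independent first guesses: noisy, right-skewed individual estimates.
estimates = [TRUTH * rng.lognormvariate(0.0, 0.6) for _ in range(100)]

for round_no in range(5):
    median = statistics.median(estimates)
    spread = statistics.stdev(estimates)
    print(f"round {round_no}: median={median:7.0f}  spread={spread:7.0f}")
    group_mean = statistics.mean(estimates)
    # After hearing others, each person moves 30% of the way toward
    # the group mean (an assumed social-influence weight).
    estimates = [e + 0.3 * (group_mean - e) for e in estimates]
```

The spread collapses round after round, so the group looks ever more confident and unanimous, yet the median hardly improves; with a skewed starting distribution like this one, it typically drifts toward the group mean rather than toward the truth.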

Notwithstanding their differences, the Salganik, Muchnik, and Lorenz studies have one thing in common: they all involve social cascades. A cascade occurs when people influence one another, so much so that participants ignore their private knowledge and rely instead on the publicly stated judgments of others. Corresponding to our two accounts of social influences, there are two kinds of cascades: informational and reputational. In informational cascades, people silence themselves out of respect for the information conveyed by others. In reputational cascades, people silence themselves to avoid the opprobrium of others.

Informational Cascades

Cascades need not involve deliberation, but deliberative processes and group decisions often involve cascades. The central point is that those involved in a cascade do not reveal all that they know. As a result, the group does not obtain important information, and it often decides badly.

Informational Cascades in Action

To see how informational cascades work, imagine a company whose officials are deciding whether to authorize some new venture. Let us assume that the group members are announcing their views in sequence, a common practice in face-to-face teams and committees everywhere. Every member has some private information about what should be done. But each also attends, reasonably enough, to the judgments of others.

Andrews is the first to speak. He suggests that the venture should be authorized. Barnes now knows Andrews’s judgment. It is clear that she, too, should vote in favor of the venture if she agrees independently with Andrews. But suppose that her independent judgment is otherwise. Everything depends on how much confidence she has in Andrews’s judgment and how much confidence she has in her own. Suppose that she trusts Andrews no more and no less than she trusts herself. If so, she should be indifferent about what to do and might simply flip a coin. Or suppose that on the basis of her own independent information, she is unsure what to think. If so, she will follow Andrews.

Now turn to a third member, Carlton. Suppose that both Andrews and Barnes have argued in favor of the venture, but that Carlton’s own information, though inconclusive, suggests that the venture is probably a bad idea. Here again, Carlton will have to weigh the views of both Andrews and Barnes against his own. On reasonable assumptions, there is a good chance that Carlton will ignore what he knows and follow Andrews and Barnes. After all, it seems likely, in these circumstances, that both Andrews and Barnes had reasons for their conclusion. Unless Carlton thinks that his own information is better than theirs, he should follow their lead. If he does, Carlton is in a cascade.

If Carlton is quite savvy, he might consider the possibility that Barnes deferred to Andrews’s judgment and did not make any kind of independent judgment. If so, Carlton might ignore the fact that Barnes agreed with Andrews. But in the real world, many group members do not consider the possibility that earlier speakers deferred to the views of still earlier ones. People tend to think that if two or more other people believe something, each has arrived at that belief independently. This is an error, but a lot of us make it.

Now suppose that Carlton goes along with Andrews and Barnes, and that group members Davis, Edwards, and Francis know what Andrews, Barnes, and Carlton said and did. On reasonable assumptions, they will do exactly what Carlton did: favor the venture regardless of their private information (which, we are supposing, is relevant but inconclusive). This will happen even if Andrews initially blundered. Again, Davis, Edwards, or Francis might be able to step back and wonder whether Andrews, Barnes, and Carlton really have made independent judgments. But if they are like most people, they will not do that. The sheer weight of the apparently shared view of their predecessors will lead them to go along with the emerging view.

If this is what is happening, we have a now-familiar problem: those who are in the cascade do not disclose the information that they privately hold. In the example just given, decisions will not reflect the aggregate knowledge of people in the group— even if the information held by individual members, if actually revealed and aggregated, would produce a quite different (and possibly much better) result. The venture will be authorized even if it is a terrible idea and even if group members know that it is a terrible idea. The simple reason is that people are following the lead of those who came before. Subsequent speakers might fail to rely on, and fail to reveal, private information that actually exceeds the information collectively held by those who started the cascade.
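The logic can be made concrete with a tiny sequential-choice simulation, a version of the textbook urn model of cascades (the 60 percent signal accuracy and the tallying rule below are standard teaching assumptions, not the authors' example). Each member privately receives a signal, then announces a view after weighing earlier announcements against that signal: one prior announcement cancels one signal, and ties are broken by the member's own signal.

```python
import random

def run_group(truth=True, n_members=10, accuracy=0.6, seed=0):
    """Sequential announcements with private signals. A lead of two or
    more prior announcements on one side swamps the private signal,
    which is the point at which a cascade starts."""
    rng = random.Random(seed)
    announcements = []
    for _ in range(n_members):
        signal = truth if rng.random() < accuracy else not truth
        lead = announcements.count(True) - announcements.count(False)
        if lead > 1:
            view = True       # cascade: follow the apparent consensus
        elif lead < -1:
            view = False      # cascade in the other direction
        else:
            view = signal     # otherwise, go with private information
        announcements.append(view)
    return announcements

# Across many groups, count how often the last speaker lands on the
# wrong answer, even though private signals favor the truth 60/40.
wrong = sum(not run_group(seed=s)[-1] for s in range(1000))
print(f"groups ending on the wrong view: {wrong / 10:.1f}%")
```

A meaningful fraction of simulated groups lock onto the wrong answer after two unlucky early signals; once the lead reaches two, later members' private information never surfaces, which is exactly the failure described above.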

Here’s an example of an informational cascade in jury deliberation. One of us (Hastie) has conducted dozens of mock-jury studies, with thousands of volunteer jurors, many drawn from big-city jury pools. In these studies, the volunteers allowed themselves to be videotaped while deliberating to verdicts on difficult but typical cases. In many juries (mock and real), the deliberation begins with a straw vote, taken just to see where everyone stands.

In dozens of juries, we observed a scenario like the following. The straw vote would circle the jury table and often would start with a small cascade of two or three jurors favoring, with increasing confidence, the same verdict. Because the researchers collected predeliberation private ballots, we knew which verdict was privately preferred by each of the jurors at the table. Let’s suppose that Jurors 1, 2, and 3 endorsed second-degree murder, privately and publicly in the straw vote. But we knew that Juror 4 had voted for not guilty and had indicated the highest level of confidence on the predeliberation ballot.

So, what did Juror 4 do, when confronted with the solid line of three murder verdicts? He paused for a second and then said, “Second degree.” At this point, Juror 7, an undecided vote, suddenly spoke up and asked, “Why second degree?” A momentary deer-in-the-headlights expression flitted across Juror 4’s face, and then he replied, “Oh, it’s just obviously second degree.” This scenario stands out as an iconic example of an informational cascade, and we have no doubt that it is played out every day in jury rooms, board rooms, and political conference rooms all over the world.

Anxiety and Cascading

People who are humble, pliable, or complacent are especially likely to fall into a cascade. But anxious people might well shatter it, certainly if it reflects a high degree of optimism. Nancy-Ann DeParle, who served as White House deputy chief of staff for policy, is a gold-medal shatterer of cascades, simply because she asks tough, skeptical questions that force people to rethink. Every group needs some people like that, who wonder: If lots of people share an opinion on a hard question, might it be because they are following the lead of one or two blunderers? Why are there no dissenters? (Recall the fiasco at the Bay of Pigs.)

It is important to understand that in relying on the statements or actions of their predecessors, group members are not acting irresponsibly. In fact, informational cascades can occur when members are following rigorously rational thought processes. Group members might well be reacting sensibly to the informational signals they receive. If most people think that the venture is a good idea, it’s probably a good idea. You should feel, sensibly, that you need a pretty strong counterargument if you are going to disagree with what your colleagues have said.

But we should not underestimate our tendencies to rely on confidence, our own and others’, as a cue for what information deserves the most attention (independent of the validity of that information). One of the most insidious side effects of group decision making is that people believe in wrong group decisions more than they believe in incorrect individual decisions. The social proof resulting from cascades (and conformity more generally) amplifies everyone’s trust in the incorrect outcome. And inputs into the decision process from highly confident or dominant personalities have more impact and increase the esteem accorded to those individuals, regardless of the quality of their contributions.

Reprinted by permission of Harvard Business Review Press. Excerpted from "Wiser: Getting Beyond Groupthink to Make Groups Smarter" by Cass R. Sunstein and Reid Hastie. Copyright 2015. All rights reserved.


By Cass Sunstein

Cass R. Sunstein is the Karl N. Llewellyn Distinguished Service Professor at the law school and the department of political science at the University of Chicago.


By Reid Hastie


