Undoing Undue Hate: The corrosive role of common false beliefs

Author of "Undue Hate" on how a handful of universal cognitive biases exacerbate perceived divisions

Published May 7, 2023 12:00PM (EDT)

Supporters of President Donald J. Trump before the start of the rally. At the Reading Regional Airport in Bern Township, PA Saturday afternoon October 31, 2020 where United States President Donald J. Trump spoke during a campaign rally for his bid for reelection. (Ben Hasty/MediaNews Group/Reading Eagle via Getty Images)

Last week, I interviewed media scholar Daniel Kreiss about a paper he co-wrote with Shannon McGregor criticizing what I'd call the issue prioritization of polarization. Kreiss told me, "Polarization becomes a way to talk about politics without talking about politics at all, without actually getting at the underlying issues." 

But if polarization talk obscures underlying issues from above, a new book by behavioral economist Daniel Stone, "Undue Hate," dissects how "objectively false and overly negative beliefs about the other side's character traits" similarly obscure them from below—below our level of consciousness, that is. At first glance, Stone's book might seem to be just the sort of thing that Kreiss and McGregor were criticizing, and some of the things he says would seem to justify that. But most significantly, he's not pretending to explain everything with an overarching narrative. What he is doing is providing a carefully qualified account of how a handful of universal cognitive biases play a significant role in exacerbating perceived divisions, along with fueling disaffection in everyday non-political settings—even among friends and family. And, to his credit, he recognizes these biased tendencies in himself. 

On the one hand, this can fit neatly into a both-sides, status-quo-anchored harmony quest. But it can also serve an empirically grounded quest for social justice, for Martin Luther King's "beloved community." Indeed, Stone begins his book with a relatively little-known story of how Shirley Chisholm began the process of turning George Wallace away from the white supremacist path he had blazed so ferociously at such a crucial time in our history. 

While Stone's book doesn't lean into that story the way that I would, it can certainly help us to do so. Ideally, getting rid of the "undue hate" Stone helps us understand would help us address real problems with factually supported solutions that have broad public support. And those solutions, by a significant preponderance, would make us a much more progressive country than we are today—as I've written about repeatedly in the past. Which is precisely what first drew me to Stone's work. This interview on his book has been edited for clarity and length.

In chapter one, you explain what the book is about: why we tend to experience too much affective polarization with one another. So what do you mean by affective polarization, and what do you mean by too much? 

Affective polarization is a standard term in the social sciences now that refers to emotional polarization. It tends to be used to refer to hostility or dislike or negative feelings that members of one party feel toward an opposition party. And one of the key claims the book makes is that it is possible for our negative feelings to be excessive, because our negative feelings, or feelings in general toward other people, are based on beliefs about those people's character traits, which have implications for their actions and their opinions. So, our beliefs about their character traits are what drive our feelings, and our beliefs about character traits can be right or wrong. 

There are some beliefs about other people that are arguably inherently subjective and impossible to evaluate with respect to accuracy. But if I have a belief that you have a dog and you mistreat it, and only feed it once a week, and you actually treat your dog really well, you feed it every day, my belief is simply wrong. And if my misguided belief makes me feel negatively toward you, then I would probably be feeling excessively negatively toward you. In the language of affective polarization, I would be too affectively polarized toward you because of my false belief about your character and your actions. 

As you just indicated in that example it's not limited to politics, though in the realm of politics it's especially noticeable, but your argument about these mechanisms is a general one, and you do point to non-political examples throughout the book, correct? 

That's right. I'm not the first person to compare political disputes to nonpolitical disputes, but I think I go a bit further than a lot of other research and literature on this topic. I claim that there are a lot of similarities, and we can learn about political polarization by understanding the similarities to interpersonal conflicts in other settings. 

In chapter two, you review several different types of evidence of affective polarization in US politics. Among them, you describe three types of false perceptions that I'd like you to briefly explain. First off, what is false polarization? 

False polarization refers to over-estimation of ideological polarization or over-estimation of the extent to which the parties differ in their ideologies and party positions. So, false polarization implies that we think the other side's views on policies or ideological issues are more extreme and more different from our own than they really are. 

OK. Second, what are false partisan stereotypes? 

That refers to stereotypes about demographic characteristics, like race and income and age. So, if I think that Democrats are 50% African-American when actually 25% of Democrats are African-American, that would be an example.

"To show people from South Carolina from California aren't so bad, just give them some time to interact with each other. "

And third, what are false meta-perceptions?

I don't know that that's the best term, but it's been a standard term that might be going out of style. It usually refers to false beliefs about the extent to which the other side dislikes our own side: over-estimation of the other side's negative feelings toward us. But there can be other types of false meta-perceptions. Another term used is second-order beliefs. So, it commonly refers to mistakes about the other side's beliefs about our side, but we could also have false second-order beliefs about, say, the other side's interest in political violence. You could over-estimate how supportive they are of political violence. 

Then, in chapter 3 you describe a number of overarching biases. In a few words, I'd like you to describe each of the following. First, over-precision. 

That's a standard term for an important type of overconfidence: overconfidence in beliefs, overconfidence in how much we know. It refers to having overly precise beliefs, to thinking we understand something more precisely than we do. 

The second is WYSIATI -- "what you see is all there is."

That's a term that Daniel Kahneman coined in his very well-known book "Thinking Fast and Slow." It refers to neglect of the fact that we almost always (or even always) have only partial information—so, the mistake of assuming that the information we observe is the full story. Thinking that what we see is all there is, is one reason that we hold overly precise beliefs and become overconfident. 

Third is naïve realism. 

That's a term from psychology. It isn't used much in economics or behavioral economics. But it refers to naïvely thinking that we see the world more realistically or objectively than we really do. With all these terms there can be variation in the way they're used, but I think it refers both to thinking that our beliefs are more objective than they really are, neglecting the biases in our beliefs, and also to thinking that our tastes are more objective than they really are. So, when we think that a particular type of music is the best music—it might be a matter of taste, something that's impossible to evaluate objectively, but naïve realism will make us think that our favorite music is, realistically and objectively, the best music.

OK. Fourth is motivated reasoning. 

That one is very intuitive--it's sort of a fancy term for wishful thinking, but it refers to a bit more than just thinking, because it involves a bit of reasoning. Motivated reasoning makes us come up with reasons for believing things that we wish to be true. We're motivated in our search for rationales and explanations of the things that we observe; we're motivated to find explanations that lead to the answer that we wish to be true. 

Fifth is lack of intellectual humility. 

That's one I don't talk about a ton in the book, but it's worth mentioning because there have been several papers linking it to polarization. Intellectual humility is pretty much what it sounds like: being comfortable with the fact that we're all wrong sometimes, and that we're all uncertain about things nearly all the time. Being comfortable with uncertainty and with acknowledging errors results from intellectual humility. Lack of intellectual humility leads to intellectual overconfidence and causes us to overestimate what we know, and also to refuse to admit it when we're wrong or should change our mind about something. 

Sixth, which is important because it can be overlooked, unmotivated confirmation bias.

Confirmation bias is something most of us are pretty familiar with, and have heard a lot about. I draw a distinction between motivated and unmotivated. Unmotivated refers to our tendency to confirm beliefs even though we don't particularly wish them to be true. It will make us see ambiguous information in a way that confirms our pre-existing belief, whether or not it's a belief that we consider desirable. 

A classic example of this would be someone who is overly pessimistic about their own attributes or the trajectory of their own life, someone who might be depressed and think that they won't amount to anything. I talk about the example of running into someone in the grocery store who blows you off. Unmotivated confirmation bias would make us think, "They don't like me. Nobody likes me," whereas a more accurate interpretation would be, "Maybe they're just in a rush, or they didn't see me," or there are a million other reasons they might have hurried away without spending time talking. 

In chapter 4, "Tastes and Truth," you adopt Jonathan Haidt's metaphor comparing moral values to tastes at the broad level of universalism versus communitarianism. How does undue hate exacerbate such differences in taste?

So, Haidt's theory is admittedly somewhat controversial. His book, I think, is very well known and has a lot of fans, and some critics, but... 

Well, I'm a critic, but the universalism versus communitarianism phrasing is less problematic, and that's the one you focus more on, so let's go with that.

Yes, there's recent evidence from the really excellent behavioral economist Ben Enke. So, if we take it as true that these differences in values are like differences in taste—meaning that we can't say that one is right and one is wrong, they're just different—that's essentially how I'm using the term taste. In other words, if you like red and I like blue, neither of us is right or wrong; our tastes are just different. So, if we accept this claim—that differences in moral values are like differences in taste—but people don't realize it because of their naïve realism, they will think that other people who have different tastes from their own are not just different but simply wrong. And they'd be mistaken. In the same way, if I know you like red and I like blue, but I think that blue is objectively and universally correct, not a matter of taste, then when I hear that you like red I would mistakenly infer that you have some character flaw that caused you to choose red. 

Going back to how you begin your book, you start with a little-known story of how Shirley Chisholm came to visit George Wallace in the hospital after he'd been shot in an assassination attempt during the 1972 Democratic primary, in which they both were candidates. Wallace was shocked. As you write: "'Shirley Chisholm! What you doing here?' Wallace asked. 'I don't want what happened to you to happen to anyone,' she replied. Wallace came to tears. When it was time for Chisholm to leave, Wallace did not want to let go of her hand."

You later note that, a few years afterward, Wallace publicly renounced racism and asked for forgiveness. I bring this up now because that story *doesn't* seem to primarily relate to undue hate so much as it highlights the asymmetry between Chisholm's universalism and Wallace's communitarianism. It's inconceivable to me that Wallace could have reached out to Chisholm if their roles had been reversed. So, how do you make sense of that story?

So, my interpretation is that it shows how, if one had assumed that Wallace would never renounce racism, that would've been a false assumption. And that might have caused some people not to visit him in the hospital, not to give him that chance. Surely, Shirley Chisholm was not subject to that undue hate, but some people might have figured Wallace was a lost cause, that there was no way he'd ever admit he was wrong and admit his moral failure, and that would've been a false belief based on undue hate. So that would've been an example, on one side. And absolutely, if Wallace or someone else had refused to visit Chisholm in the hospital, or underestimated Chisholm's character in any number of ways, they could probably be subject to undue dislike or undue hate toward her.

But you see my point, however, that universalism carries with it a greater capacity, or inclination, or potential for not believing the worst about people. And communitarianism carries more potential for believing the worst. I mean, there's an asymmetry, it seems to me.

I do see your point. So you're saying it would be understandable to argue that communitarianism is going to inherently cause people to dislike people outside their community more than they should. 

I'm not saying that there's no value in communitarianism, but I'm saying that it's more problematic and needs special attention.

It's a nice point. One response, obviously, is that this is sort of a huge philosophical issue that I think hasn't been resolved by professional philosophers, so I'm not going to resolve it: to what extent should we favor our own children's welfare over the well-being of a child on the other side of the world, who might be starving to death? So, we all take our kids out to dinner and don't donate that money to kids [we don't know]. We all exhibit communitarianism to different degrees.


Right. I'm not condemning it outright. Very few people would.

So, just because we focus on our communities doesn't mean we necessarily have negative beliefs about other communities. That's one reason why communitarianism wouldn't inherently cause undue hate. It could just cause undue neglect. There is a difference between focusing our attention and efforts on our community versus thinking our community is better than other communities. 

Let's move on to chapter 5. You deal with conflicts fueled or generated for strategic reasons and you employ the repeated prisoner's dilemma as a benchmark model. But you argue that the greater complexity of the real world is itself a major cause of biased dislike. So how can we best understand the role of undue hate here in strategic conflicts? 

A game theorist would say we play repeated games with other people, in reality, all the time. We're constantly interacting, we make our own choices, they make choices and the interaction of our choices affects our well-being and their well-being. But we don't play a game with other people just once, we see them again and again and again. Now, the exact game almost always evolves, and changes, and is a little bit different every day and every time we interact. And the exact game is never as simple as a prisoner's dilemma.

But it's still a useful model. Admittedly, some people have said it's an overused model, but it's used so much because it is so insightful: in a prisoner's dilemma, a player essentially has a choice between a self-interested action, which is bad for the group, and a socially beneficial action, which is not best for ourselves. So, we can take the selfish action, which is often called "defect," or the socially beneficial action, "cooperate." 

In a one-shot prisoner's dilemma, in theory, everyone is supposed to defect, but in a repeated prisoner's dilemma, we can cooperate because we know our defection would be punished by the other side's defection in the future, and our current cooperation would be rewarded by their cooperation in the future. So, in theory, if you see the other side defect—take the selfish action—in a repeated prisoner's dilemma you're supposed to punish them in response, to defect. An appropriate punishment can get both sides back on track, make both cooperate forever after. You have to punish to keep them in line. 

So, that's the theory. But reality isn't so simple, right?

In reality, it's often very unclear what counts as defection. Sometimes we defect and don't realize it, meaning we take an action that actually is selfish and socially harmful, but we don't even realize it; we think our action was cooperation. And when we do that and the other side sees us defect, they're going to punish us. And when we see them punish us, but we think we acted cooperatively, that our action was good, we think their punishment is inappropriate. So, if they're punishing us inappropriately, we'll then feel entitled to punish them. 

So, what does that lead to?

The noise and ambiguity about what cooperation and defection are can cause two players who potentially could have cooperated repeatedly to get off track in a few ways. One is one player thinking their own defection is cooperation, using motivated reasoning or whatever bias to interpret their own actions overly optimistically. Another possibility is mistakenly interpreting the other side's cooperation as defection: it's possible the other side did cooperate, but we mistakenly see it as a defection. Another possibility is limited memory. Suppose I defect without even realizing it, and they punish me for it. I see them punish me, but I've forgotten about the transgression that caused them to punish me; I have such a short memory that I just see them act badly and forget about the reasons.

That might sound implausible, but memories are surprisingly short in a lot of ways. We tend to punish the other side for old sins; we have long memories for their sins, and very short memories for our own. So you see how the ambiguity, plus bias, plus complexity can make the potentially beneficial instinct to punish bad actions lead to trouble. 

Anything else?

There's at least one other important factor here I should mention, which is escalation. In a simple theoretical prisoner's dilemma the actions are binary in each round. In reality, the degree of defection is often unconstrained and we can retaliate more strongly, we can defect twice or three times, we can escalate and, of course, that's another thing that can lead to trouble. 
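The dynamic Stone describes can be made concrete with a toy simulation (my illustration, not from the book): two tit-for-tat players in a repeated prisoner's dilemma, where each side occasionally misperceives the other's last move. The payoff values and the `noisy_tit_for_tat` helper are hypothetical choices for this sketch.

```python
import random

# Standard prisoner's dilemma payoffs for the row player (illustrative values).
# 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def noisy_tit_for_tat(rounds, noise, seed=0):
    """Two tit-for-tat players; each misperceives the other's last action
    with probability `noise`. Returns the fraction of rounds in which
    both players actually cooperated."""
    rng = random.Random(seed)
    a_plays, b_plays = 'C', 'C'  # both start by cooperating
    both_cooperated = 0
    for _ in range(rounds):
        if a_plays == 'C' and b_plays == 'C':
            both_cooperated += 1
        # Each side observes the other's move, possibly incorrectly (flipped).
        a_sees = b_plays if rng.random() > noise else ('C' if b_plays == 'D' else 'D')
        b_sees = a_plays if rng.random() > noise else ('C' if a_plays == 'D' else 'D')
        # Tit-for-tat: copy what you *think* the other side just did.
        a_plays, b_plays = a_sees, b_sees
    return both_cooperated / rounds

print(noisy_tit_for_tat(10_000, 0.0))   # no misperception: cooperation persists
print(noisy_tit_for_tat(10_000, 0.05))  # small misperception rate erodes it
```

With zero noise the players cooperate in every round; even a 5% misperception rate sends them into recurring punishment cycles, so mutual cooperation collapses well below full, which is the "getting off track" mechanism described above.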

Chapter 6 "Information," covers a lot of ground, but the two most significant things that stood out to me were first that offline polarization is significantly larger than online and second that robust real world contact can be an effective antidote to polarization---as shown, for example in the experience of "America in One Room" which I've written about before. Could you sum up what this tells us about how we got here & how we might get ourselves unstuck? 

I think it's underappreciated that ideological segregation, to use what I think is a useful term, is much higher offline than online. On Twitter and on the web we tend to run into the other side in various ways, but it's really offline, in our neighborhoods and in our places of employment and in our friend networks and families, where we really tend to have ideologically like-minded groups. So, we think, "I just don't know anyone that voted for Trump" or "I don't know anyone that voted for Biden," and that can make us think, "Well, everyone I know voted for Trump. Trump must've won. Biden must've stolen the election." 

But also, since we're all naturally communitarian to some degree, it's easy to assume the worst about people we don't know, and we have reasons for sharing negative information with our social networks even offline. So when we get together at a bar with a friend or family and we talk politics, we often naturally talk about the latest horrible thing the other side did, and there's no one there to stick up for them and say, "Well, actually, they had a halfway decent reason for doing this, so it wasn't as bad as it seemed." 

So even if people aren't getting their information from cable news directly, they're getting it from a friend; we tend to bash the other side and not get the whole story. And if we were completely rational economic agents we would take this skewed information into account, but, of course, we don't, so ideological segregation offline is going to skew us toward thinking the other side is worse than they are. 

And what about robust real world contact?

When we actually do have contact with the other side, when we get together in a room, especially in person, we see that they're human after all. And the evidence shows that it tends to warm up our feelings. The America in One Room project is a prominent, really impressive study that showed this, but there are a number of other studies; there have been several since I had to make my last submission to the publisher. 

It's not that direct contact always improves relations; the conditions have to be not too competitive, and they have to be reasonably constructive. But in general, my interpretation of the evidence is that when we get to know the other side a little bit better and spend a little bit of time talking to them in just a normal setting, good things happen. If you just put two people with different views together for half an hour and gave them something to drink (it doesn't have to be alcoholic, just something to make sure they're not too hungry or thirsty, and they're in a good mood), they're going to get to know each other a bit better and realize that the worst aspects of their views about each other were misguided. Even if they don't change each other's minds about anything, they'll see that they're human, and they will respect each other and learn to be more tolerant of each other. 

Finally, what's the most important question I didn't ask, and what's the answer?

Well, you kind of asked it in your previous question, which was how we actually solve this.

The question of how we solve the problems in America's politics is too big to even attempt to tackle here. But one view that I mention in the book, pretty briefly toward the very end, is that I think we should be pressuring top officials to talk about this more and to think about actions they can take. I think people have given up because they think it's a lost cause to expect that the president and Senate Majority Leader would talk about reducing polarization and undue affective polarization as national problems, but I think they are leading national problems, and so it's kind of absurd that we don't keep pressing our leaders to address them. 

Anything else?

I think we should consider some seemingly wacky ideas, like third-party mediation of political negotiations, because we know it's really hard for two people who don't see eye to eye to figure out who's at fault. As for who the third party would be, there really is no neutral third party in the U.S., so you might have to look abroad. And then I talk about bias training for politicians—I know that sounds far-fetched, too, but I think we just have to think about it. Similarly, I do think social media is a significant part of the problem, even though it's not everything, and so I think we should pressure the platforms to take strong, proactive steps to cut down on algorithmic amplification of misleadingly negative content, and even to think about ways to improve people's beliefs about the other side—not necessarily to falsely inflate them, but just to help us be better informed and to see the other side, to amplify people when they act reasonably and decently rather than so negatively. 

You might ask, "What's their profit incentive to do that?" One response is that in the long run, if democracy falls apart in America, that's not good for corporate profits. So we need corporations to be forward-looking and to consider that politics and economics are closely intertwined; if our political system falls apart, it is bad for business in the long run. So businesses need to step up. A lot of businesses are thinking about this more and more.

There are all kinds of neat, specific ideas. People are talking about national service (this didn't make it into the book): a mandatory national service program for young adults. It doesn't have to be military service; it could be community service. And you improve contact; you show people from South Carolina and California aren't so bad, and just give them some time to interact with each other. 

By Paul Rosenberg

Paul Rosenberg is a California-based writer/activist, senior editor for Random Lengths News, and a columnist for Al Jazeera English. Follow him on Twitter at @PaulHRosenberg.
