As was widely predicted, there's been a great deal of chaos since Elon Musk purchased Twitter: Advertisers fleeing, mass firings, hate speech spiking, a plague of fake accounts, even talk of bankruptcy. At this point it's easy to forget the early warning signal when Musk tweeted a link to a baseless anti-LGBTQ conspiracy theory about the Paul Pelosi attack from a known misinformation website that had once pushed a story that Hillary Clinton died on 9/11. But it was precisely the sort of telling, seemingly minor and idiosyncratic act that poets and playwrights since time immemorial have locked onto as character portents of destiny.
That came just a few news cycles after Musk assured advertisers that "Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences," promising that "our platform must be warm and welcoming to all." Musk deleted the Pelosi link after it had already gotten more than 24,000 retweets and 86,000 likes — in other words, after the damage had already been done. Needless to say, there were no consequences for Musk, at least not right away.
That made me think of Chris Bail's book, "Breaking the Social Media Prism" (Salon interview here) and his ideas about how to build a better platform — meaning one both more civil and more likely to produce reliable information. If Musk genuinely wanted Twitter to "become by far the most accurate source of information about the world," he'd listen to Bail, a leader in the growing community of social science researchers who are developing a sophisticated understanding of our emerging online world. So I reached out to Bail to ask what he made of the situation, before turning to others as well. While Twitter's financial woes and disastrous non-moderation policies have been big news over the past week, they remain rooted in realities that Bail and his colleagues have studied intensively for years.
Bail suggested that Musk's retweet of the nonsensical Pelosi story was an attempt to "make a point," namely that "there's always two sides to every story, and seeing this as an opportunity to demonstrate that Twitter has some kind of bias in favor of liberals." But sharing such blatantly misleading information, he continued, "demonstrates what happens if any person tries to make content moderation decisions on their own. You get suboptimal outcomes, because drawing the line between what's acceptable and not acceptable is always going to inspire debate and criticism and disagreement."
What Bail found "particularly tragic" was that "Twitter already has mechanisms in place to promote effective content moderation. The one that's most important in my view is the Birdwatch initiative, which empowers Twitter's users to label posts as misleading or false in a sort of crowd-sourced model, where people can then agree with those annotations, and they become boosted in the Twitter timeline." That sort of "community-led model," Bail said, can avoid the "hot take" mistake Musk apparently made.
Ironically, Musk pointed to Birdwatch (which he has renamed "Community Notes") when Jack Dorsey challenged his statement about accurate information by asking, "accurate to who?" But a Birdwatch note corrected him: "The stated goal of Birdwatch is 'to add helpful context to Tweets.' It is not to adjudicate facts or to be a universal source of information." An article in Poynter highlighted another problem:
"This feature is an absolute game-changer for fighting mis/disinformation at scale," Musk tweeted Saturday. However, as of Nov. 2, only 113 notes logged after Musk's purchase were visible to the public — up from 34 before — accounting for only 14% of the total notes submitted by Birdwatchers.
Of the community notes made public after the purchase, only one addresses misleading information about voting and elections. Dozens are about Birdwatch itself. One is about the vaccination status of a red panda at the Toronto Zoo.
Even if Community Notes takes off, it will have a tough time catching up with the disinformation and misinformation spread by Twitter users — including the flood of fake $8 blue-check accounts Musk unleashed (and then apparently rolled back).
"Without moderation, platforms become a cesspit of misogyny, racial hatred and antisemitism, without any conceivable benefit for society. Holocaust denial is not free speech."
Then there's the issue of "whether the CEO of the major social media company should be chiming in on a case-by-case basis," Bail said. "I think the answer there is probably no, because it'll be impossible to appear objective by occasionally weighing in on stories of the day. That's not going to create the kind of environment that he seems to want to create, where equal numbers of Republicans and Democrats are upset."
Cognitive scientist Stephan Lewandowsky, whose work I've drawn on repeatedly over the past decade, expressed similar concerns. Musk's actions since taking over Twitter "are problematic and do not augur well for democracy," Lewandowsky said via email. "To take just one example, consider moderation. Without moderation, platforms become a cesspit of misogyny, racial hatred and antisemitism, without any conceivable benefit for society. Holocaust denial is not free speech — it is hate speech, and at least indirectly incites discrimination or violence. It is, at best, a shallow and uninformed interpretation of free speech to want to abolish moderation."
Furthermore, Lewandowsky said, Musk is "completely out of step with the public on this issue." In a preprint article with several co-authors, Lewandowsky shows that "people in the U.S. by and large support moderation and content removal for disinformation (such as Holocaust denial) that is harmful."
Musk's seeming indifference to spreading misinformation "indicates that he really does not understand what free speech means," Lewandowsky continued. "Free speech does not mean the freedom to make things up or to intentionally create and disseminate false information in pursuit of political goals. When Hitler claimed that Poland was attacking Germany in 1939, he was not exercising free speech. He was using propaganda to justify his own war of aggression. Putin is now doing the same with respect to Ukraine," he said. "The important thing to understand is that speech can only be free if it is protected from propaganda, hate speech and incitement to violence."
Bail suggested that the key to improving online discourse is changing the "incentive structure." In other words, "Rather than rewarding posts that get a lot of engagement, which tend to be those that have outrageous content, promote the type of content that produces consensus."
Cautions about misinformation, paired with a process that allows for discussion and voting on content labels, have proven successful, Bail noted — though they can't be seen as a sure thing. "The problem is that Republicans and Democrats may start to use the misinformation labeling as a political game," he said, and that's indeed what happened with Birdwatch at first. "But then Twitter implemented an algorithm that boosted those messages where both Republicans and Democrats appreciated the note, and this seems to have created an environment where people stop spreading misinformation. If you incentivize people to find consensus, it may have particularly good outcomes in the aggregate."
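The cross-partisan boosting Bail describes can be sketched in a few lines of code. This is a deliberately simplified toy, not Twitter's actual Birdwatch/Community Notes scoring (which reportedly uses more sophisticated modeling of rating patterns): a hypothetical rule that only surfaces notes rated helpful by raters from both partisan groups.

```python
from collections import defaultdict

def boosted_notes(ratings, min_per_side=2):
    """Return note IDs rated 'helpful' by at least `min_per_side`
    raters from EACH of two self-identified partisan groups.

    ratings: iterable of (note_id, rater_group, helpful) tuples,
    where rater_group is 'R' or 'D' and helpful is a bool.
    """
    helpful_by_side = defaultdict(lambda: {"R": 0, "D": 0})
    for note_id, group, helpful in ratings:
        if helpful:
            helpful_by_side[note_id][group] += 1
    return [
        note_id
        for note_id, counts in helpful_by_side.items()
        if counts["R"] >= min_per_side and counts["D"] >= min_per_side
    ]

ratings = [
    # n1: rated helpful by both sides
    ("n1", "R", True), ("n1", "R", True), ("n1", "D", True), ("n1", "D", True),
    # n2: enthusiastic but entirely one-sided support
    ("n2", "D", True), ("n2", "D", True), ("n2", "D", True),
    # n3: contested
    ("n3", "R", True), ("n3", "D", False),
]
print(boosted_notes(ratings))  # -> ['n1']
```

The design point is the one Bail makes: a note with lopsided support, however enthusiastic, earns no boost, so the only way to gain visibility is to write notes that persuade across the partisan divide.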
This dovetails with the findings in Lewandowsky's pre-print article cited above, that even in our current contentious political climate, there's an untapped potential within social media for bringing people together to have reality-based conversations rather than symbolic or identity-based feuds. Elon Musk apparently sees himself as a champion of democracy, fighting against the danger of "echo chambers." But Bail's book argues that the echo-chamber metaphor is misleading and leads into a dead end.
Consider Musk's widely quoted letter to advertisers:
The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.
In the relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in the money, but, in doing so, the opportunity for dialogue is lost.
There's a profound flaw in that reasoning, Bail argued. Like many tech leaders, he told me, Musk seems to "believe that social media should be a competition of ideas where everyone should be allowed to speak their mind and as these ideas compete, the truth or the most reasonable opinions will naturally rise to the top. So the concern is that if ideas can't compete, if some are banned, or if conservatives are only having conversations with each other on places like Parler and Gab and liberals don't engage in those conversations, it's inevitably bad for democracy."
The first problem with that, Bail said, is that "the vast majority of people are not in echo chambers," because "most people don't care a lot about politics, and you can only be in a political echo chamber if you have political views." But it gets worse from there.
The second problem is that taking people outside an echo chamber of mutual agreement "doesn't necessarily make them more moderate," Bail continued. "To the contrary, there's some evidence that it may even make them more extreme. The goal of most social media users is not to engage in reasoned debate and convince others about their views. It's instead to gain status, often by taking down people from the other side. So the effect of getting people outside the echo chamber is sometimes, counterintuitively, to create more conflict."
"The goal of most social media users is not to engage in reasoned debate and convince others about their views. It's instead to gain status, often by taking down people from the other side."
In his book, Bail gets into the more subtle dynamics that can emerge in social media. But these broad lessons are enough to signal that Musk's preferred pathway isn't likely to work. Indeed, every social media platform seems to go through something similar, as Mike Masnick laid out in a hilarious SNL-skit-for-nerds account at Techdirt, "Hey Elon: Let Me Help You Speed Run the Content Moderation Learning Curve":
Level One: "We're the free speech platform! Anything goes!"
Cool. Cool. The bird is free! Everyone rejoice.
"Excuse me, boss, we're getting reports that there are child sexual exploitation and abuse (CSAM) images and videos on the site."
Oh shit. I guess we should take that down. ...
Level Twenty: "Look, we're just a freaking website. Can't you people behave?"
It's a wicked, hilarious piece of work that illustrates why Musk and people like him don't understand what they're up against.
Lewandowsky called it "spot-on," saying, "I suspect Musk considers himself the great technologist and disruptor who cannot fail — he certainly acts like that — and that prevents him from taking a break to actually study the world and learn from it. He may yet learn, or at least his lawyers will, but that doesn't mean Twitter will become benign. There is lots of barely legal speech that's extremely harmful. That's what we have to worry about and monitor carefully."
Bail characterizes the disconnect this way:
Many social media leaders have treated social media as an engineering problem and simply argued we just need better AI, we need better software engineers. That appears to be very much the technique that Mr. Musk is using. The problem is, social media is not really just a piece of software. It's a community of people, and a community of people often resists attempts at social engineering. Also, engineers just often lack the understanding of what drives human behavior in order to make design choices that will promote more civil behavior.
Bail wryly adds, "I would love for Mr. Musk to learn from the many talented social scientists inside Twitter, but also from the broader field of computational social science." What he might learn there could challenge "a lot of his own assumptions about social media," including the idea that it's biased against conservatives or "that allowing a broad range of views will naturally produce consensus."
Another expression of Musk's engineering-based mindset came in this Nov. 3 tweet: "Because it consists of billions of bidirectional interactions per day, Twitter can be thought of as a collective, cybernetic super-intelligence."
"I would love for Mr. Musk to learn from the many talented social scientists inside Twitter," says Chris Bail. They might challenge "a lot of his own assumptions about social media."
That led research scientist Joe Bak-Coleman to tweet a response thread (upgraded to an article here), where he traced the hive-mind idea back to Aristotle's "Politics," noting that neurons are a poor analogy for individual human behavior, since they have no competitive goals. A better model, he said, is flocks of birds or schools of fish. The way those work is "remarkably complex, but themes emerge: groups are constrained in size or modular+hierarchical, attention is paid to only a few neighbors, etc." In the article he concludes:
So, Elon's premise that Twitter can behave like a collective intelligence only holds if the structure of the network and nature of interactions is tuned to promote collective outcomes. Everything we know suggests the design space that would promote effective collective behavior at scale — if it exists — is quite small compared to the possible design space on the internet.
Worse, it might not overlap with other shorter-term goals: profitability, free speech, safety and avoiding harassment… you name it. It's entirely possible that we can't, for instance, have a profitable global social network that is sustainable, healthy, and equitable.
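Bak-Coleman's point about local interaction rules can be made concrete with a toy model. The sketch below is a hypothetical illustration (not drawn from his article): agents on a ring each pay attention to only a few neighbors, echoing the flocking dynamics he describes, and repeated local averaging produces a global, emergent consensus that no single agent dictated.

```python
def step_average(opinions, k=2):
    """Each agent moves halfway toward the mean of its 2*k nearest
    neighbors on a ring -- a crude stand-in for 'attention is paid
    to only a few neighbors' in flocking models."""
    n = len(opinions)
    new = []
    for i, x in enumerate(opinions):
        neighbors = [opinions[(i + d) % n] for d in range(1, k + 1)]
        neighbors += [opinions[(i - d) % n] for d in range(1, k + 1)]
        target = sum(neighbors) / len(neighbors)
        new.append((x + target) / 2)
    return new

def spread(opinions):
    """Gap between the most extreme positions."""
    return max(opinions) - min(opinions)

# Start from widely scattered opinions in [0, 1].
ops = [0.0, 1.0, 0.2, 0.9, 0.5, 0.1, 0.8, 0.4]
for _ in range(50):
    ops = step_average(ops)
print(round(spread(ops), 3))  # disagreement has collapsed to ~0
```

Change the micro-rule (say, have each agent copy its most extreme neighbor instead of averaging) and the macro outcome changes completely — which is exactly why the design space for healthy collective behavior is so narrow.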
In short, it's ludicrous to suggest that a social media platform as dynamic and complicated as Twitter is just an engineering problem. Another key point "about collective behavior, networks and complex systems" was highlighted in a quote-tweet by Philipp Lorenz-Spreen, one of the co-authors of the preprint article cited above: "Self-organization does not mean anarchy, but that good things can emerge if the (micro) rules of interaction are well crafted. Neither top-down nor hands-off will work." I reached out and asked him to elaborate further:
The magic of Twitter (and most other online platforms that rely on user-generated content) lies in emergent phenomena. Communities form, discussions evolve, and trends arise when people network with each other, but the results are neither random nor easily predictable; rather, they depend on the implementation of those interactions. That's the beauty of self-organization, when something emerges that is greater than the sum of its parts, which is what makes social media so fascinating and exciting, but also scary. But don't make the mistake of thinking that self-organization doesn't have rules; it just works differently than laws. The current version of social media serves primarily commercial purposes, and that also drives the evolution of its interaction rules. These interaction rules are what make complex systems like social media so counterintuitive: they operate at the microscopic level, between two actors, but their effects scale up to the macroscopic dynamics in unpredictable but not random ways.
In the current version of social media, some of these emergent dynamics are great: funny memes, but also important social movements, have their origins in such phenomena. But all the dangerous dynamics of radicalization, polarization and conspiracy theories are likewise fueled by collective dynamics that are only possible on social media. They are largely byproducts of the way social media has been used to date.
He went on to say that redesigning social media to serve societal goals, rather than purely commercial ones, will take hard work — and some very familiar strategies. He used the analogy of automotive traffic, where most of us support "the regulation of individuals with top-down rules," such as speed limits, seatbelt laws and severe punishments for drunk driving:
The opposite, the choice of a supposedly neutral hands-off approach, will lead to emergent phenomena that will, however, be driven and dominated by those privileged by the connections they already have and who are active because they seek power and abuse the scale of social media. These will be given a tool that self-reinforces their positions through collective dynamics, a scenario that we already see in the current version of social media and that is not consistent with democratic principles (e.g., representation).
We're better off sticking to democratic principles, Lorenz-Spreen argues, "because democracy itself is a self-organized system whose rules have evolved over many centuries. We may still need new rules for interaction on the internet, because we can (and need to) design them in an unprecedented level of detail. But that will have to be done carefully, and certainly not by one guy."
We're better off sticking to democratic principles, says Philipp Lorenz-Spreen, "because democracy itself is a self-organized system whose rules have evolved over many centuries."
Lewandowsky responded to a recent Washington Post article on "Musk's Trump-style management," which author Will Oremus summarized this way: "Inside Elon Musk's 'free speech' Twitter, a culture of secrecy and fear has taken hold. Managers and employees have been muzzled, Slack channels have gone dark, and workers are turning to anonymous gossip apps to find out basic info about their jobs."
"Appeals to free speech, and complaints about being censored, have been a major talking point of the hard right for decades," Lewandowsky told me. "So we now have commentators on Fox News or columnists in newspapers with an audience of millions complaining about being 'canceled,' just because someone opposed their views."
There are darker possibilities. Investigative journalist Dave Troy responded to Musk's "cybernetic super-intelligence" tweet with a thread that inquires whether Musk is describing "the concept of the Noosphere," which Troy links to Vladimir Putin's policy agenda and a heritage of relatively obscure Russian and European 19th-century philosophy and theology. Troy suggests that "Musk is aligned with Putin's agenda and will continue to use Twitter as part of an effort to challenge the dollar, the [U.S. government], NATO, the EU and other governments."
Whatever lies ahead for Twitter — which may or may not survive Musk's chaotic early regime — there's a community of people, there and elsewhere, who have a far more realistic grasp of the promise, possibilities and problems involved in social media than he does. Musk can clearly corrupt or destroy the platform he now controls, but he cannot control the future or the emergence of something better.