COMMENTARY

Selling "longtermism": How PR and marketing drive a controversial new movement

For star author William MacAskill and the "longtermist" movement, marketing, PR and brand management are crucial

By Émile P. Torres

Contributing Writer

Published September 10, 2022 12:00PM (EDT)

William MacAskill (Photo by Matt Crockett / Courtesy of Basic Books)

In a recent podcast interview with Griftonomics about the increasingly influential ideology known as "longtermism," I was asked at the end "So, what's the grift?" The difficulty in answering this is not what to say but where to start.

Longtermism emerged from a movement called "Effective Altruism" (EA), a male-dominated community of "super-hardcore do-gooders" (as they once called themselves tongue-in-cheek) based mostly in Oxford and the San Francisco Bay Area. Their initial focus was on alleviating global poverty, but over time a growing number of the movement's members have shifted their research and activism toward ensuring that humanity, or our posthuman descendants, survive for millions, billions and even trillions of years into the future.

Although the longtermists do not, so far as I know, describe what they're doing this way, we might identify two phases of spreading their ideology: Phase One involved infiltrating governments, encouraging people to pursue high-paying jobs to donate more for the cause and wooing billionaires like Elon Musk — and this has been wildly successful. Musk himself has described longtermism as "a close match for my philosophy." Sam Bankman-Fried has made billions from cryptocurrencies to fund longtermist efforts. And longtermism is, according to a UN Dispatch article, "increasingly gaining traction around the United Nations and in foreign policy circles."

Phase Two is what we're seeing right now: a media blitz promoting longtermism, with articles written by or about William MacAskill, longtermism's poster boy, appearing in outlets like the New York Times, the New Yorker, the Guardian, the BBC and TIME. Having spread their influence behind the scenes over many years, members and supporters are now working overtime to sell longtermism to the broader public in hopes of building their movement, as "movement building" is one of the central aims of the community. The EA organization 80,000 Hours, for example, which was co-founded by MacAskill to give career advice to young people (initially urging many to pursue lucrative jobs on Wall Street), "rates building effective altruism a 'highest priority area': a problem at the top of their ranking of global issues."

But buyer beware: The EA community, including its longtermist offshoot, places a huge emphasis on marketing, public relations and "brand-management," and hence one should be very cautious about how MacAskill and his longtermist colleagues present their views to the public.

As MacAskill notes in an article posted on the EA Forum, it was around 2011 that early members of the community began "to realize the importance of good marketing, and therefore [were] willing to put more time into things like choice of name." The name they chose was of course "Effective Altruism," which they picked by vote over alternatives like "Effective Utilitarian Community" and "Big Visions Network." Without a catchy name, "the brand of effective altruism," as MacAskill puts it, could struggle to attract customers and funding.

It's easy for this approach to look rather oleaginous. Marketing is, of course, ultimately about manipulating public opinion to enhance the value and recognition of one's products and brand. To quote an article on Entrepreneur's website,

if you own a business, manipulation in marketing is part of what you do. It's the only way to create raving fans, sell them products and gain their trust. Manipulation is part of what you do, so the trick isn't whether you do it or not — but rather how you do it.

This is exactly what we see in the ongoing promotion of MacAskill's new book "What We Owe the Future," which offers an easy-to-understand version of longtermism designed for mass consumption.

"Longtermism" has a feel-good connotation because it suggests long-term thinking, but longtermism the worldview is an ideology built on radical and highly dubious philosophical assumptions that could be extremely dangerous.

Consider the word "longtermism," which has a sort of feel-good connotation because it suggests long-term thinking, and long-term thinking is something many of us desperately want more of in the world today. However, longtermism the worldview goes way beyond long-term thinking: it's an ideology built on radical and highly dubious philosophical assumptions, and in fact it could be extremely dangerous if taken seriously by those in power. As Peter Singer, one of the most prominent EAs in the world, worried in an article that favorably cites my work:

Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.

It's unfortunate, in my view, that the word "longtermism" has been defined this way. A much better, but less catchy, name for the ideology would have been "potentialism," as longtermism is ultimately about realizing humanity's supposed vast "longterm potential" in the cosmos.

The point is that since longtermism is based on ideas that many people would no doubt find objectionable, the marketing question arises: how should the word "longtermism" be defined to maximize the ideology's impact? In a 2019 post on the EA Forum, MacAskill wrote that "longtermism" could be defined "imprecisely" in several ways. On the one hand, it could mean "an ethical view that is particularly concerned with ensuring long-run outcomes go well." On the other, it could mean "the view that long-run outcomes are the thing we should be most concerned about" (emphasis added).

The first definition is much weaker than the second, so while MacAskill initially proposed adopting the second definition (which he says he's most "sympathetic" with and believes is "probably right"), he ended up favoring the first. The reason is that, in his words, "the first concept is intuitively attractive to a significant proportion of the wider public (including key decision-makers like policymakers and business leaders)," and "it seems that we'd achieve most of what we want to achieve if the wider public came to believe that ensuring the long-run future goes well is one important priority for the world, and took action on that basis."

The weaker first definition was thus selected, essentially, for marketing reasons: it's not as off-putting as the second, and if people accept it, that may be enough for longtermists to get what they want.

The importance of not putting people off the longtermist or EA brand is much-discussed among EAs — for example, on the EA Forum, which is not meant to be a public-facing platform, but rather a space where EAs can talk to each other. As mentioned above, EAs have endorsed a number of controversial ideas, such as working on Wall Street or even for petrochemical companies in order to earn more money and then give it away. Longtermism, too, is built around a controversial vision of the future in which humanity could radically enhance itself, colonize the universe and simulate unfathomable numbers of digital people in vast simulations running on planet-sized computers powered by Dyson swarms that harness most of the energy output of stars.


For most people, this vision is likely to come across as fantastical and bizarre, not to mention off-putting. In a world beset by wars, extreme weather events, mass migrations, collapsing ecosystems, species extinctions and so on, who cares how many digital people might exist a billion years from now? Longtermists have, therefore, been very careful about how much of this deep-future vision the general public sees.

For example, MacAskill says nothing about "digital people" in "What We Owe the Future," except to argue that we might keep the engines of "progress" roaring by creating digital minds that "could replace human workers — including researchers," as "this would allow us to increase the number of 'people' working on R&D as easily as we currently scale up production of the latest iPhone." That's a peculiar idea, for sure, but some degree of sci-fi fantasizing certainly appeals to some readers.

But does MacAskill's silence about the potential for creating unfathomable numbers of digital people in vast simulations spread throughout the universe mean this isn't important, or even central, to the longtermist worldview? Does it imply that criticisms of the idea and its potentially dangerous implications are — to borrow a phrase from MacAskill's recent interview with NPR (which mentions my critiques) — nothing more than "attacking a straw man"?

I don't think so, for several reasons. First, note that MacAskill himself foregrounded this idea in a 2021 paper written with a colleague at the Future of Humanity Institute, an Oxford-based research institute that boasts of having a "multidisciplinary research team [that] includes several of the world's most brilliant and famous minds working in this area." According to MacAskill and his co-author, Hilary Greaves, there could be some 10^45 digital people — conscious beings like you and me living in high-resolution virtual worlds — in the Milky Way galaxy alone. The more people who could exist in the future, the stronger the case for longtermism becomes, which is why longtermists are so obsessed with calculating how many people there could be within our future light cone.
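
To see why these head counts carry so much argumentative weight, here is a minimal back-of-the-envelope sketch of the expected-value reasoning longtermists typically rely on (my illustration, not MacAskill and Greaves's own notation): if an action reduces the probability of existential catastrophe by some tiny amount $\delta$, and $N$ people would otherwise come to exist and live worthwhile lives, then the action's expected value scales roughly as

$$\mathbb{E}[\text{value}] \approx \delta \times N.$$

With $N \approx 10^{45}$, even a reduction as minuscule as $\delta = 10^{-20}$ corresponds to about $10^{25}$ expected future lives, dwarfing the roughly $10^{10}$ people alive today. On this arithmetic, almost any present-day problem can be outweighed by an almost imperceptible shift in the odds of reaching that vast future.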

Furthermore, during a recent "Ask Me Anything" on Reddit, one user posed this question to MacAskill:

In your book, do you touch on the long-term potential population/well-being of digital minds? I feel like this is something that most people think is too crazy-weird, yet (to me) it seems like the future we should strive for the most and be the most concerned about. The potential population of biological humans is staggeringly lower by comparison, as I'm sure you're aware.

To this, MacAskill responded: "I really wanted to discuss this in the book, as I think it's a really important topic, but I ended up just not having space. Maybe at some point in the future!" He then linked to a paper titled "Sharing the World with Digital Minds," coauthored by Nick Bostrom, who founded the Future of Humanity Institute and played an integral role in the birth of longtermism. That paper focuses, by its own account,

on one set of issues [that] arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity.

This suggests that digital people are very much on MacAskill's mind, and although he claims not to have discussed them in his book due to space limitations, my guess is that the real reason was concern that the idea might sound "too crazy-weird" for general consumption. From a PR standpoint, longtermists at Bostrom's Future of Humanity Institute no doubt understand that it would be bad for the movement to become too closely associated with the idea of creating enormous populations of digital beings living in virtual-reality worlds throughout the universe. It could cause "brand damage," to borrow a phrase from MacAskill, as critics might well charge that focusing on digital people in the far future can only divert attention away from the real-world problems affecting actual human beings.

Indeed, the early EA movement experienced a degree of brand damage because it initially recommended, loudly and proudly, that many EAs should "earn to give" by working on Wall Street or for petrochemical companies. As an article by MacAskill for the 80,000 Hours organization states:

When 80,000 Hours first launched, we led with the idea of earning to give very heavily as a marketing strategy; it was true that we used to believe that at least a large proportion of people should aim to earn to give long-term; earning to give is much simpler and more memorable than our other recommendations; and earning to give is controversial, so the media love to focus on it.

Yet, MacAskill adds, "giving too much prominence to earning to give may nevertheless have been a mistake." As the EA movement gained more attention, this marketing decision seemed to backfire, as many people found the idea of working for "evil" companies in order to donate more money to charity highly objectionable.

This foregrounds an important point noted by many in the EA community: Movement-building isn't just about increasing awareness of the EA brand; it also requires strategically enhancing its favorability or, as some would say, the inclination people have toward it. Both of these can be "limiting factors for movement growth, since a person would need to both know what the movement is and have a positive impression of it to want to become involved." Or to quote another EA longtermist at the Future of Humanity Institute:

Getting movement growth right is extremely important for effective altruism. Which activities to pursue should perhaps be governed even more by their effects on movement growth than by their direct effects. … Increasing awareness of the movement is important, but increasing positive inclination is at least comparably important.

Thus, EAs — and, by implication, longtermists — should in general "strive to take acts which are seen as good by societal standards as well as for the movement," and "avoid hostility or needless controversy." It is also important, the author notes, to "reach good communicators and thought leaders early and get them onside" with EA, as this "increases the chance that when someone first hears about us, it is from a source which is positive, high-status, and eloquent." Furthermore, EAs

should probably avoid moralizing where possible, or doing anything else that might accidentally turn people off. The goal should be to present ourselves as something society obviously regards as good, so we should generally conform to social norms.

In other words, by persuading high-status, "eloquent" individuals to promote EA and by presenting itself in a manner likely to be approved and accepted by the larger society, the movement will have a better chance of converting others to the cause. Along the same lines, the Centre for Effective Altruism compiled a lengthy document titled "Advice for Responding to Journalists," which aims to control EA's public image by providing suggestions for interacting with the media. Movement advocates should, for example, "feel free to take a positive tone about EA, but don't oversell it," "be gracious about people with differing viewpoints or approaches to doing good," and "be calm and even-handed." Meanwhile, articles like the one you're reading, which are critical of EA, are "flagged" so that leaders of the movement can decide "whether some kind of response makes sense."

In yet another article on movement-building, the author, an employee at the Centre for Effective Altruism who was formerly a professional poker player, suggests that neoliberalism could provide a useful template, or source of inspiration, for how EA might become more influential. "Neoliberalism," they write, "has two distinct characteristics that make it relevant for strategic movement builders in the EA community."

The first is that neoliberalism "was extremely successful, rising from relative outcast to the dominant view of economics over a period of around 40 years." And second, it was "strategic and self-reflective," having "identified and executed on a set of non-obvious strategies and tactics to achieve [its] eventual success." This is not necessarily "an endorsement of neoliberal ideas or policies," the author notes, just an attempt to show what EA can learn from neoliberalism's impressive bag of tricks.

Yet another article addresses the question of whether longtermists should use the money they currently have to convert people to the movement right now, or instead invest this money so they have more of it to spend later on.

It seems plausible, the author writes, that "maximizing the fraction of the world's population that's aligned with longtermist values is comparably important to maximizing the fraction of the world's wealth controlled by longtermists," and that "a substantial fraction of the world population can become susceptible to longtermism only via slow diffusion from other longtermists, and cannot be converted through money." If both are true, then

we may want to invest only if we think our future money can be efficiently spent creating new longtermists. If we believe that spending can produce longtermists now, but won't do so in the future, then we should instead be spending to produce more longtermists now instead.

Such talk of transferring the world's wealth into the hands of longtermists, of making people more "susceptible" to longtermist ideology, sounds — I think most people would concur — somewhat slimy. But these are the conversations one finds on the EA Forum, between EAs.

So the grift here, at least in part, is to use cold-blooded strategizing, marketing ploys and manipulation to build the movement by persuading high-profile figures to sign on, controlling how EAs interact with the media, conforming to social norms so as not to draw unwanted attention, concealing potentially off-putting aspects of their worldview and ultimately "maximizing the fraction of the world's wealth controlled by longtermists." This last aim is especially important since money — right now EA has a staggering $46.1 billion in committed funding — is what makes everything else possible. Indeed, EAs and longtermists often conclude their pitches for why their movement is exceedingly important with exhortations for people to donate to their own organizations. Consider MacAskill's recent tweet:

While promoting What We Owe The Future I'm often asked: "What can I do?" … For some people, the right answer is donating, but it's often hard to know where the best places to donate are, especially for longtermist issues. Very happy I now have the Longtermism Fund to point to!

The Longtermism Fund is run by an organization called Giving What We Can, which was co-founded by MacAskill. Hence, as the scholar and podcaster Paris Marx put it on Twitter: "Want to preserve the light of human consciousness far into the future? You can start by giving me money!"

In fact, EAs have explicitly worried about the "optics" of self-promotion like this. One, for example, writes that "EA spending is often perceived as wasteful and self-serving," thus creating "a problematic image which could lead to external criticism, outreach issues, and selection effects." An article titled "How EA Is Perceived Is Crucial to Its Future Trajectory" similarly notes that "the risk" of negative coverage on social media and in the press "is a toxic public perception of EA, which would result in a significant reduction in resources and ability to achieve our goals."

Another example of strategic maneuvering to attract funding for longtermist organizations may be the much-cited probability estimate of an "existential catastrophe" in the next 100 years that Toby Ord gives in his 2020 book "The Precipice," which can be seen as the prequel to MacAskill's book. Ord claims that the overall probability of such a catastrophe happening is somewhere around one in six. Where did he get this figure? He basically just pulled it out of a hat. So why did he choose those specific odds rather than others? 

First, as I've noted here, these are the odds of Russian roulette, a dangerous gamble that everyone understands. This makes it memorable. Second, the estimate isn't so low as to make longtermism and existential risk studies look like a waste of time. Consider, by contrast, the futurist Bruce Tonn's estimate of the probability of human extinction. He writes that the probability of such a catastrophe "is probably fairly low, maybe one chance in tens of millions to tens of billions, given humans' abilities to adapt and survive." If Ord had adopted Tonn's estimate, he would have made it very difficult for the Future of Humanity Institute and other longtermist organizations to secure funding, capture the attention of billionaires and look important to governments and world leaders. Finally, the one-in-six estimate also isn't so high as to make the situation appear hopeless. If, say, the probability of our extinction were calculated at 90%, then what's the point? Might as well party while we can.

What Ord wanted, it seems to me, was to hit the sweet spot — a probability estimate alarming enough to justify more money and attention from governments and billionaires, but not so alarming that people are frightened away, or dismiss it as doomsday alarmism.

All of this is to say that you, the reader who is perhaps encountering EA and longtermism for the first time, should maintain some skepticism about how the EA and longtermist visions are presented to the public. This goes for policymakers and politicians, too, although there are also efforts to "strategically channel EAs into the U.S. government," which would make converting those who already hold power unnecessary. As the philosopher Simon Knutsson, who has published a detailed critique of EA, told me last year:

Like politicians, one cannot simply and naively assume that these people are being honest about their views, wishes, and what they would do. In the Effective Altruism and existential risk areas, some people seem super-strategic and willing to say whatever will achieve their goals, regardless of whether they believe the claims they make — even more so than in my experience of party politics.

Worse yet, the EA community has also sometimes tried to silence its critics. While EAs advertise themselves as embracing epistemic "humility" and as always being willing to change their minds, the truth is that they like the criticisms they like, but will attempt to censor those they don't. As David Pearce, an EA who co-founded the World Transhumanist Association with Bostrom back in 1998, recently wrote, referring to an article of mine: "Sadly, [Émile] Torres is correct to speak of EAs who have been 'intimidated, silenced, or 'canceled.'" In other words, cancel culture is a real problem in EA.

A striking example of this comes from an EA Forum post by Zoe Cremer, a research scholar at the Future of Humanity Institute. In 2021, she co-authored an excellent paper with the political scientist Luke Kemp that was critical of what they called the "techno-utopian approach," found most clearly in Bostrom's work. But this ended up being "the most emotionally draining paper we have ever written," Cremer writes. Why? As she explains,

we lost sleep, time, friends, collaborators, and mentors because we disagreed on: whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset. 

Some "senior scholars within the field" even "tried to prevent [their] paper from being published," telling Cremer and Kemp "in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy." The single "greatest predictor of how negatively a reviewer would react" to criticisms of EA, Cremer notes, "was their personal identification with EA."

According to a New Yorker article about EA and longtermism, when Cremer later met with MacAskill to discuss her experience, she "felt that MacAskill, the movement leader who gave her the most hope, had listened perfunctorily and done nothing." As Cremer later detailed on Twitter, MacAskill's "role in EA leadership is to play the normal one, the public facing charming character who onboards skeptics. Maybe that's also why talking to him felt like talking to the PR shopfront." He seemed "curiously uncurious about ideas on how to tame hero-worshipping or robust decision-making mechanisms."

This yields a very troubling picture, in my view. Effective Altruism and its longtermist offshoot are becoming profoundly influential in the world. Longtermism is ubiquitous within the tech industry, enthusiastically embraced by billionaires like Musk, encroaching into the political arena and now — in what I'm calling Phase Two of its efforts to evangelize — spreading all over the popular media.

To understand what's really going on, though, requires peeking under the hood. Marketing, PR and brand management are the name of the game for EAs and longtermists, and this is why, I would argue, the general public should be just as skeptical about how EAs and longtermists promote their brand as they are when, say, Gwyneth Paltrow's Goop tries to sell them an essential oil spray that will "banish psychic vampires."


By Émile P. Torres

Émile P. Torres is a philosopher and historian whose work focuses on existential threats to civilization and humanity. They have published on a wide range of topics, including machine superintelligence, emerging technologies and religious eschatology, as well as the history and ethics of human extinction. Their forthcoming book is "Human Extinction: A History of the Science and Ethics of Annihilation" (Routledge). For more, visit their website and follow them on Twitter.
