I’m going to say something that many readers will find patently, even outrageously, false. Here goes: We almost certainly live in the best moment of human history so far. OK, now I’m going to say something even crazier: We very likely live in the best moment of human history ever. That is to say, “progress” in a broad sense is real, but it may very well end here, this century.
What reasons do I have for this oddly optimistic and pessimistic assertion? Let’s consider these issues in turn. Perhaps the most comprehensive and compelling case for historical progress was made by Steven Pinker in his book "The Better Angels of Our Nature." The central theme of this book is the decline of violence across history — from the Paleolithic era, in which our ancestors hunted, foraged and fished to survive, through the Dark Ages, Enlightenment and Industrial Revolution, to our present Information Age. Sure, the total number of human casualties in the 20th century was staggering in absolute terms, but the statistical occurrence of assaults, rapes, murders, mass killings, genocides and other forms of violence steadily fell — and when it comes to deciding which period one should wish to live in, only statistics matter.
These trends hold even taking into account the two world wars and the rise of global terrorism, which culminated with the 9/11 atrocities and the formation of the Islamic State — arguably the largest and most powerful terrorist organization ever. Today, we find ourselves in the midst of the Long Peace (no two superpowers have gone to war since World War II), in the wake of the Rights Revolutions (women’s rights, gay rights and animal rights), and smack in the middle of the New Peace (the prevalence of organized conflicts, including terrorist attacks, has declined since the end of the Cold War). Meanwhile, the number of democracies has risen around the world with a corresponding decline in autocracies.
This is good news. Yet good news only warrants optimism if it can be extrapolated into the future. Thus, the question is whether the happy trends associated with the march of “progress” and what Pinker calls the “moral Flynn effect” will continue.
This leads us to another set of phenomena that, so far as I can tell, suggest that we stand atop a Mount Everest of unprecedented prosperity with only one direction to go: down. First, there’s the immediate issue of our dear leader, Donald Trump. Not only is Trump a radical anti-science, climate-denying man-child who singlehandedly moved the Doomsday Clock 30 seconds closer to midnight (doom), he also exemplifies many characteristics (such as a lack of “integrative complexity”) that are associated with bad political outcomes.
If there’s one thing a world leader needs at this crucial juncture in human history, it’s moral wisdom and foresight, and if there’s one thing that Trump clearly lacks, it’s these two attributes. As the mathematician Yaneer Bar-Yam has argued, the world is rapidly becoming too complex for any governmental system to manage competently — and thus a government with Trump at its helm is even more likely to fail to implement judicious domestic and foreign policies. Even more worrisome, Trump is the mere symptom of a recent pattern toward populism, authoritarian leaders, and opposition to social justice, a phenomenon that James Masters discusses in a CNN article from last April.
Beyond the many near-term risks posed by the Trump administration, such as an economic recession, a rise in white nationalism, an increase in the acceptability of violence against women or even a catastrophic conflict with North Korea (I found Trump’s “fire and fury” remarks the most frightening thing I’d ever heard from a Western public official), there is his rejection of climate science. The effects of climate change will, in the absence of some magical new technology that it would be unwise to count on, last longer than civilization has so far existed: literal millennia. And the consequences range from truly devastating to extinction-causing, in the case of a runaway greenhouse effect.
Essentially, what civilization is doing right now is a global-scale experiment. We don’t really know how it’s going to turn out, but we know enough to know that it won’t be good for anyone — not for contemporary humans, not for the ecosystem services upon which we depend for survival, and not for the children among us today. As the always-level-headed Neil deGrasse Tyson ominously said in a recent interview, it may already be “too late” for humanity to avoid a catastrophic outcome from climate change.
But climate change isn’t the only environmental disaster at our doorstep. Scientists now affirm that we are in the beginning stages of the sixth mass extinction event in life’s 3.8 billion-year history on this pale blue dot. The rate of species extinctions has skyrocketed, and one study from 2012 suggests that we could be on the verge of a sudden, irreversible, catastrophic collapse of the global ecosystem — an event that would make civilization as we know it infeasible.
Yet another study reports that between 1970 and 2012, the global population of wild vertebrates (birds, mammals, fish, reptiles, amphibians, and so on) fell by a spine-chilling 58 percent. Go ahead and extrapolate this into the century: Unless a flurry of policies is immediately implemented to reduce human-created pollution, ecosystem fragmentation, the effects of invasive species, etc., the biosphere is headed for ruination. And if the biosphere dies, so does that squishy bipedal ape ironically called the “wise man,” that is, Homo sapiens. In fact, not only is global hunger rising for the first time this century, but studies suggest that humanity will have to produce more food in the next 50 years than in our entire history thus far.
Some technocratic big-thinkers like Elon Musk might respond to these existential anxieties with: “Sure, we need to change our patterns of material consumption on Earth, but soon we’ll have colonies on Mars and this will significantly reduce the probability of our extinction.” Thus, the mission of SpaceX, headed by Musk, is to establish Martian colonies “in our lifetimes,” where this goal is motivated no less by “humanitarian” considerations than by a lust for exploring the “final frontier.”
The problem is that, as the influential scholar Daniel Deudney shows in a forthcoming book called "Dark Skies," space colonization will likely lead to immensely destructive interplanetary wars involving asteroids (or “planetoid bombs”) as well as other space-age weapons like giant lasers and lethal particle beams that traverse space at or nearly at the speed of light. Although I was once an enthusiastic advocate of space expansionism, it has become increasingly clear to me that, as the environmentalist’s slogan goes, “There is no Planet B.” If we turn Earth into a wasteland, it will become our graveyard.
But climate change and the sixth mass extinction aren’t the biggest risks to our collective survival — not by a long shot, which is not to minimize their significance. Rather, there are numerous emerging technologies that could enable smaller and smaller malicious groups to wreak unprecedented harm on more and more human beings. Breakthrough technologies like CRISPR/Cas-9, digital-to-biological converters, and base editing could enable religious terrorists or lone-wolf misanthropes to synthesize designer pathogens that are far more lethal than anything found in nature, because the goal of genes in nature is to propagate themselves (they are “selfish”), whereas a synthetic pathogen could be designed specifically to kill (thus “sacrificing” its genes for the designer’s goal of causing maximal harm).
If such a pathogen were released into one or more crowded urban areas, then, with the aid of intercontinental jet planes traveling at 600 mph, the result could be an engineered pandemic that dwarfs the 1918 Spanish flu outbreak — a devastating epidemic that killed about 33 million more people than World War I.
In addition to biotechnology and synthetic biology, there are also immense risks associated with future nanotechnology and artificial intelligence. In fact, many scholars in the field of “existential risks” believe that machine superintelligence constitutes the greatest known threat to human survival — and I happen to agree. The issue here isn’t that an army of belligerent robots with glowing red eyes and machine guns is going to try to take over the world. Rather, the dangers are far more subtle: if we manage to create an information-processing machine that has a level of general intelligence (or problem-solving ability) greater than ours, and final goals that are not almost exactly aligned with ours, then doom could be all but certain.
By analogy, consider the fact that our level of general intelligence is greater than that of the ant, and that our final goals (e.g., building suburban neighborhoods) don’t align with the goals of ants (e.g., building underground colonies). The result is catastrophic for ant colonies not because humans dislike ants but because sometimes ant colonies happen to get in the way of suburban sprawl. This is the primary risk posed by machine superintelligence; as one scholar puts it, “the AI does not hate you, but neither does it love you, and you are made of atoms that it can use for something else.”
Fortunately, some incredibly smart people are thinking about these issues, and I don’t believe that a solution is impossible. Unfortunately, it’s unlikely that this area of research will ever receive the funding or political attention it deserves (despite Hillary Clinton’s recent statement that she is worried about artificial intelligence). Humans are really bad at thinking far into the future — as the scholars Bruce Tonn and Dorian Stiefel write, “most individuals’ abilities to imagine the future goes ‘dark’ at the ten-year horizon.”
This is very worrisome given that avoiding many types of global catastrophic risks will require long-term planning, while others are actively unfolding on timescales of decades or centuries (think climate change, the sixth mass extinction and human overpopulation). What’s more, a greater share of resources is spent on studying dung beetles and "Star Trek" than on studying existential risks to humanity, which is, I hope readers agree, a horrifying inversion of priorities, given that the entire future of human civilization depends on avoiding the latter (not to mention the livelihood of 7.5 billion current humans).
Considerations like these have led experts on the topic to estimate that civilization has a roughly 50 percent chance of surviving the current century. Others have put the probability of humanity following the dodo into the evolutionary oblivion of eternal nonexistence at 19 percent, and Stephen Hawking recently claimed that we live in the most dangerous period of human history. As I discuss in my forthcoming book, "Morality, Foresight, and Human Flourishing," these estimates are grounded in some extremely solid evidence.
Sure, every generation has had some doomsayer assure those willing to listen to him on the street corner that the end of the world is nigh. But there’s a crucial difference between today and a century ago: Today we have climate change, a mass extinction, nuclear weapons, designer pathogens, self-replicating nanobots, and artificial intelligence — not to mention our cosmic risk background of asteroids, comets, supervolcanoes and natural pandemics, which still threaten our collective prosperity. Today we live in the Age of Anthropogenic Apocalypses, a period of heightened risks unlike anything our species has experienced or, until recently, even imagined.
Making this situation even worse, studies show that higher levels of ambient CO2 have a negative impact on cognitive performance. While our ancestors enjoyed an atmosphere with 180 to 280 parts per million (ppm) of CO2, no human alive today will likely ever again encounter less than 400 ppm of CO2, thanks to our bad habit of burning fossilized plant matter. So, at exactly the moment that we’ll need all the brain-power we can get to navigate the dense miasma of increasingly complex and high-stakes global threats, we’ll also be “dumber” than we are today, and perhaps than we were before the Industrial Revolution (the Flynn effect aside).
As I write this, I sit outside a café in my home city of Philadelphia with a hot cup of coffee on the table. I am never hungry for more than the hour before a meal and I never worry about running out of food. If I were to become ill, I could see a doctor with access to the most sophisticated medical technologies ever invented (even despite the United States' inferior health care system). If I were to find myself with a desire to travel the globe, I would easily find my way to the other side of the planet. Although the election of Trump marks a dangerous step in the wrong direction with respect to social justice and global stability — as Pinker himself suggested to me in an interview from earlier this year — American society is still reaping the benefits of the women’s rights, civil rights and animal rights movements, and despite the high “dread factor” of terrorism, the average American is literally more likely to be struck by lightning than to die at the hands of a right-wing or Islamic extremist.
Statistically speaking, life is pretty good in the developed world. Those who pay attention to the trendlines rather than the headlines would say that you should prefer being alive today than during the Middle Ages, the time of Jesus or even the Paleolithic (the period romanticized by anarcho-primitivists). Given that approximately 60 billion humans have lived on Earth so far (with 7.5 billion currently on it), you were much more likely to have been born sometime in the past than in the present (insofar as this person would still be you). This should make you count yourself very lucky! (If you have any doubts, just imagine yourself getting appendicitis in 1704. Uh-oh.)
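For readers who like to see the arithmetic, here is a back-of-the-envelope sketch of that lucky-to-be-alive-now estimate, using only the rough figures quoted above (60 billion humans ever, 7.5 billion alive today); the numbers are the essay's approximations, not precise demographic data:

```python
# Rough figures from the text (approximations, not precise demography):
ever_lived = 60e9   # ~60 billion humans have ever been born
alive_now = 7.5e9   # ~7.5 billion alive today

# If you were equally likely to have been any human who ever lived,
# the chance of landing in the present-day population is:
p_present = alive_now / ever_lived
print(f"{p_present:.1%}")  # prints "12.5%"
```

In other words, on this crude model, roughly seven out of eight "possible yous" would have been born into some earlier era, without antibiotics, anesthesia or appendectomies.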
Yet fixing one’s eyes on the futurological horizon leaves a worrisome afterimage. Climate change will cause extreme weather events, megadroughts, mass migrations, food supply disruptions, ocean acidification and catastrophic biodiversity loss, while also destabilizing governments and exacerbating apocalyptic terrorism. Emerging technologies will place unprecedented destructive power in the hands of malicious communities and even individuals at the ideological fringe of society. And the option of simply escaping to another planet — advocated by Musk, Stephen Hawking and even the philosopher Derek Parfit — appears to come with its own set of unprecedented hazards to human prosperity.
To be clear, these challenges shouldn’t leave anyone paralyzed — rather, they are reasons for taking immediate action to improve the world however possible. Indeed, there isn’t a single risk mentioned above that can’t be overcome with the right collective action. Yet I often find myself doubtful that our species — you know, the species that elected Donald Trump to be president, still questions climate science and overwhelmingly believes in ancient myths and superstitions — can get its act together in time. As the cosmologist Max Tegmark once put it, we’re in an anxious race between wisdom and technology, and right now it appears that wisdom is losing.
It is these considerations that lead me to wonder whether we might find ourselves in the best moment of human history, both past and future. Maybe this is as good as it gets. If so, then enjoy it.