The Trump administration so far can be summed up by a single word: scandal. On a weekly and almost daily basis, new revelations about connections between the Trump team and Russia emerge, the most recent involving Trump’s son-in-law Jared Kushner. This has led some politicians — including some Republicans — to openly discuss the possibility of impeachment, which may have been legally possible from the moment that Trump was sworn in, thanks to the Emoluments Clause of the U.S. Constitution.
But I would argue that the biggest scandal here isn’t Trump’s ties to a geopolitical adversary, but rather how this snafu is distracting us from far more pressing civilizational threats. Perhaps the two most urgent are climate change and nuclear conflict. Let’s not forget that the hottest 17 years on record have all occurred in the 21st century, with a single exception: 1998. And a major scientific study from 2009 suggests that we have crossed a “planetary boundary” that could lead to sudden, non-linear, even catastrophic changes in Earth-system processes. Climate change is also a major driver of global biodiversity loss, which is fueling the sixth mass extinction event.
If one abandons the myopic thinking of the next election cycle and quarterly reports and “zooms out” to see the next three centuries of human civilization, it becomes clear that mitigating climate change is one of the most important tasks for anyone who cares about transgenerational human well-being. Not only will the effects of anthropogenic global warming push civilization to the brink, but studies reveal a connection between societal instability and terrorism. In other words, to paraphrase the scholar Mark Juergensmeyer, extreme times will breed extreme ideologies. This is why the Environmental Protection Agency (EPA) should be seen in part as a counterterrorism organization: It aims to prevent catastrophic climate change, and in doing so it could reduce the probability that more radical, violent ideologies will coalesce in vulnerable regions of the world.
The topic of climate change has been largely occluded by Trump’s proliferating scandals, at least until this past week, when it made a sudden reappearance thanks to the president’s potentially disastrous decision to withdraw the United States from the Paris climate agreement. Nuclear proliferation, on the other hand, has hovered just above the horizon of public awareness thanks to North Korea’s recent missile launches. Unfortunately, Trump is unlikely to resolve this tense situation without resorting to violence. As I recently explored in a Salon article, Trump’s speech exhibits low “integrative complexity,” and low integrative complexity in the language of political leaders is correlated with more conflicts. Individuals in power who tend to see the world in black and white — e.g., “You’re either with us, or against us,” as George W. Bush once declared — are more likely to lead a country into battle than those who are capable of more nuanced, careful, sophisticated thinking about complicated world situations. We should therefore not be surprised if the U.S. soon goes to war with a second member of the “Axis of Evil.”
Beyond these two immediate dangers — which have prompted the Bulletin of the Atomic Scientists to move the Doomsday Clock closer to midnight than at any time since 1953 — there is a swarm of additional issues that aren’t getting the attention they deserve. For example, a looming risk stems from superbugs, or pathogenic bacteria that are resistant to multiple antibiotics. This has global risk implications because “antibiotics are the foundation on which all modern medicine rests. Cancer chemotherapy, organ transplants, surgeries, and childbirth all rely on antibiotics to prevent infections. If you can’t treat those, then we lose the medical advances we have made in the last 50 years.”
As Margaret Chan of the World Health Organization puts it, “Antimicrobial resistance poses a fundamental threat to human health, development and security.” If humanity were rational — if our political leaders were rational — we would spend far more on superbug research than we currently do. Indeed, infectious disease outbreaks are known to follow a power law with a small exponent, meaning that “if this law holds for future pandemics … the majority of people who will die from epidemics will likely die from the single largest pandemic.”
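The quoted power-law claim can be made concrete with a toy simulation: when event sizes follow a heavy-tailed Pareto distribution with a small shape exponent, the single largest event tends to dwarf all the others combined. The parameters below (the shape value, minimum event size, and number of simulated outbreaks) are illustrative assumptions of mine, not figures from the epidemiological literature.

```python
import random

random.seed(42)

# Hypothetical parameters, chosen only to illustrate the heavy-tail effect;
# they are not estimates from actual outbreak data.
alpha = 0.8          # Pareto shape parameter; smaller means a heavier tail
x_min = 1_000.0      # assumed minimum death toll per simulated outbreak
n_events = 10_000    # number of simulated outbreaks

def pareto_draw() -> float:
    """Sample one Pareto(alpha, x_min) variate via inverse-CDF sampling."""
    u = random.random()
    return x_min / (1.0 - u) ** (1.0 / alpha)

tolls = [pareto_draw() for _ in range(n_events)]
largest = max(tolls)
share = largest / sum(tolls)

print(f"Share of all simulated deaths from the single largest event: {share:.1%}")
```

With a shape exponent below 1, the distribution has no finite mean, so as more outbreaks are simulated, the largest one keeps accounting for an outsized fraction of the total — the statistical intuition behind the quotation above.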
Yet another danger that deserves careful attention stems from machine superintelligence. This may sound outrageously implausible to the uninformed ear — indeed, it should appear crazy at first glance. But a more detailed examination of the issue reveals, I would argue, that machine superintelligence could constitute the single greatest long-term threat to human survival in the universe (that we know of).
The arguments behind taking AI risk seriously are somewhat arcane, but the crux is that intelligence has nothing to do with morality as we understand it. That is, a superintelligent mind need not be “moved” by the moral considerations that guide our own behavior, such as empathy, sympathetic concern and a sense of justice (our “core moral dispositions”). This being the case, a superintelligence could pursue a wide range of final goals — for example, calculating as many digits of pi as possible. The problem is that unless these final goals almost exactly align with our own goals, the superintelligence will probably destroy us. Why? Because, put simply, we are made out of atoms that it could use for something else; in this case, creating massive, perhaps planetary-scale supercomputers to calculate 3.1415926535 … etc. The crucial point is that the existential danger stems not from a malicious machine that wants to kill humanity, but from an extremely powerful problem-solving algorithm with misaligned values. Yet figuring out how to load the right values into a machine (whatever those are!) remains an unsolved problem.
Given the rapid pace of AI developments in recent years — driven in large part by “deep learning” connectionist systems — it is not implausible to imagine a child born today living to see the first superintelligent machine. Indeed, many researchers in the relevant fields expect humanity to create the first superintelligence by the end of this century. This could be a welcome watershed if we manage to align the superintelligence’s value system with our own, but it could also have doomsday implications if we fail. As Stephen Hawking has repeatedly suggested, if superintelligence isn’t the best thing to ever happen to Homo sapiens (the self-described “wise man”), it will probably be the worst. Yet society has given very little thought to this paramount issue, although it was beginning to gain traction at the end of Barack Obama’s tenure in the White House. Indeed, many Republicans are much too hostile to science to take AI risk seriously — to say nothing of the grim possibility of human extinction, which their religious beliefs rule out.
The point is not to diminish the immense importance of the Trump team’s Russian links. Although the U.S. was recently demoted to a “flawed democracy,” we are still a democratic state, and there is hardly anything that would delight the Kremlin more than the American population losing its faith in democracy and the government morphing into a quasi-autocracy. This constitutes a clear and present danger. But when one takes a bird’s-eye view of the human situation at the dawn of the 21st century, it becomes clear that there are unprecedented challenges receiving far less attention than they deserve. This is why, in my view, the most worrisome consequence of the ongoing Russian chicanery is that we’re looking to the side when we should be looking straight ahead.