Never before has a living creature had in its repertoire of possible actions the virtual destruction of itself and other life on earth. … For the first time in more than three billion years of life, a living system is relentlessly creating the means not of self-preservation, but of self-destruction.
- Andrew Bard Schmookler
The doomsday argument uses information about one’s birth rank to estimate how probable it is that humans will soon be extinct. Because humans alive today have relatively low birth ranks, the total number of humans across all time, future included, is probably also relatively low: if we treat ourselves as random samples drawn from all humans who will ever live, we should expect to occupy a fairly typical position rather than, say, one very close to the beginning of humanity’s story. The doomsday argument thus favors the “doom soon” scenario, because there is no reason for us to place ourselves in an unlikely, atypical position in that story.
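To make the reasoning concrete, here is a minimal sketch of the standard Bayesian form of the argument, with purely illustrative numbers of my own choosing. Compare a “doom soon” hypothesis on which roughly \(N_1 = 2 \times 10^{11}\) humans ever live with a “doom late” hypothesis on which \(N_2 = 2 \times 10^{14}\) do, and suppose my own birth rank is about \(r = 10^{11}\). Treating myself as a random sample means \(P(r \mid N) = 1/N\) for \(r \le N\), so by Bayes’ theorem the posterior odds are

\[
\frac{P(N_1 \mid r)}{P(N_2 \mid r)}
= \frac{P(N_1)}{P(N_2)} \cdot \frac{1/N_1}{1/N_2}
= \frac{P(N_1)}{P(N_2)} \cdot \frac{N_2}{N_1}
= 1000 \cdot \frac{P(N_1)}{P(N_2)}.
\]

On these numbers, learning my birth rank multiplies whatever prior odds I held in favor of the smaller total population by a factor of a thousand.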
On the other hand, perhaps the very fact that I exist should bear on my credence function. This objection to the doomsday argument is known as the self-indication assumption: my existence as an observer should lead me to favor hypotheses on which there are very many observers over hypotheses on which there are relatively few. I am more likely to exist at all if the number of observers is high, and if that consideration is admitted, the doomsday argument breaks down. There are of course several plausible answers to the doomsday argument, but no consensus has formed around any one of them. And as philosopher Nick Bostrom has said, “If the doomsday argument is false, it’s nontrivial to say why it’s false.” Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies regards the emergence of superintelligence as among the future events that could spell doom for the human species.
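The cancellation can be shown in the same illustrative notation as above (again my own sketch, assuming the usual formalization of the self-indication assumption): if the probability that I exist as an observer at all is taken to be proportional to the total number of observers \(N\), that factor exactly offsets the \(1/N\) likelihood of any particular birth rank, and the doomsday shift vanishes:

\[
\frac{P(N_1 \mid \text{I exist},\, r)}{P(N_2 \mid \text{I exist},\, r)}
= \frac{P(N_1)}{P(N_2)} \cdot \frac{N_1}{N_2} \cdot \frac{N_2}{N_1}
= \frac{P(N_1)}{P(N_2)}.
\]

The posterior odds simply equal the prior odds, which is why a proponent of the self-indication assumption regards the argument as defused.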
Living in our normal day-to-day can make it seem like we’re nowhere close to superintelligent computers that could subdue and replace human beings. After all, for most of us, the technology we’re looking at and working with in our jobs isn’t the kind of AI that has, for example, bested us in chess and Jeopardy; it’s things like Microsoft Office products, database systems, and task management tools. Given this, we want to be mindful of our blind spots and to avoid under-appreciating how quickly things are likely to move from here, and thus how little time we may have to forestall an existential-level event. Complicating this question about time is the mind-bending fact that superintelligent computers won’t reckon it the way we do, their processing power giving them an important advantage. As Bostrom explains in an article already ten years old:
For example, if the upload is running a thousand times faster than the original brain, then the external world will appear to the upload as if it were slowed down by a factor of thousand. Somebody drops a physical coffee mug: The upload observes the mug slowly falling to the ground while the upload finishes reading the morning newspaper and sends off a few emails. One second of objective time corresponds to 17 minutes of subjective time. Objective and subjective duration can thus diverge.
This strange relationship with time (strange to us, as biological beings whose lives open and close on a scale of decades) means that computers will be able to compress what we humans might have regarded as hundreds or thousands of years’ worth of scientific and technological progress into years, months, or even days. A consequence of the very high speeds at which these computers will work is that their thinking and their projects are likely to become incomprehensible to us soon after superintelligence emerges. They may have infrastructure projects whose timelines span millennia and whose tasks and dependencies are impossible for a human to understand. Further complicating matters, as Bostrom observes, our ethical intuitions and precepts depend more than we realize on background empirical conditions, many of which will not hold true for superintelligent machines. Even the examples of AI we have today can help us understand some of this strange subjectivity of time and its practical implications.
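The arithmetic behind Bostrom’s “17 minutes” is simple enough to sketch. The snippet below is only my own illustration of the ratio he describes, with speed-up factors chosen arbitrarily for the example.

```python
# A small illustration of the objective-vs-subjective time arithmetic in the
# Bostrom passage above. The speed-up factors are arbitrary examples of mine.

def subjective_minutes(objective_seconds: float, speedup: float) -> float:
    """Subjective minutes experienced during `objective_seconds` of real time."""
    return objective_seconds * speedup / 60.0

for speedup in (1_000, 1_000_000):
    mins = subjective_minutes(1, speedup)
    print(f"at a {speedup:,}x speed-up, 1 objective second ~ {mins:,.1f} subjective minutes")

# At 1,000x this gives ~16.7 minutes, Bostrom's "17 minutes"; at 1,000,000x it
# is ~16,667 minutes, or roughly eleven and a half subjective days per second.
```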
Consider that AlphaZero, a program created by Google-owned DeepMind, taught itself chess to a superhuman level, surpassing anything humans have achieved in centuries of study, in a matter of hours. The point about learning is important here, as AI systems have long relied on searching large quantities of data generated from the activities of human beings. AlphaZero and its predecessor, AlphaGo Zero, reached superhuman levels of play through “tabula rasa reinforcement learning from games of self-play.” By removing all human knowledge from the training process, the developers behind these programs, led by David Silver, made them more general and flexible, untethered from the specifics of human-generated data. The idea is that this existing body of knowledge might actually be limiting, preventing the system from arriving at deeper levels of abstraction that we haven’t been able to see. And that’s just what has happened with games like Go.
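To give a feel for what “reinforcement learning from games of self-play” means in practice, here is a deliberately tiny sketch of my own. It is not DeepMind’s algorithm: AlphaZero couples deep neural networks with Monte Carlo tree search, whereas this toy uses a plain lookup table and the trivially small game of Nim. What it shares with the real thing is the tabula rasa setup: the agent starts with nothing beyond the rules and improves only by playing against itself.

```python
# A toy, hypothetical illustration of self-play reinforcement learning,
# not DeepMind's AlphaZero. One tabular agent plays Nim against itself
# (21 stones, take 1-3 per turn, whoever takes the last stone wins) and
# learns a value for every position purely from the outcomes of its games.

import random
from collections import defaultdict

PILE, TAKE_MAX, EPISODES, EPSILON, ALPHA = 21, 3, 50_000, 0.1, 0.1

# value[stones] ~ estimated probability that the player *to move* wins.
value = defaultdict(lambda: 0.5)
value[0] = 0.0  # no stones left: the player to move has already lost

def legal_moves(stones):
    return range(1, min(TAKE_MAX, stones) + 1)

def choose(stones, explore=True):
    """Pick the move that leaves the opponent in the worst position."""
    if explore and random.random() < EPSILON:
        return random.choice(list(legal_moves(stones)))
    return min(legal_moves(stones), key=lambda take: value[stones - take])

for _ in range(EPISODES):
    stones, history = PILE, []            # history: position each mover faced
    while stones > 0:
        history.append(stones)
        stones -= choose(stones)
    result = 1.0                          # the player who moved last won
    for pos in reversed(history):         # walk back, alternating win/loss
        value[pos] += ALPHA * (result - value[pos])
        result = 1.0 - result

# With enough self-play the agent should learn to leave its opponent a
# multiple of four stones, e.g. taking one stone from the opening position.
print("move chosen from the opening position:", choose(PILE, explore=False))
```

Nothing here resembles the scale or sophistication of the real systems; the point is only that self-play plus feedback from outcomes is enough, in principle, for a system to bootstrap competence with no human examples at all.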
Not very long ago, as even people close to my age will remember, many believed that Go was beyond the reach of AI. As recently as the year 2000, when a million-dollar prize for the creator of an AI able to beat a human in Go expired, even a nine-year-old child could beat the best Go program in the world, and that while giving the program nine free moves at the start of the game. But beginning around five years ago, AlphaGo, the program that became the foundation for AlphaZero, started to overtake the best Go players in the world, including 18-time world champion Lee Sedol. The day before the match, Sedol was confident, telling reporters, “I believe that human intuition is still too advanced for A.I. to have caught up.” As they say, pride comes before the fall. This is the way we lose, by making just the mistake Sedol made: computers get there faster than we expect, before we’re ready to accept that our specialness isn’t as special as we thought. When Ke Jie lost to AlphaGo the following year, he was perplexed:
AlphaGo made some moves which were opposite from my vision of how to maximize the possibility of winning. I also thought I was very close to winning the game in the middle but maybe that’s not what AlphaGo was thinking.
At the risk of spoiling the ending, it’s only going to get more puzzling. John Zerzan has argued compellingly that time was, in a sense, invented when civilization emerged; it may be all but abolished with the advent of superintelligent machines, whose evolution (if we may call it that) and creativity are so much less limited by it. But it’s not all doom and gloom, and the focus on time is important: right now, we may still have an opportunity to get out in front of the kind of superintelligence that is not rigorously aligned with human values and goals. Doing this successfully depends on our social institutions and politics, which presents another fundamental problem, the problem of complex systems. The study of such systems remains in a pre-paradigmatic state: there is not yet a consensus on how to approach the problem, which tools to use, or how to decompose the question into discrete problems on which scholars can work. Complex systems are so causally rich, and their component parts so unpredictable, that attempting to understand their behaviors in terms of deterministic functions that always hold true seems impossible. Concomitantly, the data we can extract from such systems are limited in their ability to tell us meaningful stories about what we’re observing.
Physicist Neil Johnson, who researches complex systems, uses the example of the distribution of wealth in the United States, where “a lot of people have little; very few people have a lot,” so that the distribution looks nothing like a bell curve: “that tail [of the wealth curve] goes on a long, long way.” The average of this distribution doesn’t characterize the system very accurately, given the extremely high values along that tail. Johnson notes that most of our existing strategies for social risk mitigation have an unfortunate tendency to focus on specific bad actors, which he likens to trying to stop water from boiling by removing a bad molecule. Removing one molecule, of course, doesn’t stop the water from boiling, so we need a more rigorous view of the system itself. Politics is one of these complex systems and, as such, remains in this frustrating pre-paradigmatic state in which it’s difficult to make real advances, the kinds of advances that we’ll need to forestall catastrophic events of our own creation.
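The point about averages and long tails is easy to see numerically. The snippet below is my own illustration using an arbitrary Pareto-style distribution, not Johnson’s data or any real wealth figures.

```python
# My own illustration of why the average poorly characterizes a heavy-tailed
# system. The distribution and parameters are arbitrary, not real wealth data.
import random

random.seed(0)
# Pareto-like samples: most values are small, a few in the tail are enormous.
samples = [random.paretovariate(1.2) for _ in range(100_000)]

mean = sum(samples) / len(samples)
median = sorted(samples)[len(samples) // 2]
top_share = sum(sorted(samples)[-1000:]) / sum(samples)  # share held by the top 1%

print(f"mean   ~ {mean:.2f}")
print(f"median ~ {median:.2f}")
print(f"top 1% of samples hold ~ {top_share:.0%} of the total")
```

The typical (median) sample sits far below the mean because the long tail dominates the sum, and removing any single extreme value barely changes the picture, which is Johnson’s point about the futility of plucking out one bad molecule.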
In the political realm, given a set of initial conditions, we’re woefully unable to forecast accurately what’s to come. Political, social, and economic systems are extraordinarily complex, with causal variables that are numerous and that behave in ways that are difficult to predict, particularly given the scale of today’s institutions. This level of causal complexity means that deterministic macro solutions are poorly suited to political, social, and economic problems; we don’t have predictive models able to deal effectively with the vagaries of human behavior. Our world, the modern world, is defined by its dangerous asymmetries, its gulfs separating the powerful from the powerless, the global rich from the global poor. I emphasize global because human institutions are now able, for the first time in hundreds of thousands of years, to span the entire world, to create one world where there had always been many. But though we have global institutions, we don’t share globally in the benefits; indeed, the benefits still fall along colonial lines. Global corporate capitalism’s losers just so happen to come from places colonized by Europe in the preceding centuries. The people in these places bear the costs yet reap none of the rewards. Our power and the scale of our civilization mean that we truly are all in it together now, that we must discover long-term thinking together in a world in which defecting from such thinking will mean the extinction of humans.
I became interested in libertarianism as a kid because it seemed to me (and still seems to me) that violence is a poor way to solve problems; indeed, it is a value-destroying, negative-sum game that at least appears to create just the kinds of problems it purports to solve. It has been singularly strange and disorienting to witness the defense of the existing global capitalist system by free-market libertarians, as if that system and its institutions emerged without the worst kinds of violence and thievery. To make sense of free-market defenses of global capitalism, though, we must see that such defenses emerge from within a deeper, more fundamental ideological substrate that is common to almost all existing political ideologies. This is the ideology of extremely, unmanageably large institutions, of infinite growth and so-called progress, of hierarchy and systematic violence. Anthropologist Joseph Tainter observes that such social “[c]omplexity and stratification are oddities when viewed from the full perspective of our history,” requiring constant reinforcement. As strategies for a long, sustainable future, these oddities are much less than ideal. Despite our knowledge, we have yet to attain the wisdom of our pre-civilization ancestors. Toby Ord suggests that the Aboriginal Australian way of life, for example, represents serious thinking about long-term risks to survival, holding itself in balance with the natural world. We might at least consider that the Anthropocene’s extraordinary imbalances are driving us toward doom. Though it may be difficult to confront such a notion, the alternative is likely to be much more difficult.