Mostly Harmless #4 - Will AI Kill Us All?
Probably not, but it is still worth talking about.
This post is written in collaboration with
from . If you enjoy these deep dives into topics at the intersection of philosophy and technology, make sure to subscribe and check out his work.
There are many risks and challenges in the deployment of artificial intelligence. It is one of our most potent technologies so far, and like all technologies, it can be used for good or evil. The more powerful the technology, the greater the potential for positive and negative applications.
Whether a technology does good or harm depends less on its inherent nature than on how humans choose to use it. The impact of any technological advance is shaped primarily by how we integrate it into our lives, societies, and the broader world. Responsible and ethical use, therefore, plays a pivotal role in determining whether a technology leads to positive or negative outcomes.
For example, a hammer can be used to build a house or to harm someone, though it is not particularly efficient at either. Similarly, dynamite can carve roads or destroy cities, and nuclear power can obliterate a nation or lift an entire continent out of poverty. AI sits towards the extreme end of this spectrum: it has the potential to revolutionize society and automate complex tasks, but that same power gives it the capacity for significant destruction.
AI can be misused in many ways, from flooding the public sphere with disinformation to entrenching bias in news and media. In this article, I want to focus on one specific set of AI risks: the so-called existential risks. These involve the potential for AI to destroy human civilization entirely, or even extinguish the human race.
First of all, we need to deal with the concept of existential risk, because people often confuse the existential with the psychological. When a person avoids boredom or avoids thinking about the finality of their own life, that is existential fear. When they fear losing their job or missing the bus, that is psychology. The former depends far less on what happens or might happen; it is bound up with the very conditions of human existence.
Attitudes towards AI likewise come in two kinds. The first concerns the fundamental possibility, or impossibility, of humans creating a machine that possesses something like free will together with computational abilities. The second concerns what might happen if the machines we create begin, for some unforeseen or accidental reason, to work differently than their creators intended.
In this article, we will review the most prominent scenarios for AI existential risk. Then, we will identify and examine the flawed premises upon which these scenarios are based. We’ll explain why we believe these scenarios to be highly improbable, if not impossible. Finally, we will argue why it is worthwhile to intellectually pursue and discuss even the most extreme doomsday scenarios. We believe approaching this topic with an open mind and rational thinking can provide valuable insights and perspectives.
How AI might kill us all
There are many different scenarios for potential existential threats from AI. All of them involve an artificial intelligence reaching a stage where it possesses not only the capability to obliterate human civilization, and potentially all life on Earth, but also a motivation, or at least a trigger, that incites it to act.
Destructive capabilities
For a doomsday scenario, there first needs to be an incredibly powerful artificial intelligence that is capable, in principle, of annihilating mankind almost inevitably. The AI must possess technological and military power that surpasses everything humanity can muster by orders of magnitude, or it must possess something so potent and rapidly deployable that once annihilation commences, no defense is possible. One example would be a swarm of nanobots capable of infecting the entire global population and simultaneously triggering a massive stroke in all 8 billion individuals.
This level of destructive capacity is necessary because an AI roughly as powerful as humanity would not annihilate us instantly. An AI approximately equal in military strength to the combined might of humanity would, at worst, produce a prolonged war without the complete elimination of either side. Even a full nuclear exchange between humanity and AI would not suffice: it might destroy civilization and cause unparalleled devastation and casualties, yet some people would survive, finding refuge in shelters and potentially rebelling against the AI.