A Dire Warning from AI Experts

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Center for AI Safety and 400 AI scientists, engineers, and other experts and public figures

Metropolis: Film by Fritz Lang

The above statement appeared this week, signed by such luminaries in the field of artificial intelligence as Geoffrey Hinton, Sam Altman, Bill McKibben, and Jared Kaplan, as well as philosophers such as David Chalmers and Daniel Dennett, and more than 394 others. Reactions have spanned the continuum from praise for alerting us to a potential catastrophe, to cynicism suggesting that the producers of AI were overhyping their products to gain a greater market, to pessimism based on humanity’s poor track record in reducing nuclear weapons or controlling the recent global COVID-19 pandemic. Whatever the true reason for the message, its use of the word “extinction” is what leaps out at us and makes it the direst warning yet regarding the dangers of AI.

Some authors, such as the philosopher Susan Schneider in her book Artificial You, have suggested that artificial intelligence is more than a technological development; it is the next stage in the evolution of thinking beings. Not only is it likely, she says, that we will substitute AIs for humans for tasks such as long-distance space travel, but our entire race could be replaced by AIs as a next step in freeing consciousness from the limitations of existing in living matter. She does not suggest that AIs would annihilate humanity, but that they would supersede us as the pinnacle of thinking beings on Earth.

In my own Voyages of the Delphi science fiction series (the first volume published as Ezekiel’s Brain, and the second, Prime Directive, coming out later this year), AIs, or more accurately AGIs (artificial general intelligences) that are smarter than humans and flexible enough to solve problems across content domains, do in fact kill all the humans and replace them. The AIs are smarter, more moral, and more powerful than the humans were, and they expand their population and their sphere of influence until they occupy the entire solar system and begin exploring the stars, looking for life elsewhere in the galaxy.

Voyages of the Delphi is a fictional series. The four hundred AI experts are also talking about a fictional scenario involving the development of artificial general intelligence, which has not happened yet, but is regarded by most people in the field as inevitable. What is it, when can we expect it, and is it really a danger?

Artificial general intelligence (AGI) is machine intelligence that can solve, or learn to solve, any problem solvable by humans, instead of being limited to one or a few content domains (language, images, mathematics, science) as narrow, non-general artificial intelligence (AI) is. A defining characteristic of AGI is that it can learn to improve its own performance, so matching human ability is only a starting point for its intellectual breadth and depth. It will eventually surpass humans in its abilities.

Common visualization of an artificial neural network with a chip. Image: mikemacmarketing (original posted on Flickr); Liam Huang (cropped and posted on Flickr); Creative Commons (https://en.wikipedia.org/wiki/en:Creative_Commons)

We’re not close to developing AGI yet, but we’re on the road to getting there, and the remarkable accomplishments of large language model (LLM) AIs such as ChatGPT have surprised not just ordinary people but many people in the AI field. As Geoffrey Hinton has said, neural network-based AIs have not come close to the complexity of human brains, but with simpler architecture, access to a great deal of computing power, and very large input data sets, they are already coming so close to what humans can do that we have a hard time distinguishing AI output from that of humans (the criterion for passing the famous Turing Test). Building AGIs, single devices that can process information across different modalities (written language, spoken language, mathematical symbols, images) and content domains (social conversation, science, mathematics, mechanics), is on the horizon. How close is unclear, but recent leaps forward in the field suggest that sooner is more likely than later.

One of the most intriguing findings is that current AIs can produce “emergent” abilities, meaning they can do things they were not taught to do. This has been demonstrated by AIs that, without direct instruction, learned to count after being trained on image recognition, generated optical illusions in situations similar to those that cause them in humans, and, most interesting to me, developed a “theory of mind” about why people behave as they do, an ability thought to be confined to humans and to emerge only after a certain age. These findings suggest that, with enough power and training on a large enough corpus of stimuli, AIs may spontaneously learn things we thought only humans could learn. These results also demonstrate that AIs can already become “smarter” than we intended them to be.

Are the recent developments in AI a portent of danger? If we already know that AIs will continue to become smarter and that they may develop skills and abilities we never anticipated or intended them to develop, is that a problem? It’s a problem if we don’t control two things: 1) what, external to itself, an AI is able to control, and 2) what we have done to make sure an AI’s goals “align” with those of its operator.

An AI by itself is simply an information-processing machine. It can’t see, hear, speak, or move unless we build eyes, ears, a voice, and some form of mobility into it. It can’t control any other devices unless it is connected to them, either by wire or wirelessly. Some experts have recommended that any AGI be “boxed” inside a Faraday cage, which seals it off from sending or receiving any electromagnetic signals, until it is determined to be safe. There is no reason to use an AGI to control a robotic arm or a robotic vehicle, since neither needs to think like a human in order to work effectively. Attaching such devices to an AGI to give it mobility and the ability to manipulate physical objects would only make sense after long trials to determine the safety of such an arrangement. Needless to say, if someone develops an AGI “soldier” with arms, feet, and weapons, they are asking for trouble (which is not to say it won’t happen).

Making sure that an AI is aligned with its operator’s goals is harder than restricting its access to other devices. Isaac Asimov devised “three laws of robotics,” but almost as soon as he did, he and others found ways they could be circumvented. The problem with rules for AIs is that an AI smarter than the humans who designed the rules can find a way around them. It comes down to motivation. How do you get the AI to want to be nice to humans and not harm them, even inadvertently? In Ezekiel’s Brain, my AI designer thought she had solved the problem by feeding her AGI the finest thoughts of history’s finest thinkers, having it abstract what was common to those ideas, and then programming it to uphold them in everything it did. Unfortunately, the AGI quickly decided that the biggest threat to upholding such noble virtues was humanity itself, so it exterminated the human race and replaced it with robots that would follow those values faithfully. But that’s just fiction.

The problem illustrated in Ezekiel’s Brain has some truth to it, in that whoever builds the first AGI will be a human, and we have no idea what ulterior goals that person (or the person who employs or funds them) has. In fact, if it were possible to install a set of absolutely benign goals in an AGI, the chances are pretty good that no one would do it. Instead, they’d use it to out-compete, out-fight, or out-explore whomever they considered their competitors. In other words, it’s most likely that whoever builds an AGI will build it to act like a human being: a supersmart human being, but a human being nevertheless. That could spell danger.

Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel Ezekiel’s Brain on Amazon, available in paperback and Kindle editions.

Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.

Subscribe to Casey Dorman’s Newsletter. Click HERE