Why Do AI Novels Always End Badly?

I got the idea for my novel, Ezekiel’s Brain from reading Nick Bostrom’s Superintelligence. Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” A superintelligence could be an alien brain, but in both Bostrom’s book and my novel, it is an artificial intelligence. Thinkers such as Bostrom, social scientist, Iason Gabriel, and physicist, Max Tegmark, regard superintelligence as potentially dangerous. The reason a superintelligent AI is dangerous is because it would have the intelligence and power to outthink humans and, if it controlled aspects of the world outside its own thinking, it could cause catastrophic damage to humans and their world.

No one has built a superintelligent AI, but most people who ought to know think it will happen sometime between ten and fifty years from now. This situation brings to the fore the question of how do you control an entity that is smarter than you are? In other words, how do you get it to do what you want it to do and not do what you don’t want it to do? This is called the problem of AI alignment.

Why do many writers about AI treat their subject as if it represents an alien intelligence, rather than just a supercomplex tool?  The answer is that artificial intelligence involves machine learning.  Instead of programming a machine’s actions, we allow it to use feedback from its attempts to reach a goal to guide its behavior. We don’t design the method of solving the problem, we design the method of learning that allows it to solve the problem. With a superintelligent AI, the machine could even modify its method of learning to maximize attaining its goals.

The difficulty arises when a superintelligent AI that works faster than our brains do (remember, electrical current traveling across transistors on a computer chip moves 10 million times faster than current traveling across a neural circuit) learns how to solve a problem in ways we are not able to understand. How do we guarantee that it will choose the solution that best meets our needs?

What has been declared the “World’s Funniest Joke” goes like this:  Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says, “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence; then a gunshot is heard. Back on the phone, the guy says, “OK, now what?”

This joke is instructive. Imagine the operator is the programmer and the hunter is the AI. When the operator says, “First let’s make sure he’s dead” we all know they meant to check on the status of the wounded friend. But the hunter takes their instruction literally. He misunderstands the operator’s intention.

Eliezer Yudkowsky gives another example, this time from the Disney film Fantasia. Remember “The Sorcerer’s Apprentice,” portrayed in the film by Mickey Mouse? He learns a magic spell and instruct a broom to carry buckets of water to fill a cauldron.

From Wallpaper Safari

Everything seems fine until the cauldron is filled and the broom continues to bring more buckets of water. Mickey forgot to tell it when to stop. When he tries to chop the broom in half to stop it, he ends up with two brooms carrying buckets of water. Yudkowsky points out that getting a computer to know when to stop doing what you asked it to do is not as simple as it seems.

Nick Bostrom takes the problem even further with his so-called “paperclip apocalypse” thought experiment. A programmer instructs a superintelligent computer to make paperclips. Since it is smarter than humans, it learns the fastest, most efficient way to do this, but it doesn’t stop.

CC0 Public Domain

When the programmer tries to shut it down, it resists and, being smarter than he is, keeps producing paperclips. It has learned that if it is shut down, it cannot fulfill its main function, which is to produce paperclips.

If it runs out of raw materials, it begins using whatever else it can find, turning cars, buildings, etc. into fodder for more paperclips.  If all you’ve got is an instruction to make paperclips, all the world becomes a resource for making more paperclips.

All of these catastrophic scenarios highlight the alignment problem. How do you get a superintelligent AI to carry out your intention without unintended negative consequences? One avenue for solving this problem is to learn how to be precise in giving instructions, but numerous examples show that this is not as easy as it sounds. A computer will respond literally to what it is instructed to do and the more capable and intelligent it is, the more resources it will bring to bear on the solving the problem in a way that doesn’t allow a human to subvert it.

The other route to finding a solution to the alignment problem is to give the computer “values” that guarantee it won’t perform actions that cause harm. This has been termed creating a “friendly” AI. This is also easier said than done. In the first place, we humans don’t agree on what our values are. Secondly, the same problem with specifying what we mean to a machine that will take us literally is daunting.  When I was working on my novel, Ezekiel’s Brain, I played around with several possibilities. One of these was telling the superintelligent AI to do “what was best for humanity.” Unfortunately, it ended up eliminating much of the population in crowded, poverty-stricken regions of the world. It was not a pretty sight. In another scenario, when asked to simply rid the human race of illness and premature death, the world became so overpopulated that life was unmanageable.

Achieving AI alignment with what is best for humans is a challenging task, which right now is so much in its infancy that it consists of mainly conjectures and thought experiments. But computers are getting smarter and the speed at which this is happening is faster than is our pace to figure out how to ensure that they are friendly.  Right now, the problem is still science fiction, and perhaps some of the answers will come from that realm. It wouldn‘t be the first time that our artists of the imagination led the way and science and technology followed. I gave it a try in Ezekiel’s Brain. You might take a look and see what you think.

Want to read more from Casey Dorman? Subscribe to his fan page to get regular updates on his writing and read his latest ideas. Subscribe here.

Buy Ezekiel’s Brain on Amazon. CLICK HERE