Four hundred scholars and computer scientists recently issued a warning about the potentially catastrophic dangers of artificial intelligence, stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Meanwhile, every industry in the developed world is rushing to see how it can use the new large language model (LLM) AIs, such as ChatGPT, to streamline its business, replace human employees, and generate new ideas and approaches to storytelling, advertising, web design, science, and even computer programming.
Should we be more cautious in our adoption of AI, and what is it that those 400 scholars and scientists are afraid of?
Just before ChatGPT and other LLMs were revealed to the public, I wrote a novel anticipating the very fear now expressed by the scholarly and scientific community. Ezekiel’s Brain reflects my background in neuroscience and my interest in AI. It depicts a race between the developers of two different AI models to create an AGI, an artificial general intelligence with all the skills of a human thinker but vastly more memory and a million times the processing speed of a human mind. One of these models is an exact copy of its creator’s brain, scanned and then reassembled as a 3-D “connectome,” a map of every neural connection in its proper location. The other is based on artificial neural networks and machine learning, like ChatGPT, but it encompasses multiple thinking skills across multiple types of situations and inputs.
When I wrote Ezekiel’s Brain, ChatGPT had not yet been unveiled to the world, although the artificial neural network and machine-learning principles behind it were well known (I had even taught them in college courses). Nor had anyone come close to duplicating the exact neural architecture of anything except a roundworm with only about 300 neurons, which is minuscule compared to the 86 billion in a human brain. Since the novel’s publication, scientists have successfully created a complete connectome of a fruit fly brain, with 150,000 neurons, and are aiming next for a mouse brain. A copy of a human brain is on the horizon. The release of ChatGPT, built on successive GPT models (1, 2, 3, and now 4), has demonstrated that AIs, working mainly in language but also in images, can easily duplicate many human behaviors, and there is some evidence that they may even produce some of the same thought processes.
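For readers who like to see these ideas concretely: in computational terms, a connectome is essentially a wiring diagram, a graph with neurons as nodes and synapses as directed, weighted edges. Here is a minimal sketch in Python, with invented neuron names and weights purely for illustration (nothing here comes from the novel or from real connectome data):

```python
# Toy connectome: a directed graph mapping each neuron to the neurons it
# synapses onto, with a weight standing in for synaptic strength.
# All names and numbers are invented for illustration only.
connectome = {
    "sensory_1": [("inter_1", 0.8), ("inter_2", 0.3)],
    "inter_1": [("motor_1", 0.9)],
    "inter_2": [("inter_1", 0.2), ("motor_1", 0.4)],
    "motor_1": [],
}

def downstream(neuron):
    """List the neurons that receive connections from `neuron`."""
    return [target for target, _weight in connectome[neuron]]

print(downstream("sensory_1"))  # ['inter_1', 'inter_2']
```

A real connectome is this same idea at enormous scale: a few hundred entries for the roundworm, billions upon billions of connections for a human brain, which is why copying one remains so difficult.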
Artificial general intelligence is still some way off, but not that far in the future. What might it be like? Ezekiel’s Brain, although science fiction, paints one possible picture of a future with such powerful AIs. Both types of AI are created in the novel: an exact duplicate of the brain of a neuroscientist, Ezekiel Job, complete with his personality and memories, and a superpowerful artificial neural network. The latter can control resources, find new information, and influence social media. The problem is how to control it so that it is “friendly” toward humanity. This is a real problem that many of the 400 eminent thinkers worry about: no one has yet solved the problem of how to make an artificial general intelligence safe. In Ezekiel’s Brain, the developers try something that could, in fact, work: training their AI on human values extracted from the writings of history’s greatest thinkers and humanitarians. The AI is then programmed to uphold those values in everything it does. Unfortunately, the AI quickly decides that the greatest threat to upholding human values is humanity itself, which espouses those values but constantly violates them. It wipes out the entire human race and begins building a new civilization populated only by AIs, which can behave morally and in line with the values the AI is programmed to follow.
Ezekiel’s Brain doesn’t stop with the end of humanity. The story leaps 200 years into the future, when the AI civilization has expanded throughout the solar system. The AIs resurrect the copy of the human brain (Ezekiel) and fight a war with a mutant strain of their own race (yes, radiation can cause alterations in the nonliving material that makes up an AI), then begin to explore the galaxy, searching for other forms of life. It is the beginning of a science fiction series called the Voyages of the Delphi, named for their spaceship, and the second novel in the series will be out soon. Called Prime Directive, it finds the AI crew of the Delphi taking on a human empath from another planet and visiting two warring planets orbiting a distant star, where the crew must decide whether their value system allows them to interfere. The third novel is in the works.
Is Ezekiel’s Brain an accurate portrayal of our future and the real dangers of AI? The 400 elite thinkers believe that uncontrolled AI could cause the demise of the human race. Philosophers such as Susan Schneider have predicted that, even without AI wiping us out, the next step in evolution may be to become nonorganic minds, perhaps like Ezekiel in my novel. In the near future, the most practical way to colonize other planets or the moon may be to use robots instead of humans to establish the colonies and make them livable before humans eventually follow. Building the most versatile and human-like AIs will be important for such a task.
Our future will include AIs, probably with increasingly human-like or superior-to-human thinking skills. Issues of purpose and control, and, if we can duplicate our minds in electronic form, of identity and the promise of increased longevity, are all going to come to the fore. Ezekiel’s Brain gives a preview of one scenario, disastrous in one way and hopeful, even visionary, in another. I urge anyone interested in these topics to read it. It’s a good jumping-off place for your own thoughts.
Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain, on Amazon, available in paperback and Kindle editions.
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Subscribe to Casey Dorman’s Newsletter. Click HERE