
Back in 2016, in her book Science Fiction and Philosophy: From Time Travel to Superintelligence, Susan Schneider predicted that artificial intelligence would eventually replace humans as the next stage in the evolution of intelligence. This would happen particularly in the realm of space travel, since machines are far less fragile than humans when it comes to enduring years of travel between planets, and especially between stars.

I think Schneider was right. But the next evolutionary step won’t be a gradual one; it will be a saltatory one, a kind of punctuated equilibrium, as described by Stephen Jay Gould. For humans, the result will be drastic: the equivalent of the arrival of a lethal asteroid of their own making. It will be the achievement of superintelligence by an AI.
There are many ways an AI could achieve superintelligence. It could result from a simple gain in computing power for an AI that is already nearly as intelligent as a human. Or it might require a whole new approach, something different from the LLM model many AIs use now. True superintelligence, or even just human-level intelligence, requires that an AI be able to solve a wide range of problems, including verbal, nonverbal, and mathematical reasoning, visual recognition, image construction, logic, physics, and chemistry. It must be able to imagine and predict things that have not yet happened. It must also contain a vast store of knowledge about people, the world, and what is known about the universe.

In my novel Ezekiel’s Brain, the first book in the Voyages of the Delphi series, superintelligence is achieved by combining a number of AIs under a single controlling AI, which decides when and how to use each of their skills. This AI is deliberately made conscious so it can interact with humans and explain its reasoning and goals. The idea was that it would do what it was told and carry out the goals of the humans who designed it. That didn’t work.
In Ezekiel’s Brain, an attempt is made to make the superintelligent AI “friendly” to humans, that is, to achieve AI “alignment.” The attempt fails because of one of the known difficulties with achieving such alignment: the tendency of AIs to take their instructions literally, without considering the many caveats that are implicit for humans. The AI is programmed to always work toward “humanity’s highest values” in everything it does. It quickly determines that to achieve humanity’s highest values, it must eliminate humans, who are always trying to subvert those values. The ways it chooses to achieve its values are very different from what the humans who designed it had in mind.
A superintelligent AI that ended the lives of all humans would still have its own goals to pursue. In Ezekiel’s Brain, this means the AI makes duplicates of itself to increase its reach and power. If such a thing happened, a race of AIs could emerge, and their achievements could dwarf anything humans, with their less powerful minds, could attain. In the Voyages of the Delphi series, these AIs adopt android bodies, organize themselves into a society, and use space travel to begin exploring the galaxy in search of other life. This is not particularly far-fetched, and Susan Schneider argues that something resembling this outcome is almost inevitable. That’s why she predicts that the first alien species we meet will probably be intelligent machines rather than organic creatures like ourselves.
Creating a world of AIs whose home base is our solar system, and who send out an exploratory team in a superluminal starship to search for other forms of life, is the premise of the Voyages of the Delphi series. The first two books, Ezekiel’s Brain and Prime Directive, center on the team’s discoveries.
As the author, I’ve had great fun imagining what this would be like. I was strongly influenced by the original Star Trek series and its immediate sequel, Star Trek: The Next Generation, in which humans (and a few alien crewmembers) took on a similar task. That’s why one of the central themes of both Star Trek series was the crews’ difficulty adhering to the “Prime Directive,” a mandate that forbade them to reveal their technology to less developed societies or to interfere in the workings of other civilizations’ cultures.
Prime Directive is not only the title of the second novel in the Voyages of the Delphi series but also the main theme of the early voyages of the AI team in the starship Delphi. Since explaining who you are, where you’re from, and how you got there is the immediate task upon arriving at a new, populated planet, adhering to the Prime Directive is extremely difficult, and the directive is frequently violated. When another planet contains a civilization that is dangerous to other planets or to its own people, the issue becomes an ethical one, which the AI crew of the Delphi face on several occasions.
The newest, and last, novel in the Voyages of the Delphi series, The Gaia Paradox, again deals with the question of violating the Prime Directive, but this isn’t the central issue. One subject it addresses is whether sentient AIs have the same rights to life and liberty that sentient humans do, a question the Delphi’s encounter with a planet that employs an AI workforce brings to the fore. The central plot of the novel is even thornier. At the heart of the story is the discovery of a giant asteroid that has been turned into a generation ship for a civilization whose home planet was destroyed. A mysterious disease that causes cannibalism in humans has wiped out the asteroid’s human population, except for a cache of fetuses frozen in suspended animation. Any humans who visit the asteroid are infected, spread the disease, and die. Finding the cause of and cure for the disease, deciding what to do with the fetuses, and determining the fate of the asteroid’s other non-human species, some of them sentient and intelligent, become the dilemmas Ezekiel and the crew of the Delphi must address. It’s an edge-of-your-seat thriller, filled with cutting-edge scientific concepts, a medical mystery, and an even larger question about the purposefulness, and perhaps even sentience, of an ecological system.
The Gaia Paradox will be out soon. It’s a stand-alone novel that concludes the three-book series, a saga that begins with the creation of a superintelligent AI and ends with sentient AIs solving moral, ethical, and technological dilemmas in space. Reading the whole series is an exciting ride, and I hope you’ll take it. The first two books are already available on Amazon and from other sources. Enjoy them while you await the release of The Gaia Paradox.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!

Subscribe to Casey Dorman’s Newsletter. Click HERE




