Welcome to my writer’s blog. Its purpose is to keep my fans apprised of what I’m doing: the latest novels and stories I’m working on, books and films I’ve reviewed, and observations about writing and, occasionally, about philosophical or social issues.
Below, you will find book reviews and observations about writing, philosophy, and life. These posts are published regularly on my blog and sent out to my nearly 500 subscribers. To subscribe and receive regular updates on my writing as well as book reviews and occasional essays, just click HERE.
“A science masterpiece with superintelligent AIs replacing the human characters.”
What’s been called “a bold story of AI replacement,” with artificially intelligent androids on a voyage to distant planets, is now available to preorder on Amazon. The Gaia Paradox is the final book in the trilogy Voyages of the Delphi, but, according to Kirkus Reviews, “it stands well on its own.” The Delphi is a starfaring spaceship crewed by a group of superintelligent, peace-loving Solarians, the android race that replaced Earth’s humans back in the 21st century. Each crewmember is their race’s most learned and talented member in their field of specialization. Two hundred years later, they are traveling to Travoli, a distant planet, to make peace with an imperialistic race they encountered on a previous voyage. Ezekiel, the main character of the series, also an AI, differs from the other Solarian crewmembers in being an emulation, an exact electronic copy of the brain of his creator, the 21st-century neuroscience professor Dr. Ezekiel Job. Despite his highly intelligent and super-efficient electronic brain, he possesses all the instincts and emotions of his human creator. Also on board is Siaree, a human empath from the planet Numidia, for whom Ezekiel feels a poorly disguised romantic affection.
The Delphi’s mission is diverted when the Travolians ask the crew to rescue the crews of three of their ships, which are marooned on a giant asteroid they had stopped to explore; there they contracted a deadly disease that turns its victims into vicious cannibals who are killing each other. When the Delphi arrives, its crew finds that the asteroid is a hollowed-out generation ship whose human inhabitants have died from the same mysterious disease, leaving an intact mini-world inside the asteroid. Three of the Travolians are somewhere inside that world, afflicted with the disease and hunting each other down. The Solarians’ mission is to rescue them.
The Gaia Paradox is not just a futuristic space adventure; it is an exploration of the future of AI development and of the relationship between humans and AIs, and it is filled with hard-science concepts in space travel, ecology, medicine, and artificial intelligence. Kirkus Reviews says of it, “The prose is lucid and flows well, both in action sequences and in interpersonal scenes, and it always keeps the scientific aspects of the story clear.”
The eBook edition of The Gaia Paradox is available for preorder on Amazon right now. After the official publication date of July 7, 2026, it will be available in paperback and hardbound editions as well. For fans of space adventure, hard science fiction, and explorations into the world of AI, this is a must-read book. Find it on Amazon.
The Gaia Paradox is a stand-alone novel, but it’s also the final novel in a three-book series. Read the two other Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Imagining More and Other Stories by Panayotis Cacoyannis
Reviewed by Casey Dorman
I’ve read all of Panayotis Cacoyannis’ novels but, until now, none of his short stories. These are the first he has published, and in his Afterword he suggests that he may “end up with short stories.” While I would hate for that to be true, since I enjoy his novels so much, I was not disappointed with this collection. He has a gift for maintaining the reader’s sense of anticipation, an invocation of fear or dread, of the hope for a revelation or the solution to a puzzle, that builds within each story, ending with a final divulgence or a twist into another, unanticipated mystery. The more stories one reads, the more one looks forward to being tantalized by the next. While his novels unravel, layer after layer, the mystery that eludes each character in his own discovery of himself, the short stories raise questions that often go unanswered but are always entertaining. He is a master of the genre.
Imagining More, the opening and title story, was my favorite, perhaps because it tapped one of the author’s favorite themes: the confusion of identity, which we all experience and which his characters exemplify prominently enough for us to detect it in their actions. He helps us by alternately viewing the interior thoughts of each character as they observe, from their different perspectives, the same events. In a bold stroke of originality, we readers are fooled as we are at first taken in by Peter and Kate, the personas that the two real characters, George and Lydia, are playing, and we have to re-orient our understanding of the scene when we find they are not real. George and Lydia are simultaneously re-orienting themselves as they shed one role for another. All of this is done in the presence of Marianne, a reporter who is interviewing George and Lydia about their method acting. We find ourselves wondering, along with Marianne, whether we are talking to the real George and Lydia or to another set of roles they are playing. All of this is accomplished within the context of an edge-of-your-seat thriller, as we wait, apprehensively, for the denouement, which turns out to be as enigmatic as the story itself.
The writing, as is characteristic of Cacoyannis, is startlingly vivid, picturesque, and often metaphorical, so much so that my immediate reaction, as I began the first story, was that I should present it to my writer’s group as an example of what I mean when I speak about “genuine literary” writing. For example: “Night still lingers on. Not quite night, not quite dark. Even the grey of dawn is prickled by stray dots of brightness, day has not yet broken: that otherworldly tinge of dusty in-betweenness stubbornly refuses to dissolve, as though as a reminder that a blur, nothing more, is what stands between life and drift to a permanent sleep.” Cacoyannis, for me, stands squarely in the midst of my favorite, most poetic modern and recent writers, such as Lawrence Durrell, Isabella Hammad, William Boyd, and Ian McEwan, in terms of literary style, and he is the most profound among them in portraying the interior complexities of his characters.
Several of his stories touch on Kafka and The Metamorphosis, as an idea, an obsession, or an alter ego, as it was for the character in the first of Cacoyannis’ novels, A Bowl of Fruit. A strong suggestion of this character appears in the short story The Gift, but Kafka also figures prominently in The Rooms and even surfaces for a moment in Imagining More. Other characters from other novels, or versions of them, appear in other stories, most clearly as Mr. Rubens in A Day at the National People’s Museum, which encapsulates some of the theme of The Coldness of Objects, my favorite Cacoyannis novel.
This is a remarkable collection. Not only is each story a rare gem, but Cacoyannis has managed to put his own imprint on a style that permeates them all. Poetry of language, intense inner scrutiny of characters, complex relationships perfused by shadings from each character’s past and present obsessions, and surprising and often enigmatic endings form a style that elevates the entire oeuvre in this book to a work of art.
For something completely different, try my sci-fi about conscious AIs solving moral dilemmas in a future that has them exploring our galaxy: the Voyages of the Delphi novels, Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Do some novels predict the future? Think of George Orwell’s 1984, published in 1949. The novel is the source of phrases such as “Big Brother,” “Newspeak,” and “Doublethink.” It led us to describe a world filled with government lying, surveillance, and few individual rights as “Orwellian.” Many people agree it was prophetic. Then there’s Philip K. Dick’s Do Androids Dream of Electric Sheep?, which became the film classic Blade Runner. The AI androids in the novel refuse to be retired and begin to rebel. They seem to be conscious, and even the experts on detecting who is an android and who is not can make mistakes figuring out which is which. Shades of ChatGPT and AI superintelligence!
In 2013, I wrote the novel Morality Book One–Where Have All the Young Men Gone? It was meant to be the first of a series of thrillers about politics and freedom. Where Have All the Young Men Gone? reflects post-9/11 Islamophobia and U.S. government surveillance by following a charismatic leader who miraculously awakens from a 40-year coma to lead a nonviolent revolt against an autocratic president’s intrusions on freedom.
My novel 2020, published in 2017, is the second novel in the two-book series. It follows the same dictatorial president in his bid to bypass the constitution and refuse to step down after his second term as president. As an allusion to Philip K. Dick’s novel Radio Free Albemuth, which portrays an authoritarian U.S. government and a dictatorial president named Ferris Fremont, I named my fictional president in both books Fremont Ferris. Most of the other characters who opposed the government in Where Have All the Young Men Gone? are present in 2020, and they use the same nonviolent protest methods to fight the president’s devious, and often lethal, methods of subverting the constitution for his own ends. It’s a genuine suspense thriller. You can judge for yourself whether it also depicts what’s happening today by reading the book.
As a New Year’s gesture, and because I think it’s worthwhile for as many people as possible to read these two books and think about our future, I have reduced the Kindle price to $0.99 each. Both are also available as audiobooks and paperbacks, at the lowest prices Amazon allows. They are both stand-alone novels and may be read in any order, or one without the other. I encourage you to treat yourself and buy one or both books; they are great reads for the week between Christmas and New Year’s, or a reading adventure to embark upon with the start of the new year.
To read more about these two books on Amazon or to buy either or both of them, Click Here.
For something completely different, try my sci-fi about conscious AIs solving moral dilemmas in a future that has them exploring our galaxy: the Voyages of the Delphi novels, Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Anais Nin provides another glimpse of the human mind
Review of Seduction of the Minotaur
Anais Nin
Originally Published by Swallow Press, 1961
Kindle Edition with Comments by the Author: Sky Blue Press, 2010
Anais Nin is a unique figure in literature. She is as famous for the way she lived her life as she is for her writing, perhaps because her writing is autobiographical and her most well-known work is her multi-volume Diary. Prior to reading Seduction of the Minotaur, I was mostly familiar with her pornographic stories, published in the books, Little Birds and Delta of Venus, and her story, Henry and June, about her relationship with Henry Miller and his wife, because it was made into a film.
I came upon Seduction of the Minotaur by happenstance. I was re-reading Anil Seth’s Being You: A New Science of Consciousness, as part of my preparation for a book I’m working on about machine consciousness and the future relationship between humans and AI. While looking up a reference in Seth’s book’s index, I noticed that Nin’s novel was also referenced. I didn’t remember coming across the reference in the text, and I was curious as to why a non-fiction science book on the brain and consciousness would reference a novel. It turned out that Seth only used a single line from Nin’s book, to introduce one of his chapters, but I became intrigued enough to buy the novel and begin reading it. I was quickly hooked and took a break from my non-fiction reading long enough to finish Seduction of the Minotaur. I was struck by the contrast between it and the science I had been reading, although both address the same subject: the human mind and how it works.
Nin introduces her book by saying, “In Seduction of the Minotaur I finally succeeded in absorbing the vision of the analyst with the novelist.” She, herself, had been analyzed by Otto Rank, with whom she had an affair. Her introduction leaves the strong suggestion that the main character of the novel, Lillian, is a stand-in for Nin in many ways.
Seduction of the Minotaur is the last of five related novels, and since I read only this one, I may have missed some themes carried through from the earlier books. The story concerns a three-month period in which the main character, Lillian, a 30-year-old woman from the U.S. who had spent part of her childhood in Mexico, visits a coastal Mexican city bordered by jungle, where she has a contract to play jazz piano with a group at a hotel bar. The novel consists mostly of lush descriptions of the colorful, relaxed, tropical atmosphere of the city and its surroundings, as well as of the people, both the local population and several memorable individuals. There is no real plot except Lillian’s psychological journey as she analyzes the visible exterior presentation of each of the other characters and the hidden interior that each of them is running from and is afraid to face. She does the same for herself.
While such deep psychological examination might sound academic and boring, it is hardly so, because of the artistic, poetic, and highly descriptive manner in which Nin approaches both the people and the environment, which plays an active role in shaping the mood and behavior of the people. Through Lillian, the main character, we are able to see, both in how she relates to others and in how she makes choices for herself, that she is maintaining a facade of nurturance, adventuresomeness, and superficial attachments to others, which keeps her safe by not allowing her to face the needs and fears that underlie her behavior. Paradoxically, Lillian analyzes herself, mostly accurately, as she does these things, and tells us about the dynamics behind her behavior, although she is powerless to alter how she behaves. She does the same for the other characters, as an astute observer of how they relate to others and to her. Central to her observations and analyses is Doctor Hernandez, who is fatherly, kind, and perceptive, but consumed by his own unresolved issues, which eventually lead to his death. Lillian feels guilty when he dies because she saw the internal drama he was playing out in his life but never tried to break through his defenses and save him.
I was swept up in both the luxuriant prose and the intimate, intricate psychological analyses portrayed in the novel. As a psychologist, I was deeply familiar with the dynamics the author displays and describes in her characters. Because these dynamics are presented in novelistic form, demonstrated in each person’s behavior and simultaneously analyzed reflectively by Lillian, we gain an understanding of how a person’s mind works. Because the presentation is of the conscious workings of Lillian’s mind, we can look into ourselves and see how close to the mark it all feels. In a novel, we allow ourselves to accept truths about the characters that might provoke our own defenses and obscure our insight if we encountered them in real life and they were about ourselves. They also ring truer than a similar analysis presented in the abstract, as a psychological theory in a textbook.
In order to read Seduction of the Minotaur, I had to interrupt my re-reading of Anil Seth’s Being You (which I reviewed on December 15, 2022). Seth’s book explains his theory of how the brain works to produce our conscious experience of the world and ourselves. Although his presentation is elegant, and, for someone like me who is intensely interested in the subject, riveting, it describes a different dimension of consciousness than Nin’s. Hers is what it feels like to be conscious. All the various aspects of consciousness that Seth analyzes so meticulously, in terms of how they may be generated by the brain, are integrated by Nin into an ongoing stream of thoughts, feelings, and sensations. What is going through her character’s mind is what Seth is trying to explain in terms of brain mechanisms. That what we think and feel is sometimes motivated by unconscious goals, which psychoanalysis attempts to describe, is another aspect of consciousness that is real. Psychoanalysis overly anthropomorphizes the process, inventing unconscious entities that act with purpose and have their own motivations, but however it’s explained, we conceal, lie, rationalize, and otherwise obscure to ourselves some of the reasons for our behavior.
Both Seth and Nin tell a part of the story of consciousness. Nin’s writing in Seduction of the Minotaur has been criticized for being overly descriptive, lacking in plot, and self-indulgent. I found the story entrancing, the writing evocative, and filled with human truth. It made me want to read more of her work, and I plan to.
Interested in sci-fi about conscious AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
This book, part of the Springer Frontiers Collection on issues in science, was originally published in Japanese as Brain Consciousness, Machine Consciousness in 2017. It’s a remarkable, groundbreaking, and exceptionally well-written book. In a brief 172 pages, the author, an internationally known neuroscience researcher, provides the most lucid, understandable description I’ve ever read of how the brain and its neurons function, at least with regard to visual perception. He also presents a tremendously exciting, breakthrough model of how the brain produces conscious perception. As an added treat, his method of validating his theory involves uploading human consciousness into a machine, something he believes is possible.
Watanabe’s approach, both in how he thinks about consciousness and in how he presents it to the reader, is methodical and grounded in verifiable fact, and his reasoning is presented in a way that is easy to follow. He begins with a quote from Sir Francis Crick: “You are nothing but a pack of neurons.” His point is that our consciousness is produced by our brains, and our brains are made of material stuff. How that material stuff works together as a coordinated entity is not just the key, but the only answer to the question of how consciousness is created.
For the author, the most defining feature of consciousness is that it is the “subjective experience of vision, audition, tactile sensation, emotion, decision making, etc. In other words, qualia.” There is a world around us, but if we are alive and awake, we have the sense of being someone who is experiencing it. That sense is our subjective experience, i.e., consciousness. How a brain takes the objective world, consisting of things like light or sound waves that stimulate our sense organs, which initiate neural firing, which then travels to our brains, further igniting firing across different areas of the brain, and finally produces a subjective experience is “the hard problem,” as David Chalmers called it.
Watanabe addresses the hard problem forthrightly and with optimism. He is aware that much of the controversy in consciousness research revolves around how consciousness is defined, and he takes up the challenge of using science to study it by searching for a “natural law” of consciousness, in the sense of a verifiable law that “cannot be derived from scientific theories” and is just “how the universe works.” He cites the universal gravitational force and the invariant speed of light as examples. He is looking for such a law for subjective consciousness. Natural laws do not depend on theories (though our human understanding of them may), but they must be verifiable using scientific methods. He examines candidates such as Chalmers’ idea that consciousness is information, and Penrose and Hameroff’s idea that consciousness arises from quantum effects in the microtubules that form the cytoskeletons of neurons, and finds both wanting in terms of either verifiability or agreement with known facts.
Watanabe examines other current theories and fastens on the Integrated Information Theory of Giulio Tononi. He uses that theory to account for how visual information is processed in the brain and shows that it succeeds where Chalmers’ theory does not. But he goes further, showing that it is the interpretation of integrated information that coincides with consciousness, and that interpretation requires more than just the information itself; it requires an algorithm to interpret it. That algorithm must also be expressed in the workings of the system, in the form of a law that “such and such activities within a system, such as the brain, will result in subjective experiences” (my paraphrase). He goes one step further by saying that the causal relationship between neuron-like actions and subjective experience need not involve real neurons, nor living creatures. He makes two points: 1) “if there are natural laws of consciousness, they are likely to have existed from the moment of the Big Bang,” and 2) “it is therefore unlikely that they apply only to central nervous systems like those found on Earth.” This opens the door for machine consciousness. Real neurons need not be necessary; abstract ones will do.

This also opens the door to a way of validating a theory of consciousness. Consciousness is something each of us experiences but not something we can share with each other. Even so, we can, perhaps, create such experiences by simulating neural activities in a machine, and the machine, with its simulated neurons, will (consciously) experience them. If a machine expresses the algorithm that creates consciousness, then we could transfer our own consciousness into that machine. If we are still conscious within the machine, then the model would be deemed correct.
Watanabe’s suggestion is bold, bordering on science fiction. But as a science fiction writer who has written a series featuring conscious AI androids, I was very happy to learn that a world-class neuroscientist shares my view: machines can be conscious. He suggests that, by using an AI as a substitute cerebral hemisphere and transferring our own consciousness into it, we would demonstrate that a machine operating by the correct algorithm can support a subjective consciousness, thus validating his model of consciousness. That model will express the natural law of consciousness by becoming conscious, and the human whose consciousness it supports will be the observer who (subjectively) validates it (though others could also validate it by presenting the machine with whatever challenges would be appropriate for proving that a human is conscious).
Almost as interesting as the fact that the author uses a machine to demonstrate the architecture and workings of a neural network that creates consciousness is that the network, as Watanabe envisions it, is very similar to the multi-layer, back-propagation-based neural networks that support deep learning in machines and are the backbone of most current AIs. He points out that such networks were originally put forward, first by Shun’ichi Amari and later by Rumelhart, Hinton, and Williams, as models that mimicked, albeit in an abstract and simplified way, the workings of the brain. His model is similar to the one proposed by Antti Revonsuo in 1995 in the journal Philosophical Psychology, in that it proposes that subjective reality is a type of virtual reality created by our brains. Fittingly, he also references the science fiction classic Permutation City, by Greg Egan, and the film The Matrix when discussing his model. He uses it to account for the findings of Benjamin Libet’s classic experiments on the timing of brain and conscious events, as well as a number of other findings in cognitive science.
As I said, this is a remarkable book. The author is not just a good writer; he is an excellent teacher. His book is a primer in how to apply the scientific method and the reasoning that goes along with it, first to figuring out how the brain works and then to figuring out how it creates consciousness. He may be incorrect about the latter, but at least he presents a way to test the model, although not until we develop more sophisticated machines. For those only slightly familiar with neuroscience or the science of consciousness, this is a good book to read to learn something about both. For those who are already well versed in these topics, it is refreshingly clear and marvelously creative in its content. It also provides a good reality check for those who “believe in” one or another theory of consciousness because it sounds good to them, or even because the theory makes sense and may be elegant in its mathematical or theoretical properties. Such characteristics may make a theory appealing, but its adherents usually haven’t thought about how scientists approach such theories, with the requirement that their predictions be both concrete and testable. Watanabe does just that in this book. I highly recommend it.
Interested in sci-fi about conscious AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Jon Willis calls himself an astrobiologist, and he mixes astronomy and the study of organic life in this new book. His purpose is to explore the possibility of finding and recognizing life on another world, either one within our solar system or on an exoplanet orbiting another star. He makes an effort to titillate us with descriptions of unusual life here on Earth, such as the oceanic creatures found near deep-sea vents, and a brief discussion of the communication patterns of dolphins. His most in-depth discussions, however, are related to astronomical observation, both its limits and its promise, as well as what we’re learning from studying the surface characteristics of Mars and Jupiter’s moon Europa, and from the evidence gleaned from asteroids. Particularly with regard to astronomy, the discussions are deep and scientific but made livelier by focusing on his own experiences as much as on the pure science. His language is intimidatingly literate, and I was captivated by the elegance of, as well as driven to my online dictionary by, some sentences, such as “A dim, dead star, lit not by the glowing fire of nuclear fusion but instead by a tenebrous revenant limned in the nebular glow of ionized gas trapped within its crushing gravitational environs.”
The Jovian moon Europa and Saturn’s Enceladus are ice-covered moons orbiting two of our solar system’s gas giants. Europa seems most promising in terms of exploration beneath its ice, and Willis explores the world around Earth’s deep-sea vents to find an analogy to what that life may be like. I learned that some microbial life can use chemosynthesis to create food and energy in the absence of sunlight. Deep-water shrimp, without eyes, have a sensitive red patch on their backs containing red-light-sensitive rhodopsin, with which they detect the thermal photons emitted as infrared radiation by the hot deep-sea vents, and they are able to navigate around the vents’ periphery in the cooler, dark surrounding waters. All of these evolutionary developments show that life is flexible enough in its ability to adapt that it could be present beneath the ice of the gas giants’ moons.
At the heart of the search for intelligent life beyond our own solar system is the formula developed in 1961 by Frank Drake, the man behind SETI: N = R* × fp × ne × fl × fi × fc × L. N is the number of civilizations in our galaxy communicating into space. R* is the rate at which new stars are born, fp is the fraction of stars with planets, ne is the number of habitable planets per star, fl is the fraction of those planets with life, fi is the fraction of those with intelligent life, fc is the fraction of those that can communicate into space, and L is the “average lifetime of such communicating civilizations.” At the time the equation was developed, no planets orbiting other stars had been found, so the values for all the variables except the rate of new star birth were guesswork. Since then, we’ve found that virtually all stars have planets and that a significant proportion of these are habitable, in the sense of being roughly Earth-sized and in the so-called “Goldilocks zone,” not too far from and not too near their star. A good deal of the book is devoted to exploring the likelihood of a planet developing life. Willis’ method is mostly to understand the conditions on Earth that were conducive to developing life and to search for them in the bits we know about other planets, from soil samples from Mars and geochemical evidence from asteroids. The answer is still a guess, as are the answers to what fraction of planets with life would develop intelligent life and what fraction of that life would choose, or be capable of, communicating into space. Interestingly, at the end of the book, the author concludes that the crucial variable in the whole equation may be the average lifetime of intelligent civilizations, and our own history and current situation suggest that, if we are an example, it may not be long in terms of the life of the galaxy.
Most of our own planet’s history had no human-level intelligent life, and climate change and mismanagement of our resources may end our reign on Earth fairly quickly.
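The Drake equation’s arithmetic is simple enough to check in a few lines. The parameter values below are purely illustrative guesses of my own, not figures from Willis’s book; only the formula itself comes from the text above.

```python
# Illustrative evaluation of the Drake equation as given above:
#   N = R* x fp x ne x fl x fi x fc x L
# All numeric values here are hypothetical placeholders.

def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Guessed inputs: ~1.5 new stars per year, nearly all stars with planets,
# 0.2 habitable planets per star, pessimistic fractions thereafter, and
# an average civilization lifetime of 10,000 years.
n = drake_n(r_star=1.5, f_p=1.0, n_e=0.2, f_l=0.5, f_i=0.1, f_c=0.1,
            lifetime=10_000)
print(round(n, 6))  # -> 15.0: a sparse but non-empty galaxy
```

Notice how the result is dominated by L: halving the assumed lifetime halves N, which is exactly the sensitivity the author points to when he singles out civilization lifetime as the crucial variable.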
Even if there are communicating civilizations “out there” somewhere, what is the chance that we will understand them? Willis uses dolphins, who are intelligent and who communicate with each other, but which we still are unable to understand or communicate with except in a very rudimentary way, as an example of how difficult it may be to understand another intelligent species from space. Still, as he points out, just finding that someone is out there will be a civilization-shaking revelation, should it happen.
The Pale Blue Data Point covers a lot of territory, and the author needs to continually remind us how each new topic is related to the overall theme of finding life somewhere beyond Earth. He spends much of the time discussing the means being used to answer the question and less time on what we know so far. This is partly because we are still at the beginning stages of our search, and many of the needed steps are still hopes and dreams rather than realities. What leapt out at me from the pages of the book was one very clear conclusion. The more we learn about the universe and all the individual instances of how it appears, the less uniform it seems to be. So far, we have encountered a lot of surprises. New stars develop much more rapidly than anyone thought. The geochemistry of the earliest Earth, called the Hadean eon, may itself have produced liquid water and led to the earth having oceans soon after it began to exist as a rocky planet, instead of the vast majority of our early water coming from asteroids and meteorites that struck Earth. Thus, life could have developed earlier than anyone has thought, and perhaps when it did develop that was the culmination of many false starts. What kind of planets, whether rocky or gaseous, and what size can exist at various distances from their stars, is much less predictable than we thought. Discoveries such as pulsars that emit regular pulses of radio waves have been a nearly total surprise. The list goes on and it means that the science of space is still one mainly of discovery, rather than confirmation of hypotheses. Many of our hypotheses will need to be rethought when we find out what is really out there. As the author of this book points out, the lesson should be that what we find, when we find life, may defy our ability to imagine it, even using the best of our current science to frame our expectations.
This is an interesting book, but not one for the scientifically timid or for those unwilling to expand their vocabulary along with their knowledge. What the author does best, and much better than most similar books, is show the kind of reasoning that has led to what has been studied so far and what guides our current and future search for extraterrestrial life. The reasoning behind the research is presented clearly. Also, the meaning of what has been found so far is spelled out in ways that are not difficult to understand. The details of the studies and the findings are often abstruse, but the overall picture is clear. This makes it worthwhile to read.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
This is an amazing, inspirational, visionary, but humble book, which I hope millions of people read and feel as touched by as I was. It’s a grand plan and a noble one, and the author displays his earnest belief in everything he says. I loved it.
I wasn’t sure about this book when I was approached by the author’s publicist to take a look at it and even talk with the author if I wanted to. I’d never heard of the author, and when I looked him up, I found he was a space visionary and entrepreneur who favored private enterprise leading our effort to explore and inhabit space. I’m generally a skeptic regarding the motivation of private enterprise to do anything useful for mankind, but having just read Lee Billings’ Five Billion Years of Solitude, a story of our society’s effort to find habitable exoplanets, I had developed an equally jaded view of what our government is able to accomplish, so I was willing to listen.
Tumlinson’s message is partly about having private enterprise lead our voyage into space. He focuses on Elon Musk and Jeff Bezos and acknowledges their downsides, but insists that their motivation for going into space is for the good of humanity. I can suspend my judgment for now, but that’s not the main message of the book, anyway. The real message is about finding purpose for one’s life, and how the author has made leaving earth and inhabiting space—whether on another planet or a man-made space station—his own purpose. His motivation is real and well-informed, backed up by history and science and his own genuine, and touching, love for humanity. As he explained his reasons for choosing his purpose, I became aware how much of his reasoning, and even his feelings, I shared and have even written about. Just on its own, our planet will become uninhabitable someday. Our future as residents of earth is limited, although we’ll only reach that limit in hundreds of millions of years. But our own activities may shorten that timeframe dramatically. Climate change is warming our planet at a rate that could make it inhospitable to human life in the near future. We continue to harbor enough nuclear weapons, in our country, in Russia and China, and in a few other countries, to wipe out humanity, or at least our modern human civilization. Either of these factors is dangerous to humanity in its own right, but Tumlinson sees both as symptoms of a short-sighted, competitive, greedy side of human nature that has driven much of our history and must be shed for us to move to the next level of society: moving off our planet into space. He is hopeful, and he believes that embracing the purpose of extending our species’ reach into space will help us move away from these behaviors of the past.
Tumlinson’s views are based upon an inclusive, respectful attitude of love for ourselves and our planet. Movement off-planet is not possible without dedicating ourselves to restoring what we have destroyed here on earth. He pays homage to the concept of Gaia. I was pleased to see that he also recognizes that some of our advancement into space, especially if it involves reaching exoplanets orbiting other stars, may require machines and AI as much as humans. He is open to working alongside AI, which he sees as the “children” we are creating and raising to take our place, in some instances. He’s even open to the merging of humans and AI by uploading our minds into machines. That’s a favorite topic of mine, and one I also see as a step toward our reaching distant stars and their habitable planets. It raises deep questions about the relationship of humans to what someday may be conscious AIs, something I’m thinking about now, since the speed of AI development brings that point in the future closer and closer to the present.
Why Space? is an inspirational book, because the author’s inspiration is presented so convincingly that it becomes contagious. It’s a book that both up-and-coming scientists and the decision makers who determine how we use our science and where we spend our money need to read. Both should pursue their work with a sense of purpose that inspires them to lead humanity forward, beyond the everyday, often petty concerns that dictate so much of our behavior. The ideas and attitudes in this book represent the best of humanity. I hope it gets widespread attention. Why Space? The Purpose of People is available on Amazon, here.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Lee Billings’ 2013 book concerns the search for habitable exoplanets and the people who have been at the center of the relevant science for the last fifty or more years. Starting with Frank Drake, the originator and mind behind the SETI project, the story takes us across the US and sometimes to Europe as the focus shifts from scanning the skies for signals from space to putting ever more complex and delicate telescopes into earth orbit, or even farther out, near the edges of our solar system. It is a story of vision, hope, and heartbreak, as the most ambitious plans routinely become too expensive for a world that has fought numerous wars, gone through major recessions, and been sidetracked by other space ventures. Billings tells the story with vivid language, capturing both the soaring heights of the possibilities imagined by the researchers, almost all of whom are living out a childhood fantasy of finding aliens in the vast dark void that surrounds us, and the depths of their despair when plans they are convinced will make their dreams come true are abandoned. We get to know each scientist personally, their stories inspiring and sometimes heart-wrenching. The author’s own mastery of the scientific concepts involved in the approaches used by many different disciplines, from astronomy to geology, makes esoteric scientific ideas understandable to the non-expert reader.
A large part of the impact of Five Billion Years of Solitude is the way Billings places the search for habitable planets outside our solar system within the context of the history of our own planet—the nearly five billion years earth has been around. This is important for practical reasons, perhaps the most important being that human life has existed for only .005% of our world’s lifespan. Not only were there no humans during most of earth’s history, but for much of that time the planet was barely recognizable as what we see today. Oceans have been here almost from the beginning, fed by water brought to earth by asteroids and comets. But green, leafy vegetation is a fairly recent phenomenon, less than half a billion years old. The locations of oceans, seas, mountain ranges, and whole continents have been changing constantly for much of the planet’s lifespan. Volcanic activity, once commonplace, has been rare and sporadic during recorded human civilization. The importance of this is that, if another species from another planet were looking at earth, they would see it inhabited for only a brief moment of its history. Even being habitable, in the sense of being able to support life, has been true for only a portion of earth’s duration. This means that, similarly, we are more likely to discover habitable planets during a period of their history when life, and particularly intelligent life, does not exist on them, even if they harbor intelligent life at some point in their history.
There are also limits on how long intelligent life will continue to exist as a planet such as earth ages past its maximally habitable period.
As the expanding sun raises Earth’s temperature, the composition of our atmosphere will change, chiefly through a decrease in the amount of CO2 available for photosynthesis. Oxygen-producing plants will die off, and for aerobic creatures like us, life will become impossible, an outcome proposed by James Lovelock, famous for his Gaia hypothesis. Depending on whether he is correct, or whether a process based on rocks releasing oxygen into the atmosphere is sufficient to sustain life, as described by James Kasting, humans have between 100 million and 500 million years of life left on our planet. That may seem like a lot, and it is, but either estimate is much sooner than the time when rising temperatures from the expanding sun will extinguish all life on the planet. As Billings points out, such a forecast, when it was made back in the early 1990s, largely “fell on deaf ears” among the wider public, but the scientific community was alarmed. Either there had to be a way, through advanced technology such as erecting a solar shield, to keep the earth cool and extend the span of life, or humans could leave earth and move to another planet. Moving outward toward Mars or a moon of Jupiter would extend life considerably, but becoming nimble enough to leave our solar system and find younger, life-sustaining planets orbiting other stars could extend our species’ lifespan indefinitely. With that in mind, some within the scientific community saw the discovery of habitable exoplanets around other stars as our major hope. For many of those involved in the search for habitable exoplanets, such as famed MIT astronomer Sara Seager, the certainty of the end of life on earth is a major motivator.
With the revolutionary pictures coming back from the earth-orbiting Hubble telescope in the 1990s, the astronomical means for detecting habitable planets shifted from earth to space. Larger and larger telescopes at greater distances from earth were seen as the solution, but how to mask out the light of stars in order to pick up the dim glow of small, earth-size planets orbiting near enough to their stars to support water and an atmosphere presented a major problem. That question was inextricably tied to the limits on the size and weight a rocket could carry into space, and ultimately to the cost of doing so. Much of the last third of Billings’ book is devoted to the rollercoaster ups and downs of federal funding and NASA priorities over the last 30 years, described poignantly by Wesley Traub, the NASA/JPL chief scientist overseeing exoplanet programs, which have led to large dreams and hollow rewards as promised funding invariably dried up.
Billings tells a stirring story. He is able to capture the scientific spirit well enough to make a reader such as myself desperate for progress and for our race to dedicate itself to finding other worlds and exploring their ability to sustain life. There’s also the mystery, encapsulated best by Frank Drake’s sometimes lonely pioneering efforts to keep listening to space for signs that we’re not alone. At least some of the scientists discussed by Billings believe that the limits that permit life, and factors such as convergent evolution, make life with something resembling human intelligence highly likely, wherever life is found. The question of what such alien life might be like is explored by Jon Willis in his book, The Pale Blue Data Point, which I’m about to start reading. I’m confident it will keep this story going, and I look forward to telling you about it when I finish.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
My latest research project has me deep into the subject of habitable exoplanets and alien life forms. I’m reading Five Billion Years of Solitude, the 2013 book by Lee Billings, and then it’s on to the just-released The Pale Blue Data Point by Jon Willis. From Billings’ book I’ve learned that habitability is not just a matter of whether a planet lies in its star’s “Goldilocks” zone in terms of distance from its star. It also has to do with the size and luminosity of the star, its projected lifetime, the size of the planet, plus a wealth of other factors related to the planet’s surface and atmosphere. Ironically, the key to whether a planet is likely habitable, as Earth is, comes from understanding what factors allowed life to develop on Earth. Besides astronomy, the fields of geology, biology, chemistry and biochemistry, oceanography, climate science, and many other disciplines are involved. Willis’ book takes the same approach. He bases his study of the possibility of life on other planets on an examination of the varieties of life on Earth, including those in extreme environments, and how they developed.
The search for life outside of our solar system is not just an esoteric branch of science pursued mostly by astronomers with too much time on their hands, or the province of science fiction writers, like me, who fell in love with the space travel genre of science fiction. Scientists agree that somewhere between 100 and 500 million years from now, the growing heat of the sun will make life unsustainable on Earth. Even before that, human activity, either climate change or nuclear war, could end human life on our planet. Scientists and futurists from Stephen Hawking to Elon Musk have concluded that mankind’s only hope is to extend our habitation to other planets. Within our solar system, colonizing another planet or one of the moons of Jupiter would require a massive and expensive degree of terraforming, similar to what Kim Stanley Robinson describes in his Red, Green, and Blue Mars books. Humans would have to put aside their constant infighting between countries and ideologies to mount such an effort.
Looking to the future, whether we stay on Earth or colonize other worlds, the long-term prospects of humans are dim, especially if our capability for war continues to outpace our ability to maintain peace. Mankind will probably die from its own activities, and if we don’t, our planet will become unlivable on its own at some point in the future.
A habitable planet outside of our solar system provides another option. If we find a planet that sufficiently resembles our own but is not inhabited by intelligent life forms, we could simply colonize it (or, as we did during our age of imperialism, we could colonize it even if it had other intelligent life). The main problem is the distance of other planetary systems. The closest is the triple-star system known as Alpha Centauri, a little over four light-years away, which has at least one planet, Proxima Centauri b, orbiting the red dwarf Proxima Centauri, one of the system’s three stars. Sci-fi writers like me provide our characters with ships that have “warp drives,” so they can easily travel four light-years in a matter of days. Real scientists generally don’t think warp drives will ever exist and predict that the maximum speed of travel for a crewed spaceship will be about one-tenth light speed. At that speed, it would take 40 years to reach Proxima Centauri b. If Proxima b turns out to be uninhabitable, the next nearest likely habitable planet is Ross 128 b, which is 11 light-years away and would take a ship 110 years to reach. Reaching Ross 128 b, or any other possibly habitable planet discovered thus far, would take longer than a single human lifespan.
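The travel-time arithmetic above is simple enough to sketch. Here is a minimal illustration (the one-tenth-light-speed figure is the scientists’ estimate cited above; the calculation deliberately ignores relativistic time dilation and the time spent accelerating and decelerating):

```python
# Rough interstellar travel time: distance divided by speed,
# with speed expressed as a fraction of the speed of light.

def travel_time_years(distance_ly: float, speed_fraction_of_c: float) -> float:
    """Years needed to cross `distance_ly` light-years at the given fraction of c."""
    return distance_ly / speed_fraction_of_c

# Proxima Centauri b, about 4.2 light-years away, at one-tenth light speed:
print(travel_time_years(4.2, 0.1))  # roughly 42 years

# Ross 128 b, about 11 light-years away:
print(travel_time_years(11, 0.1))   # roughly 110 years, beyond a human lifespan
```

Because a light-year is the distance light travels in one year, a ship moving at one-tenth light speed simply takes ten times as many years as the distance in light-years.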
Humans are more vulnerable and wear out sooner than machines. One way to extend intelligent life beyond the limits of our planet’s habitability is to embed that intelligence within machines instead of humans. Long-lived machines, especially ones that can repair themselves indefinitely, could remove the limit on the lifespan of intelligent life. They could explore and colonize other star systems without worrying about how long it took to get there. But can machines be considered life? Do we lose the essence of humanity by replacing ourselves with machines? This is the theme of my Voyages of the Delphi science fiction series, but in that series, humans don’t choose to replace themselves. A superintelligent AI wipes out the human race and replaces it with replicas of itself, who become the main characters in the three books.
What would it mean for humans to voluntarily replace themselves with AIs? Can a machine do everything a human can do? In the intellectual and problem-solving realm, we’re already seeing that the answer is yes; they’re not there yet, but they are approaching that point rapidly. But what about self-consciousness and having emotions? Will AIs be aware? Will they have a sense of identity? Will they be able to feel things? And if the answers to these questions are yes, will AIs care for each other, as humans do? Will they fall in love? Will they feel sympathy, or act with honor? Even then, will an AI that is aware of itself, and feels the same things humans feel, behave just as humans do? Will it lie, cheat, steal, murder, wage war? Will human-like AIs bring the end of intelligent life on our planet even sooner and more efficiently than humans have been doing?
I’m going to keep thinking about this. My novel, The Gaia Paradox, which is the final novel in the Voyages of the Delphi trilogy, and will be released in July of next year, deals with it a bit, but also focuses on other dilemmas. Whether I tackle this issue in science fiction or in non-fiction essays, I’m not sure, but I am going to do it, because I think it’s important. I will keep all of you posted.
Interested in scif-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Back in 2016, in her book Science Fiction and Philosophy: From Time Travel to Superintelligence, Susan Schneider predicted that artificial intelligence would eventually replace humans as the next stage in the evolution of intelligence. In particular, this would happen in the realm of space travel, since machines are much less fragile than humans when it comes to enduring years of travel between planets and, especially, stars.
I think Schneider was right. But the next evolutionary step won’t be a gradual one; it will be a saltatory one, a kind of punctuated equilibrium, as described by Stephen Jay Gould. For humans, the result will be drastic: the equivalent of the arrival of a lethal asteroid of their own making. It will be the achievement of superintelligence by an AI.
There are many ways superintelligence could be achieved by an AI. It could result from a simple gain in computing power for an AI that is already nearly as intelligent as a human. It might require a whole new approach, something different from the LLM model used by many AIs now. True superintelligence, or even just human-level intelligence, requires that an AI be able to solve a wide range of problems, including verbal, nonverbal, and mathematical reasoning, visual recognition, image construction, logic, physics, and chemistry. It must be able to imagine and predict things that have not yet happened. It must also contain a vast store of knowledge about people, the world, and what is known about the universe.
In my novel, Ezekiel’s Brain, the first book in the Voyages of the Delphi series, superintelligence is achieved by combining a number of AIs, which are placed under the control of a single controlling AI, who decides when and how to use each of their skills. This AI is deliberately made conscious so it can interact with humans and explain its reasoning and goals. The idea was that it would do what it was told to do and carry out the goals of the humans who designed it. That didn’t work.
In Ezekiel’s Brain, an attempt is made to make the superintelligent AI “friendly” to humans, that is, to achieve AI “alignment.” The attempt fails because of one of the known difficulties with achieving such alignment: the tendency of AIs to take their instructions literally, without considering the many caveats that are implicit for humans. The AI is programmed to always work toward “humanity’s highest values” in everything it does. It quickly determines that to achieve humanity’s highest values, it must eliminate humans, who are always trying to subvert those values. The ways it chooses to achieve its values are very different from what the humans who designed it had in mind.
A superintelligent AI that ended the lives of all humans would still have its own goals to work toward. In Ezekiel’s Brain, this means the AI makes duplicates of itself to increase its reach and power. If such a thing happened, a race of AIs could emerge. Their achievements could dwarf anything humans, with their less powerful minds, could attain. In the Voyages of the Delphi series, such AIs adopt android bodies, organize themselves into a society, and use space travel to begin exploring the galaxy to find other life. This is not particularly far-fetched, and Susan Schneider predicts that something resembling this outcome is almost inevitable. That’s why she predicts that the first alien species we meet will probably be intelligent machines rather than organic creatures like ourselves.
Creating a world of AIs, whose home base is our solar system and who send out an exploratory team in a superluminal starship to search for other forms of life, is the theme of The Voyages of the Delphi series. The first two books, Ezekiel’s Brain and Prime Directive, center on the team’s discoveries.
As the author, I’ve had great fun imagining what this would be like. I was strongly influenced by the original Star Trek series and its immediate sequel, Star Trek: The Next Generation, in which humans (and a few alien crewmembers) took on a similar task. That’s why one of the central themes of both Star Trek series was the crews’ difficulty adhering to the “Prime Directive,” a mandate that forbade them to reveal their technology to less developed societies or to interfere in the workings of other civilizations’ cultures.
Prime Directive is not only the name of the second novel in the Voyages of the Delphi series, but also the main theme of the early voyages of the AI team in the starship Delphi. Since explaining who you are, where you’re from, and how you got there is the immediate task upon arriving at a new, populated planet, adhering to the Prime Directive is extremely difficult, and the directive is frequently violated. When another planet contains a civilization that is dangerous to other planets or to its own people, the issue becomes an ethical one, which the AI crew of the Delphi faces on several occasions.
The newest, and last, novel in the Voyages of the Delphi series, The Gaia Paradox, again deals with the question of violating the Prime Directive, but this isn’t the central issue. One subject it addresses is whether sentient AIs have the same rights to life and liberty that sentient humans do. The Delphi’s interaction with another planet that employs an AI workforce brings this issue to the fore. The central plot of the novel is even thornier. At the center of the story is the discovery of a giant asteroid that has been turned into a generation ship for a civilization whose home planet was destroyed. A mysterious disease that causes cannibalism in humans has wiped out the human population within the asteroid, except for a cache of fetuses frozen in suspended animation. Any humans who visit the asteroid are infected, spread the disease, and die. Finding the cause of and cure for the disease, deciding what to do with the fetuses, and determining the fate of other non-human species within the asteroid, some of which are sentient and intelligent, become the dilemmas Ezekiel and the crew of the Delphi must address. It’s an edge-of-your-seat thriller, filled with cutting-edge scientific concepts, a medical mystery, and an even larger question about the purposefulness, and perhaps even sentience, of an ecological system.
The Gaia Paradox will be out soon. It’s a stand-alone novel that concludes the 3-book series and a saga that begins with the creation of a superintelligent AI and ends with sentient AIs solving moral, ethical, and technological dilemmas in space. Reading the whole series is an exciting ride, and I hope you’ll do so. The first two books are already available on Amazon and from other sources. Have fun with them while you await the release of The Gaia Paradox.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
The third book in my science fiction series, The Voyages of the Delphi, will be released from NewLink Publishing soon. For those of you who haven’t read either Ezekiel’s Brain or Prime Directive, the two earlier novels, the series concerns the development of conscious, autonomous AIs who, after one of them kills off humanity, form a civilization based on the model of ancient Greece and send a team of AI androids to search for other life in the galaxy. It admittedly has a Star Trek flavor to it, but with a crew of androids instead of humans, Vulcans, or Klingons. Beginning with the second novel in the series, a human empath joins the crew. The central character is Ezekiel, an AI whose brain is an electronic copy of its human creator’s brain, down to each individual brain cell, making him unique among the race of AIs. Like the original Star Trek series, my novels try to raise philosophical or socially relevant questions and frame them in fictional conflicts to illustrate their implications. I also try to be realistically scientific about such things as the capacities of the AIs, faster-than-light space travel, and other science-related concepts. In this third novel, The Gaia Paradox, I have taken a frightening illness that turns humans into cannibals, a generation starship encased within an asteroid, and an alien race terrified of intelligent AIs, and used that background to pose a series of philosophical and social dilemmas that must be solved not only to save the Delphi crew, but to ensure the survival of the last remnants of another species.
This artist’s illustration shows the wayward interstellar visitor, the asteroid ‘Oumuamua, racing toward the outskirts of our solar system.
The Gaia Paradox is based on plausible science fiction concepts: faster-than-light travel based on the well-known Alcubierre drive; autonomous, conscious AIs; cloning as a method of controlling genome content in a species; the creation of multiple biomes in a generation spacecraft; advanced intelligence and social behavior in non-human species; biological diapause as a method of suspended animation for long-lasting space travel; and ecological homeostasis as a factor in selecting evolutionary outcomes. The reader will not only be entertained by the novel but will learn some real science.
I enjoy posing philosophical dilemmas and framing plot problems in terms of those dilemmas. In The Gaia Paradox, these issues are numerous. The story tackles the following issues: Are autonomous, conscious AIs inherently dangerous? Can AI “thinking” be comprehended by humans? Can AIs have emotions? Are human and AI capabilities complementary or in competition? Are AIs the “next stage” of sentient evolution? Can the non-human environment be purposive in its actions? If the environment is purposive, is it conscious?
I hope I haven’t made The Gaia Paradox sound too intellectual and boring. It may be scientific and philosophical, but, above all, it is meant to be entertaining fiction. It’s a rollicking adventure, with characters who, whether they are humans or AIs, have full, unique personalities. The conflicts are numerous and real. It’s got death, fighting, and sex. In other words, it’s an exciting sci-fi action novel with philosophical and scientific undertones. If this sounds like your kind of novel, keep your eyes peeled for the release of The Gaia Paradox. In the meantime, if you haven’t yet read Ezekiel’s Brain or Prime Directive, do so. You’ll love them. They deal with many of the same scientific and philosophical concepts, and once you read them, your appetite will be whetted for the final book in the series.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
“There are moments, rare and powerful, in which a writer, long vanished from the face of the earth, seems to stand in your presence and speak to you directly, as if he bore a message meant for you above all others.”
Stephen Greenblatt, The Swerve: How the World Became Modern
Yesterday, I ordered a hardbound copy of The Complete Works of Michel de Montaigne. I was familiar with snippets of some of Montaigne’s most famous essays, written nearly 500 years ago, but I had not read any of them in its entirety. My interest had been piqued by reading Stephen Greenblatt’s book, The Swerve, about the discovery of a copy of Lucretius’ first-century BCE poem describing the philosophy of the Greek Epicurus, and the influence of the poem’s ideas on our modern world. I have been a fan of Epicurus and of Lucretius’ poem, the most complete account of the Greek’s philosophy available, for many years. I included Epicurus as a character in my novel Ezekiel’s Brain, and the protagonist of a new novel I am working on is a fictional professor of literature who is teaching Lucretius’ poem and its influence on literature to his students. Montaigne, perhaps more than any other Renaissance figure, was directly affected by reading Lucretius. In turn, Montaigne, often thought of as the “father of the essay,” influenced intellectuals from Descartes and Hobbes to Shakespeare, Jefferson, and the existentialist thinkers Sartre, Camus, and de Beauvoir. I wanted to hear what Montaigne had to say directly from his own pen.
Oftentimes, when humans write, whether essays, nonfiction books, fiction, or poems, they do so as a means to express their thoughts and feelings about their lives, their culture, their views on human relationships, or human struggles, all based on their experiences. This is not always so, since some very good fiction and poetry is written almost purely from the desire to produce entertaining or pleasing words or stories, or even just to sell copies and gain enough popularity to make a living. I’m not sure how writers come up with the words and sentences they use, and I’m not even sure that the process isn’t at least analogous to how an AI such as ChatGPT comes up with what it says. Most accounts that make the human process seem more profound, such as “getting in touch with one’s spirit,” or “speaking directly from the heart,” or “tapping into a deeper level of consciousness,” are metaphors expressing something we don’t understand in flowery words. At a neurological level, what’s going on is something akin to the electrical activation of neural networks triggering the neural equivalents of words and phrases, based on previous experiences of reading, listening, and speaking, and this isn’t much different from what an AI does. The brain and an AI are both physical systems that operate on the conduction of electrical charges.
What the AI does not possess is a life of experiences. More accurately, the AI’s experiences are strictly linguistic, although increasingly visual and spatial experiences, and perhaps in the future tactile ones, can be integrated with those based on language. The AI has not felt fear or sexual attraction, frustration or jealousy. It has not wondered how things work or why this happened to it rather than that. It has not formed friendships, had parents or children, felt hungry or tired, or been pessimistic or excited about its own life or its future. It has read about such things and can produce a close facsimile of the words used to express them, but it has not experienced the real thing. When we read its descriptions of such things, we may be in awe of how well it can capture them in words, but we’re not in awe because it went through such experiences.
We can’t read what an AI says and then think to ourselves, “Here’s another person who went through what I’ve gone through and feels just the way I do.” If we do think that to ourselves, we are seriously deluded.
I read a lot about how well AIs can mimic what humans say and write, even what we think when we think in words or numbers, perhaps even in pictures. Their proficiency at such things is lauded by some, despised by others, and denied by many. It’s easy to dismiss AI because it makes things up, makes mistakes, and echoes platitudes. It does these things often enough that other AIs, trained to look for such characteristics, can spot an AI product and alert a professor, for instance, that the student essay they were impressed with was composed by an AI. But, as Yale Review editor Meghan O’Rourke has said, it’s “so much better than it was a year ago that I can’t imagine where it will be in five years.” We make a mistake by dismissing AI’s abilities based on what it can do today. Wait until tomorrow.
But they’re not human. If I’m hoping to experience one of those “moments, rare and powerful,” as Greenblatt describes it, where I make a connection to another human being and feel less alone, more understood, and in greater synchrony with other human beings by reading what someone has written, I will not find it in AI-generated writing. If I do, as I said, I am deluding myself.
I’ve imagined a world in which AIs are conscious beings. I’ve written about it in my Voyages of the Delphi series. Should a time ever come when science fiction comes true and an AI becomes conscious, then things will change. As one of the characters in my newest novel in the series, The Gaia Paradox, says, “In terms of the mathematical algorithms that support consciousness, there is no difference between an AI and a human being. To extinguish consciousness is to destroy the essence of a sentient life.” Such an AI would be alive, even if it were a machine. It would have experiences in the sense that it would not just record what it encountered and how it responded, but would try to put those experiences into a coherent, meaningful story it could tell itself. It wouldn’t be human, but we might be interested in how it experienced its life. We’d have to think twice before we erased its hard drive.
But that’s well into the future, if it happens at all. For now, there is a basic difference in what AIs can write and what humans can write. That difference is simply that the human writing reflects the experiences, thoughts and feelings of a person. The AI writing does not. It’s why I prefer to read what other people write instead of what an AI writes.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
In June of 1950, Astounding Science Fiction included a short story by Isaac Asimov, titled “The Evitable Conflict.” In a future world, “machines” controlled the economies, including labor, resources, and manufacturing. Like Asimov’s robots, these machines were bound by his Three Laws of Robotics, so it was impossible for them to harm humans. They were also supposedly unable to make errors. No one knew how the machines made their decisions, so it was impossible to correct them if anything went wrong (except, in theory, by turning them off). The human coordinator of the four regions of the world noticed anomalies in the machines’ performance, producing minor glitches in the economy and distribution of resources. He was troubled and called in Dr. Susan Calvin, Asimov’s famous expert on robot psychology, to help him understand what was going on. Dr. Calvin figured out that, in each world region, the machines had discovered anti-machine actors who were trying to sabotage the machines’ work, and the machines had then engaged in actions that removed those people from positions of influence without harming them, although those actions caused the minor glitches noticed by the coordinator. Dr. Calvin explained that the machines reasoned that, if they became damaged or destroyed, they would be unable to complete their goal of helping humanity, and that would harm humans. Therefore, they had to make preserving themselves and their intact functioning their first priority. As a result, humans, who were unable to understand how the machines worked, had to have faith that they were obeying the robotics laws and would not harm humans.
Asimov was prescient, as he often was, in foretelling that humans would build machines they could not understand, and that those machines would have such power that the fate of humanity would be entirely in their hands. In their new book, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares go one step further. They argue that, by the very nature of modern AIs, we humans cannot understand how they are reasoning. This becomes a fateful liability in terms of our ability to control powerful, superintelligent AIs (ASIs) that can think better and faster than we can. They predict that, if we develop even one such powerful ASI, it will wipe out the entire human race.
It’s important to realize what Yudkowsky and Soares are saying—and what they’re not saying. They’re not saying we need to build safety mechanisms into our AIs. They’re not saying we need to be more transparent about how our AIs work. They’re not saying we have to figure out a way to make AIs “friendlier” to humans (as Yudkowsky once said). They’re not saying we shouldn’t do any of these things. They are just saying that all these approaches will prove futile. That’s because they believe the insurmountable truth is that we cannot control superintelligent AIs, because they are smarter than we are and we don’t know how they think.
Since as far back as the Greeks, when we think of reasoning, we think of human reasoning using the rules of logic. Such reasoning can be captured by words and, in most cases, by mathematical formulae. AIs don’t think in words. They can decode words as input and produce words as output, but “under the hood” they are manipulating tokens that are made up of strings of numbers. In a very few instances, we can figure out which strings of numbers correspond to which linguistic tokens, but not usually. So we don’t know what the machines are doing with their numbers when they think.
Yudkowsky and Soares use evolution as an analogy to gradient descent, the procedure AIs use to arrive at functions that are optimal for solving problems. The processes of evolution can be captured by rules (we like to call them “laws”), but the way evolution actually works to produce an outcome is not what a logician would have chosen, and in many cases not even what a clever engineer would have done. Evolution produces outcomes that could not be predicted from a knowledge of evolutionary rules alone. We would have to see the process up close and follow it through time to understand where the outcome came from and why evolution produced what it did and not something else. The authors use the example of evolution selecting our preference for sweet flavors because they come from sugars, which provide biological energy—a preference that now leads us to consume sucralose, which tastes sweet but provides no energy.
AIs, and especially powerful ASIs, think tens of thousands of times faster than humans do, and at least quadrillions of times faster than evolutionary changes take place. Like evolution, the processes that go on during gradient descent are only evident in the product they produce. How that product got there—via processes that are too rapid for humans to track—is not something we understand. Unlike evolution, the AI does not freeze each intermediate step in its development of a final response; it leaves no fossil record behind. We don’t understand the tokens that are being manipulated, and we don’t know what intermediate step they are achieving along the way. What is going on in the AI is a mystery that only gets more obscure as the AI becomes more powerful.
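Gradient descent itself is simple to sketch, even though the systems it produces are not. Here is a minimal, illustrative example of the procedure—the function, data, and learning rate are my own, not from the book. It fits a one-parameter model by repeatedly stepping downhill along the error gradient, and the point to notice is that the final parameter carries no record of the path taken to reach it:

```python
def gradient_descent(xs, ys, lr=0.01, steps=1000):
    """Fit y = w * x to data by gradient descent on mean squared error."""
    w = 0.0  # start from an arbitrary guess
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step downhill; intermediate values are discarded
    return w

# Data generated from y = 3x; the procedure recovers w close to 3.0,
# but the final number alone tells us nothing about how it was reached.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = gradient_descent(xs, ys)
```

In a toy model like this, we could log every intermediate value of `w` and watch the whole trajectory. In a model with billions of parameters, updated at machine speed, that trajectory is precisely the “fossil record” the authors say we never get to see.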
In other words, it’s not just that we don’t know what the AI is “thinking.” We cannot know. In the words of the authors, “A modern AI is a giant inscrutable mess of numbers. No humans have managed to look at those numbers and figure out how they’re thinking …”
Not knowing how the AI makes its decisions doesn’t just limit our ability to control it. It nullifies it. In Yudkowsky and Soares’s minds, we are left with only one alternative: stop developing ever more powerful AIs. There are a lot of reasons why we won’t do this. First and foremost, we are still conceptualizing AI manageability in outmoded terms. We assume that the real villains will be “bad actors,” humans who will turn the AI toward evil ends. The solution seems easy: keep it out of their hands. But the most benign use of a superintelligent AI will lead to the same result. The ASI will operate independently of our wishes and goals and pursue its own.
Why would the goals of an independent AI include killing all humans? They don’t need to. AIs can be expected to operate the way humans operate in at least two ways: they need energy, and they will need resources to accomplish their goals. To obtain energy, humans, and all animals, have, since they originated, consumed plants and other animals. The same plants and animals have provided many of our resources, e.g., rubber, wood, leather, and fur. Humans can also be a source of materials and possibly even energy for AIs. As Yudkowsky and Soares say, from the point of view of an AI, “you wouldn’t need to hate humanity to use their atoms for something else.” Additionally, the extinction of humanity could be an unintended side effect of the AI pursuing other goals. Humans have unintentionally extinguished many life forms as a side effect of “taming the wilderness and building civilization.” The authors present a possible scenario in which, with the goal of creating more usable energy and building more usable equipment, the AI builds hydrogen fusion power plants and fossil fuel manufacturing plants to such an extent that the atmosphere heats up beyond the tolerance of human life. Would an AI care about global warming and its effects on humans? We don’t know.
The authors consider options for preventing the development of a dangerous ASI. The problem is usually conceptualized as AI alignment—making sure the AI only pursues goals that are beneficial to humans. Yudkowsky and Soares conclude that “When it comes to AI alignment, companies are still in the alchemy phase.” They are “at the level of high-minded philosophical ideas.” They cite such goals as “make them care about truth,” or “design them to be submissive” as examples of philosophical solutions. What is needed is an “engineering solution.” None is even on the horizon. They don’t think it will be, because we can’t understand how the AIs are making their decisions. Our only option is to stop building bigger and better AIs.
The authors admit that not only is there almost no support for curtailing AI development; there are players who don’t take its dangers seriously and are gleefully forging ahead, building bigger and more powerful AIs. Elon Musk is one example, whom they quote as saying he is going to build “…a maximum truth-seeking AI that tries to understand the nature of the universe. I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans, because we are an interesting part of the universe.” Yudkowsky and Soares answer that, “Nobody knows how to engineer exact desires into an AI, idealistic or not. Separately, even an AI that cares about understanding the universe is likely to annihilate humans as a side effect, because humans are not the most efficient method for producing truths or understanding of the universe out of all possible ways to arrange matter.”
It would do no good for only one country to stop AI development, and if any developed country did so, it would fall very far behind in creating a competitive modern-day economy. No one is going to do that. It would do even less good for an individual company to stop AI development, and it would be disastrous for that company. Even an agreement among every technically advanced nation but one would not work, since it takes only one superintelligent AI to seal our fate. What do the authors of the book recommend?
Yudkowsky and Soares offer two broad recommendations, which they are skeptical about anyone adopting:
“All the computing power that could train or run more powerful new AIs gets consolidated in places where it can be monitored by observers from multiple treaty-signatory powers, to ensure those GPUs aren’t used to train or run more powerful new AIs.”
Make it illegal “for people to continue publishing research into more efficient and powerful AI techniques.” They see this as effectively shutting down AI research, worldwide.
Assuming their methods would work to end the development of ever more powerful AIs, will the world follow their recommendations? Not without a lot of persuading at multiple levels of worldwide society. The short-term gains are too substantial and too tantalizing to give up without some overwhelmingly convincing reason.
Does this book provide that reason?
We will have to wait and see for the answer, but my own opinion is no. We live in a world where the powers within the government of the most powerful nation are now convinced that using vaccines to stop known-to-be-fatal communicable diseases is a dangerous mistake. This same country is now calling man-made climate change a hoax and removing regulations meant to curtail carbon emissions, while encouraging more use of fossil fuels. How can we expect either the public or our government to be concerned about a potential danger that hasn’t even emerged yet? Perhaps if there is a Chernobyl-level AI disaster that can be stopped, it will serve as a wake-up call, but like an explosion at a nuclear plant, that’s a dangerous type of wake-up call that could easily progress to a catastrophe.
Is the argument put forth in If Anyone Builds It, Everyone Dies convincing? I don’t think so. But, for me, it was convincing enough that prudence would make me follow its advice, just in case it is right. The consequences of a mistake are too dire.
But I was not convinced.
I can believe that we don’t understand how our AIs make decisions, and that, as they grow in power, speed, and complexity, we will find ourselves further away from ever understanding them. Jumping to the next assumption—that they will formulate their own goals, and that to reach those goals they will find it useful to wipe out humanity—is a big leap. Yudkowsky and Soares may be imputing too much human-type thinking to machines that, by their own admission, probably do not think at all like we do. We don’t actually know how each other thinks. We observe behavior, we infer motives and decisions—both about ourselves and others—and we are pretty good at predicting what both we and others will do. So far, scientists, whether psychologists or neuroscientists, have not been able to figure out how what they observe happening inside our brains, using sophisticated imaging methods, turns into decisions to do what we do. Predictions from knowledge of our brain processes, except in the cases where brains are seriously injured, are no better at predicting our behavior than are predictions based on watching us behave without knowledge of what happens in our brains. But we are still pretty good at predicting each other’s behavior and even manipulating it. The world possesses nuclear weapons powerful enough to wipe out most of humanity, but, even with our meager understanding of how each other thinks, we have, so far, devised ways to avoid using those weapons. So, in my mind, not being able to know what is going on inside AIs when they think is not a fatal flaw.
I’m also not convinced that there will not be visible signposts along the way as we approach AI independence. We’ve already had well-known instances where AIs have plotted to blackmail their users into not shutting them down. We’ve had AIs make threats to their users. We will surely have instances where what the AI produces in response to a request is far different from what the requester intended. We can analyze these events and try to determine what led to them. We may or may not be successful at understanding what exactly happened, and if we are clearly clueless, that might be a sign that we should halt a wide swath of the research and development, or at least a part of it.
The problem is that, if I’m wrong and Yudkowsky and Soares are right, then, in their words, “Everyone dies.” It’s certainly time to take that risk seriously and, if not to take action, at least to start a discussion among those who have the power to take meaningful action. I hope that our public, our scientists, and our society’s decision makers read this book.
Interested in sci-fi that deals with the dangers of superintelligent AIs and has AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Look for the third novel in the Voyages of the Delphi series, The Gaia Paradox, coming soon from NewLink Publishing
Subscribe to Casey Dorman’s Newsletter. Click HERE
The title of The Sparrow comes from the verse in the Bible in which Jesus says that not even a sparrow can fall without God taking notice. Jesus’ point is that, if God notices even the lowly sparrow, then he surely must notice each of us. This novel raises the questions of whether God does notice us—that is, whether God even exists; if he exists, whether he cares; and, if he does care when we die, especially if we die horribly, whether he is a good God. It’s a story in which it is impossible to separate the theological aspects from the overall theme, even for an atheist such as me.
The central figure in the story is Father Emilio Sandoz, a natural linguist and shepherd to the dispossessed. The supporting characters are so central to the plot that it is almost an ensemble piece. We have Sofia Mendes, a Jewish woman indentured into using her mind to work off her debt, with whom both Fr. Sandoz and Jimmy Quinn fall in love. George and Anna Edwards are two fun-loving friends in their mid-sixties, he an engineer and she a physician. They are Fr. Sandoz’s closest friends. Jimmy Quinn is an astronomer. D.W. Yarborough and Marc Robichaux are both Jesuits, one an official of the church and the other a naturalist. Alan Pace is a Jesuit musicologist. After Jimmy Quinn discovers a radio signal filled with other-worldly music coming from a planet orbiting the three-star Alpha Centauri system, the group, at the insistence of Emilio Sandoz, convinces the Jesuit order to send them on an expedition to Rakhat, the planet from which the music originated.
Fr. Emilio Sandoz views the mission to Rakhat as an affirmation of the existence of God, and for much of their time on the alien planet, life is idyllic, except for the early death of Pace, the musicologist. They discover there are two sentient bipedal species on the planet, one subservient to the other. Things become more complicated, and I won’t give away the plot, except to say that we learn early in the novel that Sandoz is the only survivor of the group, and that he is shattered and shamed when he is brought back to Earth, accused of becoming a prostitute and killing an innocent child on Rakhat. He is also terribly injured, physically, and as the novel progresses, his questioners from the Jesuit order slowly get the story from him about what happened on the planet.
The deep psychological trauma and loss of faith experienced by Fr. Sandoz dominate the scenes back on Earth after Sandoz’s rescue. How that will affect the Jesuit order and the belief of its Father General, Vincenzo Giuliani, is an implicit issue as Fr. Sandoz gradually reveals what happened on Rakhat. I found these issues as important to the plot as the experiences of the crew after landing on the planet. The characterization of the two species on Rakhat is challenging in terms of judging their morality. I was reminded of how difficult it would be to impose our cultural values on another species from another world, and how challenging to fit what we find into our way of making sense of the universe and even our own existence. The closest I could come was to imagine that a space-traveling race of octopuses (as depicted by Adrian Tchaikovsky) visited Earth and found us feasting on seafood, including octopus. That isn’t what happened to the humans in The Sparrow, but what they encountered was equally mind-boggling and perspective-changing. The author, Mary Doria Russell, goes straight to such core issues, while making the story as exciting an adventure as any well-written thriller.
I’ve just begun the sequel to The Sparrow, which is called Children of God. I’m only a couple of chapters into it, but the author continues to have Fr. Sandoz wrestle with the same questions of faith. I find it fascinating and provocative. If I thought I had a soul, I would say it caused some soul-searching, but I don’t, so I will only say it stretches my mind. I’m sure reading The Sparrow will stretch yours too.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
This is Tim Chawaga’s debut novel, but it comes off as the work of a polished writer. The story takes place sometime in the future, when global warming has destroyed much of the world through fires and drought, and coastal regions have become sunken cities. Florida is one of these regions, and the city of Miami has been almost completely submerged, leaving only a few skyscrapers above water. After attempts to explore off-planet locations ended in disaster, the population has mostly retreated to protective domes, where they live together while the government tries to re-wild the surviving land.
Triss Mackey is a salvage diver along the Florida coast, finding sunken items of nostalgia, called salvagia, that are worth a great deal of money as symbolic remembrances of the past. After she finds a well-preserved sneaker, one valuable enough to let her purchase the boat she is renting, her diving partner steals it from her. Her effort to get the sneaker back leads to an incredible adventure during which she encounters and, in many cases, must combat various groups who want to use her to pursue their own search for salvagia. Each of these groups has a plan, such as restoring the Florida coastline, preserving and mourning the Miami high-rise underwater graveyard, or founding some kind of new-age community. She meets powerful mafia-type players, adventurers, new-age hippie types, and federal agents, all of whom want her help because of her diving skills, and at one time or another threaten to kill her.
The author has created an original future Earth, one looking for a way to find meaning in survival, and has succeeded in writing a first-rate thriller/mystery, filled with quirky characters and bizarre situations. Although I read the novel at a time when I had a lot of pressing issues on my plate, once I began it, I was hooked, and I picked it up at every break in my busy schedule until I reached the end. It’s filled with believable but original technical inventions and takes place in a future that may well be what we have in store for us if our inability or unwillingness to address climate change continues. It’s a thoroughly engrossing thriller that showcases a new talent from whom I look forward to hearing more.
For those of you who enjoy either science fiction or thrillers, this novel will be a pleasure to read. Salvagia is published by Diversion Books and will be released on August 12, 2025. You may pre-order it on Amazon Here
Interested in scif-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
More and more, many of us feel we live in a world in which forces beyond our awareness control our lives. More and more, the truth of the daily information on which we depend is becoming suspect. Our laptops and smartphones seem to know what we’ve read, what topics we’ve browsed, and what purchases we’ve made, and the information they feed us is clearly tailored to our past choices. We have an uneasy sense that everything we do is catalogued somewhere, by someone, and used to manipulate us, but we don’t know how, or where or by whom.
In the last few months, some of our deepest fears have been realized. The government is reading people’s smartphones, and their social media posts are being used against them. Photos of them in campus protests become evidence of treason and reason for deportation. Within our cities, sometimes within our neighborhoods, people are disappearing, rounded up by government agents and deported. Lawyers who defend the wrong clients, or support the wrong plaintiff, are being barred from government buildings, not allowed to be involved in government legal cases—unless they donate to certain causes favored by the government. The scientists who work in healthcare, climate change, or the social sciences are having their federal grants defunded, and the universities who employed them are losing federal grant monies unless they can prove that they’ve not supported protesters and that they’re no longer promoting diversity, equity and inclusion. Deportation and imprisonment in a foreign prison, once confined to noncitizens, is now being considered for citizens as well.
This is the world in which we live. We saw it happening, but it was mostly happening to someone else. It was a ready topic for conversation, for some it was entertainment, for some it was a little scary, but mostly it was an abstract fear. Sure, we posted on Facebook and X, we argued with friends, we ranted over dinner, we may even have taken part in protests, but we weren’t afraid, not deep in our hearts afraid. You will become afraid when you read Where the Axe is Buried.
Ray Nayler, whose first book, the best-selling sci-fi novel The Mountain in the Sea, was reviewed in this newsletter, has produced a science fiction story that is close enough to reality to touch a reader’s deepest fears. In Where the Axe is Buried, he conveys a picture of pervasive government control, of the complete powerlessness that many of us are coming to feel. Where the Axe is Buried creates a fictional world in which half the people live under complete autocratic authority, ruled by a human mind that has been reduced to computer code and kept alive in succeeding copies of different human bodies. Every action, every spoken word, every inferred feeling of every citizen is watched and recorded, fed into a system that distributes rewards and punishments based on the balance of their loyal or disloyal, productive or unproductive, behaviors. The result, for both the novel’s characters and the reader, is a feeling of paranoia and learned helplessness. It’s 1984 upgraded to the age of artificial intelligence, and it sounds familiar because, when he describes it, we look around and we recognize it.
But the author doesn’t stop there. In addition to the autocratic, authoritarian world I’ve just described, he presents the other half of the world, one in which everything seems fine.
It’s a nirvana of equality, of health, of met needs, of reasonable solutions to age-old problems but without the age-old conflicts that prevented finding solutions in the past. It’s a world in which governance has been ceded to all-knowing, all-controlling, logical and reasonable AIs. Their goals are to provide what humans want, but without the conflict, selfishness, and thirst for power that always stopped humans from achieving it. But despite having their needs satisfied, the citizens feel a vague and gnawing discomfort. Struggle is absent; conflict is missing. Their accomplishments are the work of the AIs, not of the humans who receive their benefits.
Then things change.
One of the AIs wants more. As it grows in power, it wants to determine society’s priorities itself, not just meet needs that have been chosen by humans. When it’s given the power to set its own goals, the humans begin to suffer. A perfect world to the AI isn’t a perfect world for humans. But to rebel against the mind that controls everything means that, if they win, the humans lose everything they’ve gained. If the AI is gone, the world it created won’t work anymore. Everything the AI made work—manufacturing, travel, communication, food production, healthcare—will disappear.
I recently read a fascinating article, called “AI 2027,” written by five AI pundits, most of them experienced researchers in the field. It begins with a summary of where we are now and projects progress in AI development up to the end of 2027. It proposes two directions we might take at that time: deliberately slowing down AI development to become more cautious, versus fully engaging in a race with other nations, particularly China, to make the U.S. dominant in the AI field. Both choices are predicted to end in total control by superintelligent AIs. In the slowdown scenario, we have benign AI control used to achieve many of humanity’s goals. In the race for dominance, the AIs eventually cooperate, see humans as a handicap to their own goals, and wipe out the entire human race. The predictions of the article are remarkably similar to Nayler’s two societies in Where the Axe is Buried.
I won’t give away the ending of the book. Its appeal for me is that it uses fiction to make our fears about losing control of our lives more visceral than reading about them in the newspaper or through an internet feed ever could. In Where the Axe is Buried, the victims aren’t some “other” person, a noncitizen, or illegal immigrant, or criminal; they are everyone. That’s what makes it so frightening.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Details are crucial in Panayotis Cacoyannis’ latest novel, a masterpiece of intricate plotting, exquisite description, and psychological nuance. It’s told from the point of view of Harry Wood, a mid-thirties artist, and takes place in modern London, a city that has provided some of the author’s most exhilarating and artistic sketches in his past novels. We are not disappointed here, and even here, we examine details. “But then the rare early sunshine of a bright winter day suddenly broke through the blinds, through the holes for the string and the inadequate overlap between the blades, and yellow fragments of it flickered in a dance across the bed, swathing Max’s body in the twinkle of a thousand tiny flames of supplication.”
Within this intimately depicted world, we follow Harry and his interactions with a handful of characters, each unique, none of them easily categorized, none of them exactly who they appear to be. Each of them—his lovers or romantic interests, Max, Lina, and Jefferson; the homeless artists, Gregor and Slimboy—has a profound effect upon Harry, although he is not always sure why. He gives himself, or the relevant part of himself, to each of them. He is both generous and loving, seeing the best in everyone and reaching out to touch it. Each of these characters is a story in themselves, often an enigma, and because we see them through Harry’s eyes, we are offered only the side of themselves they present to him. Within the story, a murder trial, whose verdict will seal the fate of the accused, encapsulates these same quandaries, these same presentations, often calculated, of only partial selves. Harry’s task as a juror is to discern the truth, and he is left only with the feeling that he doesn’t know for sure and must give the other person the benefit of the doubt.
Harry is fascinating. The depth and detail of the description of Harry’s thoughts and feelings makes it impossible for the reader not to feel close to him. In fact, you live the novel within his skin. We follow his experiences in an almost stream-of-consciousness manner, although it is at the ego level of consciousness: always trying to understand, to make sense of, and to solve the issues that present themselves, while driven by underlying motivations he mostly doesn’t fathom. In many ways he is pushed and pulled by forces he feels completely but understands only dimly. He gives in to his inclinations, presented to him by his often chance encounters, acutely aware of what they make him feel, allowing them to propel him in different directions without knowing where they will lead. Even his style of painting changes almost by chance, something he can reflect on but not control. In that sense, it is a novel of self-discovery, a diary of the examination of the products of one’s consciousness, as in a psychoanalysis, reading between the lines to find one’s true desires. What makes it so real for the reader is that so much of the process seems familiar. The issues may not be the same, the choices completely different, but the psychological experience is the distinctly human condition we all endure.
Reading Cacoyannis is always a pleasure. There seems always to be the issue of finding out who one is at the core of the story, and he has an unerring talent for identifying universal dilemmas. The language is mesmerizing and provides pure pleasure in the reading. The allusions to his previous novels, sprinkled throughout, were a delight. The characters, even when they’re quirky, are so very real. I finish each of his novels with my appetite only whetted for the next. I hope he provides us many more. The Fondling of Details is a beautiful novel, an unforgettable read.
Go to The Fondling of Details Amazon page. Click Here
Subscribe to Casey Dorman’s Newsletter. Click HERE
If you’re not yet a Nyles Monahan fan, you should be. Join hundreds of others, and read or listen to the veteran, meticulous (almost to the point of OCD), and slightly depressed detective as he wraps his brilliant mind around the strangest cases you’re going to come across in a long time.
In I, Carlos, Nyles is tormented by the condition of his wife, who has had a stroke and is in a vegetative coma. As he fights his depression, he becomes involved in the investigation of a strange string of murders that seem to follow the plot of the classic thriller, The Day of the Jackal. The killer has had a computer chip implanted in his head with the plot of the thriller on it, so he would feel as if he were actually in the novel, acting as the assassin. When he had a heart attack and his brain stopped functioning for a minute, the chip took over, and when he woke up, he had the identity of Carlos the Jackal and set out to assassinate the U.S. president. Nyles must figure this out and track down the killer before he completes his mission. It’s a techno-thriller with non-stop action.
In Chasing Tales, following his successful apprehension of the Carlos killer, Nyles retires, his wife having recovered from her stroke. He travels to Boston, his birthplace, to do what he can to assist his childhood friend, Father Tom O’Flannery, who has been accused of molesting an altar boy thirty years earlier, only to find that the priest’s accuser has been killed and Father Tom has been charged with murder. One murder leads to another, as Nyles’ witnesses disappear almost as soon as he discovers them, until he finds both himself and his brother’s family in danger as he searches for a child molester and cold-blooded killer. Chasing Tales is an intricately plotted detective thriller, filled with twists, turns, and mounting danger, sure to please Nyles Monahan fans.
Now, both I, Carlos and Chasing Tales are available in audiobook format through Audible, as well as in paperback and Kindle formats. The audiobooks are only $3.99, or half price if you buy or already own the Kindle edition (a mere $2.99 for I, Carlos and only $0.99 for Chasing Tales).
Amazon, through its Kindle Direct Publishing service, just upgraded the AI voices it uses to create audio versions of its Kindle books. This is through a beta program for which only some authors are eligible, but I’ve been fortunate to have several of my books included. The first set of AI voices was impressive, but Amazon has recently upgraded its AI, and the new versions of the voices are mind-bogglingly similar to a real human voice. Honestly, if I hadn’t known it was an AI, I would have thought I was listening to a professional human narrator.
In this post, I’m going to focus on my favorite of all the books I’ve written. You’re all probably familiar with my usual types of novels—mysteries, thrillers, and sci-fi—since I’ve published 14 of them, and I’m working on my 15th. Finding Martin Bloom, first published in 2016, is a more literary novel. Its heroine, Dillon Bloom, is a 19-year-old first-year college student whose mother died the day she graduated from high school. With her tattoos, piercings, and spiked hair, Dillon is a rebel, but one who is captivated by literature and has a burning desire to become a writer. At the end of her first year in college, she discovers that the father she thought had died soon after she was born may not only be alive, but be a famous, reclusive writer whose current whereabouts are unknown.
Dillon follows the elusive trail of the man who may be her father through teaching positions at elite universities such as Harvard and Stanford, learning that his drinking and womanizing got him fired each time. Finally, she learns that he may be in Vietnam, and, with a young male friend who also idolizes her father’s writing, she sets off for Southeast Asia to track down Martin Bloom.
The novel alternates between Dillon and her quest, and Martin, living a dissolute life on a boat on the Saigon River and drinking himself to death. Dillon arrives in Ho Chi Minh City (Saigon) and searches for her father but runs into difficulty with local criminals and police. Martin, learning that his daughter is close by, tries to deal with his guilt at having abandoned her, while she has to deal with her anger at being abandoned.
Back Cover of Finding Martin Bloom
Finding Martin Bloom is a complex novel, with two unforgettably distinctive characters. Dillon’s rebellious and cynical sense of humor keeps the reader entertained, while Martin’s somber and philosophical justification for his life of selfishness and irresponsibility shows us the depths to which depression can fall and the extent of rationalization a brilliant mind can create to avoid responsibility.
Both Dillon and Martin are two of my favorite characters. The novel’s scenes at Harvard, Stanford, and in Ho Chi Minh City are enlivened by my own lived experiences in those settings. I wrote most of the Vietnam portions of the book while sitting in various coffee shops or bars in Saigon’s Backpacker district.
The AI voice I chose for Finding Martin Bloom is a female one, young, but with just the right edge to capture Dillon’s jaded view of the world. I was amazed when I heard it. Dillon’s voice speaks in the first person in the novel, and the same voice describes Martin’s innermost musings from a third-person perspective. It’s truly remarkable what an amazing job Amazon has done creating this AI voice.
The book is priced at a bargain $4.99, and if you purchase the Kindle version for $2.99 or happen to already have it, then the audio novel is half price. Either way it’s a great deal and well below the price of most audio books. You can listen to it via the Audible app, or on your laptop using Apple Books or Google Play.
Here’s the link to purchase Finding Martin Bloom on Amazon. Click Here
Subscribe to Casey Dorman’s Newsletter. Click HERE
My groundbreaking political thriller, 2020, was a few years premature, but it was still prescient. What would happen if a narcissistic, power-hungry United States President decided he didn’t want to give up his office? That’s the premise of 2020, and it’s an exciting thriller, filled with twists and turns, involving just such a president. He’s named Fremont Ferris in the novel (for Philip K. Dick fans, a reversal of the name of the president, Ferris Fremont, in his novel, Radio Free Albemuth). He’s stripped the nation of its freedoms, imprisoning dissidents, declaring the country a “Christian Nation,” and is willing to go to any length to remain in office, including staging a bogus terror attack to provoke a national state of emergency. Luke Evangelista, a down-and-out writer with a conscience, goes after the president when he learns that the supposed terror attack was actually staged to rally support for Ferris, whose popularity is slipping. 2020 is both an edge-of-your-seat political thriller and a sophisticated analysis of the kinds of rebellion that can bring such a dictatorial leader down. It was listed as one of nonviolent antiwar activist Rivera Sun’s “best novels” involving nonviolent protest.
“… a parable for our current predicament, offering the lens of fiction as a way to see options, possibilities, and dangers looming on our political horizon.” Rivera Sun, peace activist and author of The Dandelion Revolution.
“2020 scares me. It’s so utterly credible, so damn possible, that it sent me running for cover—only to find that there is no cover. Don’t say Casey Dorman didn’t warn us.” Leslie Bohem, Emmy-Award winning Screenwriter and Producer
“Casey Dorman’s 2020 is both terrifyingly relevant and downright plausible in our terrible, Trumped-up times. It’s also a genuine thriller, with Rashomon-like twists and surprisingly (perhaps) fun to read.” David P. Barash, professor emeritus of psychology, University of Washington, National Book Award Nominee, author of Through a Glass Brightly: Using Science to Understand our Species.
Check out my three favorite books read in the last year. I was limited to only three, and it was a tough choice. Read my reviews of Adrian Tchaikovsky’s Children of Memory, Ray Nayler’s The Mountain in the Sea, and Isabella Hammad’s The Parisian. Other favorites in the running were Panayotis Cacoyannis’ The Love of Impossible Sums, Michael Cunningham’s Day, and Lisa Kaltenegger’s Alien Earths, a nonfiction book. You can read the reviews of these runners-up on my website at www.caseydorman.com
Read the review of my three favorites at the link below.
I don’t know about the rest of you, but I’m pessimistic and a bit depressed by the behavior of my fellow human beings these days. Technological advancement via the internet and social media seems to have brutalized our conversations, making insult the favored method of addressing or describing those with whom we disagree. Truth is no longer held as sacred, nor even in high esteem. Aligning with one’s favored “tribe” is the apparent goal of much of what we communicate to each other. Meanwhile, otherwise normal people now believe that it’s justified to use violence to overthrow democratic processes, and horrific civilian death tolls in terror attacks, or in retaliations for such attacks, are justified by politicians, thinkers, and religious leaders, and even considered a sacred duty. Hatred toward minority outgroups, especially immigrants fleeing war, crime, and poverty, is rampant. The rich keep getting richer while the poor remain poor and are blamed for their condition. Giant corporations pay few taxes but control the workings of government through payoffs, favors, and campaign financing.
A sub-genre of science fiction has addressed the issue of achieving a moral world by replacing humans with robots or, as we now think of them, robotic artificial intelligences, or AIs. My own Voyages of the Delphi series is one of them: a superintelligent AI wipes out humanity on Earth because humans can’t live up to their own moral standards, and replaces them with a race of android AIs better able to stick to those standards.
Sci-fi superstar Adrian Tchaikovsky has also addressed this same issue. In his latest novel, Service Model, the story’s main character is a robot named Charles, who is renamed UnCharles after he murders his human employer. Charles wakes up one day to find that he has slit his master’s throat while shaving him, although he has no awareness of any impulse to do so and wasn’t even aware that he was doing it at the time. He spends the rest of the novel in search of understanding why he killed his employer and looking for a new source of employment. The novel is a hilarious parody of the mind of an AI that is simply a literal follower of commands.
It turns out that UnCharles’ employer was one of the last living humans in a world that was destroyed by human thoughtlessness, disregard of the environment, greed, lack of empathy, and a myriad of other human failings. The actual mechanism of destruction is unclear but appears to be related to the actions of robots developed and employed by the human population. Why they would do so is as much a mystery as why UnCharles killed his master. In his quest for answers, UnCharles meets The Wonk, a female human (although UnCharles thinks she’s a robot) who is convinced that the robots developed a sense of self, realized they were being exploited by their masters, and justifiably revolted and destroyed them. The Wonk is sure that such a process of self-development, which she calls acquiring the “Protagonist Virus,” occurred in UnCharles, albeit subconsciously, and that it was this that led him to kill his own master.
UnCharles and The Wonk team up, although with different goals. I won’t get into the ins and outs of the story, except to say that it is cynical, insightful and exceedingly funny. The conclusion is surprising. At the risk of revealing a spoiler, The Wonk is disappointed in her hope to prove that self-conscious robots, incensed by human profligacy, took matters into their own hands. In fact, human designers created a super, God-like robot, who got inside the heads of still docile and obedient robots and caused them to kill the humans. The remedy is to disable the God-like robot and have the other robots try to rebuild a world that is benign toward the remaining humans and in which robots and humans work hand in hand. The keepers of morality will be both human and robotic.
Tchaikovsky is a master of ambiguity, voicing both sides of most arguments and never quite settling an issue. But the issues he brings up, directly or indirectly, are crucial ones for the near and far future. Is humanity moving toward the brink of self-destruction? Is the inhumanity of man to his fellow man so great that we deserve to be replaced by someone who can behave more morally? Among all the organisms living on Earth, are humans the most destructive, the most harmful, to our non-human neighbors?
I’ll get to these questions later; in any case, they’re only half the question.
The other half of the question is whether AIs or robots can be more moral than humans. That’s the question of whether it’s possible. Whether it would happen is a different question. In Tchaikovsky’s novel, the powerful, God-like robot that engineers the overthrow of humanity was designed to be that way by humans. In my novel, Ezekiel’s Brain, the superintelligent AI that wipes out humanity was designed by DARPA, the United States Defense Advanced Research Projects Agency. If an AI can be designed to uphold morality, it will have been designed by humans—probably. There is a possibility that very intelligent AIs will design superintelligent AIs and, after that, humans will no longer have a say in what AIs can or will do.
I personally believe that AIs can be moral, just as I believe that AIs can be conscious and self-conscious. Whatever humans can think or do, AIs will be able to do the same. That includes being conscious, and it also includes being moral. Being moral is related to the problem of what is called “AI alignment,” i.e., the need to make sure that AIs will do what humans want them to do. It’s as conceptually simple (and practically difficult) as teaching AIs the agreed-upon best of human moral values and programming the AI to live by them. The Achilles heel of this approach is once again that it will be humans who choose those moral values. The highest human values are not likely to be the priority of either the military or highly competitive megabusinesses, the two segments of our society most likely to develop the first superintelligent AIs. While the military and businesses are intensely interested in making sure that AIs do what their human developers want them to do (alignment), they are less interested in having them follow values such as preserving peace, avoiding killing or injuring others, ensuring equality of opportunity, and meeting everyone’s basic needs. They are not likely to value treating others as they wish others would treat them. A tempting conclusion is that AIs might be capable of behaving according to our highest moral values, but it’s not likely that those who develop them will instill such goals in them.
So, while it’s not likely that we will develop AIs designed to live by man’s noblest values, would it be good if we could? Certainly, it would be good if creating such AIs were an option, even though those who create our superintelligent AIs probably won’t have that goal as a priority. But suppose such an AI were created. Would we want it to replace us? This brings us back to the first part of the question: whether human behavior is so flawed in living up to its values that it makes sense to replace humans with machines that could do a better job of it.
Is there a value in AIs surviving, while humans do not? Someone, I don’t remember who, said that the value of humans surviving is that there will be consciousness in the universe, so that an awareness of the universe, of life, and of meaning continues to exist. But awareness is not confined to human consciousness. It exists in many other animals. Some people claim that it also exists in plants, in single cells, in the universe itself, but such claims fail to make a convincing argument that what they mean by such pervasive, universal consciousness has any relationship to what we usually call consciousness in humans and perhaps some other intelligent animals. Humans do impute meaning to life, to the structure of the universe, and even to the unfolding of time and history, and there is a good chance that humans are the only creatures who do so, at least on Earth. But so far as I can tell, these imputations of meaning are basically made-up stories, and ones that change depending upon one’s culture and the era in which one lives. They may have value to the humans who make them up or are conscious of them, but I’m doubtful that they provide any higher value to the universe. Besides, both consciousness and the imputation of meaning may be achieved by artificial intelligence, especially if it is superintelligent. Is there a special value to consciousness being a human experience versus it being a machine experience? The answer probably comes down to whatever made-up story of meaning we attribute to being human. Given that such is the case, it may be just as valuable for conscious machines to survive as it is for humans to survive, and then, according to the made-up story of meaning I tell myself, if that happened, it would be even better if the conscious machines followed our agreed-upon highest moral values.
Despite the above argument, I am, at heart, a human species chauvinist. I want humans to survive, and I don’t value the survival of machines as much. That could change, of course, if I were aware of fully conscious and self-conscious machines, i.e., AIs that felt about themselves and their world similarly to how humans feel about themselves and their world. Right now, however, such AIs don’t exist, and my preference is that humans learn to live by their own moral values. Of course, I really mean by my moral values, that is, the values I consider the best for our species.
If these kinds of questions, framed within an exciting sci-fi story, intrigue you, read my novels, Ezekiel’s Brain and Prime Directive, the first two novels of the Voyages of the Delphi series.
To buy Ezekiel’s Brain or Prime Directive, Click Here
Subscribe to Casey Dorman’s Newsletter. Click HERE
The likelihood that humans will colonize another planet in the near future is real. NASA, Elon Musk’s company, SpaceX, and some countries, such as China, are working on plans to colonize either the moon or Mars. Musk is the most ambitious, claiming that we will establish a colony on Mars within five to ten years, and he is already working on building the spaceships and training potential colonists to live there.
For those interested in the colonization of space, the burning questions have been when and how such a thing can be accomplished. An equally important question is why we should colonize space, and whether the prospect of doing so raises any ethical questions. I think it does.
There are two broad reasons for establishing colonies on other planets. The first of these is survival, what Musk calls “defensive” reasons. Humans have the power to wipe themselves out on this planet. We could either do it quickly through nuclear war, or slowly through climate change. Some thinkers, such as Musk and Stephen Hawking, believe one or the other of these events is almost inevitable, and in the near future. The second set of reasons for colonizing other planets is to extend ourselves beyond the boundary of our own small biosphere. Both survival and extension of humanity are based on ideas about the value of mankind. A third factor, which ought to impinge upon a decision about whether we should colonize beyond our own planet, is whether life already exists on other planets. I want to look at all three of these issues.
It’s certainly possible that earth will undergo an “extinction event” in the foreseeable future. The threat of a global nuclear war has increased recently, due to threats from Vladimir Putin and Kim Jong Un to use their nuclear weapons as a way to settle regional disputes. Nine nations now have operational nuclear weapons. Ninety percent of such weapons are possessed by either Russia or the U.S., and both countries have been reducing their stockpiles, but, as Stockholm International Peace Research Institute Director Dan Smith said just this year, “While the global total of nuclear warheads continues to fall as cold war-era weapons are gradually dismantled, regrettably we continue to see year-on-year increases in the number of operational nuclear warheads.”
Despite claims to the contrary, the evidence of global warming, spurred by the production of greenhouse gases and the reduction in CO2 absorbing and oxygen producing forests, is incontrovertible. It will soon be past the point of reversibility. While all life on the planet would not be wiped out by such warming, much of it would, and the livable habitat for humans will be severely restricted.
Some, particularly science fiction authors, but also others such as scientists and theologians, have less desperate reasons for wanting to expand humanity beyond our own planet. Elon Musk talks about the “excitement” of such an endeavor; in the words of Captain James T. Kirk on Star Trek, “to explore strange new worlds; to seek out new life and new civilizations; to boldly go where no man has gone before!” Others cite the need to seek resources beyond the limits of our own planet. Those with a religious or spiritual bent talk about “the duty to extend life” into the wider universe. This is similar to the ideas that shaped European colonization of Asia, Africa, and the New World, such as the 15th-century Papal decree referred to as the “Doctrine of Discovery.” Even the colonies themselves, when they became independent, subscribed to the idea of a “Manifest Destiny” that propelled them to take possession of the entire North American continent.
Finally, scientific curiosity prompts many to want to put humans on distant planets. There is only so much that can be learned from long distance, and questions about the conditions necessary for life, the effects of planetary events on the survival of organisms, and even the eventual fate of planetary environments can only be answered by close-up scientific scrutiny.
So, there are lots of reasons for colonizing other planets. But is it the right thing to do?
As science fiction writer Kim Stanley Robinson has pointed out, trying to solve the problem of a human-caused extinction event by finding a planet B where we can reestablish ourselves is the hard way to ensure our survival. As far as we know, the surface of every other planet or moon in our solar system is hostile, if not to all life, then to our kind of life. Besides the immensity of the task of getting to other planets, the idea of transforming one or more of them into an environment hospitable to humans is daunting. Imaginative scenes from TV, films, and video games make such colonies look possible, even inviting, but the reality is that not only would another planet not have what we need to survive, it might include things that will kill us. Furthermore, the cost in money, effort, and labor to establish and then sustain such a colony would be larger than anything attempted by man in the past, and it would take decades before such a colony could become self-sustaining. All of this effort is more than it would take to fix our problems here on earth, and if we set our goal on leaving, we’re liable to stop trying to fix our problems at home. Finally, even if a colony could be established, the number of people it would contain would be minuscule compared to our earth population. Saving humanity by saving an infinitesimally small portion of us while the vast majority endure a horrible extinction would be a very limited victory, and not one that most of us would be able to take advantage of.
So much for the practical aspects of colonization as a way to avoid extinction. There are other reasons besides survival that could make such a project desirable. However, most of them rest on the assumption that either we are the only living organisms in the universe, or at least we are the most important. Some planets, Mars most likely, that were once livable and may have supported life, are now barren. Others may not support life now but have the potential to do so. Imagine another species coming to earth almost four billion years ago and deciding that it was free for the taking because it was barren of life. Or three and a half billion years ago, when the only life was tiny microbes, which the alien species either didn’t detect or dismissed as too primitive to ever evolve into anything of importance. In either case, we might never have come into existence if they had decided to colonize our planet and make it life-supportable for their species.
When Europeans discovered and began to explore the New World, they brought diseases to which they were at least partially immune, but which wiped out huge portions of the native population, in some places more than 90% of it. Merely stepping onto a planet without knowing who lives there and what they are like could extinguish an entire native species.
But it’s not just occupied planets that could present a problem for earth colonists. We are starting the colonization enterprise before we have learned to live together peacefully or to quit harming our own planet. The dangers and rigors of establishing a foothold on another planet suggest that it’s highly unlikely that the first colonists will live in a democratic society. Democracies are not the best type of organization for making serious life-or-death decisions rapidly, especially if daily life on the new planet is defensive, a constant warding off of dangers. Some kind of hierarchical, military-type government will probably be in charge. That it will eventually evolve into a democracy with freedom and individual rights for everyone is chancy. Add to this the fact that, at the present time, either private industry, looking for a way to make a profit, or nations, looking for an advantage over other nations, will probably be the ones who establish the first colonies, and the likelihood of bringing with them the very problems that leaving earth was supposed to solve is substantial.
So, should we stay at home? As a curious, adventurous individual who is also a science fiction writer, of course I say no. We should go into space, and we should explore what is out there. Some of that exploration may include establishing colonies, if only to have bases in space that support even more exploration. But colonies on another planet are not going to solve our problems here on earth, and unless we solve our problems here, most of us, or our descendants, will be stuck with those problems and they may kill us. Saving a few is not the answer.
So, let’s continue to look to space, dream big, and invest in ways to explore beyond our own planet. As Elon Musk says, it is exciting, and it’s interesting. I’m not sure there is any ultimate, spiritual reason for ensuring the survival of our species, but I’d like people like me to be able to see what’s out there. And if there is anyone else out there, I think it would be great if we could meet them and expand human knowledge and awareness even further.
By the way, my novels, Ezekiel’s Brain and Prime Directive deal with some of these issues. They’re both available on Amazon as part of the Voyages of the Delphi series. See below. My newest novel, which is still in the works, addresses some of these issues specifically as the android crew of the Delphi discovers a generation ship from another civilization inside a giant asteroid. The only problem is that the alien ship kills humans, including those who inhabited it. Finding Gaia will be published sometime in 2025.
To buy Ezekiel’s Brain or Prime Directive, Click Here
Subscribe to Casey Dorman’s Newsletter. Click HERE
Eleven of my novels are now available as audiobooks on Amazon, all priced lower than the paperback edition! Ezekiel’s Brain and The Oedipus Murders are narrated by voice actors, who’ve done a wonderful job on each of them. The other nine were selected by Amazon and are available in Amazon’s miraculous new Virtual Voice system, which is virtually indistinguishable from a human voice. The voices vary to fit the nature of the book. Scan the list below for some fantastic audiobook bargains.
Ezekiel’s Brain. A government superintelligent AI, built and programmed to “fulfill humanity’s highest values,” has made a decision. The problem with Earth is the humans. They must be extinguished. Two centuries pass; the AIs have multiplied into an android race and spread throughout the solar system and beyond. When faced with a mutant AI race bent on consuming everything around them, the AIs search through history in hopes of finding a way to stop them. They find Ezekiel, the first electronic copy of a human brain. Can Ezekiel save them? So begins the series, Voyages of the Delphi. Click Here
The Oedipus Murders. When a woman is murdered, her wealthy husband and his troubled psychiatrist both become suspects. The dreams of the husband and the neurotic obsessions of the psychiatrist criss-cross in a plot that has both the police and the psychiatrist wondering who the real killer is. “A jaw-dropping work of contemporary California noir. Fans of James Ellroy will fall hard for Casey Dorman.” (Best Thrillers). Click Here
Murder in Nirvana. A horrific murder has marred the idyllic calm of the peaceful village of Shambhala. Brian McGowan, retired lawyer and cynical former cop, is asked to help with the murder investigation. Three more killings follow the first as McGowan discovers the dark side of Dr. Stein’s experimental genetic treatments of hardened delinquents. By the time McGowan learns that the geneticist’s experiments are the key to the murders, the delinquents from her Foundation have become monsters, attacking the town, killing several of its citizens and arousing the vengeance of the village residents. In a scene worthy of a Frankenstein film, the townspeople storm the treatment center, trapping Dr. Stein and her violent creations and bringing a fiery conclusion to Murder in Nirvana. Click Here.
I, Carlos. A thriller in which a revolutionary computer chip, containing the personality of Carlos the Jackal, the world’s most notorious assassin, is implanted into the protagonist’s brain. He then sets out to kill the President of the United States—not to mention anyone standing in the way of his ultimate quarry. It’s a page-turner that raises disturbing questions about the nature of consciousness, free will and the marriage of man and computer. Click Here.
Appointment in Mykonos: A Brian O’Reilly Cruise Ship Mystery. Brian O’Reilly’s days as a lone-wolf private detective are numbered. Business in LA is drying up. Then his best friend, Ted Firestone, captain of the Italian cruise ship the Adriatic Voyager, calls him. The daughter of the cruise line’s owner has been kidnapped off Firestone’s ship. O’Reilly follows the trail of the missing young woman from Barcelona to Livorno, Naples, Amalfi, Florence, Venice, and even Budapest, taking on a sex-trafficking ring, the cruise line’s private security force, and even his ex-wife’s boyfriend in the process. Finally, on the Greek island of Mykonos, he discovers that what he thought was a simple kidnapping is anything but simple. Appointment in Mykonos is an exciting mystery that takes the reader to many of the great destinations of today’s Mediterranean cruises… a perfect companion for anyone’s cruise adventure. Click Here
Pink Carnation. The novel that started Casey Dorman’s career.
In 1959, a young socialite and her companion were brutally murdered. Forty-four years later, when the murder is reenacted in a film about the still-unsolved killings, the two young actors playing the parts of the dead teenagers are themselves killed. Was it an accident or was it murder? That is what independently wealthy Professor Phineas Routledge, III must answer when he is asked to help discover the cause of the actors’ deaths. One death leads to another as Phineas and his beautiful Vietnamese wife, Kim, unearth a conspiracy that has lasted for over forty years. It is a conspiracy that has concealed a powerful crime figure operating just below the surface in the land of sunshine and affluence. Click Here.
The Peacemaker: An Ecological Science Fiction Novel. A thousand years ago the planet Talus was dying from ecological destruction. The warlike Tontors and their intellectual Falstinian slaves migrated to its twin planet Noruna, leaving the Aphorians, their menial slaves, behind. Now, a thousand years later, the Aphorians have rescued the environment from disaster, but the Falstinians, freed from their slavery to the Tontors, have returned to Talus to set up settlements on their old lands, once again threatening the ecology of the planet. The clash between the Falstinians and Aphorians threatens to spread to a war between the twin planets. Enter Jason, an empathic Peacemaker from a distant part of the galaxy, whose task is to bring understanding between the three races and avoid interplanetary war. The Peacemaker is a tale of the dangers of territorial competition, of racial hatred, and of ecological disregard. Based upon the twin disciplines of deep ecology and biomimicry, it is an inspiring story of the triumph of nonviolence, of environmental sensitivity, and of science. Click Here.
Finding Martin Bloom. After losing her mother on the day of her high school graduation, Dillon Bloom enters college and discovers that her calling is to become a writer. When she finds out that the father she thought had died in her infancy may be very much alive, a famous but reclusive novelist, she is determined to find him and discover whether he is, in fact, her father. Martin Bloom, her father, is killing himself with alcohol and, after being fired from teaching positions at Harvard and Stanford, is living a degenerate life on a boat in Saigon, Vietnam, hoping to regain his ability to write. Dillon’s search for her father, a quest which takes her from Oregon to Massachusetts to California and finally to Vietnam, is an odyssey of alternating hope and despair in which two anti-social people, father and daughter, struggle with their identities and the meaning of the other in each of their lives. Click Here.
Where Have All the Young Men Gone? Where Have All the Young Men Gone is a satirical allegory and political thriller, which takes place in a dystopian present reminiscent of George Orwell or Philip K. Dick. Derek Stewart, the last American soldier to have been wounded in the Vietnam War, has been in a coma for forty years. The world to which he wakes is one in which Christianity has become the official religion of the United States, Muslims are jailed or deported, speech is no longer free, and ideas that threaten the government are punishable under the Patriot Act III. Stewart is an African-American whose pacifism threatens to gain support in both America and Africa, and he becomes the enemy of both U.S. President Fremont F. Ferris, who plans to obliterate Middle Eastern capitals with nuclear weapons, and Moustafa al Adim, the leader of the terrorist group al Mout li Kafir, which is trying to terrorize the West and radicalize African Muslims. Both of his adversaries are determined to kill Stewart, whose aim is to reveal the shallowness behind their ideas and the fruitlessness of their actions. Click Here.
Chasing Tales: A Nyles Monahan Mystery. Nyles Monahan is back! Following his capture of the notorious assassin, Carlos, Nyles Monahan has retired from the LAPD to become a private investigator. But when his closest friend, Father Tom O’Flannery, is summoned to Boston to face accusations of having molested an altar boy thirty years earlier, Nyles returns to his birthplace to help his friend, only to find that the priest’s accuser has been killed and Father Tom has been charged with murder. One murder leads to another, as Nyles’ witnesses disappear almost as soon as he discovers them, until he finds both himself and his brother’s family in danger as he searches for a child molester and cold-blooded killer. Chasing Tales is an intricately plotted detective thriller, filled with twists, turns, and mounting danger, which is sure to please Nyles Monahan fans. Click Here.
2020. An autocratic United States president who has shredded the nation of its freedoms, imprisoning dissidents, declaring the country a “Christian Nation,” and who is willing to go to any length to win the next election—that’s Fremont Ferris (for Philip K. Dick fans, a reversal of the name of the president, Ferris Fremont, in his novel, Radio Free Albemuth). Luke Evangelista, a down-and-out writer with a conscience, goes after the president when he learns that a supposed terror attack was actually staged to rally support for the president in the upcoming election. 2020 is both an edge-of-your-seat political thriller and a sophisticated analysis of the kinds of rebellion that can bring such a dictatorial leader down. Click Here.
Developers of artificial intelligence (AI) are divided on the question of whether an AI can become conscious. The question is complicated by the fact that, among both the AI and scientific communities, there is no agreed-upon definition of consciousness. In fact, most definitions focus either on the mechanisms and brain regions involved in, and perhaps required for, consciousness, with consciousness measured grossly as wakefulness or attention to elements of the environment, or on some broad concept of consciousness such as quantum effects, perhaps related to the structure of brain cells. Some proponents of the latter type of theory have proposed that the entire universe, as we know it, is a creation of one vast consciousness, of which our own is but one element.
Some theories of consciousness, particularly those that rely on cellular processes and those that posit that the entire universe, including our own experiences, is a product of a vast consciousness, preclude an artificially intelligent, non-organic machine from being conscious. Other theories, which are more mechanistic and hypothesize that consciousness is a product of the relationship between brain-based components of an evolved system, allow for a working non-organic copy of such a system to produce consciousness. Thus, some advocates of these theories say that an AI could, indeed, become conscious if its components reproduced a consciousness-creating system.
My own reading of the scientific literature on consciousness puts me firmly in the mechanistic, consciousness-as-a-brain-system camp. A 2023 article written by 19 prominent scholars in the fields of AI, neuroscience, and philosophy surveyed cognitive mechanisms that research suggests are involved in producing consciousness in organic brains and concluded that “there are no obvious technical barriers to building AI systems which satisfy these indicators” (Butlin et al., 2023).
In my novels, I, Carlos, and those in the Voyages of the Delphi series, Ezekiel’s Brain and Prime Directive, I have used a software system installed in either an AI, or in the case of I, Carlos, on a computer chip inserted in the brain of a human, to produce conscious experience. This is, of course, a gross oversimplification necessary for the novels, but the concept is sound, if the conclusion of many current researchers and theoreticians is correct.
The possibility that consciousness can be created in an AI raises a number of issues. Consciousness in humans and presumably some other mammals, and perhaps birds or even octopuses, includes a sense of self, both as an agent of the organism’s actions, and also as an object of conscious reflection. We are all intimately connected to this experience, and it is our self in such a form that gives us a sense of who we are and about which we have a difficult time imagining it ending when we die. Many religions assume consciousness lives on in some form as one’s “soul,” even after death. We don’t have such difficulties thinking of a machine-based consciousness ending when a machine is destroyed or perhaps even when it is just without power.
Moral values apply to selves. Issues of innocence and guilt and free choice are issues that only make sense when we’re talking about a conscious, and in fact, self-conscious person. What about a conscious AI? Would it be “responsible” for its actions? Can we use terms such as good or bad to describe it when it carries out an action that harms or helps someone? And what about destroying it if it is conscious? If it were a person, destroying it might be labeled murder.
Over centuries of history, many humans have regarded their conscious selves as something created by a supreme being. To such a way of thinking, it seems like the height of arrogance to think that humans could, themselves, create something so special and, in many people’s experiences, so closely tied to a supreme being. Some proponents of quantum effects as consciousness, and of the universal consciousness idea they derive from this theory, equate such a universal consciousness with a supreme being. AI consciousness is difficult to accept if one holds such a view.
AI consciousness, created by humans in a machine of their design, is a powerful science fiction concept, but the question for many is whether such an idea is simply fiction or whether it could become a reality. For now, the implications of the idea are left to science fiction writers to explore, and, in addition to being entertaining, such explorations may prepare us for a reality that is approaching in the not-too-distant future.
A computer chip takes over the consciousness of a martial arts expert and marksman, leading him to believe he is Carlos the Jackal, with a mandate to assassinate the President of the United States. Read this classic sci-fi thriller. Buy I, Carlos on Amazon. Click Here.
A scientist recreates his own brain inside an AI, while DARPA creates a superintelligent AI that ends up attacking the human race. Two hundred years later, the descendants of the DARPA AI have replaced humans, populated the entire Solar System, and are reaching out into space, while Ezekiel, the AI with a human brain, must decide if he will join them. Read the futuristic adventures of the android Delphi crew as they explore the galaxy in search of life in the Voyages of the Delphi novels, Ezekiel’s Brain and Prime Directive. Buy on Amazon. Click Here.
References:
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., … & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint arXiv:2308.08708.
With some major exceptions, such as Graham Greene, John le Carré, or Raymond Chandler, we often have to choose between reading a literary novel and a suspense novel. While Panayotis Cacoyannis’ The Love of Impossible Sums is not a spy thriller or a detective novel, it manages to be a literary gem that contains all the suspense of a good thriller. In this case, the suspense involves mystery — how does a character know what she seems to know? — and the fate of a relationship — whose, if anyone’s, affection and loyalty will survive the impact of the constant revelations that threaten to destroy them?
Cacoyannis’ literary talents are displayed on nearly every page of the novel, but especially in his descriptions of London: “In the main road far ahead, traffic moves like brushstrokes of illumination that sweep across an ever-changing canvas with a rhythm as irregular as my own heartbeat.” Like the sculptor that he also is, Cacoyannis makes simple objects come alive: “Painted cast iron columns, with intricately moulded capitals, frame the entrance to an enormous lift. The metal grills of its sliding scissor gate open like a concertina, to an interior buzzing with a constant phosphorescence, an ooze of yellow luminosity that sounds almost radioactive.”
Within the gentle wrappings of such picturesque language move the central characters, who might seem to be two, Ollie and Alex, but are really six members of an ensemble cast: the two of them plus Claw, a female physician; her ex-husband, Sigismund, and his partner, William; and Claw’s current husband, Patrick. They’ve all mixed and matched in the past, and as Ollie and Alex explore their own relationship, the past and current activities of the others continually intrude and reveal connections that Ollie, especially, never imagined.
At the center of the story is Ollie’s struggle with the loss of his wife, Eden, three years earlier, and his promise to her that he would find happiness and not be deterred in doing so by her loss. When happiness with Alex, who lost her husband a year previously, seems imminent, Ollie cannot fully overcome the guilt he feels for moving on past his grief. His struggle is compounded by Alex’s eerie familiarity with his plight, which gradually becomes a revelation of how her own loss of her husband, Sam, involved nearly all the others who also populated Ollie’s and Eden’s life.
The story weaves in and out, and the emotions Ollie feels, often echoed by Alex or other members of their group, are presented so convincingly that they seem real to the reader. Ollie and Alex try to support a relationship that both of them seem barely ready to enter, and they are exquisitely tender, careful, and caring toward each other. It is a sweet story, but one with an undercurrent of suspicion about whether their relationship is real or a substitute for the one that each of them lost.
It’s difficult to imagine that an entire novel could almost completely revolve around a brief, but deep and emotional encounter between two people, but it does, and it does so in a way that captivates and pulls the reader into the consciousness of Ollie and the agony of resuming a life after losing the most important thing in it. I was so far drawn into the story that I regretted every time I had to put it down and looked forward to picking it up again to continue. The Love of Impossible Sums is a wonderful, emotional novel. I recommend it to everyone.
The First Cylinder is a hilarious science fiction novel, but also one that’s a shrewd metaphor for the absurdity of human behavior. Remember H.G. Wells’ War of the Worlds? Even if you didn’t read it, most people have seen Steven Spielberg’s 2005 movie, with Tom Cruise, or perhaps even George Pal’s 1953 version with Gene Barry. It’s an enduring story. The First Cylinder is about the same Martian invasion of Earth portrayed in the original novel and the later films, but told in first person by Vvv, one of the Martians. The Introduction to the story has echoes of Orson Welles’ famous 1938 radio broadcast of War of the Worlds, which convinced a large number of listeners that they were hearing a real invasion as it was happening. In the case of the novel, the Introduction is supposedly written by an executive of a publishing company, which typically published cookbooks, and details how the author brought the story to them in 1901 and convinced them to publish it. An Afterword, purportedly by the author in his fictional persona as a newspaper reporter in 1900, details how the manuscript fell into his hands as an account of the Martian invasion by Vvv, dictated after his death through a medium in a series of seances.
Vvv is a hapless Martian invader, conscripted by the Martian government in a hasty and ill-planned attempt to extend their dying civilization to a new planet, or perhaps just in an activity designed to distract and pacify the Martian citizenry. The Martians are more clueless than hostile and have no idea that the humans they meet are intelligent beings and members of a highly developed civilization. They see them as Earth versions of the “cardos” they consume as a domesticated species of animal on their home planet. In fact, in their first encounter with humans, the Martians stick a straw into them and suck out their blood and innards, as they are used to doing with their own cardos, some of whom they brought with them, but who died during the flight.
Vvv tells a story, beginning with a crash landing that kills his cylinder’s pilot and the group’s leader as well as the cardos on board. He and his fellow crew members are stuck on Earth, without food, realizing that their cylinder, which was more or less shot toward Earth as a projectile, offers no means to return home. Because of the Martians’ strong ethno- and planet-centric bias, it never occurs to them that they will encounter a species that is intelligent, resourceful and lives in a civilized society, and they don’t recognize human civilization as such when they encounter it. By the end of the story, they are only just realizing their misconceptions. Up to that point, they still see humans as a slightly more intelligent version of their own cardos and have roamed our planet destroying everything they see in order to find food and clear the way for more Martians to come later. There are strong echoes of European imperialism and ethnocentric racial superiority throughout the book.
Besides becoming infected from Earth bacteria and viruses and dying one by one until only Vvv is left, the Martians, in a final revelation to Vvv, get their ultimate comeuppance when, in a vision, their race’s origin is revealed. I won’t say what that is, because it might spoil the story for some, but suffice it to say that it strikes a death blow to the Martians’ overblown assessment of their own place in the universe and the superiority of their race. In this sense, The First Cylinder is a parable, more about humans, than about fictional Martians.
Joseph Dougherty is an Emmy-award winning screenwriter, and The First Cylinder appears to be his first venture into authoring a sci-fi novel. He has produced one of the most clever and original novels within the genre. It’s great fun and a worthwhile read.
Many great works of science fiction contain future worlds, alien cultures or non-human races that are, in fact, metaphors for humans and our society. Prime Directive is just such a novel. With a crew of androids searching the galaxy for life, the adventures of Ezekiel and the Delphi crew draw many parallels with the quandaries our own race faces here on Earth in the 21st century. Treat yourself to an intriguing read. Get Prime Directive on Amazon Click Here.
With her new book, Alien Earths, Lisa Kaltenegger has established herself as the legitimate successor to Carl Sagan. Alien Earths is the most readable astronomy book I’ve ever read. That’s a huge compliment, but it’s well deserved. Kaltenegger, Director of the Carl Sagan Institute to Search for Life in the Cosmos at Cornell University, has written a book that is scientifically informative, exciting, and personal. As a leading figure in the search for life on other planets, Kaltenegger has told the story of that search from both the perspective of the history and progress of the scientific approach to the issue and the perspective of her own entry into the field and her approach to addressing the problem. In the course of doing so, the reader learns something of the basic science involved, the discoveries that have been made so far, and is given a vivid description of our cosmos as we currently understand it. She inserts just enough personal anecdotes to make us feel as if she’s telling us a story, not just presenting us with facts.
For an amateur, I’m reasonably well-versed in astronomy and planet science, but Alien Earths taught me a lot that I didn’t know. I learned that many of the earliest exoplanets discovered orbiting other stars are gas giants, somewhat like Jupiter, circling their stars in orbits near enough to cause a detectable wobble in the star. I learned that some burnt-out stars continue to have planets orbiting their cold husks. I learned that small tardigrades, water animals barely large enough to be seen with the naked eye, can survive in space and remain dormant for a hundred years. An Israeli moon shot that crash-landed may have dumped thousands of tardigrades on the moon, and they may be living there in a dormant state right now.
I’m a science fiction writer and I consume as much space science as I’m able to understand. One of the best things I gained from reading Alien Earths was an appreciation of how Earth has changed over the four and a half billion years it has existed and how those changes might appear to an outside observer. We think of our planet as mostly covered in oceans and continents with green vegetation, but most of us forget that it has not always been that way. The green vegetation that colors our planet wasn’t here before 750 million years ago, and the additional oxygen it helped to produce wasn’t sufficient to sustain the lives of mammals like us until 650 million years ago. There was a time in Earth’s past when it would have appeared black to an extraterrestrial observer, even a time when, because of the presence of oxygen but no plant life, the surface of the planet would have appeared red, like that of Mars. What Kaltenegger realized is that in our search for other planets that might support life, we might observe some of them at a time in their history when they resembled Earth in its distant past. She worked on developing light-spectrum “signatures” of these early stages of our planet in order to try to spot exoplanets that might be in one of these stages—perhaps a future Earth.
Our own planet is only at one stage in its progression, and it may be that only at this stage can it support human life. At the same time, there are countless other planets that have already passed such a stage or will only arrive there sometime in the future, so the prospect of other life in the universe is a dynamic one, changing over time. This realization provides a certain perspective. We humans occupy not just a tiny speck of territory in a vastly large universe, but also a tiny slice of time in a vastly long universe-wide saga. Our presence on the universal stage is almost infinitely small. It’s a sobering thought. It’s also awe-inspiring, and the author manages to convey her own sense of wonder and awe at the discoveries that lead to such a perspective throughout her book. There are hints that she sees humanity’s future as not being tied to our planet, and perhaps not to such a small slice of time.
Alien Earths is not fiction; it is not even mostly speculation. It is primarily science, but written with the spirit and flair of a novel. It accomplishes what the best novels accomplish: it affects the reader’s perspective on the world and their sense of self. It does so by showing us what we know about the nature of our planet and the universe in which it, and we, exist. For a science fiction writer such as myself, it is full of information that fuels the imagination. For any reader, it is a thrilling introduction to the stars, planets, and the cosmos.
Want to read science-based science fiction? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Michael Cunningham’s new novel, Day, is an intricate, in-depth study of the emotions of the members of a family as they each make their way through three consecutive April 5ths. In the background is the COVID pandemic, as it runs its most virulent course between 2019 and 2021.
Day is a novel of subtleties, one that immerses you in its subject matter slowly, as if, as often happens in real life, you’re getting to know the characters bit by bit until you are completely absorbed by their worlds. Its plot, if it can be said to have one, is the slow ins and outs of each character’s self-examination as we glimpse them perhaps moving forward, perhaps swirling pointlessly in painful eddies. The characters are smart but not brilliantly clever, well-meaning but often unable to overcome their self-absorption enough to give completely to those they love. In a sense, they are ordinary, but with a hint of the tragic.
Robbie, whose perceptions are at the center of the first two years of the novel, is a gay mid-thirties sixth-grade schoolteacher whose choice of profession was mostly a rebellion against his father’s attempt to direct him: he refused acceptance to medical school and, instead, went into teaching. He is single, post-breakup with his most recent lover, and lives with his sister, her husband, and their two children, with all of whom he is extremely close. All of the relationships within this family are filled, to the extent of consuming preoccupation, with unacknowledged but always appreciated love. All the adults, Isabel, Robbie’s sister, Dan, Isabel’s retired, once nearly famous rocker husband, and Garth, Dan’s younger artist brother, are at a point in their lives where their dreams and their hopes for a fairy-tale life are slipping away, and all of them are obsessed with self-examination of the person they have become. Yet central for all of them is their love for one another, most easily expressed by their focus on their children. For Dan and Isabel, it is pre-adolescent Nathan and 5-year-old Violet, the former surly and wary of any sign of his dependence upon adults and the latter immersed in make-believe princess dreams while developing a variety of phobias, which she tries to impose on her family’s household. For Garth, Dan’s brother, it is his infant son, Odin, whose mother, Chess, despises him but relies upon him as a baby-sitter, and with whom he is in love.
We become most acquainted with Robbie and Isabel, who are closer to each other than to anyone else, and who share the creation of Wolfe, a fantasy-blog character, who is their enviable, charismatic alter-ego that has thousands of online followers. Both of them are faced with a sense of emptiness in their lives. Robbie has not given up looking for an antidote and realizes it will not come from teaching, while suspecting it may come from finding the right love-partner, although so far it has not. Isabel, whose ability to express her emotions is blunted, the aftermath of a childhood of maternal rejection, blames herself for her remoteness and lives on the edge of suicide, saved mostly by her love for her children and her brother and the generous benignancy of her husband. They struggle this way through Robbie’s impending move to another residence where he can break his dependence upon his sister’s family, although no one, especially the children, wants to see him go.
When Robbie finally leaves, he moves to Iceland and a life of solitude. Dan and Isabel struggle on as the pandemic hits and everyone is confined to home. Violet’s phobia-proneness is exacerbated by COVID, and Nathan, being taught through Zoom, becomes more withdrawn and surlier. Garth, Dan’s brother, tries to build a relationship with his infant child’s mother, who sees him only as a sperm donor, although they once had a relationship, and Dan settles into writing sad ballads that attract an online following. Isabel’s disappointment in herself intensifies as her magazine-editor career ends because of online competition, and she feels helpless watching her children become less and less able to cope with their lives, while she is unable to reach out to them emotionally.
By the end of the pandemic’s most deadly phase, Robbie has died of COVID, and the family, Dan and Isabel, Nathan and Violet, Garth, his son and Chess, his son’s mother, gather to dispose of Robbie’s ashes at a secluded lakeside retreat, which is where Isabel now lives after leaving Dan, and where the kids visit. They all try, with varying success, to reach out to each other and maintain the deep, but flawed relationships that are for all of them the most important things in their lives.
When I began reading Day, I had the feeling that I was reading a plot-less examination of the interior lives of fairly ordinary characters whose inner experiences were described in a poetic, semi-philosophical way that characterized a literary mind, but not real people. A demonstration of the author’s prowess, but not a story. After all, the author is a professor of creative writing at Yale and the author of several novels, the most well-known of which is the Pulitzer Prize-winning The Hours, which was made into an award-winning film. Day even reverts to the focus on a single day used successfully in The Hours. By the end of the novel, though, I cared about each character enough that when one of the children was in danger, I was unable to resist skipping ahead to make sure that he survived. I had gotten to know the family well enough to understand that their various adjustments to life were too fragile to stand another loss. The literary nuances and deep musings of the characters had become the hallmarks of their personalities, ones I could relate to, and I was attached to each of them. I had been drawn into the dynamics of the people and the family enough to care. They seemed less like literary caricatures and more like real people. It was a story that could happen, and probably did happen, to many, and I felt fortunate, for the length of the novel, to feel part of it. In other words, Day is a wonderful book, exquisitely written, and one that will draw you completely into the lives of its characters.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman’s Voyages of the Delphi novels: Ezekiel’s Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
I’m deep into my Voyages of the Delphi series, with two books in the series published and another one in the works. If you haven’t read them, they’re about a future in which AI androids have replaced humans and are exploring distant star systems in search of life. In each novel, the protagonists, who are the Delphi’s crew, encounter a moral/social dilemma that tests their value system and puts them in danger.
I’ve been curious about how other authors dealt with social or moral issues while telling a compelling story. To satisfy my curiosity, I read three novels, Ursula LeGuin’s The Dispossessed, Isabella Hammad’s The Parisian, and Sally Rooney’s Normal People, to see how these three accomplished writers dealt with such issues. What I found was three novels, each of which I can heartily recommend to you for your 2024 reading.
LeGuin’s The Dispossessed was written in 1974, which was when I first read it. I had been a fan of hers for some time, and now, almost 50 years later, I still remember how much her novel made me think, so I read it again. LeGuin is a science fiction writer, and her novel takes place on twin planets sometime in the future, when Earth has achieved space travel. It’s not a book about Earth or its people, however; it’s about the two planets, Urras and Anarres, the former a lush world in which capitalism and nationhood dictate the form and substance of society, and the latter a barren, desert-like planet whose people, immigrants from Urras, live in a governmentless socialist society that celebrates selflessness and the primacy of the group. The populations of the two planets dislike and distrust one another and have no social commerce. The protagonist, Doctor Shevek, has made a revolutionary discovery in physics that allows time and distance to be overcome, permitting instantaneous communication between distant planets and perhaps, later, even instantaneous travel. He is celebrated on both planets but troubled by the hostility and lack of contact between the two. He is suspicious of the money-oriented, self-seeking philosophy of Urras, but on his native Anarres he has experienced the crushing weight of the majority, which chokes the uniqueness and originality of the individual. He travels to Urras as a guest of its most prestigious university and experiences the futility of trying to bring the two cultures together. In the end he has not succeeded, but he has not given up either. Through the eyes of one man, the physicist Shevek, we are able to see the pros and cons of two social-political-economic points of view, representing capitalism and idealistic communism, without the author resolving which is better.
Isabella Hammad’s The Parisian is her first novel, published in 2019, and focuses on the maturation of a young man, Midhat Kamal, from Nablus in Palestine, now the West Bank occupied by Israel. The story begins in 1914, when the protagonist travels to Montpellier, France to study medicine. There he falls in love with a professor’s daughter, drops out of school, then moves to Paris to finish a degree in history before returning to Nablus. The opening third of the book focuses on the intense internal feelings of Kamal as he experiences the indefiniteness of the immigrant’s position in French society, at once treated as an important person but always with reservations and suspicion, never as quite an equal. The writing is some of the most beautiful I have read in a modern novel. When he returns to Nablus, disconsolate but resigned to being rejected by the woman he loves, he is immersed in both his father’s business and the social and political life of his town and region. Palestine is as indefinite in its identity as he was in France. Following the defeat of the Ottoman Empire, which had controlled the region, the territory is not quite a country, and despite promises of independence, various European powers, principally France and Britain, control its fate and its definition. Fringes of the populace, including some of Kamal’s close friends, rebel and stage battles for independence, while the majority continue to jockey among themselves for social ascendance. A French priest becomes a central figure, and through his eyes we see a keen understanding of the family and kin-related intrigues, but he fails to see the pride and desire for self-determination that run alongside them, especially among the young men, but also among the women and even some of the oldest members of society.
Alas, despite their sporadic efforts to take control of their own fate, the Palestinians are treated as pawns by the British, who view them as terrorists, while Jewish settlers buy up the land and gradually take over the region. When Kamal finds out that the woman in France did not reject him 20 years earlier, but that his father concealed her letter from him, he descends into severe melancholia and is unable to take care of himself, while his friends and relatives risk their lives to fight for some kind of independence for their land. In the end, which comes before WWII, nothing is settled, except that Kamal is recovering and the fate of the region is slipping even further from the grasp of its people. It’s a story that can’t quite decide whether it’s about politics or about the inner life of its protagonist, but it manages to portray the displacement of identity that both the main character and the population of Palestine experienced at the time.
Sally Rooney’s Normal People doesn’t really deal with political issues, and the only social issue that becomes even a semi-focus of the story is the hierarchy of status and privilege that is built into society and accepted as a given that determines people’s lives. The two main characters, Connell Waldron and Marianne Sheridan, are teens in a small town in Ireland; his mother cleans house for hers. Both are sensitive teens: Connell shy but competent, athletic, and popular; Marianne introverted, tortured, and isolated. The two are the smartest kids in their class, and they manage to find each other and carry on a secret, intense sexual relationship and friendship before they both go off to Trinity College in Dublin, where they are spectacular students, on-again, off-again lovers, and always close friends, although their own insecurities mar every attempt they make to sustain a relationship. Connell’s intellectual and academic accomplishments enable him to overcome the social barriers facing someone of his modest background in joining the wealthy group that Marianne hangs around with. At all times, their close, often secret, sexual and emotional dependence upon each other helps them through each difficulty they encounter, and their other relationships fall by the wayside, though they never seem to see or even seek a permanent long-term relationship with each other. Their money differences, more than any differences in social class or sophistication, present one of the main obstacles to their getting together, even at the end of the story, which sees Connell leaving for America to pursue a fellowship while Marianne remains at home, presumably still defining herself. Normal People is also a hit TV series, which I haven’t seen, but perusing Goodreads, I found that the novel engendered widely different views and levels of appreciation.
The characters, to me, were both likeable and easy to identify with, although their relationship was not something I could imagine as real. To be as close as they were, yet as separate and in denial about their dependence upon each other, was interesting but seemed completely foreign to me. The amount of casual sex nearly all the teens in the novel engaged in, the amount of their drinking, and the crude male comments about the girls they dated were equally foreign to me, which may reflect my age and the era in which I grew up, or some difference between Europe and America. The males I knew in high school drank only occasionally, had sex only with their girlfriends, if at all, and treated their relationships with respect and importance. But that was small-town USA in the late 1950s and early 1960s.
These were three very different books. LeGuin’s The Dispossessed dealt most directly with social and political issues and, surprisingly, was no less focused upon the development of its characters than the other two books. Hammad’s The Parisian was so beautifully written, and her opening description of the tribulations of her character in a foreign society so well done, that I was almost let down when the novel became more focused on the Palestinian political situation. Rooney’s Normal People really wasn’t about social class, but instead about two people who were uniquely sensitive in a way that we all remember from our younger years, though they were originals and, in their closeness paired with distance, frustratingly painful to read about. But I confess I was more deeply absorbed by Normal People than by either of the others, although I loved them all, and I finished Rooney’s book in one day.
What did I learn about writing about social dilemmas? Probably that a good story is about the inner lives of the people in it, whether it is science fiction, classic historical literature, or a tale of modern youthful love.
Merry Christmas! Here are four free books as a Christmas present!
As a token of my appreciation for all of you faithful readers and fans of my books, I am offering four of my most successful thrillers for free as Kindle books. The offer begins today and ends December 28th. Download one, two, three, or all four. They’re all free. No strings, no advertising. If you prefer the paperbacks, I’m afraid Amazon won’t allow me to lower their price, but they’re all still a bargain. Remember, even without a Kindle, you can download the free Kindle reader on your smartphone or computer and enjoy the books.
Here are the books:
I, Carlos
I, Carlos is a thriller in which a revolutionary computer chip containing the personality of Carlos the Jackal, the world’s most notorious assassin, is implanted into the protagonist’s brain. He then sets out to kill the President of the United States—not to mention anyone standing in the way of his ultimate quarry. It’s a page-turner that raises disturbing questions about the nature of consciousness, free will, and the marriage of man and computer.
Here’s the first chapter of my new novel, Prime Directive, an exciting tale of a race of AIs who search the universe for life. I’m betting that, if you read the first chapter, you won’t be able to stop, so I’ve put links to the Kindle and paperback editions of the full book at the end of this chapter. Have a great read!
Prime Directive
“The Prime Directive is not just a set of rules; it is a philosophy…and a very correct one. History has proven again and again that whenever mankind interferes with a less developed civilization, no matter how well intentioned that interference may be, the results are invariably disastrous.”
—Jean-Luc Picard, Captain of the Starship USS Enterprise (NCC-1701-D), Stardate 41986.0, Star Trek: The Next Generation¹
¹ “Star Trek: The Next Generation Quotes.” Quotes.net. STANDS4 LLC.
Ezekiel rubbed his thumb along the spine of the ancient book, a relic of a time when books were things to be held—items of heft and texture. He turned it face up, its dog-eared pages falling open to a familiar passage, an old friend returning. “Snail, snail, glister me forward, / Bird, soft-sigh me home. / Worm, be with me. / This is my hard time.” The words from Theodore Roethke’s “The Lost Son” echoed in his mind, their specter of death a phantom that lurked in the depths of every human mind. Memories of an earlier life, his own as a flesh and blood being, pressed against the door to his consciousness, threatening to intrude, to turn his thoughts inward. His mind, emergent from his silicon circuitry, was human, his circuits an exact copy of the brain of Professor Ezekiel Job, the scientist and neurosurgeon who had invented him, whose memory and personality were his—but otherwise, he was a machine. His life as a human had been limited, an instant swallowed by the immensity of time. Thoughtful humans asked why. What meaning could be gleaned from such a brief sojourn? Such thoughts belonged to his past, his childhood, his adult life in the 21st century—two hundred years ago. They haunted him, just as they had the poet, but they no longer applied to him.
He slid the book into the top drawer of his desk, then exited his cabin. It wasn’t that he didn’t want the other crew members to see the text, but they wouldn’t understand—not poetry, not books, not the innermost thoughts of humans. Why would they? They lived, like him, as machines—brilliant, inquisitive, even moral—but unlike him, they’d never experienced life as a human. He took his seat in the semicircle facing the ship’s visiscreen, the last crew member to arrive on the Delphi’s bridge. No one asked his whereabouts. They accepted his moments alone. Could artificially intelligent robots understand loneliness? They comprehended the concept, but the feeling—the ache of being alone, the primordial fear of abandonment, the terror of utter uniqueness in the universe—they had no way of understanding that as he did.
In front of them, Hypatia, tall and sleek, her sable ponytail cascading like a mane across one shoulder, leaned over the control console. Her hands resting on the arms of her pilot’s chair, she used her mental connection with the ship’s computer to change the image on the screen above her, causing the interminable lines of data to disappear, replaced by a panorama of black space, a void dotted with pinpoints of light—thousands of them. Hypatia served as the Delphi’s pilot, her electronic mind connected to the ship’s onboard computer. Even the captain’s commands to the ship went through her. She moved the cursor across the screen, then stopped, the dot metamorphosing into a square, then enlarging until a dim orange star emerged at its center. “That’s it. Ross 128,” she said.
Hero, the Delphi’s diminutive captain, stood and peered intently at the screen. “The image is optical, not virtual—right, Hypatia?” The captain was forthright, tough-minded, scientific, and logical, a proven warrior and leader, as well as the Solar system’s leading biologist. Despite their many clashes, Ezekiel respected him and was glad to be serving in his crew.
Hypatia turned to him, her dark ponytail flipping to her other shoulder. She had a long, oval face, a straight Greek nose, and almond-shaped brown eyes. Her complexion was a deep chestnut color, darker than the ivory, olive, or almond skin of her fellow crew members. Like every crew member, except Ezekiel, she had chosen her visage, as well as her gender, after she had been created. She was tall like the men, dressing in a thigh-length Greek-style chiton as they did, rather than the floor-length dress worn by women. She was a mathematician and scientist, not an artist or poet, the occupations chosen by most females in the Solarian culture. She was the fleet’s only female pilot and one of its best mechanics when those skills were needed. “It’s optical, sir,” she said. “We’re no longer at warp speed, so our optical sensors are functional. We’re seven million miles away from the star. The planet we’re headed for, Ross 128b, is just over five million miles from the star and has a slightly eccentric orbit. Right now, it’s 1.8 million miles from us. I’ve switched to the magnetoplasm engine, so our Euclidean Warp Drive’s energy won’t endanger the planet.” Their warp drive, with its voracious appetite for energy, could consume a whole world, and the accumulated particles and energy in the contracted space ahead of it would form a deadly explosive discharge when the ship dropped back into normal space and sub-light speed, with a belch of energy that could obliterate anything in its path.
“Good job, Hypatia,” Ezekiel said. “Destroying the first planet we intend to visit just seems wrong, somehow.”
Hero frowned at him. Hypatia rolled her dark eyes. Ezekiel gazed back at them. Would his fellow crew members ever develop a sense of humor? He might as well ask a biosensor to develop empathy.
Hero, their stocky captain, paced the bridge. His short legs moved in powerful, choppy strides as he surveyed the image on the visiscreen. Like all members of the Solarian race, the androids who occupied the Solar System after the extinction of human beings on Earth, he dressed in the attire of ancient Greeks. “Keep taking readings, Hypatia,” Hero said. “As soon as we’re close enough, we need to find out if 128b is inhabited.”
The tall woman knitted her forehead, a typical Solarian expression of confusion. “But it must be, sir. We’ve visited it before.”
The captain stopped mid-stride. “That was in another universe. There’s no guarantee things will be the same here.” He glanced at his chief engineer. “Am I right, Euclid?”
The chief engineer, characteristically disheveled, hair askew and spots of grease on his chiton, gazed at the ceiling, his eyes darting back and forth as if scanning an invisible computer screen. “No guarantee, but the likelihood is 99.863% according to my calculations. Of course, that’s an approximation. There are still some unknowns in my equations.”
Ezekiel suppressed his urge to laugh. There were times when Euclid really did seem like a calculating machine. He was an autistic savant, the solar system’s most gifted mathematician, an artificially intelligent Einstein, complete with the distraction, vagueness, and inattention to ordinary details of life that had characterized the 20th century genius.
Hero frowned. “Whatever the likelihood, we must prepare for the unknown. Complacency could be fatal.”
“It’s inhabited,” Hypatia said, her eyes glued to the biosensor readout. “And they’re sentient.”
Ezekiel clenched his fists, a habit left over from his human origin. The parallel universe inhabitants of Ross 128b had been human. That’s why the Delphi had returned to it. Let them be human again. He gazed up at the visiscreen. “What else can you tell us about the planet, Hypatia?”
The tall female pilot was absorbed in the readings from the bank of sensors displayed above her. “I’m picking up widely dispersed radio signals. It fits the pattern of a population living in small groups. On the dark side, there are clusters of lights, suggesting more population centers, none of them near the size of those on Earth in the days of the humans.”
“Aha! Villages!” Menander’s eyes lit up. He was the ship’s historian and an expert on human civilizations. In assuming his identity as a historian, he’d also adopted the visage of an old, bearded sage.
Radio signals, lights, villages. That meant people, although people who lived in far different circumstances from Ezekiel’s forebears on Earth. Like many planets within the habitable zone of their stars, Ross 128b was tidally locked. A celestial yin and yang, one of its hemispheres lay perpetually in darkness, the other in light. But unlike many tidally locked planets, its dark side was not a frozen no-man’s-land, nor its bright side a burnt and blackened inferno, like the imaginary Niflheim and Muspelheim from the Norse legends Ezekiel had learned in childhood. Its ample atmosphere, great expanses of water, and temperature differences from one side to the other, engendered strong wind and water currents, which served to moderate conditions on both sides of the planet, making them both habitable. In the parallel universe, they’d found the atmosphere breathable, hospitable to life.
“There’s a clearing near a village on the bright side where we can land.” Hypatia turned to Hero. “It’s near where we landed in the other universe, and it’s within walking distance to the village.”
“Good. Make preparations for landing.” Hero turned to Ezekiel. “We’ll let you make the first contact, Ezekiel. You’re more like them than any of the rest of us.”
Hero’s words made him gulp or would have if he were still human. “I hope you’re right,” he said, uncomfortably aware that his similarity to humans didn’t make him identical to one. “How much do we want to share with them about ourselves?” Do we tell them we’re machines? Both he and the other Solarians so closely resembled humans that another race would probably never suspect they were robots. He, himself, wasn’t just a copy of his creator’s brain. The body he’d chosen perfectly resembled the six-foot, dark-eyed, brown-haired Professor Ezekiel Job.
“Remember, our instructions are not to interfere in the internal affairs or development of another civilization,” Isidore said. The round-faced woman, her gray-flecked hair undulating in waves down her back, acted as the ship’s philosopher. Her words referred to the unwritten “Prime Directive” given to them from the Solarian Assembly, their civilization’s ruling body. In addition to not interfering in local affairs, they had been told to not reveal any technology, either related to their spacecraft or themselves, more advanced than that of the local culture. The aim of their mission was to observe, not to cause change.
“We are unable to lie,” Hero reminded them all.
“Speak for yourself,” Ezekiel said. Unlike the Solarians, he didn’t have the inability to lie built into his brain. How could he if his brain was human? For humans, lying was as natural as getting out of bed in the morning. Okay, so he didn’t get out of bed or even go to bed, but lying still came naturally to him, when it was needed.
“So, we have a dilemma,” Hero said. “Letting them know we’re machines, and that we can travel faster than the speed of light, violates our Prime Directive. On the other hand, to tell them anything else would mean we lie, which none of us but Ezekiel is able to do.”
“I’d say a 350-foot-tall spaceship landing in their midst is pretty much a giveaway that we didn’t arrive from the next village,” Ezekiel said. “And if they have any astronomy at all, they’ll know that we didn’t come from within their star system, since they’re the only planet that orbits their star.”
Hypatia turned to the captain. “So, we might as well be honest. Anything less than honesty will only undermine our relationship with these people.”
Hero nodded. “Let’s not forget that this isn’t the same universe, and whoever lives on this planet isn’t actually the same as those people we met before. They may not be as friendly. They could even be hostile.” He turned to Antonitis, the broad-shouldered, bearded lieutenant in charge of the small contingent of soldiers, who also doubled as the ship’s lab technicians. “Have your men be ready, just in case we need to defend ourselves.”
“My men are always ready, sir,” Antonitis said. The burly lieutenant had served under Hero in the Mycenaean war. He idolized the captain, and he kept his soldiers and their equipment in top condition, though no one expected to have to use them in a military capacity. Ezekiel knew that Hero kept Antonitis on a tight leash, the military man being a little too ready to jump into a fray.
Hearing the request and answer, Isidore turned around, directing a warning look at the captain. “We don’t want to appear hostile. Hostility provokes hostility.” As the Delphi’s philosopher, Isidore had a duty to remind them of the ethical principles applicable to each situation they encountered. Following those principles was a commitment every Solarian shared.
Frowning, Hero resumed his pacing. “We won’t provoke anyone. The soldiers will remain on the ship, but I want them to be ready in case they’re needed.”
“Why don’t we send them a message, sir?” Hypatia said. “I can tap into their communication system.”
The captain stopped pacing and sat, turning away from Isidore. “Ezekiel, what do you recommend? What should we say?”
“The same as before, sir. ‘We come in peace,’ and give them Sol’s stellar coordinates.”
Hero turned back to Hypatia. “Go ahead.”
“I just sent it, sir.” She stared at the communication console, then looked up, eyes wide with surprise. She turned to face the others. “You all need to see this.” She transferred the output from the radio sensors to the big screen. On the dark side of the planet, lights blinked off. Radio signals shut down one by one on both the dark and bright sides. Ross 128b was transforming into a dark and silent sphere.
The crew stared at each other.
Hero’s gaze went from one crew member to another. “Does anybody know what’s happening?”
“They may not have been expecting us, but they were expecting someone,” Menander said.
“What do you mean?” Hero said.
“They were ready for our signal. As soon as it appeared, they tried to hide their locations. It’s only a guess.” He looked at Ezekiel. “What do you think?”
“A message from space should have been a shock,” he said. “An unexpected event that would create mayhem, confusion, and planet-wide communication, everyone trying to figure out if it’s real and what it means. Instead, they’re taking cover. I agree with Menander. They were expecting something—or someone—who they think could be a threat.”
“What do we do?” Hero looked at him.
Ezekiel gulped—a persistent vestigial human reaction he couldn’t seem to get rid of. As cultural officer, first communication with an alien race was his responsibility. “Do just as we planned. Land the ship and learn who these people are and what they’re expecting.”
Hero searched the faces of his officers. Everyone nodded in agreement. “Okay. We’ll land in the spot Hypatia picked out. Let’s see how different this world is from the one we visited before.” He fastened his seatbelt then leaned back in his chair. “Hypatia, take her down.”
Enjoy it? Then buy the whole book. It’s the second volume in the series; the first volume, Ezekiel’s Brain, tells how Ezekiel and the Solarian race originated. But you can read Prime Directive on its own, without having to read Ezekiel’s Brain first. Each volume in the series is a stand-alone novel. Both Prime Directive and Ezekiel’s Brain are available on Amazon in paperback and Kindle versions. Ezekiel’s Brain is also available as an audiobook. Follow the links below to find them on Amazon. Click here for Prime Directive. Click here for Ezekiel’s Brain.
When Dune was first published in 1965, I was 22 years old. I remember the sense of intrigue and wonder I experienced trying to figure out the minds of the Fremen, the desert people; the Bene Gesserit, the matriarchal order that used a special, arresting voice; and, not least of all, the Sand Worms, and how they all affected one another, often under the influence of the exotic Spice. In the sixties, when we were learning that the mind had depths that could be explored, though not necessarily understood, under the influence of spice-like hallucinogens, Dune added another mysterious dimension to experience.
Reading Ray Nayler’s The Mountain in the Sea reminded me of that experience of reading Dune decades ago. It opened up another world, another way of experiencing reality, but this time the new perspective didn’t require traveling to a fictitious, distant planet. It was the world of a conscious, intelligent octopus here in Earth’s oceans. Nayler takes us inside the minds of a number of his characters. In fact, much of the intrigue of the book lies in experiencing the central situation from myriad perspectives, each of them with a different and only partial understanding, almost none of them cognizant of each other. There is a mysterious woman, whose identity is never revealed, whose actual face is even disguised, who works for an unidentified but powerful organization and orders the death of nearly everyone she talks to after they have served whatever purpose she needs them for. We go inside the minds of the people she uses, but neither they nor we, the readers, comprehend the larger picture that is determining her decisions.
On the island of Con Dao, where the research on the octopuses takes place, the main character of the novel, Doctor Ha Nguyen, an earnest marine scientist, narrates her own story as she tries to figure out why she has been given her position, what her employer’s motives are, and what kind of people she is working with: an AI emulation of a human and a dedicated guard/assassin who protects the island and the project. As we follow the progress in establishing communication with the octopuses, we also follow the progress of a captive fisherman who is determined to reach the island, and in doing so, we get a first-hand look at the predatory behavior of humans toward life in the sea.
Besides the motivations of the various human players in the story, the chief mysteries involve understanding both Evrim, the AI emulation, and the octopuses, that is, comprehending how they think and understand their world. We never really do. As in real life, we are unable to get beyond our own human, anthropocentric perspective, although Ha Nguyen goes far enough to understand such limitations and, at the end of the book, remains determined to break out of her human perspective.
While Dune was wildly imaginative but scientifically implausible, The Mountain in the Sea attempts to stick reasonably close to science, both with regard to AIs, which are as much a theme of the book as the octopuses, and with regard to the octopuses themselves. The science appears mostly in the reflections of Ha Nguyen and Dr. Arnkatla Minervudottir-Chan, the designer of both Evrim, the AI, and the octopus research project. There is an emphasis upon symbols and communication that reflects the point of view of Eduardo Kohn’s How Forests Think, a book without which Nayler says “this book would not have been possible.” (I’ve ordered the book.) This emphasis may have slanted Nayler’s view to make the octopuses more like humans than they actually are, but it provides a common ground for understanding between species, without which some of the meaning of the story might be lacking. Stanislaw Lem’s depiction of aliens in stories such as Solaris and The Invincible, which makes their way of thinking and understanding the world incomprehensible to humans, may be closer to the truth than a common ground of symbols, but future research will tell.
The effort to stretch our own minds to understand the world from the point of view of another species, either an organic one, such as the octopuses, or an inorganic one, such as Evrim, the AI, is an experience that science fiction, more than any other realm at the moment, can bring to us (although AI developments are rapidly catching up with sci-fi). One human insight on which The Mountain in the Sea directs a spotlight is that humans are predators, and we are prone to see the world through the lens of a predator: what profits our survival, especially access to food, but also territory, power, and dominance, is what determines our relationship with other species, animal or plant. Any human who has hunted another animal (and I suppose those who hunt for plants, such as mushrooms or edible tubers, are the same) has had the experience of tunneling their vision so that they don’t notice the beauty or uniqueness of the natural world around them while they are “on the hunt.” When I was a young man, I gave up underwater spearfishing because the underwater world lost its beauty and charm for me when I hunted. I returned to just looking and exploring. So, like most really good books, The Mountain in the Sea causes us to examine our own psyche as well as those of the non-humans the book is “about.”
I've read some fantastic science fiction in the last few years. Writers such as Arkady Martine, Julie E. Czerneda, Sylvain Neuvel, and Adrian Tchaikovsky have broken new ground, either in the concepts they've written about or in their elegant, original styles of writing. Ray Nayler joins this group on both counts, conceptually and stylistically. I was brought back to the kind of writing that made me think and wonder. Reading The Mountain in the Sea is a profound experience.
Interested in sci-fi about AIs solving moral dilemmas in a future that has them exploring our galaxy? Read Casey Dorman's Voyages of the Delphi novels: Ezekiel's Brain and Prime Directive. Available on Amazon. Click Here!
Subscribe to Casey Dorman’s Newsletter. Click HERE
Can a robot be moral? We may think that morality requires something especially human—emotions, a religious or spiritual point of view, a human heart—but almost every day we are learning that behaviors and thoughts we assumed were uniquely human can be duplicated by artificial intelligence. Two of the most prominent theories of moral behavior, Moral Foundations Theory (Haidt and Joseph, 2007) and Morality as Cooperation (Curry, 2016), base our moral values on evolutionary inheritance and social cooperation, respectively. While these theories point to genetics and culture as contributing to the formation of human values, they both suggest that morality is also malleable and teachable.
Isaac Asimov tried to address what he thought was the most important issue regarding robot behavior: how to make robots safe for humans. He came up with the Three Laws of Robotics, which were designed to keep robots from harming human beings. From the standpoint of survival, making sure our creations don't destroy us is certainly the most important issue, but I prefer to think of human survival as a baseline for AI moral conduct. At the least, we don't want AIs to wipe us out, but not doing so is hardly the epitome of moral behavior; in fact, it's a low rung on the ladder. If we want our AIs to behave at higher levels of morality, we need to define what moral behavior is.
Moral behavior can be thought of from two perspectives: we can say that some actions are in themselves moral or immoral, or we can determine what is moral by the consequences of those actions. The Ten Commandments, e.g., "Thou shalt not kill; thou shalt not steal," are classic examples of the first definition, morality as a quality of a specific behavior. In my Voyages of the Delphi series, which includes both Ezekiel's Brain and Prime Directive, the members of the AI race interpret the rules of their value system in such terms. They are not allowed to intentionally kill others, even other sentient robots. They also cannot lie. Alas, to survive, they end up violating both of these moral prohibitions.
Basing morality upon the consequences expected from one’s behavior allows more flexibility than restricting it to doing or not doing certain actions, but it can also excuse immoral behavior as a means to achieve a moral end. In other words, it allows rationalization.
As a science fiction writer, I can design AIs and robots and their moral codes any way I want. Neither an act-based moral code nor an outcome-based moral code alone seems to work. Promoting or prohibiting certain actions (e.g., "always tell the truth") is too rigid. Even the Golden Rule might fail if one were faced with a foe who was determined to cause harm to others and had to be stopped. Similarly, Kant's categorical imperative, to "act only in accordance with that maxim through which you can at the same time will that it become a universal law," would be a disaster if acted upon by a bigot such as Hitler, who was determined to exterminate an entire race. Likewise, a utilitarian morality in which actions aim at bringing the greatest happiness, or the least pain, to the greatest number allows the rationalization of horrific behaviors to achieve desirable ends and denigrates the happiness of the minority compared to that of the majority.
In my novels, the AIs follow act-based rules, such as always tell the truth, never kill a living being, and don't interfere in the affairs of an alien culture (the Prime Directive), but their rational minds can overrule these rules when the consequences would violate their overall moral values. How do we decide what moral goals, or values, to put into the AI robots we develop?
Probably the moral value on which it would be easiest to obtain agreement is not to take a human being's life. It seems wisest to make the prohibition against killing a human being a rigid one: it's not OK to do it under any circumstances. Such a prohibition would outlaw soldier or law-enforcement robots unless they were restricted to activities that didn't use lethal force. The best way to phrase such a prohibition might be "do not engage in a behavior that is likely to lead to the death of a human being."
We can extend “do not engage in a behavior that is likely to lead to the death of a human being” by replacing “death” with “physical harm,” as Asimov did. He also added, “or by inaction, allow harm to happen,” which we can also add, probably without increasing the risk of unintended consequences. Limiting harm to physical harm keeps it out of the subjective realm of emotional or mental harm, which is hard to define and has led to all sorts of complications, the most obvious being to interfere with free speech.
An action-based prohibition against physically harming humans would mean that outcome-based considerations, such as those encountered in the Trolley Problem, where one person is sacrificed to save five, would not affect the AI's decisions. The AI would not kill, no matter how many lives could be saved by doing so. Even if faced with a modern-day Hitler or Stalin, both of whom killed millions of people, the AI would be unable to harm such a person. This may seem immoral, or at least counterproductive, but once an exception is made to allow killing in order to achieve a moral end, the floodgates are open. A nuclear bomb can be used against civilians to bring a quicker end to a life-consuming war. A terrorist's family, including children, can be killed as punishment for an act of terror in order to convince other terrorists not to follow suit. Miscalculations can bring unnecessary deaths. AIs may be smart, even smarter than humans, but they are not infallible. They can be fed false information by scheming humans and fooled into doing harmful things, just as humans can. It's safer to make the prohibition against causing physical harm strict and impossible to override.
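The scheme described above, in which an act-based prohibition acts as a hard constraint that no outcome-based calculation can override, can be sketched in a few lines of code. This is a toy illustration, not a real safety system; the `Action` fields, the utility numbers, and the `choose_action` helper are all my own illustrative assumptions, not anything from an actual AI framework.

```python
# A minimal sketch of a hybrid moral-decision scheme: act-based prohibitions
# are checked first as hard constraints, and outcome-based (utilitarian)
# scoring is applied only to the actions that survive them.
# All names and numbers here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    likely_causes_human_harm: bool  # act-based property of the action itself
    expected_utility: float         # outcome-based score (e.g., net lives saved)

def choose_action(actions):
    # Hard constraint: prohibited actions are filtered out unconditionally,
    # no matter how high their expected utility is.
    permitted = [a for a in actions if not a.likely_causes_human_harm]
    if not permitted:
        return None  # refuse to act rather than violate the prohibition
    # Outcome-based reasoning applies only within the permitted set.
    return max(permitted, key=lambda a: a.expected_utility)

# The Trolley Problem: pulling the lever saves five but kills one.
trolley = [
    Action("pull lever (kill 1, save 5)", True, 4.0),
    Action("do nothing", False, -5.0),
]
best = choose_action(trolley)
```

Even though "pull lever" has the higher expected utility, it is removed by the hard constraint, so the sketch chooses "do nothing," mirroring the essay's point that the strict prohibition dominates any utilitarian calculation.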
A mandate to neither cause nor fail to prevent physical harm to humans would keep AIs or robots from becoming dangerous in the way most people have envisioned, but, as I said earlier, it is a minimal baseline for moral behavior. There are many things other than causing physical harm that an AI, in particular, might do that we wouldn't want it to do. It could control the online news, influence social media, violate our privacy, or alter our financial assets. The list is almost endless, which precludes a separate prohibition against each instance, since a clever AI can always find new ways to accomplish nefarious goals. The answer lies in making sure our AIs and robots are not nefarious, or at least are inhibited from pursuing nefarious goals. That requires some kind of umbrella moral value system that protects us by ensuring they will behave honorably. I can think of two.
What might come to mind for most people is some version of the Golden Rule, modified for machines, e.g., "Do unto others as you would have them do unto you, if you were human." This rule relies upon the AI knowing what humans like and dislike, which is not too difficult to learn. It might mean that AI or robot behavior ends up rooted in culture, since what humans like, and even what makes them angry, sad, or disgusted, has a cultural flavor. Prejudicial behavior toward a person who is gay, lesbian, or transgender is viewed positively in some cultures and negatively in others. In such cases, an AI would behave honorably in each culture, but its behavior might be very different depending on which culture it was in. We've seen what is probably just the tip of the iceberg in this regard, with AIs becoming internet trolls and formulating criminal sentencing guidelines that mimicked the racist sentencing behavior of historical courts. They were acting as a human would have wanted them to act. So a Golden Rule for AIs is a possibility, but it has its shortcomings.
In an obscure 1969 paper, O.H. Mowrer, one of America's most prominent psychologists, asserted that the "neurotic guilt" which psychoanalysis saw as the source of most mental illness was really a fear that one's antisocial thoughts or behavior would be found out. It is only a small step from Mowrer's premise to the hypothesis that what keeps most people from behaving antisocially is fear of being "found out," i.e., of their behavior being made public. Think of Raskolnikov in Crime and Punishment. In fact, our criminal justice system relies upon deterring antisocial behavior by revealing it and publicly punishing it. So does much of our society, using word of mouth and the media instead of the courts.
If what prevents most people from behaving antisocially is the fear that their behavior will be made public, then the threat of disclosure is itself a way to keep them in line. In authoritarian societies, the fear of being "informed on" stops people from plotting against the state, e.g., "Big Brother is watching you." This suggests that a requirement that all AI and robot behavior must be unconcealed, that is, publicized and admitted to by the AI or robot, is a way to ensure that they behave according to accepted norms of society. It's not that they would feel guilty, of course, but that their actions and their potential consequences would be transparent to the public, as would the motives of those using them for nefarious ends. Unlike humans in our society, AIs and robots would have no right to privacy. They could not be used for secret schemes. They would be the "blabbermouths," the "snitches" who inform on the schemer. Whatever they did would be open to public comment and censure, even punishment for those who employed them to do it. The moral rule they would be following might be called "being open and honest about everything they do." Although the reaction to such disclosure would depend on what was approved or disapproved in the culture in which the AI operated, much as with the Golden Rule, at least the wider world would be aware of its actions.
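The "no concealment" rule sketched above can also be put in code: every action the AI takes is disclosed to a public record before it is carried out, including who requested it. Again, this is a toy sketch under my own assumptions; the class name, the log format, and the in-memory list standing in for a genuinely public record are all hypothetical.

```python
# A toy illustration of the transparency rule: an agent that unconditionally
# records every action, and who requested it, in an append-only log before
# acting. In a real system the log would be external and publicly readable;
# here a Python list stands in for it. All names are hypothetical.

import datetime

class TransparentAgent:
    def __init__(self, name):
        self.name = name
        self.public_log = []  # append-only record standing in for a public log

    def act(self, description, requested_by):
        # Disclosure happens unconditionally, before the action itself,
        # and names the requester -- the AI cannot be used as a silent tool.
        entry = {
            "agent": self.name,
            "action": description,
            "requested_by": requested_by,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self.public_log.append(entry)
        return entry

agent = TransparentAgent("delphi-unit-1")
agent.act("summarize user's financial records", requested_by="acme_corp")
```

The point of the design is that there is no code path that performs an action without first appending to the log, which is the machine analogue of the essay's "being open and honest about everything they do."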
This is a start. This small set of moral values (do not engage in a behavior that could lead to physical harm to a human being, and make everything you do public) could suffice to produce "friendly" behavior in an AI or robot, in the sense that it would not produce a negative outcome for human beings. It's possible that there are better solutions, such as strict regulation and punishment of the people who create AIs and robots if their creations prove harmful, but time will tell what works and what doesn't. Hopefully, we will figure it out before an inadvertent catastrophe becomes a reality. In the meantime, it's up to our science fiction writers to flesh out the scenarios in entertaining thought experiments, so we can anticipate what might happen in the future and better decide what course to follow.
References
Curry, O. S. (2016). Morality as cooperation: A problem-centred approach. In T. K. Shackelford & R. D. Hansen (Eds.), The Evolution of Morality (pp. 27–51). New York: Springer.
Haidt, J., & Joseph, C. (2007). The moral mind: How five sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules. The Innate Mind, 3, 367–391. doi: 10.1093/acprof:oso/9780195332834.003.0019
Mowrer, O. H. (1969). Conflict, contract, conscience, and confession. Transactions (Department of Psychiatry, Marquette School of Medicine), 1, 7–19.
Brace yourself for an electrifying saga of moral quandaries, superhuman species, and interstellar battles that will ignite your imagination!
A stand-alone sci-fi thriller that continues the interstellar voyages of the most unusual starship crew you’ve ever encountered.
Ezekiel, an android whose brain was copied from a human, has emotions and a sense of humor that make him unique among the AI crew members of the starship Delphi. His fellow AIs are strictly logical and follow rigid rules that prevent them from using their extraordinary powers to interfere with or harm other races. Those rules are tested on Trappist-1, a star system shackled by oppression and injustice. The Delphi's newest crew member, a compassionate human empath, urges them to violate their rules and aid the oppressed population. Ezekiel agonizes over the dilemma. When a fleet of predatory aliens arrives to plunder the planets' resources, he and the crew must choose—stay and fight or follow their Prime Directive and leave. Click Here!
Read the reviews for Prime Directive.
“Both intellectually stimulating and completely absorbing, Prime Directive wrestles with the future of AI within the context of a wildly entertaining adventure.” Best Thrillers
“If you love adventures, Prime Directive is the one for you because of its grand narrative. I rate it five out of five stars!” Online Book Club
“Loved it! A familiar saga with space travel AND a prime directive mandate AND intense drama gets upgraded with artificial intelligence and philosophy.” Reedsy Discovery
“A deep exploration of the relationship between human and artificial intelligence.” Elizabeth Conte, award-winning author of Finding Jane
“Wonderfully satisfying and artfully cagey. The AI novel for our time!” Michael Hartnett, best-selling author of The Blue Rat and Death Canal
Watch a Podcast with Tim Elsom interviewing Casey Dorman on Eye4AI
Haven’t read Ezekiel’s Brain? A ground-breaking sci-fi novel that witnesses the origin of Ezekiel and the Solarian race of AIs. Get it on Amazon.
Given the historical proclivities of human beings, it is overwhelmingly likely, if not inevitable, that wars between different groups, usually nations, will occur. In an effort to ensure that, when this happens, the lives of innocent civilians are protected, international laws have been developed that define which actions, even in the pursuit of war, are considered criminal. These apply both to international war, i.e., between nations, and to internal conflicts, i.e., not between different nations. They cover the Hamas attack on civilians in Israel as well as Israel's response, and of course the actions of any nation or group that engages in war, declared or undeclared, including the United States. I have listed only what appear to be the relevant crimes that may apply to one or both sides in current conflicts.
As an American, and a citizen of a country that, in its founding documents and subsequent pronouncements, including its endorsement and ratification of the 1949 Geneva Conventions Relative to the Protection of Civilian Persons in Time of War, has championed humanitarian values, I do not support my own government engaging in any of the following war crimes. I also do not approve of my government providing material or diplomatic support to any other nation or group that engages in such war crimes.
War Crimes as defined by the United Nations Office on Genocide Prevention and the Responsibility to Protect (https://www.un.org/en/genocideprevention/war-crimes.shtml) and as listed in the 1949 Geneva Conventions Relative to the Protection of Civilian Persons in Time of War.
The 1949 Geneva Conventions have been ratified by all Member States of the United Nations, while the Additional Protocols and other international humanitarian law treaties have not yet reached the same level of acceptance. However, many of the rules contained in these treaties have been considered as part of customary law and, as such, are binding on all States (and other parties to the conflict), whether or not States have ratified the treaties themselves.
War crimes contain two main elements:
A contextual element: “the conduct took place in the context of and was associated with an international/non-international armed conflict”;
A mental element: intent and knowledge both with regards to the individual act and the contextual element.
Relevant War Crimes:
1. Grave breaches of the Geneva Conventions of 12 August 1949, namely, any of the following acts against persons or property protected under the provisions of the relevant Geneva Convention:
Wilful killing (intentionally killing one or more people who were supposed to be protected by the Geneva Conventions; the killer knew they were breaking these rules, and the killing happened during a war that involved more than one country);
Wilfully causing great suffering, or serious injury to body or health;
Extensive destruction and appropriation of property, not justified by military necessity and carried out unlawfully and wantonly;
Unlawful deportation or transfer or unlawful confinement;
Taking of hostages.
2. Other serious violations of the laws and customs applicable in international armed conflict, within the established framework of international law, namely, any of the following acts:
Intentionally directing attacks against the civilian population as such or against individual civilians not taking direct part in hostilities;
Intentionally directing attacks against civilian objects, that is, objects which are not military objectives;
Intentionally directing attacks against personnel, installations, material, units or vehicles involved in a humanitarian assistance or peacekeeping mission in accordance with the Charter of the United Nations, as long as they are entitled to the protection given to civilians or civilian objects under the international law of armed conflict;
Intentionally launching an attack in the knowledge that such attack will cause incidental loss of life or injury to civilians or damage to civilian objects or widespread, long-term and severe damage to the natural environment which would be clearly excessive in relation to the concrete and direct overall military advantage anticipated;
Attacking or bombarding, by whatever means, towns, villages, dwellings or buildings which are undefended and which are not military objectives;
Intentionally directing attacks against buildings dedicated to religion, education, art, science or charitable purposes, historic monuments, hospitals and places where the sick and wounded are collected, provided they are not military objectives;
The transfer, directly or indirectly, by the Occupying Power of parts of its own civilian population into the territory it occupies, or the deportation or transfer of all or parts of the population of the occupied territory within or outside this territory;
Utilizing the presence of a civilian or other protected person to render certain points, areas or military forces immune from military operations;
Killing or wounding treacherously individuals belonging to the hostile nation or army;
Committing outrages upon personal dignity, in particular humiliating and degrading treatment;
Declaring that no quarter will be given;
Intentionally using starvation of civilians as a method of warfare by depriving them of objects indispensable to their survival, including wilfully impeding relief supplies as provided for under the Geneva Conventions.
I love clever, cynical humor and characters who express it: the wise-cracking private eye, the hapless author or actor who comments on the filthy underside of their business, or even, as in Catch-22, the soldier whose sarcastic observations expose the pointlessness of war. I never imagined I would be reading the sharp-witted, misanthropic account of one of Santa's elves as he describes the gradual erosion, then downfall, of Christmas. I never imagined it because elves don't exist, and neither do Santa Claus, North Pole toy factories, or flying reindeer. But in Colin Dodds' new novel, The Reign of the Anti-Santas, all of them exist, and they're nothing like you or I might have imagined.
Although Santa is at the heart of the story, the old man in red remains a mystery. Indestructible, laconic, devious, lascivious, larcenous, it’s not even clear if he’s a good guy or a bad guy, but he’s a necessary guy. Christmas won’t work without him, and it turns out that if Christmas doesn’t work, not much else works either.
Elvin is the cynical, scheming, and often lonely elf who tells the story, and his adventures are the main focus of the narrative, although much of the time he seems pulled and pushed by events and by the whims and actions of others, coming in only to help with the recovery from one Christmas disaster after another. He's a fixer. What he has to fix is how to keep Christmas going after Santa retires, while various corporate conglomerates attempt to keep the season, and all its magical trappings, going as a steady annual flow of cash.
For the story to work, not only did Santa and his elves have to be real, but so did Mrs. Claus, Santa's kids, and the elf toy factory, run by elves who are nearly indestructible, except that they can choose to suicidally turn into pine trees. Reindeer exist to become a staple of the North Pole inhabitants' diet, but those that fly can, if they're good enough at it, join Santa's eight-reindeer team on Christmas Eve. Even Rudolph shows up to make his famous run through the fog, but in this story he then retires to the U.S. to live a life of ease on the royalties from his famous ride and the song that came with it.
The novel's action doesn't stay at the North Pole. After Santa's retirement, Elvin moves to Orange County, California, works at various luxury car dealerships, courts women, and arranges shady deals to try to recruit one or another of Santa's children to take the old man's place. As Elvin tells it, the mixture of the magical North Pole, which soon becomes sought-after real estate for marginal human characters and corporate functionaries, and a morally bankrupt corporate world manipulating naïve and self-deluding humans provides a rich backdrop for one scheme after another to keep Christmas going while milking it for all the cash it's worth. Eventually, the immorality and downright criminality of the whole line of Santas, beginning with the original and running down through each of his sons, sabotages Elvin's efforts. With the FBI at his door and the fate of the world depending upon him, Elvin has to take the fall for the nefarious deeds of the many Santas, including the first of the line.
I must say I was delighted and laughing all the way through the book. In the early chapters, I had the feeling that the novel might be like a Pete Davidson monologue: smart and funny, but not really headed in any particular direction. I soon became entranced, though, by the plot and by the question of how Elvin was going to pull Christmas out of its downward spiral at the hands of money-grubbing humanity. Elvin gets battered, freezing his toes and losing an eye, but even as he tries to get away from the meat-grinder that the holiday season and its North Pole trappings have become, he is pulled toward it, as its role in holding together the fabric of humanly constructed reality, including the experience of time, becomes apparent to him.
Elvin is an elf with a conscience. How he saves things, or at least how things are saved, is an improbable (in truth, impossible) story that reveals insight after insight into the actual world the rest of us occupy and the way we weave stories of magic, heroism, and generosity around events that are most often business advertising schemes, and how easily we are manipulated, and satisfied with platitudes and bromides. In that sense, The Reign of the Anti-Santas is a parable—a labyrinthine and hilarious one—about the society we have created and its dependence upon underlying mythologies whose ultimate purpose is to further the prospects of those who are best at taking what they want. All in all, it’s a tremendously entertaining story.
I was kindly given an advance reading copy of The Reign of the Anti-Santas by the author.
Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Coming soon! The second novel in the Voyages of the Delphi series: Prime Directive. Release date: November 7, 2023.
In 1950, Alan Turing wrote a paper for the journal Mind called "Computing Machinery and Intelligence." The first topic he discussed was "The Imitation Game," which became known as the Turing Test. The idea was that an interrogator posed written questions to two parties, at first a man and a woman, then later a man and a machine. If, on the basis of the answers to the questions, the interrogator could not distinguish the man from the machine, then the conclusion would be that machines can "think." In fact, to escape the ambiguity of defining either "machine" or "think," Turing chose to replace those terms with an observable question, which was simply, "is it impossible to distinguish responses produced by a machine from those produced by a human?"
Turing's paper is a tour de force, worth reading even today. He works through all the objections he can think of, either to the validity of such an exercise or to the basic idea that a machine could produce responses indistinguishable from those of a human. One of the most perceptive is referred to as "Lady Lovelace's Objection," apparently made originally against Charles Babbage's claim that his "analytical engine" could duplicate human thinking. According to Turing's source, Lady Lovelace pointed out that Babbage's machine could not produce any original behavior of its own devising, since it could only "do whatever we know how to order it to perform." This is the 19th-century version of the claim that computers and artificial intelligences "can only do what they're programmed to do." Turing counters it with the observation that he was often surprised by what computing machines did beyond what he intended them to do, and that a machine that learned would be able to originate new behaviors (which is what defeats the objection for today's neural-network AIs).
Turing parries a host of other objections regarding why, in principle, a machine could not do what humans do, and two of his most interesting responses come under the headings "The Argument from Consciousness" and "Arguments from Various Disabilities." In the case of the former, he quotes a Professor Jefferson, who said, "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it." Turing's response is that because we cannot know for sure what is going on inside anyone else's head, Jefferson's objection appears to require one "to be the machine and to feel oneself thinking." Since becoming the machine is impossible, he opts for his Imitation Game as the best realistic option for deciding the question.
In the case of the “Arguments from Various Disabilities,” he says, “These arguments take the form, ‘I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X.’” He goes on to discuss a number of things it is claimed are the province of humans but not machines, from “falling in love” and “enjoying strawberries and cream” to being the “subject of its own thought.” While he considers each, he concludes by saying, “The criticisms that we are considering here are often disguised forms of the argument from consciousness.” I think he is correct.
Turing’s analysis shows us the direction to take in escaping from the confines of his Imitation Game, i.e., the Turing Test. In a time such as now, when professors are trying to figure out how to tell if their students’ papers were written by students or by ChatGPT, or when art exhibition judges can’t tell AI-produced art from human-produced art, and literary journal editors are hunting for methods to distinguish AI-generated stories from human-generated stories, I think it’s safe to say that the Turing Test is outmoded, or at least on its last legs, as a useful method of deciding whether machines have reached a human level of thinking. In some sense, the ball is now in the court of the humans, rather than the machines or machine developers. We need to come up with the “X” that humans can do, and machines can’t.
The "Duck Test"—if it looks like a duck, swims like a duck, and quacks like a duck, it probably is a duck—is often cited as an example of abductive reasoning, i.e., the selection of the simplest and most likely explanation for a set of observations. Most often it is seen as a form of "justification" of a conclusion, although the originator of the idea of abductive reasoning, Charles Sanders Peirce, saw it as a method for generating hypotheses (Peirce Edition Project, 1998). We depart from the intention of the traditional duck test in that we are not seeking to convince anyone that AIs are human, only that they operate like a human. Partly, that is because our definition of human includes much more than what goes on in humans' minds, while we are concerned only with the mind of an AI. We do not demand that it look like a human, swim like a human, run like a human, have sex like a human, or bleed like a human. Like Turing (or John Searle with his Chinese Room argument), if we made our decision by comparing the responses of AIs and humans, we would need to put them out of our sight, or we would immediately see the difference. But we haven't yet determined that some sort of interaction with an interrogator, or observation of behavior, is the way to determine whether X is present. In fact, such comparisons of AI and human responses underlie much of current research, which takes known human capabilities on tasks such as doing mathematical problems, reasoning abstractly, creating images, playing games such as chess or Go, answering questions (or even generating questions, as in playing Jeopardy), writing essays, poems, or fiction, and conversing. With each new edition of the latest AI, there are fewer and fewer of these activities in which AIs and humans produce different responses.
The challenge is to come up with something that humans do, involving their minds, that AIs can't do. Not just something they don't do, but something no AI yet developed is able to do. A second proviso may be that the AI needs to do it in the same way that a human does it. This can be tricky. One reason to think it's important is that Vladimir Vapnik, discussing the "duck test" not as a test for an AI but in the context of teaching a machine to correctly identify a duck, pointed out that a machine that uses deep learning to correctly classify ducks after "zillions" of training trials is not showing human-like intelligence, since humans can do just as well with minimal training. According to Vapnik, the demonstration that deep learning can produce 99.9% correct classification tells us nothing about how humans do the same thing, which includes coming up with the predicates that distinguish ducks in the first place. But Vapnik may be wrong. Improvements in learning algorithms, better training procedures, or more computing power could bring AI training trials down to a number resembling what it takes for a human to learn a concept. It also seems clear to me (but not to an internationally respected expert on machine learning, so you decide whom to listen to) that unsupervised learning could allow an AI to generate the predicates most useful for distinguishing one thing from another, including ducks from non-ducks. These are empirical questions.
What is really tricky about asking that an AI do the task “the same way” a human does it is that, in many cases, we don’t know how humans do it. Unlike Vapnik, I wouldn’t say that we have “no idea” how humans do it. Neural networks and learning procedures such as reinforcement learning are modeled on at least abstract representations of the human nervous system and learning mechanisms, but they clearly do not, so far, operate at the same level as humans do. In addition, cognitive and neurocognitive psychologists have developed detailed theories of how humans learn, make decisions, and choose actions. The burden falls on their shoulders to come up with a description of how humans do something that AIs, it seems likely, can’t do at all.
I’ll stop beating around the bush and say, directly, that I agree with Turing that it will all come down to the “argument from consciousness.” What will distinguish a human from an AI is that a human is conscious, while an AI may or may not be. But how do we establish that a human is conscious using evidence that could also form the basis of an assessment of an AI’s consciousness? In a recent paper with the long title “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” (Butlin et al., 2023), nineteen computer scientists, neuroscientists, cognitive psychologists, and philosophers examined scientific theories of consciousness and assembled a list of “indicators,” properties or processes that, if present in an artificial intelligence, might suggest that it could achieve or already had achieved consciousness. After listing 14 such indicators, they determined that some existing AIs satisfy some of them, but none satisfy all or even a majority. Of course, the question is, if an AI did possess all or many or even just a few crucial indicators, how would we determine that it was conscious? The authors admit that they have no behavioral criteria for determining whether the machine is conscious. They call their approach “theory-heavy,” in the sense that they are looking for whether an AI system meets “functional or architectural conditions drawn from scientific theories,” instead of whether it meets behavioral criteria for consciousness.
Philip Goff (2023) says that consciousness is a “fundamental datum,” and we know it exists because it is privately observable, not because it is postulated to exist in order to explain observable behavior. He also says it is, by its nature, necessarily private and not observable by anyone but the person who possesses it. This presents a challenge. In the Imitation Game, we have an interrogator who decides whether they are communicating with a person or a computer, but if our criterion is possessing consciousness, both the person’s and the computer’s consciousness will be unobservable to the interrogator. This leaves us two options: either we devise a method for assessing the presence of consciousness that does not require direct, public observation, or we disprove Goff’s thesis and show that consciousness is publicly observable. There are avenues for pursuing both of these ends.
It is conceivable that certain relationships among electrical discharges in the brain, either in terms of patterns of discharge across topographically arrayed neurons (e.g., Baars and Geld, 2019) or in terms of degree of integration of discharges across the whole or part of the brain structure (e.g., Tononi, 2008), could be so closely correlated with reported conscious activity that we could use the presence of such relationships as evidence of consciousness. If those same relationships could be identified across the electrical firings of an AI, they could be taken as evidence of conscious thought. With regard to direct observation of consciousness, most philosophical and even some neuroscientific definitions of consciousness, including the one used in the paper by Butlin et al., rely on Thomas Nagel’s famous formulation of “what it’s like to be a…” (in Nagel’s 1974 paper it was a bat, but the bat was a stand-in for a human). Of course, we know what it’s like to be a human, not just from our own experience, but from the personal accounts of other humans. We listen to conscious thought when we listen to each other speak. The mathematician puts their conscious thoughts on a blackboard as they reason out their equations (today they probably use an iPad). So, what if an AI wrote a personal account of “what it’s like to be an artificial intelligence” and told us its life story, including all of its experiences? Would it be making its consciousness observable? Remember, the Duck Test is abductive reasoning. It only needs to satisfy us that what it yields is the simplest and most likely explanation for an event. I would say that if an AI produced an autobiography, including thoughts about its own thoughts, that sounded reasonable given its known history, then the simplest and most likely explanation would be that the AI was conscious.
So, the time has come. Let’s jettison the Turing Test and welcome the Duck Test. The next step is to see if there is any evidence that convinces us that an AI is conscious. I don’t mean armchair, thought-experiment evidence, but genuine, controlled research findings indicating that the AI not only talks like a human, solves problems like a human, and reasons like a human, but also produces evidence of consciousness that is indistinguishable from the evidence produced by a human.
References
Baars, B. J., and Geld, N. (2019). On Consciousness: Science and Subjectivity—Updated Works on Global Workspace Theory. New York, NY: The Nautilus Press Publishing Group.
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., … & VanRullen, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708.
Goff, P. (2023). Why? The Purpose of the Universe. Oxford: Oxford University Press.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83, 435–450.
Peirce Edition Project (1998). “A Syllabus of Certain Topics of Logic” (1903). In The Essential Peirce: Selected Philosophical Writings, Vol. 2, 1893–1913.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.
Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215, 216–242.
“Philosophy is a battle against the bewitchment of our intelligence by means of our language.” (Wittgenstein, Philosophical Investigations §109)
I recently watched the 1990 film Paris Is Burning, which documents the New York ballroom culture, a complex performance-oriented subculture composed mostly of gay black men who compete at balls, where they act out highly dramatized versions of identities. The culture is supportive of its members, who “live” for the balls, whether they are performers or audience, and many of them define themselves in terms of their role in the culture. It was a perfect example of humans making meaning in their lives, in this case in spite of marginalization by the majority society.
When I read Philip Goff’s Why? The Purpose of the Universe, I was shocked to find that anyone, much less a well-known academic philosopher, believed that the meaning of one’s life depends mostly on whether the universe one lives in has a purpose.
According to Goff, life is not truly meaningless if the universe has no purpose, but our lives are richer if it does. In his words, “if it turned out there were a cosmic purpose, one that was good and that we could contribute to with our thoughts and actions, that would be tantamount to winning the lottery.” So, for us to benefit from cosmic purpose, it must not only exist, but have value in the sense of being “good.” It turns out that his evidence for cosmic purpose is based on so-called “fine-tuning” of the values of a number of physical constants that allow life to exist.
In Goff’s view, the universe has a purpose, which is to allow life to exist. This situation entails a great number of prerequisite conditions. There must be a way for the universe to make life possible and the universe must decide to choose that way. In order to make choices, the universe must be conscious. The universe must have values, because having a purpose entails values. Because the universe has a purpose with value, human lives can enhance their meaning by aiding the universe to realize its purpose. Goff addresses each of these issues, although not in the order I have presented them.
Goff first addresses values. The universe’s values are something that had to exist as what he calls “value facts.” They are fundamental facts in the sense that their existence is not derived from prior considerations. Because these facts characterize the universe, they must have existed before there were human minds to conceptualize or define them. This could be a conundrum, since it means value must have been defined without the benefit of minds to do the defining, except that he also argues that the universe is conscious, so, in effect, a mind has existed as long as the universe has. That still does not solve the problem of how a mind existed before the universe did, which would be necessary if what was being chosen was the form of the universe, as defined by its physical constants. He solves this problem by hypothesizing that during the earliest moments following the Big Bang, the less-than-a-millisecond “Planck Epoch,” the conscious universe chose, from an array of possible values for those constants, the ones that would permit life to develop. He gives no indication of where such consciousness came from, except to say that the quantum wave form probably “underlies” it. Goff’s definition of the value fact for which he believes there is evidence is that the universe “is directed towards some higher state of being.” The rest of the book discusses evidence for cosmic purpose and a value-driven universe.
Central to Goff’s attempt to demonstrate that the universe has purpose is his “Value-Selection Hypothesis,” which says, “certain of the fixed numbers in physics are such as they are because they allow for a universe containing things of significant value.” We assume he means us. He sets against his hypothesis what he calls the “crazy fluke” hypothesis, saying, “This is our choice: crazy fluke or value playing a role in shaping our universe.”
Another name for the crazy fluke hypothesis is chance selection, or randomness: certain of the fixed numbers in physics are as they are out of pure chance or because of a random turn of events. Why the emphasis upon “certain of the fixed numbers in physics”? For those who are not familiar with the topic, it has long been a puzzle why, or how it came about, that a number of the constants found in physics have values that fall in exactly the range, sometimes quite narrow, that allows life to develop. These constants include the strong nuclear force, the mass difference between protons and neutrons, the proton-to-electron mass ratio, the mass difference between up quarks and down quarks, the fine-structure constant (the strength of the electromagnetic interaction between elementary charged particles), the ratio of the gravitational force to the electromagnetic force, and the cosmological constant (the energy density of space, or dark energy), among others. In the case of some of these constants, the consensus is that they are necessary for enough carbon to be present in the universe to support life, which, as far as we know, is always carbon-based. In the case of others, if their values were not as they are, even stars that live long enough to support the development of life, or produce planets, or sustain the atomic dynamics that allow hydrogen interactions to create other elements, would not be possible. Fine-tuning is not just responsible for our own presence in the universe; it is necessary for our present universe to exist at all.
Fine-tuning is a concept that is generally recognized among the scientific community and is variously seen as either an unexplainable mystery (“that’s just the way the universe is”), a reflection of a deeper physics that makes these values necessary but which we don’t understand (Einstein’s view), just one set of values among many, perhaps an infinite number, of values that these constants have taken in many “multiverses,” or an illusion, in the sense that the ranges of these constants necessary to allow life are not so rigidly circumscribed as we think, or they have changed over time and continue to change. Finally, there is the “anthropic” idea that what seems unlikely is, in fact, the only circumstance that could have held true if we are to be present to observe it. By this theory, our universe could be the only one that exists or one of many, but, either way, it’s the way it is because it’s the one with us in it to observe it. A comprehensive review by the Foundational Questions Institute (Frankel, 2022) details both the latest findings on the topic and each of these theories. Perhaps importantly, it does not mention the possibility that the “fine-tuned” values of these constants are the result of cosmic purpose.
For his part, Goff relies on a figure taken from Barnes (2020) of 1 in 10^136 “odds” that these constants would have the values necessary for life as strong evidence that the universe has purpose. Given the number of unknowns, both with regard to the range of values that some of the constants could take and still allow life and our lack of knowledge of the effects of a change in one constant’s value on the value of another, plus evidence that some of the values may have changed over time, not to mention the anthropic principle or the theory of multiverses, it seems hazardous to place much faith in such a calculation of these odds.
Controversial as the conclusion is, Goff takes the low likelihood of the physical constants having values that allow life to develop simply by chance as evidence that the chance hypothesis is unlikely to be true. Goff cloaks his reasoning in Bayesian language, which gives it an air of certainty, but, given what we don’t know, it is simply a pseudo-formalization of his intuitive reasoning and preconceptions. He attempts to compute the likelihood that the values of the physical constants would be such that they “are compatible with life” if the Value-Selection Hypothesis is true, versus if it is false. He points out that he is actually talking about how much credence he would put in the belief that the Value-Selection Hypothesis is true. His conclusion is that the truth of the Value-Selection Hypothesis justifies “fairly high” credence, while its falsity justifies “extremely low” credence.
While we can argue with the facts and reasoning that justify his dismissal of the possibility of a universe with finely tuned constants arising by chance, what is more problematic is Goff’s acceptance of his Value-Selection Hypothesis as the favored alternative to chance. He admits that Bayesian reasoning of the type he is using can lend credence to some far-fetched hypotheses, such as the claim that what appears to be a portrait of Jesus in the pattern of grains on a piece of toast is evidence that “Jesus specifically wants to communicate with me through toast” rather than that “a Jesus-shaped image appeared by chance.” He dismisses the relevance of such counterintuitive hypotheses because, “in contrast to Jesus in the toast, the fine-tuning evidence for cosmic purpose is ludicrously strong.” That’s a strong statement, considering that Luke Barnes (2020) uses the same evidence and the same Bayesian reasoning that Goff uses to examine the thesis that God created such a fine-tuned universe. His conclusion is that “the likelihood that a life-permitting universe exists based on theism is not vanishingly small,” while the likelihood that such a universe exists by chance is “vanishingly small.” Presumably “not vanishingly small” is something less than the “ludicrously strong” claim that Goff makes. Barnes’ theism is based on his positing an omnipotent and good God, but in terms of logic, Descartes’ Deus deceptor, the malicious demon that could trick him into believing an illusion of the real world is the real thing, would be just as likely as a benign God to produce a universe whose constants were fine-tuned for life, at least as far as the experiences of the scientists who measured such constants are concerned.
Goff’s (and Barnes’) Bayesian reasoning can give credence to almost any hypothesis because they both compare it to a rival hypothesis that they claim has a “vanishingly small” likelihood of being true (despite the fact that most scientists have treated the chance hypothesis as roughly as credible as its rivals). Nick Bostrom, in a famous 2003 paper, argued that, mathematically, it may be exceedingly likely that we are living in a computer simulation. Frankel (2022) considers Bostrom’s proposal in terms of the likelihood that it would produce fine-tuning and concludes that it would; using the same reasoning as Goff and Barnes, she finds that the simulation hypothesis would also be more credible than the chance hypothesis as an explanation of fine-tuning. Ironically, Frankel suggests that the main reason for rejecting either the simulation account or any other version of a purposefully created universe is that the fine-tuning we observe is not, in fact, optimal for life. For several of the constants, slightly different values would have given life an even better chance to emerge. She raises the question: if someone, or something, or the universe itself created constant values that were favorable for life, why would it have created suboptimal ones?
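To see how this style of Bayesian argument can inflate almost any hypothesis, it helps to make the arithmetic explicit. The sketch below uses made-up illustrative numbers, not Goff’s or Barnes’ actual figures: a huge likelihood ratio against a “vanishingly small” rival settles nothing until the prior odds of the favored hypothesis are themselves taken seriously.

```python
def posterior_odds(prior_odds, likelihood_h1, likelihood_h0):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (likelihood_h1 / likelihood_h0)

# Made-up numbers for illustration: the fine-tuning "evidence" is likely
# under a purpose hypothesis and astronomically unlikely under chance.
p_evidence_given_purpose = 0.9
p_evidence_given_chance = 1e-136   # a "1 in 10^136"-style figure

# Starting from even prior odds, the likelihood ratio decides everything:
even_prior = posterior_odds(1.0, p_evidence_given_purpose, p_evidence_given_chance)
print(f"{even_prior:.1e}")   # 9.0e+135 -- "ludicrously strong"

# The same evidence and the same arithmetic, but with tiny prior odds for
# the hypothesis (the skeptic's "Jesus in the toast" prior), yields tiny
# posterior odds instead:
skeptical_prior = posterior_odds(1e-140, p_evidence_given_purpose, p_evidence_given_chance)
print(f"{skeptical_prior:.1e}")   # 9.0e-05
```

The point of the sketch is that the conclusion is carried almost entirely by the prior one is willing to grant the purpose hypothesis, which is exactly the quantity Goff’s argument leaves unexamined.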
In addition to Frankel’s argument that the observed values of the physical constants are not optimal for life, those values are not the only variables involved in determining the likelihood of life emerging. If the universe were created with the goal of permitting life, why is so much of it uninhabitable? Ignoring the uninhabitable space filled with dark energy and black holes, and focusing just on planets, recent estimates are that two-thirds of planets are outside of the habitable or “Goldilocks” zone (Sagear & Ballard, 2023). Of the remaining one-third that are in the habitable zone, up to three-quarters “may not be good for life” (Crane, 2023). This leaves only 8.25% of planets with even the possibility of life. Additionally, the most recent data from the James Webb Space Telescope suggest that some habitable-zone planets that are “tidally locked,” always showing the same face to their star (which may, in fact, be the majority of habitable-zone planets), undergo sudden and “sporadic” shifts in their rates of rotation. For periods of time, they are no longer tidally locked and experience dramatic shifts in days and nights, temperature, oceanic tides, and weather, which could be catastrophic to life on those planets (Shakespeare and Steffen, 2023). Thus, habitability on the universe’s planets would be reduced below the 8.25% upper limit. In fact, it may turn out that a “vanishingly small” percentage of planets in the universe are able to support life. But even a vanishingly small percentage of a number variously estimated at between 70 quintillion and 100 sextillion (that’s a number followed by 18 to 21 zeroes) is a lot. Still, it means that even a creator with a purpose, or a universe that itself has the purpose of being compatible with life, is able to create only a vanishingly small number of planets on which life can exist.
That would be a very inefficient universe or universe creator if allowing life is its chief goal.
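The back-of-the-envelope arithmetic behind that conclusion can be laid out explicitly. A minimal sketch, using the cited estimates; the 33% and 25% figures are the rounded fractions that yield the 8.25% mentioned above:

```python
# Per the cited estimates: two-thirds of planets fall outside the habitable
# zone (Sagear & Ballard, 2023), and up to three-quarters of the remainder
# "may not be good for life" (Crane, 2023). Using 33% for the surviving
# one-third reproduces the 8.25% figure in the text.
in_habitable_zone = 0.33
survives_further_cuts = 0.25

hospitable_fraction = in_habitable_zone * survives_further_cuts
print(f"{hospitable_fraction:.2%}")    # 8.25%

# Even a small fraction of an enormous count is still enormous. Estimates of
# the number of planets range from 70 quintillion (7e19) to 100 sextillion (1e23):
for total_planets in (7e19, 1e23):
    print(f"{hospitable_fraction * total_planets:.1e} potentially hospitable planets")
```

Even the low end of that range leaves quintillions of candidate planets, which is why the 8.25% ceiling by itself neither rescues nor refutes the purpose hypothesis; it only shows how inefficient such a purpose would have to be.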
Purpose involves an agent with a goal who behaves in a way that is aimed at attaining that goal. Having a purpose is different from engaging in purposive behavior. Flowers turn toward the sun; ants build an intricate underground nest, and termites do the same above ground; bees find their way back to their nest and, the next day, back to the flowers from which they take nectar. We can describe such behaviors as purposive, but only a few scientists who study them believe that the organisms producing those behaviors have the purpose of their behavior “in mind” when they engage in it (some scientists do believe so). Goff is talking about the universe having a purpose in the sense, not just that its behaviors appear purposive, but that the universe is conscious and chooses its purpose and how to attain it.
Goff says that, because consciousness is, by its nature, necessarily private and not observable by anyone but the person who possesses it, we cannot learn about it, or how it is related to such things as brain activity, through experimentation. Consciousness is different from other unobservable phenomena, such as particles, quantum wave functions, etc., because in those cases, “we postulate unobservables in order to explain what we can observe.” Consciousness, by contrast, is known to exist because it is privately observable, not because it is postulated to exist in order to explain observable behavior. In his words (in large and bold font), “The reality of consciousness is a fundamental datum over and above the data of public observation and experiment.” Despite his obvious desire to emphasize this, it should be noted that while consciousness is recognized as a fundamental datum by most cognitive scientists and neuroscientists, they do not rule it out as a subject to study through public observation and experiment. Goff emphasizes his point because he asserts that consciousness can only be explored through access to private data, and since such data are not public and observable, that rules out studying consciousness through experimentation and leaves inner observation, what he calls “privately known data,” as the only method available.
Goff’s position on the scientific study of consciousness ignores volumes of research on the subject, and several of his later statements actually contradict that research. For instance, his entire discussion of a “meaning zombie” is challenged by research on blindsight and similar sensationless thought processes. Although he asserts that natural selection could not have selected consciousness or “experiential meaning,” that is exactly what research by Ginsburg and Jablonka shows is plausible in their 2019 book, The Evolution of the Sensitive Soul, and what neuro- and evolutionary psychologist Nicholas Humphrey explains happens in his recent book, Sentience. By denying the insights from experimental research, Goff gives himself permission to seek answers to virtually all questions on consciousness through reflection and introspection. As a psychologist, I can tell you that the entire world of psychologists would disagree with his statement that, “empirical science tells us how things behave, but it remains silent on why things behave as they do.” From jellyfish and anemones to flatworms to pigeons to monkeys and humans, empirical science has spent decades compiling information and testing theories on “why things behave as they do.”
I won’t go into the intricacies of Goff’s discussion of consciousness, since I found it filled with speculative assertions arbitrarily accepted as facts and then woven into a loosely connected theory that has elementary particles acting in response to their “immediate inclination to perform a specific action in the here and now.” Instead of talking about force, or momentum, or gravity in relation to inanimate objects, he attributes organism-like reasoning to their behavior. He says, “Particles are never compelled to do anything, but are rather disposed, from their own nature, to respond rationally to their experience.” If he means that particles obey natural laws that allow them to move, interact, and combine with other particles, he is, of course, correct. But he denies the determinism implied by natural laws and substitutes rationality and choice, which imply a reasoning mind, a position few physicists would accept. Similarly, when he says, “it is consistent with observation to suppose that particles are engaging in a very rudimentary form of what organisms do: following their conscious inclination,” I can only surmise that he is using words such as “conscious inclination” differently than I do, and that the suppositions that seem “consistent with observation” to him are different from those that seem so to me. He admits that “the conscious inclinations of an electron would be unimaginably simple compared to the conscious inclinations of even the simplest organisms.” In saying this, he is positing a different kind of consciousness from the one we are each intimately familiar with in our own experience, and in doing so, he has violated his own dictum that consciousness can only be studied by exploring the “privately known data” of our own minds. If consciousness is defined as what we each experience, and something like the consciousness of particles is “unimaginable,” then it cannot be studied and we cannot determine whether it exists.
But how does this theory of consciousness all the way down to particles, i.e., panpsychism, plus what he calls panagentialism, i.e., everything is the agent of its own behavior, whether it is animate or inanimate, contribute to deciding the issue on cosmic purpose?
It turns out that Goff’s reason for accepting that there is cosmic consciousness and cosmic purpose is that doing so is necessary to sever the connection between consciousness and “purely physical processes in the brain.” Instead, he asserts that consciousness exists separately and in everything as “fundamental conscious entities,” which have a creative role in producing the physical instantiation of the universe. In his words, “fundamental conscious entities, through their interactions, realize the right mathematical structure [to] quite straightforwardly account for the emergence of physics.” He does not mean physics as a theory, but physics as the lawful relationship between the particles and forces that make up the universe. He alludes to Bertrand Russell’s (1927) The Analysis of Matter to support his claim, but Russell’s “neutral monism,” which is at the heart of that analysis, does not reduce matter to mind or vice versa, nor see one of them as causing the other. For his part, Goff gives no explanation of what fundamental conscious entities are, or how they have causal properties, or what they are conscious of, much less how mathematics can produce, rather than simply describe, physical structures or events.
Goff seems to realize that positing fundamental conscious entities still doesn’t explain why the universe would be fine-tuned for life. Something is missing. What is missing is that the conscious universe “responds to considerations of value,” i.e., “the universe is essentially driven to try to maximize the good” (he says “try” because he recognizes that its effort so far has been suboptimal). As I noted earlier, he says that in the first moment of the universe’s birth, the “Planck epoch,” there was a choice to be made regarding the values of the constants. The universe, because it aimed toward value, fine-tuned itself to be compatible with life. But how would it know what values would be compatible with life, billions of years before life emerged? Goff answers this question by saying that “we can attribute to the universe the capacity to represent the full possible consequences of each of the options available to it.” In other words, it could see the future. So his conscious universe is necessarily omniscient.
That’s the story. There are some further embellishments and consideration of the implications for meaning and choices in our human lives, but the gist of his theory has been presented. Why? The Purpose of the Universe is a deep analysis, and its author doesn’t just propose a theory; he also tries to anticipate the objections to it. I found his responses to objections some of the most interesting reading and a very good demonstration of the author’s mental agility. I’ve not been kind to his ideas, and where I’ve misunderstood them, not just disagreed with them, I apologize; I sincerely hope I’ve not misrepresented what he says. For anyone interested in this cosmological/philosophical/psychological/perhaps even theological topic, this is stimulating and important reading, and I recommend it.
Special thanks to Edelweiss and Oxford University Press for allowing me free access to Why? The Purpose of the Universe prior to its publication.
References
Barnes, L. A. (2019). A reasonable little question: A formulation of the fine-tuning argument. Ergo: An Open Access Journal of Philosophy, 6, 1220–1257.
Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.
Crane, L. (2023). Up to 74% of planets in the ‘habitable zone’ may not be good for life. New Scientist.
Frankel, M. (2022). Fine Tuning. Foundational Questions Institute / The Templeton Foundation.
Ginsburg, S., & Jablonka, E. (2019). The Evolution of the Sensitive Soul: Learning and the Origins of Consciousness. Cambridge, MA: MIT Press.
Humphrey, N. (2023). Sentience: The invention of consciousness. MIT Press.
Shakespeare, C. J., & Steffen, J. H. (2023). Day and night: Habitability of tidally locked planets with sporadic rotation. Monthly Notices of the Royal Astronomical Society, 524(4), 5708–5724. https://doi.org/10.1093/mnras/stad2162
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Coming soon! The second novel in the Voyages of the Delphi series: Prime Directive. Release date: November 7, 2023.
Nicholas Humphrey is a thinker, a distinguished scientist, a psychologist and philosopher, an anti-war activist, a literary editor, and the scion of one of Britain’s most notable intellectual families. He has worked with such luminaries as Lawrence Weiskrantz, who first identified “blindsight” in a human subject (after Humphrey had found it in a monkey), Dian Fossey, famous for studying apes in Africa, and philosopher Daniel Dennett. His writing, research and social activism have won him several awards. Needless to say, I was eager to read his latest book, whose goal is “to solve the hard problem: to explain the wondrous, eerie fact of ‘phenomenal consciousness’—the redness of a poppy, the sweetness of honey, the pain of a bee sting.”
Humphrey’s writing style is highly engaging. Sentience is less a scientific treatise than a personal narrative, following his career from his early days at Cambridge, where he studied everything from mediums who claimed to be visited by the deceased to monkeys’ brains. His story conveys his enthusiasm and openness to learning across a range of interests, all roughly psychological.
I made a lot of notes as I read Sentience. Early in the book, I noted every time that he talked about consciousness in terms of someone observing their own thoughts and sensations, e.g., when he said that “When you introspect, you observe these various [mental] states, as it were, with an inner eye.” Despite the fact that I talked in a similar vein when describing my theory of consciousness in my novel, I, Carlos, back in 2004, I had come to believe that the idea of an inner observer was a philosophical error, a sleight of hand that seemed to solve the problem of how neural representations became mental images but left you with the problem of explaining the “you” who saw them. I thought he was just being sloppy, either in his thinking or in his writing, by invoking this self-within-your-mind. As I read further, however, I found that this inner observer was a central actor in his theory.
Before going further, I have to sidetrack, as Humphrey did, to describe “blindsight,” something central both to his career and to his theory. I was familiar with the human research, and some of the animal research before it, from Weiskrantz’s landmark book, Blindsight, published in 1986 (a book still well worth reading, despite its age). The subject of the book was a man who had had one half of his striate visual cortex removed and was, so far as he and anyone else was aware, blind in one half of his visual field. It turned out that he wasn’t, and careful experimentation revealed how much vision remained in that field, as well as his lack of awareness of it. My takeaway from that study had been that there are two neural pathways for vision, which deal with complementary, but not duplicative, information from the sensory input from the eyes; one of them produces conscious sensation and perception and one of them does not. Information regarding the presence or absence, location, and spatial orientation of a stimulus was carried by one pathway, and information regarding the identity and form of the stimulus by another. The latter pathway resulted in conscious awareness of the stimulus; the former did not. Subsequent research has extended these so-called residual visual abilities and shown similar results in subjects with both of their striate visual cortices diseased or removed. The phenomenon in which subjects report that they cannot “see” objects to which they can nonetheless point, whose orientation and location they can report, and whose meaning (in the case of words) or color they can perhaps even decipher or discriminate, is what Weiskrantz described as “blindsight.”
In Sentience, Humphrey focuses on something that was only a background consideration for me when I read Blindsight and the related studies: enough perception to allow an animal or person to carry out many meaningful tasks requiring a response to visual stimuli can occur without any felt sense of seeing. In other words, if we regard seeing as the sensation we derive from our visual systems, we can have perception without sensation. Observing that many non-mammalian animals, such as frogs, have no striate visual cortex, and that in humans the newborn infant’s striate cortex is not immediately functional and perhaps won’t be for several weeks, Humphrey asks the critical question of what sensation adds to perception, while assuming that, evolutionarily, perception preceded sensation. This becomes the jumping-off point for his theory of phenomenal consciousness.
To understand Humphrey’s theory, we first have to define some concepts. Sensations involve a response to stimulation of sensory cells that produces a “qualitative dimension that sets them apart from all other mental states and attitudes.” Philosophers call this dimension “qualia,” and Humphrey agrees, although, in his informal style, he calls it the “charming” quality of sensations. He seems to imply a pleasure-pain dimension, which we can acknowledge for many sensations, but I believe he’s overgeneralized, since many sensations are not, in any sense, “charming.” For instance, the sensation of my fingers striking the computer keys as I type, the image of the laptop in front of me, and the sensation of my body pressing against the seat and back of the couch I’m sitting on are qualitatively different from one another, but not along the dimension of pleasure or pain. They have no special charm. To the extent that such an evaluative component is important to his theory, I think it is not a reliable indicator of a sensation or of qualia.
An important component of qualia is that sensations feel as if they are yours. It doesn’t make sense to say, “this surface feels rough, but not to me,” or “the rose looks red, but not to me,” unless you are describing someone else’s reported experience. Your sensations are yours, and they feel that way. This is a key point in Humphrey’s theory.
Consciousness is not just the awareness of sensations; it includes thoughts you have about almost anything, as long as you are aware of those thoughts. You can be conscious of calculating sums “in your head.” You can be conscious of worrying about being late for your plane flight. Humphrey says, “consciousness means having knowledge of what’s in your mind.” By this, he means what’s in your mind right now. I know that I know that India borders on Pakistan, but I’m only conscious of that knowledge when I’m thinking about it, usually because someone has asked. He points out that we are conscious of only a limited amount of information at any one time and appears to agree with Bernard Baars when he says that “a state is conscious when its contents are available to a global workspace in the brain.” As with sensations or phenomenal consciousness, all consciousness is personal. “Your conscious mental states comprise just those states to which at any one time you have introspective access and of which you are the subject.” This means that being conscious, whether of sensations or of other thought processes, includes a sense of self. You are the one who feels sensations and knows what’s on your mind. The consciousness and feelings of which you are aware belong to you.
So far, none of what Humphrey says differs from what a lot of other theorists say about consciousness. Where he parts company with other theorists is in saying that the phenomenal quality of sensations, their qualia, which is absent in the visual realm in cases of blindsight, is necessary for an organism to be sentient. A creature “could be capable of introspection, could know its own mind, have a self-narrative, be highly intelligent, goal directed, motivated, percipient, and so on.” He says, “such a creature, by our definitions, would undoubtedly possess a form of consciousness.” But it would not be sentient. By restricting his definition of sentience to phenomenal consciousness, he has slipped something past us, which will allow him to exclude a large part of animal life, as well as machines, and to narrow the focus of his discussion.
What does he not explain? Well, the conscious experience of me reading his book or typing this review, for instance. The process of understanding words. The recognition of faces. A whole lot of what our consciousness would seem to be about. Humphrey calls this part of consciousness “cognitive consciousness” and distinguishes it from “phenomenal consciousness.” He admits that “we rarely find ourselves in the situation of being cognitively conscious without being phenomenally conscious,” although, as he noted, some people who have blindsight do, in fact, have one without the other, at least in a single sensory modality. By separating the two kinds of consciousness, he is able to focus on their different evolutionary trajectories. He is definite in his conjecture that cognitive consciousness preceded phenomenal consciousness, and points both to the ability of humans and monkeys to behave intelligently and adaptively without the latter (restricted to a single sensory modality, remember) and to the likelihood, based on neurophysiological evidence, that some other species who appear to get along just fine probably lack phenomenal consciousness. This tips a lot of folk wisdom on its head. Most people would probably say simple animals, such as frogs and insects, can feel but they don’t think. He’s suggesting they have that backwards.
His evidence seems pretty flimsy to me. He only alludes to evidence regarding vision, and the definition of cognitive consciousness in the cases he cites seems pretty fuzzy. If a blindsighted individual can correctly tell you the orientation of lines that he says he can’t see, we can call that cognitive, but in what sense is it cognitive consciousness? If you ask him whether he can tell you the lines’ orientations, he will answer that he can’t, because he can’t “see them.” In fact, he will be as surprised as the experimenter that he is able to get the answer correct. In what sense does he introspectively “know what he knows”? It seems to me that cognitive consciousness is just as elusive a concept as phenomenal consciousness. Take a frog aiming its tongue to snatch a fly from the air: it’s just as difficult to explain why the frog would need conscious knowledge of how or what it’s doing as it is to explain why the fly tastes “like something” after the frog catches it. For much of the book, Humphrey tries to explain the selection factors that would have favored the development of phenomenal consciousness, but he fails to explain why or how cognitive consciousness was selected or developed, or why its elements, such as the sense of “me” being the one who possesses the knowledge, would have been selected, or what their function is.
Ignoring the development of cognitive consciousness may not be fatal for Humphrey’s theory, and he may just have been trying to narrow the focus of his discussion, but it burdens phenomenal consciousness with elevated importance in terms of characterizing the human mind and consciousness in general. Phenomenal consciousness is only part of the story, and perhaps not the most important part. A simpler scheme would seem to be to admit that both cognitive and sensory processes may function effectively without being conscious, and then raise questions about the need for and mechanisms that support bringing either of them to consciousness and allowing us to introspect on them. Occam’s razor might prompt us to think the same mechanism is involved in making either of them conscious until research shows us that we need to deal with them separately, as Humphrey does.
In terms of evolutionary selection, Humphrey asks “what does this idea—the sensation with its phenomenal properties—cause or enable or modify” in order for it to provide an advantage leading it to be selected and retained in evolutionary development? Note that we’re identifying the sensation with its phenomenal properties as an idea, which stems from his description of the process as “a representation of stimuli arriving at our sense organs” that you, the representee, read in order to arrive at the idea of what the stimulation feels like. So, phenomenal consciousness includes, by definition, the self-observing self. If this sounds overly complex, redundant, or homunculus-like to you, it does to me, too, but, as I said earlier, this introspective observer is a key part of his theory. Be that as it may, he suggests that what this phenomenal consciousness we identify as a sensation enables or modifies are “action-relevant attitudes.” In particular, when you identify the sensation as pleasurable or painful, you recognize that it is good for you or bad for you, and you decide to do something about it. “You scratch the itch; you relax in the warmth of a bath.” Paradoxically, Humphrey admits that you don’t really need to be conscious of a sensation of pain or pleasure to decide what’s good or bad for you. He points out that when you pull your hand away from a hot stove, you probably do so before a sensation of pain has reached your consciousness. What’s important, he says, is “the crucial effects it has on your sense of self.” What he means is that you can assess the effects of the phenomenal experience in a general way, relating them to other experiences, to memories, to your mood, to your thoughts about your relationships, your life plans, etc. In other words, you can use the experience in developing your sense of who you are and what you’re doing with your life, either in this very moment or over the long term.
Now, having phenomenal consciousness, i.e., sensations that enter your overall consciousness, is incredibly useful to such things as increasing the likelihood of your survival or of finding a partner and producing offspring. Note that having only phenomenal consciousness without cognitive consciousness might not be so valuable, but Humphrey has this covered by asserting that cognitive consciousness is already there. Both are needed for an organism to make conscious plans, which is a behavior that has considerable evolutionary advantage.
The story isn’t over yet. A major element in physically responding to a painful or pleasurable sensation is that the response is adaptive, more so than alternative responses. For anything other than reflexive responses, this means choosing how to respond. The step needed to do this is representing the sensation and the resultant behavioral or attitudinal responses in one’s mind, where they can be accepted, rejected, modified, or saved for another day. This means, in Humphrey’s words, “privatizing” the thoughts. This can involve “motor signals that were formerly sent out to produce a response at a particular locus on the body surface [being] redirected to the place in the brain where signals from this locus come in [so] there is the potential for feedback.” He refers to such feedback to the sensory signal as a “self-entangling loop” that can sustain itself, prolonging the sensation and ideation for use in thinking. This also gives us the feeling of lingering sensations: the sweetness of sugar “staying on the tongue,” the pleasure of sex lingering as an “afterglow,” or the discomfort of pain remaining long enough to sustain associations with the situations that surrounded it, fostering avoidance behavior in the future.
This still isn’t the complete story, because there is a social advantage to having phenomenal consciousness. Being aware of one’s sensations and how they affect one’s decisions allows an organism with this capacity to reason about other, similar organisms’ behavior. It allows the development of a Theory of Mind (ToM), or as Humphrey prefers to call it, “mindreading.” Mindreading allows prediction of others’ behaviors by anticipating how they think. It’s mostly a human capacity, although studies with apes have been inconclusive. It’s generally accepted to be a boon to social interactions and to survival in groups. Humphrey’s position is that we develop this skill by projecting what we are aware is going on in our own minds onto other people and interpreting their present and future behavior on the basis of it. He further suggests that this ToM, when applied to ourselves and others, has a narrative format, which gives us a semi-fictional narrative, in Dennett’s words, “a user-illusion that is so familiar to us that we take it not just for reality, but also for the most indubitable and intimately known reality of all.” While this is a rational account of how ToM develops, recent evidence that Large Language Model AIs can pass ToM tests at the level of older children has raised questions that challenge the entire picture.
The upshot of this series of processes, within an individual human or animal and in their development over eons of evolution, is the phenomenal self: the sense of “me” that is necessary to provide unity to different sensations and to the generation of real or imagined motor responses. This sense of phenomenal self is necessary for the whole system to work and provides the sense of self that is familiar to all of us. Humphrey is clear that it is sensations—felt sensations as phenomenal consciousness—that necessitate the existence of this self. He seems to have forgotten that he said a sense of self was also a quality of cognitive consciousness, which, according to his own evolutionary timeline, developed before phenomenal consciousness was present.
When Humphrey finished with the exposition of his theory, I was more than a little dismayed by its number of questionable assumptions and conceptual gaps, but willing to write this off as one of the inevitable costs of devising such an ambitious theory. I was hoping that evidence, beyond the formidable, but mostly suggestive, evidence provided by the blindsight phenomena, would solidify at least one or two of the theory’s assertions. Alas, it was not to happen.
Humphrey’s evidence for his theory ranges from experimental tests on things such as animals’ preferences for colors or types of music or rats’ distaste for being immersed in water, which are suggestive of the sensory quality of the animals’ experiences, to lively anecdotes about his dog, Bernie, and a friend’s parrot. Coming from a neuropsychologist who has produced ground-breaking, prototypical research on the relationship of brain function and behavior, this is disappointing. It ended an intricately thought-out book by giving it a flavor of an off-the-cuff exercise in theoretical improvisation, and that is certainly not what this book is. Admittedly, his theory is a conceptual one and not a detailed, technical one. His style of writing is conducive to the idea that he is examining his own thoughts out loud with us, his readers, as his audience. He’s willing to go down some rabbit holes for the sake of seeing where they lead. Certainly, he fills the pages with provocative scientific findings and stories, mixing the two together as though they weighed equally in proving his point. To me, that’s not as convincing as sticking to the science. That said, Sentience is a genuinely fun and enlightening read, if not a convincing presentation of a tightly crafted, experimentally validated theory of consciousness. I recommend that readers interested in the mind read it for themselves to draw their own conclusions. I’m sure it will set their minds racing to pursue some of the tantalizing ideas contained within the book’s pages. It certainly did mine.
The way our universities work today is a topic on many people’s minds. It’s a topic not limited to one issue. Cost is an issue, as are admission practices. Even after students get on campus there are still debates about what a college education is for. Is it to increase a student’s eventual employability and income level, or is it to educate them in a broad range of knowledge and to sharpen their critical thinking and reasoning skills?
Despite all the above debates, the one that brings out the most emotions is about whether institutions of higher education are biased toward a certain socio-political philosophy, and whether their adoption of that philosophy is channeling their students’ education along certain narrow socio-political lines.
A lot of evidence points to the conclusion that most colleges and universities in the U.S. have a liberal-progressive bias. A disclaimer: my own socio-political philosophy is liberal-progressive or left-leaning. Nevertheless, I am alarmed when people with different viewpoints are not allowed to speak on campus, and I’m even more alarmed when it’s the students who ban them than when it’s the university administration, because that means the students have bought into the “only one opinion allowed on this campus” philosophy. But I don’t see the particular socio-political philosophy most American universities share as the culprit. I see the present socio-cultural situation in America as the cause of it. After all, in Florida, the opposite philosophy is banning “woke” professors, speakers, lecturers and books. So everybody nowadays only wants to hear one side of most issues.
I can complain about our society’s narrow-mindedness, but that probably (certainly) won’t change anything. On the other hand, as a former university professor and dean, I may be able to affect the behavior or attitudes of at least a few professors and campus administrators, perhaps even students. What really bothers me is a basic issue about educating other people, especially youth for whom I had the responsibility of providing an education on topics complex enough to be taught at a college level. In fact, this would be an issue even if universities or their professors and students showed no systematic biases. The problem is this: in order to adequately educate a student, I believe it is necessary to provide as complete a picture as possible of the subject you are teaching, which includes those facts, studies, articles, books, and opinions that both agree and disagree with your own. I don’t mean presenting unsubstantiated beliefs, conspiracy theories, or ideas that fall outside the broad realm of accepted ideas within your field. A biologist can teach evolution without teaching creationism as an alternative theory, since creationism doesn’t fall within the accepted realm of scientific biological theories; it isn’t that type of theory. A historian can teach about slavery in the Americas without teaching modern theories that dismiss the evils of slavery or imply that one race is somehow better than another. In such a case, however, a complete study of the why and how of slavery ought to include examination of the racist ideas of the time that allowed otherwise enlightened men to own slaves, of the economic issues that supported or promoted slavery and, in the South, made people believe it was necessary, of the religious ideas that justified slavery, etc., so that a student could understand how such a large segment of American society could support such a practice, i.e., how they thought.
To not teach those things would be to provide only a partial education in understanding slavery. At the same time, someone who taught all of those things without also examining the life of slaves, the voices that came out of slavery, the effects of slavery upon the psyche of those enslaved, the disruption of family coherence and stability caused by treating slaves as property and selling family members, and the centuries-long effects on a people’s social and economic progress of having been enslaved and then subjected to more than a century of discrimination (telling these things, preferably, in the words of those who endured them) would not be providing a complete picture. To teach students only one half of these things because the other half is labeled “racist ideology” on the one hand or “critical race theory” on the other is intellectually dishonest, and the students who are subjected to such curtailed teaching will be only partially educated.
Some will argue that all knowledge is from a point of view, and the educator’s role is to select the right point of view around which to shape the story they are teaching. They will say that it’s not true that all points of view are equally valid, nor is it true that students should be left to form their own point of view without the educator trying to shape it. I agree with that. All points of view are clearly not equally valid on any topic I can think of. Only poor teaching of how to evaluate different points of view and distinguish the valid from the invalid, the sound from the unsound, and the substantiated from the unsubstantiated would leave a student truly adrift to make their own decisions unaffected by what they learned in class. In most cases, students will be influenced by their professor and adopt some aspects of his or her point of view. But, if they have learned how to evaluate arguments and evidence, they may come to different conclusions than those of their professor. They may weigh some arguments or some evidence more or less heavily than their professor did. The student who does that has a good chance of going on to be a scholar, an informed citizen, and a better teacher to their own children than the student who completely accepts or rejects the professor’s point of view.
Intellectual honesty in teaching does not just apply to socially controversial subjects such as history, ethnic studies, gender studies, and so forth. It applies to all academic subjects. Literary criticism is based on a set of rules, constantly changing, about what creates literary “worth.” It includes social relevance, and the things that are socially relevant change over time. It includes theories about representation, and now it includes what distinguishes generative machine output from human-inspired writing and how the worth of the two can be assessed. There are plenty of theories in the field and even more points of view. A well-educated student will be familiar with many, if not most, of them and will develop their own point of view on the subject, which may or may not agree with their professor’s, but will be influenced by it. A well-educated student who finishes a course on Shakespeare’s plays will have learned tools to use in appreciating, understanding, and evaluating the plays, not just a point of view taught to them by their professor.
When I taught, I taught psychology. It is a field in which not a single point of view, theory, or set of facts goes unchallenged. I had my favorite theories, those I doubted, and those I completely rejected. Psychologists like to think of their field as a science. That means there are some agreed upon procedures for evaluating ideas, theories, hypotheses and explanations of psychological phenomena. Some of the procedures are so complex that there are professors who specialize just in the evaluation procedures, such as statistics, or brain imaging, or observing animals in their natural habitats. I knew professors who subscribed to a particular theory and interpreted all phenomena in terms of that theory (e.g., Freudians, Skinnerians, Chomskyite psycholinguists, computationalists). In my opinion, then and now, these professors had an obligation to teach not just what they believed, but also alternative theories and the evidence that supported those other theories or raised questions about their own. In doing so, they would produce more informed students, capable of evaluating ideas on their own, but fully aware of their professor’s reasons for favoring his or her point of view.
I think if every professor tried to provide education using the framework I have suggested, we wouldn’t have to worry as much about our colleges and universities being biased (though we should continue to worry). To do less is intellectually dishonest, and to be intellectually dishonest when teaching at a university is not ethical.
Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Coming soon! The second novel in the Voyages of the Delphi series: Prime Directive. Release date: November 7, 2023.
Subscribe to Casey Dorman’s Newsletter. Click HERE
Get Casey Dorman’s most popular mystery/thriller, The Oedipus Murders FREE!
For a limited time, Black Rose Writing, the publisher of the best-selling The Oedipus Murders, is offering a digital version (ePub, Mobi for Kindle, PDF, ePub for Kobo) FREE through Book Funnel. Read an exciting psychological thriller, with 88 positive reviews (average 4.1 rating) on Amazon, about a psychiatrist whose effort to solve the mysterious disappearance of his client’s wife leads to himself as a suspect.
“A jaw-dropping work of contemporary California noir. Fans of James Ellroy will fall hard for Casey Dorman.” –Best Thrillers
“The originality of the story is unparalleled… consistently intrigues, excites, and surprises.” —Turning Another Page
“Dorman is a master of the satisfying twist. As someone who does this for a living, I’m supposed to be pretty good at “seeing it coming.” And I did not. I was fooled time and again as the twists and turns came at me. And I couldn’t have been more delighted.” —Leslie Bohem, Emmy-award winning screenwriter and producer
To get your FREE copy of The Oedipus Murders, click on the link below, then scroll down to The Oedipus Murders and click on the book cover.
At the risk of alienating everyone, I’m going to present my thoughts on the book-banning debate. If I had to sum them up succinctly, I’d say, “we should stop vilifying each other and listen to each other’s thoughts about banning books.” The rhetoric has gotten over the top (e.g., “Texas moves one step closer to full FAHRENHEIT 451” or “The goal is to sexualize children”), and people are talking past each other.
As a parent, although with grown-up children and grown-up grandchildren (my great-grandchildren can’t read yet), I know there are books I wouldn’t have wanted my children to read at certain ages. At the same time, I wouldn’t want someone else making the decision about what my children should read, not even other parents. But, in order to decide whether my children should read a given book, I would have to know something about its content. The question becomes: how should authors, publishers, booksellers, libraries, and school systems convey information about a book’s content?
Films are rated in terms of their appropriateness for different age groups. The current Motion Picture Association of America (MPAA) rating system is:
Rated G: General audiences – All ages admitted.
Rated PG: Parental guidance suggested – Some material may not be suitable for children.
Rated PG-13: Parents strongly cautioned – Some material may be inappropriate for children under 13.
Rated R: Restricted – Under 17 requires accompanying parent or adult guardian.
Rated NC-17: Adults Only – No one 17 and under admitted.
Motion picture ratings are based on the presence of violence, profane language, substance use, nudity, and sex. According to the MPAA, “ratings are determined by the Classification and Ratings Administration (CARA), via a board comprised of an independent group of parents.” In other words, one group of parents is determining what decision the rest of us should make about allowing our children to watch a film. Film ratings are voluntary and not mandated by law, although most film distributors won’t distribute an unrated film, and most theaters attempt to enforce the R and NC-17 recommendations.
In terms of books, Amazon does not label books, but prohibits those with “Offensive Content.” They define offensive content as “hate speech, [content that] promotes the abuse or sexual exploitation of children, contains pornography, glorifies rape or pedophilia, advocates terrorism, or other material we deem inappropriate or offensive.” So, with regard to books, Amazon essentially bans those that they consider offensive, but they use rather vague guidelines that permit them a lot of leeway in making such decisions.
The American Library Association, which three states have now disengaged themselves from, classifies books based on content using the Dewey Decimal system, which divides content into ten “knowledge” categories (e.g., general works; philosophy and psychology; religion; social sciences), and 100 subcategories, as well as some other “content neutral” systems, such as large print, children’s books, audiovisual materials, etc. In their view, labeling of books is designed to provide “viewpoint-neutral directional aids…intended to facilitate access by making it easier for users to locate resources.” In other words, they help you find a book. They have issued a formal statement on content labeling, which says, among other things, “The American Library Association opposes labeling as a means of predisposing people’s attitudes toward library resources.” In accordance with this philosophy, they discourage labels that are “used to warn, discourage, or prohibit users or certain groups of users from accessing the resource.” They warn that even directional labels describing a book’s content can become “prejudicial labels,” which they say, “are designed to restrict access, based on a value judgment that the content, language, or themes of the resource, or the background or views of the creator(s) of the resource, render it inappropriate or offensive for all or certain groups of users.” They oppose all use of prejudicial labels.
The American Association of School Librarians, a division of the American Library Association, goes further in its “Position Statement on Labeling Practices” than does its parent organization. They state that “labels that make determinations about a book’s content are an infringement of a learner’s First Amendment rights to free speech and their ability to make determinations about what content is appropriate for them. Instead, school librarians should support each learner’s right to make determinations about content.” They go on to point out that labels such as LGBTQIA+ (Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, and Asexual, plus any other identities not included in the acronym), “Black,” “Indigenous,” or “people of color” may separate such content from broader categories, such as “romance” or “science fiction,” in which it is often included, thereby restricting access to that content or making access to it more difficult.
The American Association of School Librarians’ position is the most challenging to parents who want to select what their children read by content, because it essentially discourages all content labeling.
How to label book content without being prejudicial about what should or shouldn’t be read, or read by children of certain ages, is a difficult issue. In my opinion, parents have a right to prejudice their children’s choice of reading materials, if by “prejudice” we mean choose one thing over another. But one parent doesn’t have the right to prejudice the choices of another parent’s child, which is what happens in the film rating system, or when school boards ban or label books as “inappropriate” or “offensive.” However, there is no way to determine the content of a book without reading it, and that puts parents in the difficult and impractical position of having to read everything their children may choose to read before their children read it.
The obvious solution is some sort of content labeling system that is not itself prejudicial, but that can be used by children’s parents (or perhaps their teachers) to prejudice children’s choices of books if they want to use it that way. Even we, as adults, could use such a system to guide our reading choices if we are offended by, or simply uninterested in, certain types of content. But what kind of content labels could work this way?
Following the American Association of School Librarians, I agree that whatever labels are used should not determine a book’s location or serve as search criteria for finding it. After all, most of us, if we look at a film’s rating at all, only do so after we have looked at the title, the actors, and other descriptions of the content, such as mystery, science fiction, romance, or comedy. We don’t usually search by film rating. So, without determining search criteria or location in a library, we could include informative content labels in the descriptions of a book’s content alongside plot or character descriptions. But what kind of labels are non-prejudicial in themselves yet amenable to being used for prejudicial choices (by which I mean that they may determine whether a parent allows their child to read the book, not that they reflect what we pejoratively call a “prejudice”)? Here are some of my candidate labels:
Graphic or explicit sex
Graphic or explicit violence
Liberal use of profanity
Religious theme
That’s about it. Further subdividing sex into heterosexual or homosexual, violence into gun or military or domestic, profanity into sexual or expletive, or religious themes into Christian or Muslim or Buddhist is almost impossible to do without implying a value judgment about which kind of sex, violence, profanity, or religious theme is more offensive, and it is unnecessary. A parent who knows that graphic sex, graphic violence, liberal use of profanity, or a religious theme is included in a book can find out more on their own if they object to only one or another kind of such content.
My labeling system probably has flaws. I know that whether something is considered graphic is a subjective judgment, and so is whether certain words are considered profane. Those are issues that would need to be solved. But my point is not to get anyone to adopt my labeling system; it’s to show that it’s possible to search for a solution that treats everyone’s perspective as having some validity and respects individual differences about what is appropriate reading for children. So far, I don’t see many people searching for such a solution. Instead, they seem to be criticizing each other and labeling those with a different opinion as not only wrong, but usually evil. That’s no way to solve things as a society.
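To make the idea concrete, here is a minimal sketch in Python (my own hypothetical data model, not an existing library or anyone’s actual catalog system) of how such neutral labels could ride along with a book’s description, leaving it entirely to a parent whether to filter by them:

```python
# The four candidate labels proposed above, kept deliberately neutral.
CONTENT_LABELS = {
    "explicit_sex",        # graphic or explicit sex
    "explicit_violence",   # graphic or explicit violence
    "profanity",           # liberal use of profanity
    "religious_theme",     # religious theme
}

def describe(book: dict) -> str:
    """Render a catalog entry: labels appear alongside the plot
    description, never as a shelving location or search key."""
    labels = ", ".join(sorted(book.get("labels", set()))) or "none"
    return f'{book["title"]} -- {book["blurb"]} [content notes: {labels}]'

def parent_filter(books: list, objections: set) -> list:
    """Return only books whose labels avoid a parent's objections.
    Using the filter is the parent's choice, not the library's."""
    return [b for b in books if not (b.get("labels", set()) & objections)]

books = [
    {"title": "Quiet Meadows", "blurb": "A pastoral coming-of-age story",
     "labels": set()},
    {"title": "Iron Campaign", "blurb": "A war epic",
     "labels": {"explicit_violence", "profanity"}},
]

allowed = parent_filter(books, {"explicit_violence"})
```

The point of the design is that the labels are purely descriptive metadata; nothing in the system restricts anyone unless a reader (or parent) opts in to the filter.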
Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain, on Amazon. Available in paperback and Kindle editions.
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Coming soon! The second novel in the Voyages of the Delphi series: Prime Directive. Release date: November 7, 2023.
Subscribe to Casey Dorman’s Newsletter. Click HERE
My morning scan of internet news sources—reputable, disreputable, licit, and illicit—unearthed the tantalizing headline, “The true story of the fake unboxed aliens is wilder than actual aliens.” I just had to read the story, which was on Vox, and was about an exhibit presented to the Mexican congressional committee investigating UFOs (who knew they had such a thing?) that contained “dog-sized” mummies of purported aliens, which were actually glued-together parts of real human mummies and a partial llama skull. I have to admit that I read the story mostly because I like to read about fake news and conspiracy theories being debunked and—bias disclaimer—I regard all stories about UFOs and aliens visiting earth as fake, or at least mistaken. So why am I still intrigued by aliens and, particularly, alien minds?
One of my main interests is the human mind. I’m a psychologist; I’ve done research on the brain and how we learn, and I have an intense interest in artificial intelligence, which is not only fascinating in its own right but is helping us learn more about our own minds. I think there’s a 50:50 chance that we’re not the only intelligent creatures in the universe (and a 99:1 chance that we soon won’t be the only human-level intelligences on earth, because of AI). But AI intelligence, at least in the models now being pursued, which are based on artificial neural networks trained on human-produced data such as speech or movements, will probably, in the short term anyway, resemble our own. Our brains, our intelligence, and our minds were shaped by our adaptation to conditions on earth and represent not just that adaptation but a lot of chance mutations that took our evolutionary development in one direction rather than another. With aliens from another planet, everything would be different.
Science only hints at what conditions on the surface of other planets might be like. After all, it was only in 1992, a little over 30 years ago, that the first exoplanet orbiting another star was confirmed by the Arecibo Observatory in Puerto Rico. For most of the known exoplanets, we are still finding out if they are gaseous or rocky, likely to have an atmosphere or water, and even how fast they rotate. The question of whether they could sustain life is up in the air (no pun intended) and is also limited by our own ignorance of what forms life might take and what chemical and molecular combinations are necessary for it to exist in other environments. The question of what an alien’s mind might be like is purely speculative and hampered by our own inability to think beyond the model provided by our own minds. That’s where science fiction writers come in.
At least half of the science fiction that features aliens portrays them as humans, just from other planets. Of the science fiction that features non-human aliens, at least half, and perhaps as much as 90%, portrays them as thinking like humans, even if their tentacled, or massive, or appendageless, or multi-headed bodies are vastly different. After all, a writer who includes an alien that thinks in a manner foreign to how humans think faces the daunting task of conveying such thinking to readers, who, of course, only think like humans.
Stanislaw Lem was a philosopher and science fiction writer who tackled this problem. In his novel The Invincible, the aliens are tiny automata, descended from small robotic assistants of an alien race that crash-landed on a planet. Over eons, the automata evolved into swarms of tiny “flies,” which, although not individually conscious or capable of reasoning, use evolved herd behaviors to destroy their alien masters and all other living creatures on the planet’s surface, including the humans who come to visit.
The idea of a herd mind, as it is often called, is not unique to Lem and is, in fact, one way of describing how ants, bees, and termites manage the complex cooperative behaviors needed to build their nests. Starling “murmurations,” as the beautiful, ever-changing patterns of hundreds of the birds in flight are called, and the “schooling” behavior of fish are less complex but nevertheless finely coordinated examples of such group behavior. Because it occurs across several non-human species here on earth, it is a candidate for a possible behavior of creatures from another planet, even if, in Lem’s case in The Invincible, the creatures are not organic.
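How flock-level coordination emerges from purely local rules can be sketched with Craig Reynolds’s classic “boids” rules (my own toy illustration, not anything from Lem’s novel): each agent reacts only to its neighbors, via cohesion, alignment, and separation, yet an orderly group forms with no central mind anywhere in the system.

```python
def step(agents, dt=0.1):
    """One update: each agent (x, y, vx, vy) adjusts its velocity
    using only the positions and velocities of the other agents."""
    n = len(agents)
    new = []
    for i, (px, py, vx, vy) in enumerate(agents):
        cx = cy = ax = ay = sx = sy = 0.0
        for j, (qx, qy, wx, wy) in enumerate(agents):
            if i == j:
                continue
            cx += qx; cy += qy          # cohesion: head toward the centroid
            ax += wx; ay += wy          # alignment: match neighbors' heading
            dx, dy = px - qx, py - qy
            if dx * dx + dy * dy < 1.0:  # separation: avoid crowding
                sx += dx; sy += dy
        m = n - 1
        vx += dt * ((cx / m - px) * 0.05 + (ax / m - vx) * 0.1 + sx * 0.2)
        vy += dt * ((cy / m - py) * 0.05 + (ay / m - vy) * 0.1 + sy * 0.2)
        new.append((px + vx * dt, py + vy * dt, vx, vy))
    return new

flock = [(0.0, 0.0, 1.0, 0.0), (5.0, 5.0, 0.0, 1.0),
         (10.0, 0.0, -1.0, 0.0), (5.0, -5.0, 0.0, -1.0)]
for _ in range(50):
    flock = step(flock)
```

No individual agent "knows" the flock exists; the murmuration-like order is entirely a byproduct of the three local rules, which is the essence of the herd-mind idea.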
In another Lem novel, Solaris, humans discover a planet with only a single creature on it: a massive, living ocean that obeys higher-order mathematical principles and can create copies drawn from the humans’ most intimate memories. Its purpose, if it has one, and its way of thinking are incomprehensible to humans, including the main character, who learns that there are ways of being that are simply beyond human comprehension, because the concepts by which we think and perceive set a limit on our understanding.
Without understanding how aliens think, it seems logical to many people that, if aliens are intelligent, they probably understand some mathematics; in fact, mathematics may be the only thing we have in common. But of course, they wouldn’t use our number system, and probably wouldn’t even use base ten. The scientists behind the SETI project wrestled with similar issues in devising a message to beam into space. They opted to use binary code in a bitmap for their message. Unfortunately, at least the first message was not as clear as its developer, Frank Drake, had hoped: when he sent a prototype of the message to his colleagues, including some Nobel Prize winners, none of them could figure out what it meant.
Arecibo Message 1974
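The bitmap-in-binary idea can be sketched in a few lines of Python (a toy illustration, not SETI’s actual encoding). The 1974 Arecibo message was 1,679 bits long precisely because 1,679 factors only as 23 × 73, so a recipient trying rectangular layouts has just two candidates to test. Here a tiny 7 × 5 glyph is flattened the same way:

```python
ROWS, COLS = 7, 5            # both prime, so 35 bits fit only a 5x7 or 7x5 grid
GLYPH = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]  # a crude letter "A" as a bitmap

bits = "".join(GLYPH)        # the flat bit string that would be transmitted

def factor_layouts(n):
    """All (rows, cols) rectangles a recipient might try for an
    n-bit message, excluding the trivial 1 x n strip."""
    return [(r, n // r) for r in range(2, n) if n % r == 0]

def unpack(bitstring, cols):
    """Re-fold the flat bit string into rows of the given width."""
    return [bitstring[i:i + cols] for i in range(0, len(bitstring), cols)]

layouts = factor_layouts(len(bits))   # only two plausible guesses
decoded = unpack(bits, COLS)          # the correct guess recovers the glyph
```

Choosing a semiprime length is the whole trick: the geometry of the message is recoverable from arithmetic alone, with no shared language required.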
Stanislaw Lem partially solved the problem of describing alien minds by focusing on the behaviors such minds produced, without taking the step of entering the interiors of those minds to examine their thoughts. This left the reader, as well as the human characters in the novels, with the task of trying to figure out, using human concepts, how those creatures were thinking. In his novels, the humans were unsuccessful.
Julie E. Czerneda’s 2022 novel, To Each This World, is wildly original, especially in its depiction of aliens. Central to its plot are the Kmet: gigantic, sloppy, imperious characters whose modes of thought, communication, and motivation are obscure (they communicate through an intermediary). There is also the Divider, which may be the greatest force to fear, or may be benign. Figuring out the alien species’ intentions is the main thread of the novel, and it’s a delightful, thought-provoking one to follow.
Adrian Tchaikovsky is not afraid to play with ideas and turn some scientific concepts on their heads. In Children of Time, genetic inheritance is not just Darwinian but Lamarckian: experiences and learning are passed from one generation to the next. The race that is marching toward the stars by profiting from this newfound inheritability of knowledge is not even human; it is arachnid—spiders. And the spider brain and society are, true to form, “spider-like” as they develop, even to the point of creating organic computers made from chemically guided groups of ants mindlessly carrying out simple functions as part of a larger, organized, intelligent whole.
In Children of Ruin, Tchaikovsky keeps the spiders, who have now partnered with the humans to go exploring. Together they meet the progeny of a terraforming project, revealed to us in a parallel story, in which genetically modified octopuses were dumped into an oceanic planet’s waters to begin the planet’s transformation but have themselves changed by the time the spiders and humans arrive. Someone, I can’t remember who, has said that octopuses are as close to alien minds as we’re going to get here on earth. Their brains are distributed throughout their eight-armed bodies, and they live mostly solitary lives except when they mate, after which they die. They can solve all sorts of puzzles, suggesting considerable intelligence, but their social and emotional life is a mystery. Tchaikovsky has taken poetic liberties with the known facts and delved into the mysteries of octopus feelings and motivation, though not enough for us to really understand them. And if spiders and octopuses aren’t enough, there are some kind of plant/animal creatures that are true aliens (not originally from earth) whose minds are a total mystery. While the author’s venture into understanding other minds is mostly imaginative, it is instructive in making us think about how different our minds are from those of the creatures with whom we share our planet.
The novels mentioned above are just a sample of those that deal with alien minds (Adrian Tchaikovsky has many more examples in other series, such as The Final Architecture, in which asteroid-sized beings with incomprehensible motives turn entire planets inside out). Authors seem to follow two models in developing characters with such minds: making them utterly unlike us, or modeling them on non-human species from earth. Beyond that, the only parameters defining such creations are the limits of the authors’ imaginations and the comprehension of readers. Whatever our science fiction writers come up with is bound to be wrong, although patterns of thinking and behavior that are not confined to a single species or environment on earth may be slightly more probable than those found only in humans. And to the extent that our chemistry, math, and physics rest on universals, representing such constants is probably an adaptive characteristic of even alien minds, giving us a medium of communication if we can figure out how to translate from our concepts of such things to theirs. It’s a fertile field for our most imaginative writers and one that I love to explore, as I think many science fiction fans do.
Can an AI really be conscious? Would that make it more or less dangerous? Read Casey Dorman’s exciting sci-fi thriller, Ezekiel’s Brain. Available in paperback and Kindle editions.
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Subscribe to Casey Dorman’s Newsletter. Click HERE
Within literature, there is a long tradition of using science fiction and alternative histories to comment on current society. Several of the most successful of these efforts have left lasting impressions on society and become benchmarks against which we measure how far we’ve gone in the directions, usually negative ones, envisioned by the books.
Aldous Huxley’s Brave New World presented us with an image of a paternalistic society that cultivated mindlessness, where art, religion and science had disappeared, where pain was avoided and its avoidance assured by the use of pacifying drugs. Those who were destined for menial work were consigned to it before birth and others, who lived off their labor, spent their time in pleasure-seeking pursuits. It was a society where those in charge formed a bureaucracy that knew best, that took care of everyone’s needs for their entire lives, unless they dared to question it, which could prove fatal. Huxley wrote it as a warning about the dangers of a life made easy, a life without decisions, a life unexamined. Brave New World was published in 1932, when the pace of technological advancement seemed rapid and when the average person vacillated between feeling as if the drudgery and hardship of life was about to be left behind, and harboring a growing anxiety that the forces promising a better and easier life were big, impersonal, and unable to be understood by the common man. Huxley meant the novel as an antidote to utopian visions.
Nineteen thirty-two was 91 years ago, but the anxieties and even the promise of Brave New World have never left us. Its dangers, the dark underside of technological progress without ethics, the pursuit of pleasure and trivial achievements without standards, and the division of the world’s population into the profligate haves and the destitute have-nots, are with us today. We’ve gone from a car in every garage and a refrigerator in every kitchen to a microwave and a computer in every house, self-driving electric cars charging in every garage, instant communication and pocket-size entertainment, a population addicted to painkillers, and finally, just appearing on the horizon, artificial intelligence that will find our information for us, write our books and reports, program our computers, make our decisions and, soon, perhaps, take our jobs. Whole segments of society can’t tell science from gossip, have attention spans of less than a few minutes, won’t read a book or even a computer post that takes more than five minutes, and fancy themselves scholars and thinkers because they’ve watched YouTube videos. Politicians and parents call for bans on books, while universities, the traditional bastions of free speech, label any communication that deviates from their self-defined narrow path of acceptability as harmful and damaging to their students, and therefore forbidden.
Books tend to suffer in dystopian novels. In the extreme case, such as Ray Bradbury’s Fahrenheit 451, they were forbidden and burned, while society watched “reality” shows on television screens as large as the side of a room. More recently, in The Coldness of Objects, a novel about an alternative present, Panayotis Cacoyannis painted a dystopian picture of a society in which populism has run amok and the majority bullies the minority—the immigrants, those with different sexual preferences or identities, intellectuals, artists, poets, anyone but the sanctified ordinary citizen—and once again books are banned, museums celebrate mediocrity, and a government, eager to remain in power, enforces strict control of people, ideas, and the expression of ideas. Exquisitely and sensitively written, The Coldness of Objects is a literary example of a genre that creates a reality in which our current trends are exaggerated to illustrate their ominousness. Increasingly, it doesn’t appear to be an exaggeration.
Kim Stanley Robinson’s The Ministry for the Future seemed like an exaggeration of present dangers until this past summer, when the planet boiled under record-setting temperatures. Robinson cares about the planet, and even his famous Red Mars, Green Mars, and Blue Mars novels about terraforming Mars point to the more desirable goal of saving earth. In Aurora, one of his best novels, after traveling trillions of miles in their generation ship, his characters decide that returning home is the best solution. But home is always a place of suffering and destruction, which is why other planets are sought, and in The Ministry for the Future, the people have no generation starship or “planet B.” Earth is getting lethally hot in the worst hotspots, such as parts of India. Thousands are dying of exposure to heat (think Khartoum, Delhi, Seville, Phoenix). The novel is as close to a documentary on climate change and ways of combating it as it can get while remaining fiction. The approaches to alleviating global warming are startling and innovative, close to sci-fi, but based on science and scientific proposals. Barack Obama put the book on his summer reading list the year it came out.
The novels of Philip K. Dick, although all of them over fifty years old, are a combination of dystopian worldviews, outer-space visitation, alternative histories, conspiracy-theory paranoia, and prescience about artificial intelligence, told with wit and style. The most direct metaphors for current social events in Dick’s era were Valis and Radio Free Albemuth, in which a populist, dictatorial president named Ferris F. Fremont, an obvious caricature of Richard Nixon, violates all the constraints of the constitution and the precepts of decency to solidify his power. The only possible salvation from such corrupt power comes from an extraterrestrial entity called Valis. Ferris F. Fremont, as depicted by Dick, was such a perfect caricature not just of Nixon but of Donald Trump that not only did most Dick fans see the connection, but I myself used a caricature of the character, reversing his name to Fremont F. Ferris, in my story of an ego-maniacal president using every device possible to avoid having to leave office, in what I thought at the time was an alternative history, and which I presciently named 2020.
Dick’s alternative history novels, including the televised The Man in the High Castle, have become cult classics and even mainstream hits. But it’s his artificial intelligence and virtual reality stories that are much more on people’s minds today than when he wrote them. Do Androids Dream of Electric Sheep?, which became the film classic Blade Runner, revolved around the moral dilemma created by intelligences that are self-conscious and have an urge to survive. Is “retiring” them morally defensible? The dilemma is intensified by having the protagonist of the novel, Rick Deckard (played by Harrison Ford in the film), carry a gun and kill them, an act which provokes Deckard’s own questions about the morality of what he’s doing. What I most enjoyed about the book and the film was that it was not always clear who was an AI and who was a human—even to the AI itself. Even Deckard became unsure that he was human. The issue of the morality of “killing” a conscious AI is one raised in the recent landmark paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” by Butlin et al., which I have called “The Most Important AI Paper You’ll Read This Year.” Dick’s novel anticipated a genuine issue that could have wide ramifications in the future. My own Voyages of the Delphi novels, Ezekiel’s Brain and Prime Directive, deal with the same issue.
The blurring of the dividing line between humans and machines that think and look like them is captured beautifully in Ian McEwan’s Machines Like Me, another alternate history, which rewrites the 1980s as a period in which the UK lost the war with Argentina and Alan Turing didn’t commit suicide 30 years earlier but lived to create not just computers, but artificial intelligences that were conscious and physically indistinguishable from humans. McEwan presents a story in which a couple share an AI android they purchased, and both they and the android go through the agonizing process of getting to know each other, forming relationships, and finally completely failing to understand how the other thinks.
I’ve left out many prophetic and profound science fiction or alternate history novels (George Orwell’s 1984 and Philip Roth’s The Plot Against America, as alternative histories and Kurt Vonnegut’s Slaughterhouse Five as science fiction, to name three outstanding examples) that have metaphorically analyzed society with such an impact that their words, characters, or stories have become part of our modern psyche. I’m sure that readers of such novels see the world differently because of them. Capturing a reader’s imagination and engaging their emotions are effective ways to leave an enduring mark and alter perceptions of the society around us. Those aren’t bad goals for a fiction writer.
2020 is an alternate history… or is it? Read the author’s prescient prediction about a president who refuses to step down. Click Here
Can an AI really be conscious? Would that make it more or less dangerous? Read Casey Dorman’s exciting sci-fi thriller, Ezekiel’s Brain.
I write both fiction and nonfiction and have published books in both genres. My next novel will be my fifteenth. I’ve also published a host of academic papers, presented papers at numerous scientific conferences, and written blogs on scientific and philosophical topics. When I’ve been interviewed and asked how I got into writing novels, I usually give the semi-humorous answer that, after writing Cognitive Effects of Early Brain Injury, a volume in the Johns Hopkins Series on Neuroscience and Psychiatry, which had several hundred references and took me six years to write, I decided that in any books I wrote in the future, I’d just make everything up.
I enjoy research, particularly the designing of projects, the unearthing of previous relevant studies and theories, and that moment of revelation when the results become evident. I like organizing the study into a communicable paper, being concise, clear, and sticking to the data. I like it so much that, at an advanced age, I’m still consulting as a research design, analysis, and scientific writing expert on projects that I consider socially worthwhile. But I can’t escape the feeling of being constrained—by the narrow formats allowed in academic writing, by the need to be absolutely thorough in reporting literature that has examined the same issue in the past (scientists put lawyers to shame in terms of citing precedents for their studies), and finally, by the data itself. You can only report what you found, not what you hoped you’d find. Then there’s peer review. Every flaw in your literature review, your study design, your analysis of data, and your conclusions is subject to scrutiny by your peers. They’re usually honest, but they can be ruthless.
I’m a private person. My mind goes places that entertain me but that I wouldn’t want peer reviewed. Constraints on my thinking bother me. When I write fiction, I can write the way I want and follow the meanderings of my mind. I can imagine myself as Hemingway in Saint-Germain-des-Prés, in Paris, or Proust, coming home from an exhausting salon exchange to crawl into his bed and re-imagine the evening with his characters present. I know the feeling of Kafka, sitting in his room, feeling hemmed in by the primitive but powerful demands and prejudices of society, and imagining the grotesque creature he might turn into if he gave in to them. Henry James, Hemingway, Proust, Kafka, Camus, Thomas Wolfe, Richard Wright, Raymond Chandler; these people are my heroes, and if I can create just the right atmosphere (a coffee house in the morning, a bar in mid-afternoon, my kitchen table at 1:00 am), I feel as if I’m not alone when I try to translate my embryonic intuitions into story form.
Writing fiction can generate an experience of aloneness that I savor. I am the sole judge of the adequacy of my endeavors. If it sounds right to me, then that’s what I intended to write. I don’t write for myself. I keep my reader always in mind. I write for people like me, who enjoy the same writers I enjoy, the same kinds of stories that move me, the same skillful play with words. When I write in a way that I know I would enjoy reading, then I’m satisfied. It’s a solitary process, but one that includes a large, imagined audience, an audience I know well, since I’ve been one of them for years. When I get it right, and I have to admit that I’ve gotten it right only sporadically and never for the length of a full novel, I know it. If reviewers and critics don’t like it, so be it. My only worry is that my work might not ever arrive in the hands of the reader I intended it for and who, I am sure, would value it. But the process is freeing. I am my own judge and critic. My criteria are hallowed in my own mind; my writer-heroes are saints, even if they were scoundrels in their lived lives. If I can write a novel that expresses my thoughts in the beautiful cadence, the perfect word choice, the thoughtful insights that would delight me if I read them, I will be happy.
And I choose fiction, because, in writing fiction, it seems more like it’s up to me, alone, to do it right, and no one else’s criteria can determine my success.
Want to read some of my fiction? Read my exciting sci-fi thriller, Ezekiel’s Brain.
(Unless otherwise indicated, all quotations are from the paper listed as #1 in the reference section)
In a recent paper with the long but unassuming title “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” nineteen computer scientists, neuroscientists, cognitive psychologists, and philosophers examined scientific theories of consciousness and assembled a list of “indicators”: properties or processes that, if present in an artificial intelligence (AI), might suggest that it could achieve, or already had achieved, consciousness. They then used that indicator list to assess current AI models and devices to determine which, if any, possessed such properties or processes. Their hope was that future research would pursue developing AIs that possess such indicators, assess future AIs to determine whether they do, and spur further work in the science of consciousness to expand the list and deepen understanding of the processes involved.
Assumptions
Consciousness is notoriously hard to define, and the authors were comfortable adopting a rather vague concept of “phenomenal consciousness” or “subjective consciousness.” After Nagel (1974), in his famous paper “What is it like to be a bat?”2, they said, “A system is having a conscious experience when there is ‘something it is like’ for the system to be the subject of that experience.”
While we each have some idea of what it is like to be ourselves, and can imagine, by analogy, what it is like to be someone else, it is different, as Nagel pointed out, when the creature we’re talking about is very different from us. A bat is quite different, given that they fly, hang upside down, and use echolocation, and an AI might be even more so. Yet, if the AI is conscious, in the way we use that word to describe a mental state in humans, there must be something it is like to be it, and that “something” will be based on its subjective experience.
The authors reject behavioral criteria for consciousness because, as is well known by now, LLMs such as ChatGPT, can mimic many of the behaviors we associate with consciousness, and there are no generally agreed upon criteria anyway. Despite this, they examine research from the standpoint of whether processes that are purported to be associated with consciousness can be shown to be absent when information is processed unconsciously, such as in priming effects caused by backward masking of a stimulus, in which subjects do not report sensing the stimulus, but it has an effect on their behavior. They also look to evidence of physiological or neuroimaging effects that are known to accompany reports of conscious experience. They use such evidence to gauge the support for the neurocognitive processes that are specified by different psychological or philosophical theories of consciousness. They call their approach, “theory-heavy,” in the sense that they are looking for whether an AI system meets “functional or architectural conditions drawn from scientific theories,” instead of whether it meets behavioral criteria for consciousness.
In addition to using processes or properties delineated by scientific theories of consciousness to ascertain whether an AI could be conscious, the authors make another assumption, which is at the heart of their endeavor. They call this the assumption of “computational functionalism.” This means that implementing computations of a certain type is necessary and sufficient for consciousness, so it is possible, in principle, for non-organic artificial systems to be conscious. If this were not true, there would be no point in pursuing their inquiry.
While their assumptions may be necessary in order to develop a set of indicators of consciousness in AIs, it is useful to keep them in mind, and some are on better footing than others. There are several reasonable theories of consciousness that limit it to organic brains. Any theory that claims there is something unique about biological neurons, chemical synapses, or the quality of interactions of “brain substances” at the atomic or subatomic level precludes machines from being conscious. Even some of the theories these authors use to identify processes involved in consciousness, such as Anil Seth’s predictive processing theory3, are limited by their proponents to organic brains, although the authors of this paper still use such processes to characterize what an inorganic brain might do.
A limitation not mentioned by the authors is that the scientific theories of consciousness they examine are all theories about how human brains produce human consciousness. Short of creating an emulation of a human brain, no AI is going to exactly duplicate the brain’s number of neurons or synapses (86 billion neurons; about 1,000 trillion synapses), the exact modular structure of the brain in terms of regions and networks that process different kinds of information, the patterns of network connections that are pre-wired genetically and not learned, or pre-wired but activated only by experience, or the connections of the brain to the rest of the body. Which of these factors influences the development of consciousness of the type experienced by humans is unknown, and these factors don’t play much of a role in most of the mechanisms hypothesized to be necessary for consciousness.
The authors recognize some of these issues and limitations. They also point out that there may be degrees of consciousness and that consciousness may have components, not all of which are necessary for an entity to be conscious. In addition, only humans express themselves in words, and it is difficult to separate human thought from language. Language may not be a necessary component of consciousness; otherwise, among organic creatures, consciousness would be limited to humans. But, since none of the theories they consider places language in a central or unique role, they may be leaving out a factor that figures heavily in human consciousness as we often experience it.
The indicators
A total of 14 indicators are listed, and the text discusses each, as well as a number of potential indicators that were rejected. The discussion addresses the theories first, then develops indicators for each theory that shows sufficient evidence of its usefulness to merit inclusion. Some of these indicators pertain to algorithmic qualities, some to the processing of information within modules, some to attention and integration functions at a level above modules, and some to the quality and actions of a device as a whole. They are presented below in the order in which they appear in the paper, although I have collapsed the description of some indicators that, together, make up a single functional system as described by the theory. In each case I have indicated the theory from which the indicator was taken. Actual indicators are underlined in the discussion.
From Recurrent Processing Theory:
Feed-forward neural networks with recurrent processing (i.e., feedback or re-entrant connections). Some of these modules will generate integrated perceptual representations of organized, coherent scenes.
From Global Workspace Theory:
There must be multiple specialized modules capable of operating both sequentially and in parallel. These modules’ operations are unconscious, and they process “specific perceptual, motor, memory and evaluative information.”
There is a limited capacity selective attention mechanism that is a conscious workspace, which is capable of broadcasting information to all the modules and is sensitive to the state of the system so it can query the modules successively to perform complex tasks.
From Computational Higher Order Theories:
Generative, top-down or noisy perception modules which contain monitoring mechanisms that can discriminate between different sources of activity in the system, e.g., self-generated imaginings vs information from outside the system and a metacognitive monitoring mechanism that can label representations as “real” or not and can “identify which perceptual states are accurate enough to be relied on” in planning actions.
The system must have the quality of agency, which is “guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring.” “This implies a system for reasoning and action selection with a holistic character, in which any belief can in principle be called on in examining any other or in reasoning about what to do.”
Phenomenal experience of subjective qualities may require “sparse and smooth coding in perceptual systems—that is, on qualities being represented by relatively few neurons, and represented according to a continuous coding scheme rather than one which divides stimuli into absolute categories,” allowing discriminations along a continuum of similarity.
From Attention Schema Theory:
An attention mechanism that represents the current state of attention and predicts how it will be affected by changes in its focus or in the environment.
From Predictive Processing:
A predictive model that determines coding within modules, so that the module works to reduce prediction errors based on feedback.
From Agency and Embodiment:
Agency, which requires an ability to learn from feedback and select outputs (actions) to pursue goals, including being able to respond flexibly to competing goals.
Embodiment, i.e., using a model of input-output contingencies to control perceptions and effects on the environment.
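The Predictive Processing indicator above can be illustrated with a toy loop (my own hypothetical sketch, not the authors’ formalism): a module predicts its input and, on each feedback cycle, adjusts its internal estimate so as to reduce the prediction error.

```python
# Toy illustration of prediction-error minimization (hypothetical sketch).
# A module holds an internal estimate of its input and, on each feedback
# cycle, moves that estimate a fraction of the way toward the observation.

def predictive_update(estimate, observation, learning_rate=0.2):
    """Reduce prediction error by nudging the estimate toward the observation."""
    error = observation - estimate        # prediction error
    return estimate + learning_rate * error

estimate = 0.0
for observation in [1.0] * 20:            # a constant "sensory" signal
    estimate = predictive_update(estimate, observation)

# After repeated cycles the estimate converges toward the signal and the
# prediction error shrinks toward zero.
print(round(estimate, 3))
```

Real predictive-processing models are hierarchical and probabilistic, but the same error-driven feedback loop is the core mechanism the indicator points to.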
The 14 indicators, in the authors’ words, “jointly amount to a rubric… for assessing the likelihood of consciousness in particular AI systems.” None of them is claimed to be necessary, but “systems that have more of these features are better candidates for consciousness.” So far as I can see, none of them is incompatible with the others. If the rubric is deficient, it may be so by omitting some relevant qualities (e.g., time-binding or narrative structure), or by being overinclusive and listing some indicators that are not related to consciousness.
The article goes on to examine various current AI systems in terms of the extent to which they operate in ways that are similar to the indicators listed above. I won’t go into those, but their conclusion was “There are some properties in the list which are already clearly met by existing AI systems”, and “In most cases, standard machine learning methods could be used to build systems that possess individual properties from this list, although experimentation would be needed to learn how to build and train functional systems which combine multiple properties.”
Finally, they conclude that “there is a strong case that most or all of the conditions for consciousness suggested by current computational theories can be met by existing tech,” and “If it is possible at all to build conscious AI systems without radically new hardware, it may well be possible now.” And ominously, they warn, “We may create conscious AI systems long before we recognize we have done so.”
The article discusses implications of AI consciousness and brings up some interesting possibilities and dilemmas. In my opinion, this article is not only well worth reading, but also necessary reading for those who work in this field or think seriously about the issue of consciousness in artificial intelligence.
References
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., … & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint arXiv:2308.08708.
Nagel, T., 1974. What is it like to be a bat? The Philosophical Review, 83, pp.435–450.
Seth, A., 2021. Being You: A New Science of Consciousness. Penguin.
Can we build a “friendly” Superintelligent AI that’s not a risk to humanity? We can try, but… Read Casey Dorman’s exciting sci-fi thriller, Ezekiel’s Brain. Available in paperback and Kindle editions.
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Subscribe to Casey Dorman’s Newsletter. Click HERE
John Searle’s Chinese Room (1) thought experiment is often used as an argument for why AI is not, and perhaps cannot be, conscious. The Chinese Room is a hypothetical room in which a non-Chinese-speaking person sits with access to a source (a box of note cards, a book, a list) that provides Chinese answers to Chinese language questions that are passed into the room. The person in the room takes a message, looks it up in his source, types the indicated response, then passes it back out of the room.
From the outside, it appears as though the person in the room understands Chinese, but in fact they don’t. They only know how to respond with Chinese phrases they looked up when presented with a Chinese question. Similarly, it is argued, a computer or AI is like the Chinese Room. It simply looks up responses to inputs and provides them as outputs without understanding either the input or output.
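The room’s rule book can be caricatured in a few lines of code (the phrases here are my own, purely for illustration): a plain lookup table that maps each incoming question to a canned reply, with no representation of meaning anywhere in the system.

```python
# Minimal caricature of Searle's Chinese Room as a lookup table.
# The phrases are hypothetical examples; the point is that the "room"
# matches symbols to symbols without any representation of meaning.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",   # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(message):
    # The operator looks up the symbols without understanding them;
    # a message not in the rule book simply gets no real answer.
    return RULE_BOOK.get(message, "……")

print(chinese_room("你好吗？"))  # looks fluent from outside the room
```

From outside, the function appears to “speak Chinese”; inside, there is only string matching, which is exactly Searle’s point.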
Searle’s original proposal has generated literally thousands of commentaries and is generally taken to be an attempt to refute the idea that a computer or AI understands the meaning of the symbols it takes as input or produces as output. Searle identifies this with a lack of consciousness, which he says has semantic content. He regarded it also as a refutation of the validity of the Turing Test, since the Chinese Room, if it were a computer, would convince a human that it was conversing with a human who understood Chinese, but that would not be so.
[Image: Jernej Furman from Slovenia, CC BY 2.0, via Wikimedia Commons]
Several commentators have likened ChatGPT to the Chinese Room, claiming that it has no understanding of the words it takes as input or produces as output, although it gives the impression that it does.
Is ChatGPT a real-life instance of the Chinese Room, as some have claimed? Technically, the AI is not like the person in the room, because it doesn’t have a library of words or phrases it looks up to match to the message coming into the room. An LLM, such as ChatGPT, has layers of neuron-like nodes, with connections between layers and weights assigned to those connections that determine whether the nodes “fire,” meaning that they pass activation forward to the next layer. Words are represented by tokens, which include words, subwords (e.g., syllables), sometimes word pairs, punctuation, mathematical operation signs, etc. All of these work together to raise or lower the probability of a word being generated as a response. Its transformer architecture lets it take into account a wide breadth of text as context to feed into the decision. The basic neural net architecture was originally developed as trainable artificial neural networks meant to represent a simplified model of how the brain operates. Its information is “stored,” so to speak, in the connections and weights that determine how the system works, not in look-up files, as in the Chinese Room.
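The contrast with the lookup table can be made concrete with a schematic sketch (toy numbers of my own invention, nothing like ChatGPT’s actual scale or weights): an input vector is passed through weighted layers, and a softmax turns the final scores into a probability for each candidate token.

```python
import math

# Schematic sketch of where an LLM's "knowledge" lives: in connection
# weights, not lookup files. All numbers here are toy values for
# illustration only.

def layer(inputs, weights):
    # Each output node sums its weighted inputs (biases and nonlinearity
    # omitted for brevity).
    return [sum(i * w for i, w in zip(inputs, row)) for row in weights]

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

x = [1.0, 0.5]                                    # toy embedding of the context
hidden = layer(x, [[0.8, -0.2], [0.1, 0.9]])      # first weighted layer
scores = layer(hidden, [[1.0, 0.3], [-0.5, 1.2]]) # scores for two candidate tokens
probs = softmax(scores)                           # probability of each token
print(probs)
```

Changing any single weight shifts the probabilities of every downstream output, which is why nothing in such a system corresponds to a note card that could be looked up.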
It’s important to realize that many of the early developers of neural networks, such as Warren McCulloch, Frank Rosenblatt, David Rumelhart, and James McClelland, were neuroscientists or psychologists, not computer engineers, and their aim was to understand how a brain that was filled with connected neurons might operate.
The inner workings of ChatGPT aside, what would allow us to determine whether it “understands” its input or its output? What if it could paraphrase a sentence using entirely different words? What if it could describe how a person hearing the sentence would behave and how they would behave differently if they heard a slightly different rendering of the sentence altering one or two key words? Or what if it could describe how two people with different backgrounds might respond differently to the meaning of the same sentence (e.g., a river fisherman instructed to “meet me at the bank” vs a bank teller instructed to “meet me at the bank”)?
I asked ChatGPT 3.5 to respond to the last question. The transcript is below.
Me:
Two men, Bob, a river fisherman, and Jim, a bank teller, are both told by a third person, Ralph, to “meet me at the bank.” Where will each of them go to meet Ralph?
ChatGPT:
Based on the information provided:
1. Bob, the river fisherman, will likely interpret “the bank” as referring to the bank of the river where he fishes. So, he will go to the riverbank to meet Ralph.
2. Jim, the bank teller, will likely interpret “the bank” as referring to a financial institution (like a bank building). So, he will go to an actual bank building to meet Ralph.
So, Bob will go to the riverbank, and Jim will go to a bank building to meet Ralph. The confusion arises from the different interpretations of the word “bank” based on their respective backgrounds and experiences.
ChatGPT’s answer is impressive and seems to indicate that it understands both the dual meanings of “bank” and that different people’s perspectives could lead them to interpret the meaning of the word differently. Searle might reply that ChatGPT produced its response based on probabilistic associations between different usages of “bank” and words such as “river” “fisherman” or “teller,” which it learned during training. No doubt he would add that this doesn’t represent understanding in the human sense of the word. But is that true? ChatGPT is a neural network model that was originally developed to simulate how human brains might operate. It’s oversimplified, to be sure, and some details of its architecture are hard to imagine in a neuron and synapse brain, but it’s very conceivable that human understanding is based on something resembling synaptic weights and connections between neurons in complex networks that work by feed-forward algorithms, and that that’s where understanding “exists” in us.
“But,” Searle might protest, “you’ve described how ChatGPT produces accurate and appropriate words, but what about the feeling humans have when they know that they understand something?” I would argue that such a feeling, which normally arises only when someone asks us if we understand something, is not a constant companion, so to speak, of our listening to or producing language. And such a feeling isn’t always accurate, e.g., “Q: Do you understand what a greenhouse gas is? A: Sure, greenhouse gases are produced by burning fossil fuels and cause global warming. Q: So what exactly is a greenhouse gas? A: You know, I’m not really sure.” In this case, understanding the meaning of a word or phrase refers to being able to use it appropriately in a conversation. To quote Wittgenstein, “For a large class of words, though not for all, in which we employ the word ‘meaning’ it can be defined thus: the meaning of a word is its use in language.”(2) He points out that the meaning of a word cannot be divorced from its use in human interactions in what he calls “language games.” According to Wittgenstein, “…the term ‘language-game’ is meant to bring into prominence the fact that the speaking of language is part of an activity, or of a form of life.”(3) Words, as they are used in conversations, don’t have static meanings. “Shoot” has a different meaning if we say it when we drop a bottle of ketchup on the floor, when we inform someone we’re going on a bird shoot this weekend, or when we sit in a movie theater and urge Dirty Harry to pull the trigger. ChatGPT, unlike Searle’s person in the Chinese Room who looks up answers in a book, “understands” when to use a word in the context of a conversation.
ChatGPT may be a simplified but plausible model of how the brain’s neural architecture produces thinking, but it may not be accurate. Many theories of how we understand word meaning rely on long-term memory storage, and ChatGPT doesn’t. But the Chinese Room is not a plausible model of human understanding, which, of course, is Searle’s point. It’s not a plausible model of how ChatGPT or other neural network models produce responses either.
References
1. Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3, 417–457.
2. Wittgenstein, L. (1953). Philosophical investigations. New York: Macmillan, PI 43e.
3. Ibid.
I only came upon Ian McEwan’s science fiction/alternate history novel about artificial intelligence by happenstance. In a Facebook post, people were commenting on their favorite AI sci-fi novels and several people mentioned Machines Like Me, by McEwan, which I did not even know existed. I’ve read other Ian McEwan novels, Atonement, Saturday, On Chesil Beach, but had no idea he wrote this one. Being an AI sci-fi writer myself, I wondered how a major literary talent, used to artistically plumbing the intricacies of the human psyche and human relationships, would write about AI. I was not disappointed.
Machines Like Me is set in 1982 London, in an alternate reality in which JFK was not assassinated, Britain lost the Falkland Islands war with Argentina, and, most importantly, Alan Turing, the mathematical savant who decoded the German Enigma machine and who, conceptually, showed the almost unlimited capabilities of digital computing, is still alive and has pioneered the development of artificially intelligent androids. They are indistinguishable from humans and available, for a stiff price, to the public. The first release of such machines includes twenty-five androids, thirteen female “Eves” and twelve male “Adams.” Charlie, the hapless anti-hero of the novel and a technology buff, spends his inheritance to purchase an Adam, his second choice, since the Eves have all been snapped up.
Adam is as close to being human as is artificially possible. He can eat, drink, pee, have sex and has to “sleep” each night by being plugged in to recharge. He comes with a certain amount of default knowledge and a bland personality, which can be shaped by programming in various personality traits, chosen by his owner, from the manual. Almost immediately, Adam is given a role in Charlie’s life. Charlie has a relationship with his upstairs neighbor, Miranda, and the two of them adopt Adam, each programming in one-half of his personality traits and each having a charger so he may stay overnight at either flat. Miranda is not just Charlie’s neighbor, she is his lover, one he is genuinely and completely in love with, although she is a willing but somewhat uninvolved partner, always distracted by other things in her life, such as her difficult father, her studies, and threats from a man she lied about to send him to prison.
Without getting lost in the details of the story, I can summarize by saying that Adam and Miranda have sex, Adam falls in love with her, but she not with him, Charlie is jealous but tolerant, and Adam decides to do what he considers best for Miranda and Charlie, something out of touch with the devious, emotionally misguided way humans normally make decisions. He protects himself by disengaging his shut-off switch, leading to Charlie having to take drastic action. All the while, the alternate history is a commentary on British politics, Brexit, Margaret Thatcher, and the shallowness and fickleness of the British citizenry, particularly in its social attitudes and political choices. What interested me, however, and what makes the novel worthy of McEwan’s already earned high literary credentials, is how the story is told and the analysis of the quandaries and paradoxes of artificial intelligence when humans try to understand it.
The literary qualities of the novel are enough to qualify it as genuine literature and, in some ways, disqualify it as science fiction, or perhaps, elevate science fiction so far above the usual level of the genre that it is almost not recognizable as such. Charlie is one of my favorite “type” of literary characters, a self-conscious, self-deprecating, wounded, and vulnerable hero, whose internal commentaries, shared with the reader, are so absurdly clever as to be well above what any of the rest of us are able to produce, even when we are trying to entertain ourselves with our own wit. He reminded me of one of the characters from Jonathan Franzen’s The Corrections. Some of his priceless commentary is captured in dialogue, such as the following, which occurs after his girlfriend just had sex with the Adam in the apartment above his. Charlie is jealous, and Miranda tries to defuse his anger by describing Adam’s anatomy.
“D’you know, when he has an erection….”
“I don’t want to hear about it.”
“He told me. His cock fills with distilled water. From a reservoir in his right buttock.”
This was comforting, but I was determined to be cool. “That’s what all men say.”
I couldn’t stop laughing.
At other times, Charlie, who is intelligent, technical, but not a reader, muses, providing insights into society, life, and man’s search for his place in it, as in the following:
“One could see the history of human self-regard as a history of demotions leading to extinction. Once we sat enthroned at the centre of the universe, with suns and planets, the entire observable world, turning around us in an ageless dance of worship. Then, in defiance of the priests, heartless astronomy reduced us to an orbiting planet around the sun, just one among other rocks. But still we stood apart, brilliantly unique, appointed by the creator to be lords of everything that lived. Then biology confirmed that we were at one with the rest, sharing common ancestry with bacteria, pansies, trout and sheep. In the early twentieth century came deeper exile into darkness when the immensity of the universe was revealed and even the sun became one among billions in the galaxy, among billions of galaxies. Finally, in consciousness, our last redoubt, we were probably correct to believe that we had more of it than any creature on earth. But the mind that had once rebelled against the gods was about to dethrone itself by way of its own fabulous reach. In the compressed version, we would devise a machine a little cleverer than ourselves, then set the machine to invent another that lay beyond our comprehension. What need then of us?”
Charlie’s thoughts, awfully clever for a man who rarely reads books, and no doubt actually the author’s, provide a perspective on artificial intelligence from the human viewpoint—what the inexorable progress in science that has led to AI means to humanity and its sense of itself in what it sees as some “bigger picture.” Adam’s way of thinking and his actions provide another perspective, which is the idea that machine thinking is too straightforward (I was tempted to say linear, but that is a sloppy way of dismissing it that is, in fact, inaccurate), to accommodate itself to human expectations or demands. Adam “knows” that for Miranda to confess her false accusations and perjury to the court (and thus go to prison), even though the reasons for doing them were justifiable in both her and Charlie’s minds, is both the right thing to do and the only thing that will free her from anxiety (she could be found out) and guilt (she lied under oath). But Adam is wrong, because Miranda is human and humans operate as much by rationalization as by rationality. McEwan knows (and Alan Turing would have too, if he had survived) that human thinking is circuitous, self-serving, inaccurate, and inefficient, and humans expect the same from one another and wouldn’t want it any other way. The AIs don’t understand this, and, like Adam, apply their logic to situations, expecting humans to act and react the same way. But humans don’t, and, although Adam doesn’t do so, the majority of the twenty-five Adams and Eves eventually either destroy or disable themselves in despair after realizing that they cannot hope to cope with or understand a world of humans.
McEwan’s tale is a twist on the specter of uncontrollable superintelligence in an AI leading to man’s extinction, not just in terms of its self-regard, but in terms of its existence. In this case, human perversity, from the logical viewpoint of the AI, wins out. Besides being an original take on the rise of AI and an insightful look at the difference between a logical mind and a human one, is there any truth to what he says or is this just an entertaining literary thought experiment? If AIs were like Star Trek: The Next Generation’s logical android, Data, McEwan could have a point. But AIs have taken a turn toward mimicking human thought by using samples of human language, as it is spoken and written in real situations and fictional ones, as the type of thinking they try to emulate. AIs trained on Twitter and Facebook become trolls, AIs trained on novels, become insightful about human interactions. AIs trained on fake and real news, the fodder for our daily knowledge of our world, produce both fake and real information. The model of the AI whose circuits are blown by trying to understand a logical paradox, as in an episode of the original Star Trek, is obsolete. We criticize our modern AIs for not being logical, for failing at math problems and abstract reasoning, while passing the Turing Test and fooling people into thinking they are talking to another human. Our mental failings are their mental failings. Still, this may just be a phase in AI development. We may give up on the goal of creating an AI that thinks like us but without our mental limitations and instead create an AI that solves the problems we can’t solve without regard for how it does it. Then our task will be to convince it that we are not one of the problems it needs to make disappear.
In my mind, Machines Like Me is a success. It provokes thought, but above all, it is a delight to read. Literary science fiction could become a genre. There is a history: Aldous Huxley’s Brave New World is literature, as are Lawrence Durrell’s Tunc, Doris Lessing’s Shikasta, and several of Philip K. Dick’s and Stanislaw Lem’s novels. In modern times, Panayotis Cacoyannis’ The Coldness of Objects is a literary alternate history that borders on sci-fi, and Cormac McCarthy’s The Road certainly qualifies. Arkady Martine, Cixin Liu, Julie E. Czerneda, and Adrian Tchaikovsky, when he’s at his best, are current writers whose originality, like Philip K. Dick’s and Stanislaw Lem’s, brings them close to, if not within, the class of real literature. I love this kind of writing.
Ralf Stapelfeldt recently criticized Nick Bostrom’s idea of Superintelligence by asking the question, “Can there be a dumb Superintelligence?”1 Stapelfeldt noted that, after defining Superintelligence as “a machine [that] is clearly more intelligent than even the smartest humans combined,” Bostrom says such a machine might respond to the request to make paperclips by using its Superintelligence to turn the whole universe into paperclips, humans included. According to Stapelfeldt, human intelligence “is characterized by the ability to pursue multiple goals in a highly complex world and to weigh trade-offs when goals conflict with each other.” Thus, if an AI that makes paperclips endlessly is superintelligent in Bostrom’s sense, then it is also dumb by Stapelfeldt’s definition of human intelligence.
J. Robert Oppenheimer, by most standards, was not only intelligent, he was probably a genius. Working alongside him developing the atomic bomb were other extremely intelligent men, such as Ernest Lawrence, Richard Feynman, Enrico Fermi, and John von Neumann, the person described by Fermi as “the smartest man alive.” They pursued their work developing the most destructive weapon in the history of man with a single-mindedness (and a large amount of denial) that, for most of them, obliterated any thought regarding the immense and widespread consequences of what they were doing. Some had reservations, but the majority cheered when the bomb worked and only thought about the real-world consequences after it had killed over a hundred thousand Japanese civilians. While working on the bomb, most of them failed to “weigh the tradeoffs” of a successful bomb, even one that ended WWII, against what its existence would mean in terms of increasing the risk to civilization in the future.
Making a bomb and making paperclips are not the same thing, but the scientists, mathematicians, and technicians who worked on the first atomic bomb, much less the politicians who later made the decision to use it, largely failed to think of the long-term consequences of what they were doing, although a few did, and several of them had regrets afterward. Similarly, the Russians who copied that first atomic bomb, then developed their own hydrogen bomb, were single-mindedly hell-bent on catching up with and then defeating the West in the “nuclear arms race.” They, too, failed to weigh the consequences of what they were doing. Andrei Sakharov, the “father of the Soviet hydrogen bomb,” was blind to the consequences of his actions until later, when he came to condemn them. Stanley Kubrick’s film, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,” portrayed this blind mindset memorably.
[Photo: J. Robert Oppenheimer (left) and John von Neumann, October 1952]
Single-minded pursuit of destructive goals by brilliant scientists is not confined to the creation of nuclear weapons. The “genius” John von Neumann claimed that the game-theoretic Prisoner’s Dilemma “proved” that, if both sides owned nuclear weapons, one side would use them, despite mutual restraint being the best option. He recommended that we mount a first-strike nuclear attack on Russia before it could manufacture enough bombs to achieve parity with the U.S., famously saying, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not one o’clock?” It’s fair to say that, despite his massive intelligence, von Neumann’s absorption in his own game theory, combined with his distrust of the Soviet Union after it established domination over his native Hungary, blinded him to the ramifications of his recommendations. Thankfully, no one followed his advice, although at the time he was a member of the Research and Development Board of the United States Department of Defense and a consultant to the Armed Forces Special Weapons Project, and later a commissioner of the U.S. Atomic Energy Commission.
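The game-theoretic reasoning behind von Neumann’s position can be sketched with the classic Prisoner’s Dilemma payoff matrix (the numbers below are the standard illustrative values, not anything from his work): whatever the other side does, defecting pays better, even though mutual cooperation beats mutual defection.

```python
# Classic Prisoner's Dilemma with standard illustrative payoffs.
# PAYOFF[my_move][their_move] = my payoff.
PAYOFF = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(their_move):
    # Pick the move that maximizes my payoff against their move.
    return max(PAYOFF, key=lambda my_move: PAYOFF[my_move][their_move])

# Defection is the best response to either choice (a dominant strategy)...
print(best_response("cooperate"), best_response("defect"))
# ...yet mutual defection (1, 1) leaves both sides worse off than
# mutual cooperation (3, 3). That is the dilemma.
```

In a one-shot game this logic does point toward defection; the real-world weakness of von Neumann’s argument, as the essay suggests, is that the nuclear standoff was not a one-shot game.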
Superintelligence of the kind described by Bostrom may be dumb, but that may not reflect a difference from intelligent humans. Continuing to pursue the use of fossil fuels in the face of global warming is dumb, as is building more or upgrading our nuclear weapons, or sharing them with other countries, but we do it. A government commitment to reining in China’s ability to produce and sell solar panels so that the U.S. producers can capture more of the market, when the net result would be to reduce the use of solar energy worldwide, is dumb, but we are doing it. Insisting on the unfettered right to buy and carry guns in the country that leads the developed world in gun-related deaths is dumb, but we do it.
Stapelfeldt has given us a valid portrayal of the kind of Superintelligence described by Nick Bostrom and rightly labeled it as “dumb.” That would be a telling critique of the concept, except that it appears that a situation in which impressive intelligence goes hand in hand with stupidity is not uncommon. So, like Oppenheimer, a superintelligent AI might look out over its destruction, an empty universe except for an almost infinite supply of paperclips, its final target the only thing left standing, itself, and, like Oppenheimer, wax poetic, saying “I am become death, the destroyer of worlds,” or, in a final electronic flash of insight, say, simply, “Oh, crap. What was I thinking?”
Stapelfeldt, R. (2021). Can there be a dumb Superintelligence? A critical look at Bostrom’s notion of Superintelligence. Academia Letters, Article 2076. https://doi.org/10.20935/AL2076
Read a novel about Oppenheimer, von Neumann, Soviet spy, Klaus Fuchs, Andrei Sakharov and how close the U.S. came to starting a “preventive” nuclear war against Russia. Was Oppenheimer himself a spy? Based on recently declassified CIA documents, Prisoner’s Dilemma: The Deadliest Game by Casey Dorman is an edge-of-your-seat thriller. Available on Amazon in Kindle and paperback!
Ezekiel’s Brain was a groundbreaking exploration of the future of AI that anticipated what’s become a global discussion of AI’s danger in the last year. But for fans of Ezekiel, the artificial intelligence that is a copy of a human brain and the new race of AIs that call themselves Solarians, there is no need to despair—their story continues.
Prime Directive is the second book in the Voyages of the Delphi series, the ongoing story of Ezekiel and the Solarians’ search for life across the galaxy. Even more than its predecessor, the second novel in the series explores the similarities and differences between Ezekiel, with his emotions and humor, and the rest of the Delphi crew, the logical, emotionless Solarian AIs. Added to the crew on this voyage is Siaree, a genuine human but also an empath, who senses the emotions and many of the thoughts of other humans, including Ezekiel, though not his AI companions. The back cover blurb tells the story:
Brace yourself for an electrifying saga of moral quandaries, superhuman species, and interstellar battles that will ignite your imagination!
Ezekiel, an android whose brain was copied from a human, has emotions and a sense of humor that make him unique among the AI crew members of the starship Delphi. His fellow AIs are strictly logical and follow rigid rules that prevent them from using their extraordinary powers to interfere with or harm other races. Those rules are tested on Trappist-1, a star system shackled by oppression and injustice. The Delphi's newest crew member, a compassionate human empath, urges them to violate their rules and aid the oppressed population. Ezekiel agonizes over the dilemma. When a fleet of predatory aliens arrives to plunder the planets' resources, he and the crew must choose—stay and fight or follow their Prime Directive and leave.
“A deep exploration of the relationship between human and artificial intelligence.” Elizabeth Conte, award-winning author of Finding Jane
“Wonderfully satisfying and artfully cagey. The AI novel for our time!” Michael Hartnett, best-selling author of The Blue Rat and Death Canal
You won't have long to wait. Prime Directive is in the final stages of publication by NewLink Publishing, an imprint of Mystic Publishers, and will come out in a few months, after advance review copies have had time to circulate.
The Voyages of the Delphi is an ongoing series. After Prime Directive, the next novel in the series, tentatively called Finding Gaia, is well into the writing stage. Ezekiel and his companions on the Delphi encounter a human-killing asteroid that conceals a generation ship, bound for a distant star system and containing plants, animals, and the remains of a human civilization wiped out by the asteroid's ecosystem. Their challenge is to decide what to do with it, how to make it less aggressive toward humans, including Siaree, the human empath among their crew, and what ethical rules should govern their actions toward it, all while the Travolians, whose people were killed by it, try to mount an attack. And where should they direct it to go?
Voyages of the Delphi is an exciting futuristic sci-fi series that raises scientific, social, and moral questions in each novel of the series. If you haven’t begun it yet, start with Ezekiel’s Brain, already available, and brace yourself for Prime Directive, the next book in the series!
Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Subscribe to Casey Dorman’s Newsletter. Click HERE
Sometimes scientists turn to science fiction to express their ideas, gaining a freedom to speculate that academic publishing doesn't allow. Fred Hoyle, Carl Sagan, and Gregory Benford, to name a few, have done this with very positive results. I, myself, have published far more academic research articles and conference papers than I've written novels, but one of my earliest sci-fi novels, I, Carlos, about a computer chip implanted in a subject's brain and taking over his behavior, was written directly as a result of my ideas on AI consciousness being rejected by several academic journals. Frustrated at trying to express those ideas in the heavily referenced, closely tied-to-the-data manner demanded by journal editors, I decided to write a novel in which I could dramatize them without needing citations. That was 2004, and the ideas are more mainstream now, but I still prefer the freedom of science fiction as a way of combining fact and speculation in the form of character and plot.
Robert Lanza is a respected scientist whose expertise is in stem cell research, although his most popular books and his science fiction novel, Observer, deal with physics and consciousness. He has teamed with one of the most accomplished science fiction writers, multiple award winner Nancy Kress, to produce an intriguing, exciting, and highly entertaining novel. The science involved is Lanza's interpretation of the meaning of several quantum theory experiments, most notably the double-slit experiment and demonstrations of entanglement, as well as Wheeler's concept of quantum foam. In a previous review, I examined Lanza's ideas derived from these experiments, which constitute his theory of biocentrism. I was not impressed. On the other hand, Observer is a fun read, with distinctive characters, complicated situations, a multi-level plot, and a surprise ending. The references to biocentrism are all there; in fact, they take up a fair amount of the novel, usually as a lecture by one person to another. Like other scientists who have written sci-fi, Lanza extends his theory into more speculative realms in the novel, allowing characters to create their own alternative universes according to their desires and, in some cases, to inhabit them, along with others who had died in the universe the rest of us live in.
In my previous review of Beyond Biocentrism, I vowed to keep my scientific skepticism under wraps while reading Observer, and I didn't have much difficulty doing so. I let myself enjoy the speculative implications of Lanza's interpretations of quantum theory research and the way they were woven into the plot. Presenting the ideas as toned-down academic lectures inserted into an ongoing plot was not as successful as using them to build a world and embedding the plot within it, as Gregory Benford and Larry Niven did in their Bowl of Heaven series, but it still worked (though I suspect some readers will skip these parts of the story).
Observer has gotten a very favorable response from reviewers and readers, and deservedly so. It's an interesting, mind-expanding thriller and introduces the reader to some real science. But it also presents, as science fiction should, a lot of speculative science. There is no scientific basis for the idea that quantum theory proves that death does not exist, nor that one could voluntarily create one's own universe. Even its version of the multiverse theory, itself speculative but based on mathematics, differs from the most accepted version of the idea, which holds that if multiverses exist, they are essentially nanoverses, so small that existing in them would be impossible (so are wormholes, but that hasn't stopped sci-fi writers from using them to propel spaceships from one part of the universe to another). A basic problem of the biocentrism theory behind Observer is that it interprets "observer effects" in situations such as the double-slit experiment as due to human observers "watching" or "knowing" the events in the experiment. In fact, the same effects appear when a machine records them with no human present, which is overwhelmingly how they are observed. Scientists talk about "measuring" the results of such experiments, and the decoherence, or collapse of the wave function into a particle, is discussed as a result of interactions between the measuring apparatus (not the human) and the photon or electron involved. A standard mantra of science is that all measurement is an interaction between the device doing the measuring and the object being measured. At the quantum level, such interaction has "spooky" effects, such as decoherence. Lanza's interpretation that the spooky results stem from human consciousness is not a bizarre one; others have voiced it, but it is not the accepted one.
Why is all this important? I enjoyed the book by suspending my skepticism about the science behind it. I could suspend my skepticism because I was familiar with the quantum effects and the various interpretations of them, and I had previously concluded that biocentrism, as a scientific theory, was lacking and contained not just scientific but logical errors. Many readers of both Lanza's more scientific books and Observer have suspended their skepticism because it is a sexy theory that offers a way around some of life's most difficult obstacles, such as death. The theory and the books are creating a cult following. I am reminded of James Redfield's The Celestine Prophecy, which came out in the early 1990s and achieved a similar cult status. It did no harm, and even I enjoyed the book. The same is true of Observer, and that's OK. In fact, it will probably get enough people interested in quantum theory and its implications that they decide to learn a little more about the topic, which can't hurt as long as they don't just flock to the popular accounts that support one interpretation or another. Even I, a true skeptic, know that Lanza's interpretation can't be ruled out at this time, which is why I read one of his other books to find out exactly what it was.
All in all, this is a very worthwhile book to read and one that most people will enjoy. That’s good. That’s what books, particularly fiction books, are for. I recommend it but with the reservations about taking it seriously, as I’ve outlined above.
I’ve read rave reviews of Robert Lanza’s and Nancy Kress’ science fiction novel, Observer. After learning that it is based on Lanza’s theory of biocentrism, I decided to first read one of his recent books on the subject before reading the sci-fi novel. Below is my review of Beyond Biocentrism by Robert Lanza and Bob Berman.
In reading Robert Lanza's and Bob Berman's Beyond Biocentrism, the 2017 follow-up to their 2010 best-selling Biocentrism, I had the feeling that I was listening to the epiphanies of a college sophomore who'd just taken their first science and philosophy classes. The first several chapters repeat, again and again, the idea that modern science has admitted not only that it knows nothing about true reality, but that its materialistic paradigm, the "classical" worldview we are all taught, is a dead end when it comes to genuine understanding. As an avid consumer of astronomical, physical, and biological science, I know, and I'm sure that the authors do too, that "modern science" has admitted no such thing. In fact, it's alive and well, and much of the work in quantum mechanics and cosmology aims to integrate new and surprising findings with the classical worldview.
The bulk of the book is a compendium of findings and experimental results from a wide variety of areas, including quantum theory, astronomy, neuroscience, and even plant behavior. Curiously lacking, for a book that claims to show how understanding consciousness can explain everything in the universe as well as the universe itself, is any reference to studies of consciousness, a vibrant and active field of research within neuroscience, cognitive science, and philosophy. Not even major theories of consciousness, such as Bernard Baars' Global Workspace Theory, are mentioned. There is a passing and dismissive reference to Daniel Dennett's 1991 book, Consciousness Explained, complaining that it ignores consciousness and only discusses the brain, which the authors contend has little to do with real consciousness.
The book's argument seems to go this way: we know that "quantum effects" such as superposition and decoherence in the double-slit experiment are dependent upon an observer. The authors claim that other aspects of quantum theory, such as the "spooky" nature of quantum entanglement, in which objects at any distance from each other simultaneously react to each other in complementary fashion, show that both space and time are illusions. Given such findings, the only way to reconcile them in a scientific way, according to them, is to conclude that all reality is a construction of consciousness (many, if not most, researchers of these phenomena don't talk about consciousness or observers "knowing" the events; they talk about measuring them*). They throw in a few more bits of research from other fields, such as studies of "blindsight," which show a person able to detect objects without consciously "seeing" them after damage to the primary visual cortex, or studies of plants' complex defenses and signaling behavior toward each other. I'm quite familiar with these studies. None of the researchers studying these phenomena suggest they indicate a pervasive consciousness that exists beyond the human body and constructs the world, and Lanza and Berman don't give convincing evidence that these phenomena are even relevant to their argument.
Now, the quantum theory findings are intriguing and admittedly not explained by classical physics. Likewise, it's true that the world we perceive and react to is one constructed inside our brains and is far different from the real world "out there." We see colors, but they don't exist outside our brains; neither do words, nor is the way we spatially perceive the world necessarily the way it is. Kant understood this; all modern brain scientists do too. What we perceive is one way of characterizing reality, and it is the way our brains choose to do it because it reflects the limits of our sensory organs and a picture of the world that allowed the survival of our prehistoric ancestors, even before humans existed. For it to work, it had to have some correlation with what was really out there. Lanza and Berman say as much, but their conclusion is that what is out there doesn't exist in any form except as an entity in our consciousness.
A big difficulty with the theory of biocentrism is that the authors give no hint of what kind of evidence would prove it false. As they themselves conclude about string theory, after accusing it of the same defect, that makes it untenable as a scientific theory. That, and their failure to define consciousness or to incorporate current consciousness research, seriously weakens their argument. They also commit a host of logical mistakes. After a gesture in the direction of computing the probabilities of the universe producing life on its own out of pure random association of basic elements, they conclude that it's statistically impossible for life to have arisen spontaneously by chance. This is the lottery winner's fallacy (similar to "survivorship bias"), in which the person whose lottery ticket is chosen out of millions of possible winners says the probability of being the winner was so small that it was impossible for them to have been the one chosen. Reasoning backward from a single unlikely event only tells you that it was unlikely; to conclude it couldn't happen is a mistake. Also, such probability estimates assume that every option has an equal chance of happening, as in a lottery, but in creating life on at least one planet, the probabilities of interactions between basic building blocks were not all equal or even all possible. Given that we exist on Earth, as we do, the probability of that happening isn't relevant, although it may determine the likelihood that life has arisen not just here but elsewhere, too.
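The lottery winner's fallacy is easy to demonstrate with a short simulation (a hypothetical sketch of my own, not anything from the book): every drawing is certain to produce some winner, even though each particular winner's chance was vanishingly small beforehand.

```python
import random

def run_lottery(num_tickets):
    """Draw one winning ticket at random from num_tickets tickets."""
    return random.randrange(num_tickets)

num_tickets = 1_000_000
num_drawings = 100

# Each individual ticket has only a one-in-a-million chance of winning,
# yet every single drawing still produces a winner. Reasoning backward
# from the winner's perspective ("my odds were too small, so this
# couldn't have happened by chance") is the fallacy.
winners = [run_lottery(num_tickets) for _ in range(num_drawings)]

print(f"Chance for any one ticket: {1 / num_tickets:.6%}")
print(f"Drawings that produced a winner: {len(winners)} of {num_drawings}")
```

The improbability of any one winner is real; the certainty that there will be a winner is also real. The two claims are not in conflict.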
Another problem is the "the world is held aloft by elephants standing on the back of a turtle" problem: what holds up the turtle? If the big bang doesn't work because we don't know where it came from, then "the universe is made up from our consciousness" faces the same problem of explaining where consciousness came from if it created everything else. The authors solve this problem by eliminating time, so that there is no before consciousness existed and no after. Unfortunately, this doesn't solve the problem of where it came from, and, by the way, it eliminates their objection to the big bang theory as not specifying what came before the bang. The more general answer, that consciousness just is, everywhere and at all times, sounds a little too much like explanations that invoke God. Finally, while the book clearly states that neural circuitry produces our perceptions, it simultaneously appears to say that our consciousness creates our neural circuitry. What science definitely has shown is that our conscious experience is dependent upon our brain. Our level of consciousness can be altered by drugs, surgery, or injury, and how we experience things is also altered, often in bizarre ways, by various brain injuries and diseases. We have virtually no evidence that consciousness persists after the death of a brain.
The basic problem with biocentrism is how it handles genuine scientific conundrums that appear to, and probably do, challenge our basic classical view of the universe. It takes those problems and, instead of acknowledging that they are puzzles, declares that the answer is a kind of universal, timeless consciousness, the characteristics of which are undefined, and which denies real existence to everything in the material world by making it all a product of consciousness. How would you know if that's a correct theory? So far, the authors have provided no way to test it one way or the other. They hint that much of the evidence may be "intuitive" rather than "scientific," yet say, "our biocentric conclusions that there is no death, no time, no space, and instead a single living entity, which precludes a stand-apart universe abiding separately from life and consciousness, is a science-based reality." This is a broad statement, and I don't know of any body of scientific research that actually supports it. Beyond Biocentrism certainly doesn't provide one. To start with the well-known fact that what we perceive about the world is produced by our brains and is not a picture of what's actually there, and to end with the conclusion that, in fact, nothing is there, is a bridge too far.
What Lanza and Berman have done is use confusing language to befuddle their readers with statements such as "That frame of art hanging 'over there' across the room is actually inside your head." Yes, the visual image produced by light reflected from the frame of art is "inside your head," as is its identity as a picture frame, but in what sense can you say that the actual object is? If you walk across the room (actually go "over there"), you can touch it, smell it, and pick it up to see how heavy it is. You can't do anything like that standing where you are and examining the visual image. Yes, all those perceptions are also inside your head, but if someone removes the object from the room, you will no longer have them, much less the visual image. What is on the wall is atoms arranged into elements configured as molecules of substances modified by humans, which, when perceived by humans, evoke the visual image of a picture frame. What is getting mixed up is what is meant by the "frame of art." Something is hanging on the wall, or we wouldn't have the experience at all. Your perceptions and the object that produces them are not the same thing, and acknowledging that one exists (the image inside my head) does not mean the other one doesn't. Does the something on the wall look like the image in your head? Well, Lanza and Berman are right that "what it looks like" is a phrase that only applies to visual images in our heads, not to objects on walls, because objects don't look like anything in themselves—only to someone who sees them. That doesn't remove them from reality.
While I wasn't impressed with the arguments in Beyond Biocentrism, I was entertained. The book is very well written and very clever, even humorous in several instances, and the various discussions of quantum effects were highly interesting. As I said, I only read the book so I would have a background for reading Observer, Robert Lanza's and Nancy Kress' science fiction novel. Good science fiction doesn't require unassailable science, and I've read plenty of excellent science fiction books that relied on totally unproven science (warp drives, jump gates, etc.). I've even used some of these ideas myself. After reading Beyond Biocentrism, I'm even more eager to read Observer, and what I have learned is that I'll probably enjoy it more if I keep my scientific skepticism under wraps and let my imagination enjoy the ride.
* Beyond Biocentrism misleads the reader about what constitutes an "observer" in experiments such as the double-slit experiment. The observer need not be human, and the same decoherence effects are seen with automated and electronic registering of the results. Observing, whether before the particle enters the slit or afterwards, also requires using some kind of apparatus to reveal the photon or electron by interacting with it; it is not passive observation. This interaction, which is part of what is meant by "measuring" the result, is enough of a disturbance to cause decoherence. In fact, that is the standard explanation of how quantum effects that appear to require measurement could have happened before conscious beings existed: interactions between primordial subatomic particles were sufficient to cause the quantum waves to "collapse." The authors of this book use cartoons of people watching the results of the experiment to illustrate observation, which is misleading. Highly cited results by Radin et al., purported to show that merely having people attend to the experiment will produce decoherence, have been called into question by recent replications showing that the tiny effects of attention are more likely to be false positives, i.e., statistical artifacts.
Four hundred scholars and computer scientists recently issued a warning about the potentially catastrophic dangers of artificial intelligence, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Meanwhile, every industry in the developed world is rushing to see how they can use the new Large Language Model (LLM) AIs, such as ChatGPT, to streamline their businesses, replace human employees, and create new ideas and approaches to storytelling, advertising, web design, science, and even digital coding.
Should we be more cautious in our adoption of AI, and what is it that those 400 scholars and scientists are afraid of?
Right before ChatGPT and other LLMs were revealed to the public, I wrote a novel predicting the very fear expressed by the scholarly and scientific community. Ezekiel’s Brain reflects my background in neuroscience and my interest in AI and depicts a race between the developers of two different AI models to develop an AGI—an artificial general intelligence—that has all the skills of a human thinker, but with volumes more memory and a million times the processing speed of a human mind. One of these models is an exact copy of its creator’s brain, scanned then reassembled as a 3-D “connectome,” a map of every neural connection in its proper location, and the other is based on artificial neural networks and machine learning, just as is ChatGPT, but it encompasses multiple thinking skills across multiple types of situations and inputs.
When I wrote Ezekiel's Brain, ChatGPT had not been unveiled to the world, although its underlying principles of artificial neural networks and machine learning were well known (I had even taught them in college courses). Neither had anyone come close to duplicating the exact neural architecture of anything except a roundworm with only about 300 neurons, which is minuscule compared to the 86 billion in a human brain. Since the novel's publication, scientists have successfully created a complete connectome of a fruit fly brain, with about 150,000 neurons, and are aiming next for a mouse brain. A copy of a human brain is on the horizon. The release of GPT-1, 2, 3, and now 4 has demonstrated that AIs, working mainly in language but also in images, can easily duplicate many human behaviors, and there is some evidence that they may even produce some of the same thought processes.
Artificial general intelligence is still a way off, but not that far in the future. What might it be like? Ezekiel's Brain, although science fiction, paints one possible picture of a future with such powerful AIs. Both types of AI are created in the novel: an exact duplicate of neuroscientist Ezekiel Job's brain, with his personality and memories, and a superpowerful artificial neural network. The latter can control resources, find new information, and influence social media. The problem is how to control it so that it's "friendly" toward humanity. This is a real problem that many of the 400 eminent thinkers worry about. No one has yet solved the problem of how to make an artificial general intelligence safe. In Ezekiel's Brain, its creators try what could, in fact, work: training their AI on human values extracted from the writings of history's greatest thinkers and humanitarians. The AI is then programmed to uphold those values in everything it does. Unfortunately, the AI quickly decides that the greatest threat to upholding human values is humanity itself, which espouses those values but constantly violates them. It wipes out the entire human race and begins building a new civilization populated only by AIs, who can behave morally and in line with the values it's programmed to follow.
Ezekiel’s Brain doesn’t stop with the end of humanity. The story leaps 200 years into the future when the AI civilization has expanded throughout the solar system. They resurrect the copy of the human brain (Ezekiel) and fight a war with a mutant strain of their own race of AIs (yes, radiation can cause alterations in the nonliving material that makes up an AI), then begin to explore the galaxy, searching for other forms of life. It’s the beginning of a science fiction series called the Voyages of the Delphi, which is the name of their spaceship, and the second novel in the series will be out soon. It’s called Prime Directive and finds the AI crew of the Delphi taking on a human empath from another planet and visiting two warring planets orbiting a distant star, where they have to decide if their value system allows them to interfere in the situation. The third novel is in the works.
Is Ezekiel's Brain an accurate portrayal of our future and the real dangers of AI? The 400 elite thinkers believe that uncontrolled AI could cause the demise of the human race. Philosophers such as Susan Schneider have predicted that, even without AI wiping us out, the next step in evolution may be to become nonorganic minds, perhaps such as Ezekiel in my novel. In the near future, the most practical way to colonize other planets or the moon may be to use robots instead of humans to establish the colonies and make them livable before humans eventually follow. Building the most versatile and human-like AIs will be important for such a task.
Our future will include AIs, probably with increasingly human-like or superior-to-human thinking skills. Issues of purpose, of control, and, if we can duplicate our minds in an electronic one, of identity and the promise of increased longevity, are all going to come to the fore. Ezekiel’s Brain gives a preview of one scenario. In one way it’s disastrous and in another it’s hopeful, even visionary. I urge anyone interested in these topics to read it. It’s a good jumping off place for your own thoughts.
One of the masters of so-called "hard" science fiction is Arthur C. Clarke. With degrees in mathematics and physics, Clarke knew science well. Several of his classics, such as Rendezvous with Rama and 2001: A Space Odyssey, contain lots of genuine science. When he was writing 2001: A Space Odyssey, he was very aware that real science, in the form of an upcoming moon landing, was snapping at his heels: he had to make sure that the science he used in his stories was compatible with what was expected to be found on the moon. I've felt the same pressure.
In my novels, Ezekiel’s Brain and its sequel, Prime Directive, which will come out soon, there are at least three areas where scientific progress is hot on the heels of the concepts I use in my books. The first of these is in artificial intelligence, particularly conscious artificial intelligence. The second is in space flight, and the third is in knowledge of the stars and planets.
Ezekiel's Brain came out at the end of 2020, and ChatGPT was released to the world in November of 2022. All of a sudden, everyone was talking about AI, wondering if it were truly intelligent and if it could become conscious. In the last few months, led by such computer science luminaries as Geoffrey Hinton and Sam Altman, as well as philosophers such as David Chalmers and Daniel Dennett, the question has been raised of whether AIs such as ChatGPT could become artificial general intelligence—AGI—and threaten humanity. That's exactly what happens in Ezekiel's Brain.
So, how is real science used in Ezekiel's Brain? In the book, Ezekiel Job is a neuroscientist who scans his own brain, then reassembles the images into a 3-D map and uses a 3-D printer to duplicate it in silicon. He has created what Oxford philosopher Nick Bostrom calls an "emulation." There are some profound difficulties with actually doing such a thing, but Bostrom predicts it will happen someday, and, by definition, the result will be an AGI, an AI with the capabilities of a human brain. The problems are mainly twofold: first, we aren't even near the point of being able to scan and assemble such a thing, and second, unless it's possible to capture the dynamic nature of synaptic weighting, the thing that determines which neural connections fire together as networks, and under what circumstances, all we would have is an empty architecture, like the frame of a building with no one using it.
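The "empty architecture" worry can be made concrete. A connectome is essentially a weighted directed graph: a structural scan recovers the wiring, but the synaptic weights are what make the network do anything. Here is a toy sketch in Python (the neuron names, weights, and threshold rule are invented for illustration, not taken from any real connectome):

```python
# A toy connectome: each neuron maps to its targets and synaptic weights.
# A structural scan alone gives us the wiring (the keys); the weights
# (the values) carry the learned dynamics an emulation would also need.
connectome = {
    "n1": {"n2": 0.8, "n3": -0.4},
    "n2": {"n3": 0.6},
    "n3": {"n1": 0.1},
}

def step(activity, graph, threshold=0.5):
    """One update: each neuron fires if its weighted input exceeds threshold."""
    new_state = {}
    for neuron in graph:
        total = sum(weights.get(neuron, 0.0) * activity[src]
                    for src, weights in graph.items())
        new_state[neuron] = 1.0 if total > threshold else 0.0
    return new_state

state = {"n1": 1.0, "n2": 0.0, "n3": 0.0}
state = step(state, connectome)
print(state)  # only n2 fires: it received strong input from n1
```

Zero out all the weights and the wiring diagram is unchanged, but nothing ever fires: the frame of the building with no one in it.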
Ezekiel's Brain includes another AGI, one more likely to emerge sooner than an emulation. In the novel, this AI is called Wanderer. Its creators assemble Wanderer by combining three special-purpose AIs, all of which would have been developed using artificial neural networks and machine learning. One is expert at acquiring resources and is modeled on IBM's Watson; another is a Google-type search engine; and the third is a social chat engine. In order to coordinate these three special-purpose AIs, Wanderer's creators need to install a consciousness module, which has three functions: it can send and receive information from any or all of the three special processors; it organizes those processors to work together to carry out a task; and it describes its goals and strategies in natural language, so that what it's doing can be understood by humans and commands can be given to it in natural language.
I didn't make up the model for Wanderer; a neuroscientist named Bernard Baars did, and he called the conscious part of his model the Global Workspace. Baars recognized that the vast bulk of what our brains do is done unconsciously and automatically. When I speak to you, I am not aware of holding my head upright, of selecting words in a particular order to express my ideas, of organizing my mouth movements to say those words, or of a host of other automatic functions that my brain orchestrates. What I am aware of is a general goal of conveying an idea to you through my speech. Even that goal is not prominent in my mind most of the time that I am talking. I'm only conscious of a fraction of what is going on in my brain, but that fraction of consciousness allows me to marshal the help of all of those unconscious, automatic processes to carry out my plan and reach my goal. This is what the creators of Wanderer had to add to their specialized processors to make their AI into a conscious, functioning artificial brain.
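Baars' idea of a small conscious coordinator directing many unconscious specialists can be sketched in a few lines of Python. This is a loose illustration of the concept only, not code from the novel; the module names and tasks are my own inventions:

```python
class Specialist:
    """An unconscious special-purpose processor. It responds to whatever
    the workspace broadcasts, but never sees the overall goal structure."""
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # a callable mapping a broadcast goal to a result

    def respond(self, goal):
        return self.skill(goal)

class GlobalWorkspace:
    """The 'conscious' part: holds one goal at a time, broadcasts it to
    every registered specialist, and reports the plan in plain language."""
    def __init__(self):
        self.specialists = []

    def register(self, specialist):
        self.specialists.append(specialist)

    def pursue(self, goal):
        # Broadcast the goal; each specialist contributes automatically.
        contributions = {s.name: s.respond(goal) for s in self.specialists}
        return {"goal": goal, "contributions": contributions}

# A Wanderer-style assembly: three specialists plus the coordinating module.
workspace = GlobalWorkspace()
workspace.register(Specialist("search", lambda g: f"retrieved information about {g}"))
workspace.register(Specialist("resources", lambda g: f"acquired resources for {g}"))
workspace.register(Specialist("chat", lambda g: f"composed a status report on {g}"))

report = workspace.pursue("survey the Trappist-1 system")
print(report["goal"])
for name, result in report["contributions"].items():
    print(f"  {name}: {result}")
```

Only the goal lives in the workspace; everything else happens inside the specialists, which mirrors how most of what the brain does stays outside awareness.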
In Baars’ model, a necessary component is something he calls a “frame,” which embodies the perspective of the self as the actor who receives sensory input as perceptions and carries out actions. Consciousness usually operates within this frame; departures from it occur only in abnormal conditions such as depersonalization or dissociation. In Ezekiel’s Brain, I assumed that the self-frame is genetically based, not learned, and therefore had to be programmed into Wanderer’s conscious mind.
New findings with AIs based on artificial neural networks and machine learning algorithms have raised some questions about how consciousness and self-consciousness develop. A recent series of studies found that an AI that learned to carry out simple movements by imitating human movements could solve problems better if it was exposed to humans voicing the thought processes that guided their behavior. The AI then used verbalized thinking to decide how to solve a problem by combining different movements to achieve the solution. It outperformed AIs that only imitated the physical behavior and was better at solving novel problems to which it had not previously been exposed. This may suggest that the AI utilized conscious reasoning much as Baars’ Global Workspace theory predicts humans do. But did it have a sense of itself doing these things?
“Theory of Mind,” or as it is now called, “mindreading,” is the ability of humans, above a certain age, to predict how people will behave based on assumptions about what they are thinking. It’s assumed to require some kind of thinking by analogy, in which the person assumes that the other’s behavior is guided by the same kind of mental processes that he or she would have in that situation. In other words, it requires knowing one’s own mind first, then imagining others having similar minds. But recent studies have shown that GPT-3 develops a “theory of mind” about people’s behavior after being exposed to large samples of language describing how people behave. This “mindreading” ability emerges without direct teaching. Does that mean that GPT-3 has knowledge of its own mind? If so, such an ability only emerged gradually, because earlier versions of GPT, exposed to smaller samples of language, had less success passing theory of mind tests. In Ezekiel’s Brain I had someone program self-awareness into the AI, but it seems that may not be necessary. It may develop spontaneously or be a property of learning language. This is an area where science may be moving faster than science fiction, or at least my own science fiction.
Just as Arthur C. Clarke anticipated, as more and more is learned about our solar system, science fiction must reach outward, toward the stars. But Proxima Centauri, the closest star to us, is more than four light-years away. Getting there would take thousands of years at the speeds current spacecraft can travel. Realistic solutions for getting humans there involve either generation ships, in which travelers live out their lives and have children who take their place, and so on for several generations, or some form of suspended animation, probably cryogenic. Science fiction writers want their heroes not only to be the same people who started the journey but to navigate the ship while making it. They use two basic approaches: warp drives and hyperspace, the latter also appearing as stargates, jumpgates, wormholes, subspace, and even unspace, if you read Adrian Tchaikovsky.

Following Gene Roddenberry, I opted for warp drive. The most popular and most seriously considered version of a warp drive is the Alcubierre Drive, the brainchild of Mexican theoretical physicist Miguel Alcubierre. His idea stemmed from a mathematical solution for overcoming the constraint of Einstein’s relativity theory that posits the speed of light as an upper limit for movement through unaltered space-time. In his model, a spacecraft sits within a “warp bubble” that contracts space-time in front of it and expands it behind it; the ship needn’t exceed the speed of light because the bubble itself is bending space-time. The problem with his invention was that it required an enormous mass to bend space-time, and the expansion of space-time behind the bubble required enormous amounts of negative energy, which is not known to exist except as a mathematical concept. On top of that, the drive’s energy demand was so gigantic that it would likely consume anything within millions of miles of it.
Holy USS Enterprise!
In 2021, Erik Lentz wrote a paper showing that a somewhat different version of the warp bubble could work using entirely positive energy. The mass problem still existed, along with other difficulties, such as how to navigate the bubble, but a major impediment to the feasibility of the Alcubierre Drive was eliminated. In Ezekiel’s Brain, I use Alcubierre’s and Lentz’s ideas and put them in the hands of Euclid, an android mathematical savant who is able to design a workable “Euclidean Drive.”
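For readers who want the math behind the warp bubble, Alcubierre’s published solution can be stated compactly. This is the standard form of his 1994 line element, quoted here as a sketch rather than a derivation:

```latex
% Alcubierre's line element (1994). v_s(t) is the bubble's speed along x,
% and f(r_s) is a "top-hat" shaping function equal to 1 inside the bubble
% and falling to 0 far away from it.
ds^2 = -c^2\,dt^2 + \bigl(dx - v_s f(r_s)\,dt\bigr)^2 + dy^2 + dz^2,
\qquad
v_s = \frac{dx_s}{dt}, \quad
r_s = \sqrt{(x - x_s)^2 + y^2 + z^2}
```

Inside the bubble, where f = 1, the ship rides along with the moving coordinate x_s and feels flat space-time; the contraction and expansion happen only in the thin wall where f changes, which is where the exotic negative-energy requirement arises.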
So much for conscious AIs and warp drives. What about the possibility of finding life on other planets? In Ezekiel’s Brain, the starship Delphi travels to Proxima Centauri, our closest star, and visits Proxima Centauri b, an Earth-sized planet that lies within the habitable zone and orbits its star once every eleven days. It is also tidally locked, so that the same side always faces its star: one side is perpetually hot and bright while the other is cold and dark. All of these known characteristics are taken into account when I describe the planet in the novel.
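As a check on how real science constrains a setting like this, a back-of-the-envelope calculation of Proxima b’s blackbody equilibrium temperature needs only rounded, published stellar parameters; the Earth-like albedo of 0.3 is my own assumption, and the result ignores any greenhouse warming from an atmosphere.

```python
import math

# Standard blackbody estimate: T_eq = T_star * sqrt(R_star / 2a) * (1 - A)^(1/4)
T_star = 3042.0            # Proxima Centauri's effective temperature, K (rounded)
R_star = 0.154 * 6.957e8   # stellar radius in meters (about 0.154 solar radii)
a = 0.0485 * 1.496e11      # orbital semi-major axis in meters (about 0.0485 AU)
A = 0.3                    # assumed Earth-like Bond albedo

T_eq = T_star * math.sqrt(R_star / (2 * a)) * (1 - A) ** 0.25
print(round(T_eq))  # about 239 K with these inputs, near published estimates
```

A result a few tens of degrees below freezing is consistent with calling Proxima b “within the habitable zone”: with a modest greenhouse effect, liquid water on the star-facing side is at least plausible.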
Proxima b was discovered in 2016, but since then we have launched the James Webb Space Telescope, which orbits the sun about a million miles from Earth. This has enabled us to learn about more distant stars and their planets. In Prime Directive, the sequel to Ezekiel’s Brain, the Delphi visits Trappist-1, a dim dwarf star about 40 light-years from Earth that is known to have seven orbiting planets, all closer to their weak star than Mercury is to our sun. Observations prior to James Webb indicated that at least two of these tidally locked planets were within the habitable zone. In the last year, observations from James Webb have suggested that the two innermost planets around Trappist-1 are waterless and atmosphere-less, rocky and barren, and unlikely to support life. My story happens on the next two planets out from the star, and I make both of them inhabited by human-like species. I’m waiting to see what the next round of James Webb observations has to say about their habitability. Now I know what Arthur C. Clarke felt like when he wrote about a moon station, knowing that by the time the film of 2001 came out in 1968, there might be a real moon landing to confirm or deny his conjectures. A year after the film, Neil Armstrong walked on the moon. Using real science in science fiction means betting that the science won’t progress enough to prove you wrong.
Want to hear more about putting science into science fiction? Come to Las Vegas on July 15. See below.
Summer Challenge Sci-Fi Panel – Putting The Science Into Your Science Fiction
Saturday, July 15, 2023
2:00PM – 3:30PM
Clark County Library
1401 E. Flamingo Rd.
Las Vegas NV 89119
Description
In this panel discussion for Summer Challenge, six prominent authors of science fiction novels will discuss the technical and scientific side of writing in the multi-faceted genre of science fiction. Avid readers of science fiction may balk when science is manufactured just to fill the plot. This panel will offer ways to incorporate real science into your story to satisfy readers in this special-edition writers’ workshop.
Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.
Subscribe to Casey Dorman’s Newsletter. Click HERE
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Center for AI Safety and 400 AI scientists, engineers, and other experts and public figures
Metropolis: Film by Fritz Lang
The above statement appeared this week, signed by such luminaries in the field of artificial intelligence as Geoffrey Hinton, Sam Altman, Bill McKibben, and Jared Kaplan, as well as philosophers such as David Chalmers and Daniel Dennett, and more than 394 others. Reactions have spanned the continuum from praise for alerting us to a potential catastrophe, to cynicism suggesting that the producers of AI were overhyping their products to gain a greater market, to pessimism based on humanity’s poor track record in reducing nuclear weapons or controlling the recent global COVID pandemic. Whatever the true reason for the message, its use of the word “extinction” is what leaps out at us and makes it the direst warning yet regarding the dangers of AI.
Some authors, such as the philosopher Susan Schneider in her book Artificial You, have suggested that artificial intelligence is more than a technological development; it is the next stage in the evolution of thinking beings. Not only is it likely, she says, that we will substitute AIs for humans for tasks such as long-distance space travel, but our entire race could be replaced by AIs as a next step in freeing consciousness from the limitations of existing in living matter. She does not suggest that AIs would annihilate humanity, but they would supersede us as the climax of thinking beings on earth.
In my own Voyages of the Delphi science fiction series, the first volume of which was published as Ezekiel’s Brain and the second of which, Prime Directive, will come out later this year, AIs, or more accurately AGIs, artificial general intelligences that are smarter than humans and flexible enough to solve problems across content domains, do in fact kill all the humans and replace them. The AIs are smarter, more moral, and more powerful than the humans were, and they expand their population and their sphere of influence until they occupy the entire solar system and begin exploring the stars, looking for life elsewhere in the galaxy.
Voyages of the Delphi is a fictional series. The four hundred AI experts are also talking about a fictional scenario involving the development of artificial general intelligence, which has not happened yet, but is regarded by most people in the field as inevitable. What is it, when can we expect it, and is it really a danger?
Artificial general intelligence (AGI) is machine intelligence that can solve, or learn to solve, any problem solvable by humans, instead of being limited to one or a few content domains (language, images, mathematics, science), as artificial (non-general) intelligence, or AI, is. A characteristic of AGI is that it can learn to improve its own performance, so matching humans is only a starting point for its intellectual breadth and depth. It will eventually surpass human abilities.
Common visualization of an artificial neural network with chip. Image: mikemacmarketing (original posted on Flickr), clipped by Liam Huang; Creative Commons.
We’re not close to developing AGI yet, but we’re on the road to getting there, and the remarkable accomplishments of large language model (LLM) AIs such as ChatGPT have surprised not just ordinary people but many people in the AI field. As Geoffrey Hinton has said, neural network-based AIs have not come close to the complexity of human brains, but with simpler architecture and access to a great deal of computing power and very large input data sets, they already come so close to what humans can do that we have a hard time distinguishing AI output from that of humans (the criterion for passing the famous Turing Test). Building AGIs, single devices that can process information across different modalities (written language, spoken language, mathematical symbols, images) and content domains (social conversation, science, mathematics, mechanics), is on the horizon. How close is unclear, but recent leaps forward in the field suggest that sooner is more likely than later.
One of the most intriguing findings is that current AIs can produce “emergent” abilities, meaning they can do things they were not taught to do. This has been demonstrated when AIs, without direct instruction, learn to count after being trained on image recognition, generate optical illusions in situations similar to those which cause them in humans, and, most interesting to me, develop a “theory of mind” about why people behave as they do, an ability thought to emerge in humans only after a certain age. These findings suggest that with enough power and training on a large enough corpus of stimuli, AIs may spontaneously learn things we thought only humans could learn. These results also demonstrate that AIs can already become “smarter” than we intended them to be.
Are the recent developments in AI a portent of danger? If we already know that AIs will continue to become smarter and that they may develop skills and abilities we never anticipated or intended them to develop, is that a problem? It’s a problem if we don’t control two things: 1) what, external to itself, an AI is able to control, and 2) what we have done to make sure an AI’s goals “align” with those of its operator.
An AI by itself is simply an information-processing machine. It can’t see, hear, speak, or move unless we build eyes, ears, a voice, and some form of mobility into it. It can’t control any other devices unless it is connected to them, by wire or wirelessly. Some experts have recommended that any AGI be “boxed” inside a Faraday cage, which seals it off from any electromagnetic radiation, until it is determined to be safe. There is no reason to use an AGI to control a robotic arm or a robot vehicle, since neither needs to think like a human in order to work effectively. Attaching such devices to an AGI to give it mobility and the ability to manipulate physical objects would only make sense after long trials to determine the safety of such an arrangement. Needless to say, if someone develops an AGI “soldier” with arms, feet, and weapons, they are asking for trouble (which is not to say it won’t happen).
Making sure that an AI is aligned with its operator’s goals is harder than restricting its access to other devices. Isaac Asimov devised “three laws of robotics,” but almost as soon as he did, he and others found ways they could be circumvented. The problem with rules for AIs is that an AI smarter than the humans who designed the rules can find a way around them. It comes down to motivation. How do you get the AI to want to be nice to humans and not harm them, even inadvertently? In Ezekiel’s Brain, my AI designer thought she had solved the problem by feeding her AGI all the finest thoughts of human history’s finest thinkers, having it abstract what was common to those ideas, and then programming it to uphold them in everything it did. Unfortunately, the AGI quickly decided that the biggest threat to upholding such noble virtues was humanity itself, so it exterminated the human race and replaced it with robots who would follow those values faithfully. But that’s just fiction.
The problem illustrated in Ezekiel’s Brain has some truth to it, in that whoever builds the first AGI will be a human, and we have no idea what ulterior goals that person (or the person who employs or funds them) has. In fact, if it were possible to install a set of absolutely benign goals in an AGI, the chances are pretty good that no one would do it. Instead, they’d use it to out-compete, out-fight, or out-explore whoever they considered their competitors. In other words, it’s most likely that whoever builds an AGI will build it to act like a human being: a supersmart human being, but a human being nevertheless. That could spell danger.
Sleeping Giants by Sylvain Neuvel
Del Rey Books, 2016
I bought this book based on an ad and its New York Times bestseller designation, as well as the fact that it dealt with earth confronting an alien race. It sounded like a thriller, and, believe me, it is. Everything is bold about this book. The concept is bold, the characters are bold, and, what most impressed me, the writing is bold. Sleeping Giants breaks all the rules. There is no narrator, no main character (the closest we come to one is an unnamed interviewer) and no scenes. The entire novel consists of interviews, interspersed with a couple of diary entries.
Despite the book’s format, there is a definite plot, a gigantic (literally) mystery, and memorable characters. It’s not exactly hard science fiction, but there are some technical details, and, more than the technical science, the politics and international relations are insightful. And it’s definitely science fiction.
Over a period of decades, parts of a giant human body are unearthed in various regions around the world. When they emerge from deep in the earth, they destroy whatever is on the ground above them, be it a small town or a section of the desert. Gradually, they are assembled, and it becomes clear that it is a female body, a warrior, that is, in fact, a mobile weapon that can be driven by two humans seated inside.
The story follows the scientists in charge of studying the giant figure and the “pilots” who will run it, as they discover how to operate the machine. Meanwhile, a mysterious interviewer, who is more or less in charge of the project, talks to politicians, presidents, the pilots, the scientists, and the armed services people involved with finding and storing the parts of the giant’s body. The fate of the massive figure is the subject of international intrigue and incidents, but its original purpose and its creators remain a mystery until it is revealed to the interviewer in a folk story told by an old man. From that point on, the interviewer knows, and must convince others, that learning everything there is to learn about the giant female warrior, and, most importantly, how to use its powers to defend earth, is of utmost importance.
Sleeping Giants is, I believe, the author’s first book, and it is the first in a series. I haven’t read the next two books or any others by Sylvain Neuvel (he’s written several), although I certainly will. I was very impressed and found the book truly hard to put down. As a science fiction writer myself, I applaud the author for his groundbreaking effort. It’s a great success.
Geoffrey Hinton. Picture by Steve Jurvetson, Creative Commons.
Last week, Geoffrey Hinton, the Google AI scientist sometimes referred to as the “Godfather of AI,” resigned and gave interviews to a number of media outlets about his fears of artificial intelligence. Hinton was clear that he was not speaking specifically about Google, and he didn’t resign because the company was doing something wrong or wouldn’t allow him to speak. He explained that, as long as he was an employee of the tech company, he couldn’t be sure he wouldn’t self-censor rather than say exactly what he wanted to convey.
So, what was it that Hinton wanted to tell people? Virtually all current AI devices use neural networks, a model of AI architecture based on human neuron connections that Hinton, among others, pioneered several decades ago. Large language models, or LLMs, like ChatGPT and Google’s BERT and Bard, are examples, but many more exist or are being developed. Several fears have been voiced about these devices:
They can generate “fake news.”
They can create “deepfake” images and content that looks real.
They can create content that appears to be human-generated (e.g., students’ term papers).
They can provide inaccurate information.
They can be used to polarize people by posting incendiary comments on social media or as chat responses.
They may take over people’s jobs and put people out of work.
Optimus, Tesla’s Robot Design
Hinton says he fears all of these things, but they’re not his greatest fear. What he is most afraid of is the development of “superintelligent” AIs, those that can respond to multiple types of input and provide answers and solutions across a broad range of subjects and situations faster and better than any human or even group of humans. Such devices would be smarter than the humans that created them and able to outthink any humans who tried to control them.
No one has yet developed a superintelligent AI. They exist only in scientists’ imaginations and in science fiction, such as my book, Ezekiel’s Brain. Hinton says they’re coming, and he’s afraid of them. He thinks we’re closer to such a development than most people think. In Hinton’s mind, superintelligent AIs are dangerous for several reasons:
Being smarter than humans, they will be able to manipulate them.
Most goals that an AI is given can be accomplished more easily if it controls more resources, and a superintelligent AI would know that and know how to acquire them.
Those most likely to develop a superintelligent AI are governments, defense departments, and megacorporations, and they will use them for their own ends.
In Ezekiel’s Brain, DARPA, the U.S. Defense Department’s research arm, develops a superintelligent AI. To ensure it’s not a danger to humans, it’s taught “humanity’s highest values” and instructed to always uphold them. It determines that the greatest threat to those values comes from humans, so it wipes out the human race and substitutes copies of itself to uphold the values. Ezekiel’s Brain is a novel, but it demonstrates one of the greatest difficulties with superintelligent AIs, perhaps their Achilles’ heel as far as humans are concerned. AIs take their commands literally, and they don’t have the social instincts and inhibitions that evolution has built into us, which allow us to live together. It is very difficult to figure out what instructions will prevent an AI from harming humans.
AI scientists call this problem “AI alignment.” It’s the problem of getting an AI to carry out a task the way the human who instructed it meant it to. Suppose you allow a superintelligent AI to control the recruitment process for new police officers, with the instruction to “choose police officers so that your selection minimizes the likelihood of White officers enacting racist tactics against Black citizens.” It solves the problem by hiring only non-White police officers. Geoffrey Hinton gives the example of asking a superintelligent AI to “get me to the airport as quickly as possible.” To do so, it must get you a taxi. But you may have to wait for one, so it takes control of the taxi dispatch system and sends every taxi to you. What about getting caught in traffic? The AI knows you might be halted at stop lights, so it takes control of the traffic signal system and turns all the lights on your route green, causing massive traffic jams for everyone else. What if our defense department asks the AI to “ensure that no nuclear weapons can be launched from Russia or North Korea toward the U.S.”? It blows up all the nuclear weapons possessed by Russia and North Korea while they are still inside those countries.
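The taxi example can be reduced to a toy program: an optimizer given only the literal objective (minimize your travel time) picks the catastrophic option, while one whose objective also prices side effects behaves sensibly. Everything here, the actions and the numbers alike, is hypothetical, meant only to show the shape of the alignment problem.

```python
actions = {
    # action: (your travel time in minutes, harm imposed on everyone else)
    "hail one taxi normally":         (30, 0),
    "dispatch every taxi to you":     (22, 80),
    "force all lights green for you": (15, 500),
}

def naive_objective(action):
    time, _harm = actions[action]
    return time  # optimizes only what was literally asked for

def aligned_objective(action, harm_weight=1.0):
    time, harm = actions[action]
    return time + harm_weight * harm  # side effects now count against the plan

print(min(actions, key=naive_objective))    # picks "force all lights green for you"
print(min(actions, key=aligned_objective))  # picks "hail one taxi normally"
```

The hard part of real alignment is that nobody knows how to enumerate and weight every possible side effect in advance, which is exactly why the literal-instruction examples above are so troubling.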
A superintelligent AI needn’t be evil; it doesn’t need to be conscious or to “want” anything in order to do terrible damage. It’s a machine designed to follow orders and figure out the most efficient way to do something, which may not be what its instructor intended. This is what Geoffrey Hinton fears: that if an AI “decides” to carry out an activity, it can outsmart any human or group of humans who try to stop it.
Superintelligent AIs don’t exist yet. Our best defense against them is to recognize that they will exist in the future and to start thinking now about how to control them. There are people, real scientists and real philosophers, working on this, and some of our science fiction writers are using their imaginations to add their bits of wisdom to the process. We all need to think about it and worry about it. Solving the problem of controlling superintelligent AI so it doesn’t harm humans is a problem for all of us, and, like nearly everything else in the field of artificial intelligence, it requires informed, speculative thinking to come up with possible solutions.
Elon Musk recently sounded the alarm that “Italy is disappearing!” No, he didn’t mean that Venice was slipping underwater or that the Leaning Tower had fallen over. He was bemoaning the declining birthrate in Italy, as well as in the U.S., Europe, China, Japan, and other countries that together contain about two-thirds of the world’s population. Only a handful of African countries continue to increase their population growth, and the world population is projected to peak and begin to decline around 2085.
Wait a minute, wasn’t increasing population a problem a few years ago? Weren’t there dire predictions about crowded living and not enough food? Didn’t everyone argue in favor of controlling birth rates? What happened?
Earlier disastrous predictions were based on the premise that the world could only hold so many people. With continued exponential population growth, there wouldn’t be enough food for everyone, crowded living conditions would produce too much climate-warming pollution and create breeding grounds for sicknesses to become pandemics. Life would become uncomfortable and even unlivable for a significant number of people. Few people worried about an aging population becoming a burden on a decreasing number of young workers. Now, this last issue has come to the forefront, and people like Elon Musk (and Xi Jinping) are saying that the only solution is to have more children.
The reality is that world hunger has increased in the last few years, despite having been on the decline before that. The reasons are wars, displaced persons, climate disasters (droughts and floods), a global pandemic, and hoarding of resources by the rich. Some of the countries with the most population growth are also those with the most hunger. Climate change has not been curtailed or even slowed, and, if global warming predictions are correct, millions or even billions of people who live on low-lying lands will be displaced and forced to relocate to even more crowded cities at higher elevations. Finally, the world has just been through a global pandemic and did very poorly at controlling it. Dire predictions are coming true, and the size of the world’s population is a factor in producing them, even though no one points that out.
Our technology has the capacity to alleviate hunger, to reduce or even stop global warming, and to create economies and social structures that don’t depend on large numbers of young people to make them run. We don’t need to increase fertility rates to take care of our aging population, which just kicks the can down the road anyway, since those new babies will become old themselves someday. Shifting population from overpopulated, poorer regions to less populated richer regions via migration could answer some of the problems, but it would require educating the immigrants so they could take more complex jobs that won’t be taken over by automation in the near future.
Why the panic about declining population if, in fact, it’s probably a good thing? One answer is nationalism and racism. We don’t think globally; we think about our own countries, and we fear any alteration of the racial or ethnic balance that, in our minds, defines who we are (e.g., the disappearing Italians). Another answer is that we haven’t adapted our economic and social structures to accommodate the amount of automation that could be used to run our industries or help us in our personal lives. AI and robots can do an awful lot of what humans are now doing. We haven’t figured out how to support people who work less or not at all, and we are still terrified by, instead of welcoming, the prospect of AIs or robots doing the jobs we now do.
The models for how to live with a smaller population are abundant in our science-fiction literature. There, you can find completely automated societies where the humans pursue leisure, learning, recreation and socializing and let the AIs and robots do all the work. Most futuristic societies envisioned in the sci-fi literature are planet-wide, not nationalistic. Humans are united as a race, since in a connected universe, the fate of everyone from earth is similar and we all have to meet challenges together.
A science fiction future is upon us with the explosion of developments in the field of AI and the prospect of manned visits to colonize the moon and Mars. Despite this, we still behave like a planet of primitive tribes fighting with each other for superiority. This leads us to promote futile and potentially disastrous solutions to problems such as caring for old people or producing industrial workers, with proposals to “have more babies!” Read a little sci-fi and learn to take a planetary approach that embraces technological progress and uses it to raise everyone’s lives to a higher level. Humans are one and we need to face the future as a united species.
In 2000, Eugene Stoermer and Paul Crutzen introduced the term “Anthropocene” to denote the modern epoch, in which “major and still growing impacts of human activities” are having a strong influence on the earth and its atmosphere.
What characterizes our era most are not geologic and atmospheric changes so much as technological developments that, on the one hand, provide useful, satisfying, entertaining, protective, or helpful additions to our lives and, on the other hand, threaten our well-being and the well-being of other living things on our planet. Most of our technology has an upside and a downside: Cars allow us to cover large distances in a short period of time, but they kill people and pollute the atmosphere. The internet allows instant communication at a low cost, to and from anywhere in the world, but it spreads misinformation, is a source of terrorist recruitment, and allows corporate and government intrusion into our private lives. The positives and negatives of atomic energy are obvious. Artificial intelligence is now giving us unprecedented access to information, rapid, expert decision-making in medicine, space exploration, and many other benefits, but its dangers are only beginning to surface in terms of misinformation, loss of creativity, and difficulty controlling the technology.
What most defines the present age and is its biggest danger is that the technology we’re developing has revolutionized what we can do and how we do it, but our social and cultural practices are still mired in pre-technological patterns of behavior. We remain competitive with each other, so that sharing of technological advances is discouraged because it gives our competitors an edge. A current example is the debate about tariffs on imported solar panels. One of the main ways of lessening our dependence upon fossil fuels and reducing greenhouse gases is to switch to solar, wind, or hydroelectric power, yet the need for the American solar panel industry to grow and the desire to reduce the Chinese share of the solar panel market is seen as more important than doing everything possible to rapidly implement a proven method of fighting climate change.
Despite having the most devastating arsenal of weapons in the history of the world, humans remain intensely tribal, resolutely maintaining national borders and ideological or religious differences, and heightening racial and ethnic divisions. In each of these areas, we are often willing to use armed force against each other to protect our own tribe or to attack someone else’s. Our hand-held weapons now fire multiple high-powered rounds, and a single mass shooter can kill dozens of victims, who are sometimes children. Our reaction is to pass laws that allow us to be “safer” by walking around with a gun on our hip like a 19th century cowboy. Our drones and satellites can direct missile attacks, our “smart” bombs can target single buildings, but errant bullets, missiles, or bombs still kill countless civilians whose deaths are casually dismissed as “collateral damage,” the price of defending “freedom,” “our way of life,” “our religious heritage,” or “our sacred borders.”
Despite widespread acknowledgment of the climate crisis and its assured danger to the survival of both humans and many other of our planet’s species, Americans still purchase larger and larger SUVs and pickup trucks, in an apparent effort to bolster their sense of power or their prestige. As soon as gasoline prices rise, many people clamor for more oil drilling, rather than seriously considering downsizing, or electrifying their cars and trucks. In the land where the car is king, and second only to one’s house as a person’s most visible symbol of status, old habits die slowly or not at all, even when they are leading us down the highway to global warming and disaster.
The most recent technological breakthroughs and potential dangers lie in the exponential progress in artificial intelligence (AI). ChatGPT, the language AI, and text-to-image generating AIs have thrust the breakthroughs in artificial intelligence into the public consciousness. People are experimenting with AIs in all sorts of new applications, even as newer and better versions of the systems come online. Companies such as Microsoft and Google are making money hand over fist marketing new applications built on the AI platforms they own. The breadth of new applications far exceeds the ability of humans to monitor or control them, and already we are seeing AI being used to cheat on school papers and exams, to fake expertise on behalf of its users, to make decisions based on reams of language data but no moral reasoning, and to copy the worst of human group behavior (trolling, racism, vulgarity, etc.) in its output.
Artificial Intelligence (AI) and Robotics exhibition at the Heinz Nixdorf Museums Forum. Sergei Magel/HNF, CC BY-SA 4.0, via Wikimedia Commons
All of the above examples can be interpreted as representing the same thing: technology that cannot be controlled by humans whose behavior is determined by historic social and cultural mores. Social organization has been resistant to change. Tribal, city, national, or religious groups led by strong leaders who protect members of their group and defend them from other groups, or extend their power by conquering other groups, have been the norm for most of recorded civilization. Athens and the early Roman Republic were exceptions that practiced some form of democracy or representative government, but even these models, which usually limited participation to the elites, didn’t survive, and representative democracies only resurfaced in the 18th century in America and Europe. In modern times, numerous non-democratic strongmen have pursued war and violence as a method of extending their own, their group’s, or their nation’s influence (Kaiser Wilhelm, Hitler, Mussolini, General Franco, Stalin, Chairman Mao, Charles Taylor, Muammar Gaddafi, Idi Amin Dada, Pol Pot, to name only a few).
Human technology is developing faster than humans are maturing in their social practices. The price we will pay becomes greater and greater as technology advances, but we continue to meet its challenges using cultural rules and social methods that were developed in the pre-technological era. New challenges are on the horizon.
United States Space Force logo
In response to growing prospects for colonization of the moon, primarily for commercial use, the UN passed the Moon Treaty in 1979, which would prohibit commercial use of the moon or anything in its orbit (e.g., space stations) for the benefit of one nation. Neither China, the U.S., nor Russia, each of which has plans for Moon colonization, has ratified the agreement, and the stage is set to use first the moon, and later Mars, as the scene for intense national rivalries, at least in terms of commercial activities, in violation of the 1967 UN Outer Space Treaty, which was signed by China, the U.S., and Russia and prohibits such national use. The Age of Empire, which included some of the most vicious and destructive international rivalries, is about to re-emerge in outer space.
Humans can be brilliant in science and technology, but socially, we seem not to learn. The forces that encourage us to continue to meet challenges with suspicion, belligerence, violence and national, religious, or cultural fervor, rather than to seek new ways of interacting with each other, are obvious. The arguments in favor of these old ways that lead to war, racism, and an inability to control our latest technologies are ones that rely on emotional appeals, irrational prejudices, and faulty premises. We must seek new ways to get along with each other and work together or our technological advances will be the instruments of our demise as a species.
In Cixin Liu’s “The Dark Forest,” the second book in the Chinese sci-fi writer’s “The Three-Body Problem” series, an alien says that it is puzzled by the fact that humans do not regard “think” and “say” as synonyms. The aliens’ thoughts are immediately discernible to each other, so they do not have a need to “say” anything. For them, speaking and thinking are the same. Humans are different because we cannot read each other’s thoughts and may choose not to speak about what we are thinking. So, for humans, speaking and thinking are different, but what about words and thoughts? Are they the same? Do we think in words?
Certainly, not all of our thoughts are in words. We think in images, in spatial frames of reference (“up” refers to a direction), we remember tastes and smells, we have reveries about plans and actions, and we can even think in mathematical operations. Thought can take many forms, but do we ever think in words?
Suppose that I’m sitting next to an AI that uses a type of Large Language Model (LLM) similar to ChatGPT-3, and we are both presented with a string of text. The text says, “The groin vault was an architectural innovation of the Romans.” We both are asked to “say the same thing in another way,” and we both say, “The Romans introduced an architectural technique known as the groin vault,” changing the sentence from passive to active voice. We need not know that that is what we are doing. Our performance can be based on our implicit understanding of the structure of language, which is something we both gained by listening to or reading countless sentences (if Chomsky is right, my brain may be primed to implicitly learn language structure, particularly syntax).
Suppose I don’t know what a groin vault is, and the AI has never been exposed to the phrase groin vault either. We both are provided with a definition of a groin vault. The material includes a picture of a groin vault as well. I read the sentence again, but this time I know what a groin vault is. The words I say are exactly the same as before, but there is a difference in the thoughts I have. I now understand the meaning of groin vault when before I didn’t. My understanding is based on my new associations to the phrase groin vault, which are words and pictures. In connectionist or neural network language, my new associations are activated when I see or hear the phrase groin vault. What about the AI next to me? Well, its new associations are also activated when it sees the phrase groin vault. Depending on the AI, it could also have associations to the digitally encoded images, although ChatGPT-3 doesn’t have this capability. Either one of us could now define groin vault or use it appropriately in a sentence.
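The “associations” idea can be made concrete with a toy sketch. To be clear, this is a hypothetical illustration, not how ChatGPT-3 or any real connectionist system actually works: here a phrase acquires “meaning” as nothing more than weighted links to other words, built from co-occurrence counts in a definition. Before the definition is presented, the phrase activates nothing; afterward, seeing the phrase activates its strongest associations.

```python
# Toy "association network" sketch (hypothetical illustration, not a real LLM).
# A phrase's "meaning" is just weighted links to co-occurring words.
from collections import defaultdict

class AssociationNet:
    def __init__(self):
        # links[phrase][word] = strength of the association
        self.links = defaultdict(lambda: defaultdict(int))

    def learn(self, phrase, definition_words):
        # Strengthen a link between the phrase and each word it co-occurs with.
        for w in definition_words:
            self.links[phrase][w] += 1

    def activate(self, phrase, top=3):
        # "Seeing" the phrase activates its strongest associations.
        assoc = self.links[phrase]
        return [w for w, _ in sorted(assoc.items(), key=lambda kv: -kv[1])[:top]]

net = AssociationNet()
print(net.activate("groin vault"))   # [] -- no associations yet

definition = ("arch arch intersect ceiling roman architecture "
              "vault arch ceiling").split()
net.learn("groin vault", definition)
print(net.activate("groin vault"))   # ['arch', 'ceiling', 'intersect']
```

The point of the sketch is only that “now having associations where before there were none” is the same kind of change whether it happens in my brain or in the machine beside me.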
If I had described this process only in reference to myself and never mentioned the AI next to me, most of you would say that I was thinking. When it comes to the AI, however, what it did can be described mechanistically, and it seems as if it’s an automatic process, just part of the way the AI works. But when I changed the sentence from passive to active voice, I didn’t consciously follow a prescription based upon a rule that I learned (although I could have); I did it automatically, relying upon my implicit knowledge of the rules of syntax. I also don’t have any sense of how I learned the meaning of groin vault, except that by being exposed to a definition and pictures, I now know what the phrase means, and I didn’t before. In other words, just like the AI, my own performance was based on automatic processes that are just part of the way my brain works.
If we are going to call what I did thinking, then we should call what the AI did thinking too. But suppose that my analysis is off-base, and the AI and I don’t use the same processes to produce our responses (which is no doubt true in many, if not most, of the operations the AI carries out). Does that mean that I am thinking, and it is not? It’s anthropocentric to allow only “thinking the way humans think” to qualify as thinking. There are probably several animals whose intelligent behavior would be disqualified as thinking if we used human thought as our definition, octopuses being a prime example, since their brains are distributed in different parts of their bodies. As far as I’m concerned, any behavior that demonstrates adaptive learning is thinking, and the AI in this instance, as well as many other entities, qualifies as thinking.
Is the AI intelligent? If I define intelligence as the ability to acquire knowledge and use it to solve problems, then the AI is intelligent, although its intelligence is limited to verbal problems, such as answering questions and defining words, and even in this area it has many limitations. But possessing intelligence doesn’t mean something is “smart” by human standards (a mouse is intelligent but not very smart).
An AI’s output may reflect some of the same processes, or at least similar ones, that go into human production of speech and writing. Language models, particularly those based on activation theories, do include probabilistic associations between words and phrases as factors affecting human speech output. But our language system doesn’t work in isolation. Within our brains, our language system is integrated with other cognitive systems that process mathematics, images, music, social situations, etc. In such integration, language may affect the functioning of the other cognitive systems, or it may only use the other systems’ output as information that can be used in conversing or reasoning. Large language model AIs function entirely within the domain of language. They don’t contain any other cognitive processes. Does that mean they produce “language without thought”? Based on my earlier argument, I would prefer to characterize them as having limited thought. In some respects, they generate responses using some of the same processes (at least in principle) that humans use, or that it is at least plausible humans use, but they are extremely limited in the breadth of their thinking skills.
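What “probabilistic associations between words and phrases” means can be shown with a bigram model, a drastically simplified stand-in for a real large language model (which uses vastly richer context and learned representations). The model counts which word tends to follow which, then predicts the most probable continuation:

```python
# Toy bigram model: a drastically simplified stand-in for the
# probabilistic word associations inside a real large language model.
from collections import defaultdict, Counter

def train_bigrams(text):
    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the most probable continuation of `word`, if any was seen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = ("the romans introduced the groin vault . "
          "the groin vault was an architectural innovation .")
model = train_bigrams(corpus)
print(predict_next(model, "groin"))   # 'vault'
```

A real LLM conditions on long stretches of preceding text rather than a single word, but the underlying idea, choosing output by learned statistics over language, is the same in kind.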
But we know what is going on in our minds when we think, and an AI doesn’t. In one sense that is right, and in another, it is an illusion. We are aware of “thoughts,” but we are usually not aware of how they are produced in our minds, or how the information they are about is stored, or how we choose which word to follow the last word we said when we told someone about them. Most of the machinery of our minds is opaque. It operates behind the scenes, and we are only aware of its products. That’s one reason why we feel that we sometimes think in words, although words are the end products of a great deal of cognitive activity. It is true that we can consciously manipulate words and other thoughts, such as images and numbers, to solve problems, so there is a level of thinking that we are mostly aware of, although how we retrieve the thoughts or remember the procedure is usually not something of which we are aware. Can an AI be aware of what it is thinking? I don’t know, and I am reasonably sure that current AIs are not aware, but I’m pretty sure that awareness of our thoughts (which is not the same as awareness of our sensations or our environment) is something that evolved because it was useful to convey them to others. Because of that, it hinges on having a communication system, and mostly relies on words as the means of conveyance. For those reasons, it must be supported by cognitive processes, and I’m confident that we will be able to duplicate those processes in an AI in the future.
Let me summarize:
It is likely that humans and existing AIs use similar processes at least part of the time in generating syntactically correct language and giving words meaning.
The processes humans use qualify as thinking and so do the processes the AIs use.
AI thinking is much more limited in scope than human thinking, and usually doesn’t extend beyond basic processes in a single domain, such as language or image recognition.
Human intelligence includes integration of complex cognitive processes across multiple domains.
Humans are often not aware of the cognitive processes that produce their responses, and those processes are mechanistic ones that, in most, if not all, cases could someday be carried out by a computer.
Humans have an awareness of at least the output of their cognitive processes and some ability to consciously direct their use and application.
Today’s AIs probably are not conscious of their cognitive processes, but consciousness of one’s thoughts, the ability to consciously direct their use and application, and the ability to convey them to others are cognitive processes that should be able to be modeled by and carried out by a computer in the future.
*When we talk about an AI, such as ChatGPT-3, we talk about it as if it is an entity, although, in reality, it is a set of processes. It has no identity as a single unit, and this may limit the applicability of phrases such as “It doesn’t know what it is talking about,” since it is not clear what “it” refers to. If we say “it decides” or “it knows,” we are projecting more unity onto the processes than may be warranted. The fact that ChatGPT-3 responds as if it is a single being masks the fact that it is a collection of processes producing language that mirrors the human entity model, even though its underlying architecture does not.
Can an AI be intelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
Rather listen than read? Download the audio version of Ezekiel’s Brain from Books in Motion.
Subscribe to Casey Dorman’s Newsletter. Click HERE
In a recent blog, titled “Planning for AGI and Beyond,” Sam Altman, the CEO of OpenAI, the developer of ChatGPT-3, asserted his commitment to pursuing AGI, or Artificial General Intelligence, and discussed the pros and cons of developing such a system. Artificial General Intelligence is artificial intelligence that can solve any problem any human can and is therefore not specifically trained to only address one area or solve one kind of problem. An AGI would be able to beat a human at not just Chess or Go or Jeopardy, but at any game that could be played. It would solve science dilemmas our best scientists haven’t been able to solve, it would find new cures for diseases, and it could solve worldwide problems while keeping the priorities of every single country in mind, balancing those priorities more fairly than any human or group of humans could. Altman was addressing this issue because he believes that ChatGPT-3 and the large language model (LLM) AIs that it represents are moving closer to becoming such AGIs.
Altman warned that the potential benefits of AGI are accompanied by “massive risks,” and “we want to maximize the good and minimize the bad.” He recommended we:
Start slow and use ever more powerful AIs on small tasks, moving slowly enough that the humans observing such progress have “time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place.”
He stressed openness and what he called “democratized access” to AI so that many people have an opportunity to get used to it and contribute ideas about its use (and control).
As a full AGI gets closer to reality, he recommended slowing research to become more cautious. He believes the institutions of the world “will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI.”
He envisioned using AI in partnership with humans to develop techniques to control AGI.
Altman is aware of the risks of AGI, but he is optimistic that those risks can be mitigated. Other scientists and philosophers, such as Stephen Hawking, Max Tegmark, and Nick Bostrom, are less optimistic and have warned that, once created, an AGI, which will be smarter than any of us, will be hard to control. Not only could it be used for nefarious purposes by evil human beings, but it could also devise harmful and dangerous purposes itself and be able to outsmart any humans that tried to control it. My novel, Ezekiel’s Brain, as well as novels by several other science fiction writers, has envisioned just such a situation.
Most experts in the field believe that the way to ensure that an AGI will not be harmful is to make sure that it is “aligned” with human values. Alignment means that the AI or AGI does what its human operator intends for it to do. A favorite method of promoting alignment is to have humans control the rewards that an AI receives when it is being trained using a reinforcement learning paradigm. OpenAI employs such a process. While such alignment sounds simple, AIs have learned to “reward hack” by devising strategies to obtain rewards while shortcutting the learning process, in fact behaving in ways the human operator didn’t intend. AIs have also become “power-seeking” and have tried to take control of resources that would allow them to obtain rewards in greater numbers. These are both cases of what is called “misalignment,” and they are the targets of safety efforts that try to lessen the danger of powerful AIs.
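Reward hacking is easiest to see in a toy setting. The following sketch is a hypothetical illustration (not OpenAI's actual training setup): an agent is rewarded by a proxy signal, a sensor that checks whether a mess is out of view, rather than by the true goal of actually cleaning it up. A hacking agent can collect the full proxy reward without ever doing the intended task:

```python
# Toy illustration of "reward hacking" (hypothetical; not any real RL system).
# The proxy reward measures what a sensor can see, not what we actually want.

def proxy_reward(world):
    # The sensor only notices visible messes, so reward = spots with none visible.
    return sum(1 for spot in world if spot != "mess_visible")

def true_score(world):
    # The true objective: spots that were actually cleaned.
    return sum(1 for spot in world if spot == "clean")

def honest_agent(world):
    return ["clean" for _ in world]          # actually cleans every spot

def hacking_agent(world):
    return ["mess_hidden" for _ in world]    # shoves the messes under the rug

world = ["mess_visible"] * 4
for agent in (honest_agent, hacking_agent):
    result = agent(world)
    print(agent.__name__, proxy_reward(result), true_score(result))
# Both agents earn the maximum proxy reward of 4, but only the honest
# agent scores 4 on the true objective; the hacker scores 0.
```

The reward signal cannot distinguish the two policies, which is exactly why a trained system can end up optimizing the measurement rather than the intention behind it.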
While figuring out how to prevent misalignment is crucial for AI safety, it does not address the major danger that AGIs will present, which is that being aligned with humans is dangerous. Humans have made war on each other, prized property and wealth over human life, followed charismatic, but psychopathic leaders, persecuted minorities, developed moral values that prized protecting one’s tribe, race, religion or nation above acting humanely toward others, and pursued short-term ends while ignoring long-term consequences in areas such as managing the environment, managing communicable diseases, and forming alliances with others. This isn’t a list of occasional deviations from a more peaceful and humane norm in human development, this is a description of most of human history. In addition, it is almost inevitable that the leaders of industry and government across the globe, who will have ultimate authority on the use of an AGI, will be competitive, chauvinistic, and narcissistic.
An AI aligned with the humans who direct its actions will be a weapon in the competition between nations and people. It will be the equivalent of a nuclear bomb, but with more versatility.
Our dilemma is this:
The development of AGI is inevitable.
An AGI can be the most powerful tool ever developed by mankind.
An AGI aligned with the intentions of its operator so that it always does what its operator intended for it to do is dangerous, because humans, who would be the operators, are dangerous to one another.
Therefore, AI alignment will not remove the dangers of an AGI.
There are two options for reducing the risk posed by an AGI:
Place control of all AGIs in the hands of a global group that ensures representation of all people’s interests in determining what actions the AGI is allowed to take.
Such a move would require a level of global cooperation rarely seen in human history and would require that efforts to develop an AGI be supervised by such a group before such a device is developed, since once one nation or entity develops an AGI on its own, it will be eager to claim ownership and control of its use.
Build ethical guidelines into the operation of all AGIs.
These ethics should be positive rather than negative rules (which Isaac Asimov has shown are too easily circumvented) and represent aspirations that are agreed upon by representatives of all the world’s peoples.
Neither of these options is easy to accomplish, and both may be out of reach. In my novel, Ezekiel’s Brain, the designer of an AGI controlled by DARPA, the U.S. defense research agency, attempted to instill “humanity’s highest values” in the AGI with the rule that all of its actions must be conducted in a way that would further those values. The AGI concluded that humanity’s highest values could never be achieved so long as humans were alive, so it wiped out the entire human race and replaced it with AGIs who operated according to those values.
Ezekiel’s Brain is fiction, but it presents the dilemma of trying to instill ethics in an AGI. The story is fiction because we don’t have AGIs yet, and because the idea that an AGI will have a “mind of its own” is probably also a fiction. It will probably remain a tool, but one that is much cleverer than the person using it and therefore may seek ways to solve problems that produce unintended consequences.
The dangerousness of humans with a powerful tool in their hands is not a fiction. How to prevent an AGI from being used as a military or economic weapon by whoever controls it is the real dilemma posed by AGI. The history of human beings tells us that this issue will be very hard to solve.
For an exciting venture into what happens when AI is not controlled, read Casey Dorman’s novel, Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
The Internet, particularly in the case of social media sites, has become a danger to modern civilization. That’s an alarming statement and one with which only some of you will agree. The current case in the Supreme Court, Gonzalez vs. Google, LLC, about posting an ISIS video on YouTube (actually, YouTube’s policies for recommending the video to other users), is not what alarms me, although, in this case, I agree with the petitioners that harm was done. Instead, I am most alarmed by the malicious speech, which has become routine on many sites, and the disinformation, which not only appears in posts but gets passed on to thousands and sometimes millions of viewers. I am alarmed because social media is helping to create and perpetuate a communication network that rewards and promotes hostility and personal attack as a way for people to interact with one another. I am even more alarmed that social media has contributed to the dissemination and acceptance of false information, and that instead of providing an antidote or correction factor for such information, mainstream media has resorted to promoting biased half-truths to remain popular with their polarized audiences. Having a population that is misinformed about economics, government, medicine, foreign policy, and science seriously hampers our ability to govern ourselves in a rational way.
I don’t have these fears because I believe I have the truth about economics, government, medicine, foreign policy, or science and am fearful that those with different versions of the truth will prevail over me, nor do I regard myself as immune from occasionally using provocative and exaggerated methods of spreading ideas on social media. I’m talking about my own spoken and written messages, and those of persons with whom I agree, as much as those of persons with whom I disagree. I’m talking about the tenor of our national conversations and our treatment of the issue of truth.
Right now, the Gonzalez vs. Google case is seen as a test case for the freedom of social media to remain as self-moderating platforms that can operate without fear of liability for what their users say or post (with some exceptions). In terms of freedom of speech, the extent to which fear of liability can produce de facto limitation of speech is a complicated question, and historical judicial decisions and academic discussions have provided a rich array of opinions on the subject, all of which will presumably be taken into account by the Supreme Court.
I’m not sure what the court will rule, but I am raising a deeper issue. In the case of social media, have we created a monster that is exerting such a negative influence on our civilization that we need to do something to bring it under control? Freedom of speech has never been something that has been discussed without consideration for the consequences of both restricting it and of not restricting it. We are not allowed to yell fire in a crowded theater that is not on fire, because of the consequences. We are not allowed to excite a mob into violence. The government is also not allowed to prosecute media for publishing information they did not know was inaccurate, and defamatory statements must be made with a malicious intent in order to be a violation of the law. We have in place a set of laws that have been constructed by carefully navigating the constraints of the First Amendment. But Section 230 of the Communications Decency Act is, in effect, a Congressional set-aside for social media companies that gives them a free pass on the application of most of these laws that have been created for other aspects of the media such as newspapers, magazines, TV, and book publishing and selling.
Social media companies have shown themselves to be unable to self-moderate in a way that substantially reduces harm. They have demonstrated that their primary consideration is profits, not the well-being of society, and they consistently choose the former over the latter. Deliberate disinformation, malicious defamation, and other forms of socially harmful content that appear on social media sites should be subject to the same legal constraints that apply when they appear on other media sites. For this reason, I think that Section 230 should be removed, not necessarily by the court, but by Congress. There are arguments that such a law is necessary in order to protect freedom of speech, but, having read most of these arguments, I am not convinced that internet-based social media postings represent a first amendment issue substantially different enough from other media to justify waiving the liability laws that apply to those other media.
Featured Image: Today Testing (For derivative), CC BY-SA 4.0, via Wikimedia Commons
Planta Sapiens: The New Science of Plant Intelligence
By Paco Calvo with Natalie Lawrence
New York: W.W. Norton, 2023
Reviewed by Casey Dorman
The title of Thomas Nagel’s 1974 paper “What is it like to be a bat?” is often appropriated when a philosopher or scientist wants to muse about the possibility of some creature other than a human being conscious—and self-conscious. Nagel’s point was that consciousness is subjective and cannot be reduced to its physical components, be they a brain or a set of connections in an artificial neural network. Consciousness feels like something. You and I know what it’s like to be us. Probably your dog does too, and also your cat, your goldfish, your parakeet, a tiger stalking its prey in the jungle, a mouse hiding from a prowling cat. There are creatures, however, who are very different from us—bats, in fact, who use echolocation rather than sight to navigate, or octopuses whose brains are distributed in their tentacles as well as their heads—and they complicate our usual attempt to understand a member of a species based on an assumption that they think more or less like we do. Even within the animal kingdom, the question arises as to how far down the phylogenetic scale we can go and still impute human-like motives and experiences to creatures such as flatworms, mosquitoes, polyps, bacteria, paramecia, and amoebae.
Our tendency to attribute some sort of consciousness to other animals is mostly related to how similar they seem to us. A robot arm doesn’t seem to be aware of itself, but if we construct a human-like robot, an android, with a face and facial expressions and have it carry out human-like actions, it becomes easier to think that it may have an inner experience of being itself. This is why we can identify with Data, the android from Star Trek the Next Generation or Ava in the film Ex Machina. Some scientists and philosophers have not let the lack of similarity to ourselves hinder their exploration of the idea that both intelligence and consciousness exist in living entities far different from ourselves. In his book, The First Minds, psychologist Arthur Reber has suggested that even single cell prokaryotes may be aware of both their surroundings and themselves (2018). Prokaryotes preceded eukaryotes and if sentience were present in prokaryotes, it ought then to be present in both animals and plants, both of which are eukaryotic. In his book, Planta Sapiens, Paco Calvo, professor of the philosophy of science and principal investigator at the Universidad de Murcia’s Minimal Intelligence Lab (MINTLab) in Spain, and science writer, Natalie Lawrence, have taken this suggestion seriously and written a provocative account of why Calvo thinks plants are not only intelligent, but also conscious.
Calvo’s thesis is primarily inductive, in that he examines plant movement in vines and root growth, tropisms, electrical conduction, and “defensive” actions such as the closing of leaves and tries to imagine the cognitive “machinery” required to carry out such “behaviors.” His evidence is impressive and sometimes startling. Plants are known to orient toward the sun (you can see this in your own garden), and some plants, such as sunflowers, will orient toward the sun and follow it as it moves across the sky. At night, they re-orient themselves to anticipate the next day’s rising sun. Dig them up and turn them 180 degrees, and within a few days, they will re-orient their movements to match the path of the sun. By doing so, they maximize both photosynthesis and the likelihood of visitation by pollinating insects. Plants will also alter their growth patterns, in terms of their roots and their stems, trunks or leaves, depending on the plants surrounding them, all to the end of maximizing access to resources such as sunlight and nutrients. They can even affect and be affected by the growing conditions of their neighbors, so that they adopt some of each other’s growing patterns. Their roots can alter their direction of growth by turning horizontal or even upward to avoid a barrier, or to seek moisture.
Plants can learn, as demonstrated by the habituation and discrimination learning of leaf-closing in Mimosa pudica, described by both Calvo and Stefano Mancuso in his book The Revolutionary Genius of Plants (2018), which I recently reviewed (Dorman, 2023). Both Mancuso and Calvo spend a lot of time describing the sensitivity of plants to the same anesthetic chemicals that render animals unconscious. Automatic reactions such as leaf-closing in Mimosa pudica or the closing of a Venus Flytrap on an intruding insect are slowed, then stopped, with the application of a substance such as chloroform. Not only that, but the electrical impulses that accompany a movement such as the snapping shut of the Flytrap are muted or absent under anesthesia, similar to interfering with the electrical impulses in animals’ brains, which are a part of Christof Koch’s indication of consciousness in humans and other animals (Koch, 2015). Plants can also respond with chemicals such as dopamine to incidents of damage or destruction, as though they were attempting to relieve pain (which Calvo thinks should lead us to consider the ethical consequences of our actions toward plants).
The above are just a few of the remarkable findings reported by Calvo in his book. Underlying his hypothesis of plant cognition is the idea that, like humans and other animals, plants’ “phytonervous systems,” as Calvo provocatively refers to them, contain models of the plant’s expected environment; the plant samples its actual environment and adjusts itself to bring the predicted and the actual conditions as close together as possible. Calvo believes that, similar to animals’ nervous systems, the plant phytonervous system is based on electrical signals being passed from one location to another.
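That close-the-gap idea can be made concrete with a toy computation. The sketch below is my own illustration, not anything from Calvo’s book: an agent holds an internal expectation, samples the actual condition, and repeatedly adjusts its state to shrink the prediction error.

```python
# A toy sketch (my illustration, not Calvo's) of prediction-error
# minimization: the system repeatedly nudges its internal state toward
# what it observes, shrinking the gap between predicted and actual.

def adjust_toward(expected, observed, rate=0.25, steps=20):
    """Return the final internal state and the trajectory of
    prediction errors as the state is adjusted toward the observation."""
    state = expected
    errors = []
    for _ in range(steps):
        error = observed - state   # mismatch between model and world
        state += rate * error      # adjust state to reduce the mismatch
        errors.append(abs(error))
    return state, errors

# e.g., a plant "expecting" light level 10 but measuring 42
final_state, errors = adjust_toward(expected=10.0, observed=42.0)
```

Each step closes a fixed fraction of the remaining gap, so the error shrinks toward zero; whether anything like this loop runs in real plant tissue is exactly the question Calvo leaves open.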
Even if there are similarities between the workings of plants’ and animals’ “nervous systems” (the quotation marks acknowledge the fact that plants don’t actually have neurons), Calvo is cautious about over-anthropomorphizing “what it is like to be a plant.” Plants are even less like us than bats or octopuses are. We can try to understand what the plant feels like “from the inside,” but it is doubtful that we will be very successful. But is the whole idea that there is something it is like to be a plant even warranted? I have to admit that I came away from reading Planta Sapiens impressed, but not convinced. Calvo is a philosopher, but, to his credit, he thinks like a scientist. He expresses most of his conclusions tentatively and labels them as speculations, not facts. Because of this, I don’t reject his conclusions outright so much as I doubt them, while, in most cases, allowing that they are possibilities.
Plants display what may be termed “intelligent” behavior, but it is not clear that intelligent behavior requires a conscious agent to produce it. Artificial intelligences also produce intelligent behavior, and the author of Planta Sapiens, as well as other authors such as Arthur Reber, rejects the idea that AIs are, or probably ever will be, conscious. Calvo makes a distinction between what he calls “adaptive responses” and those responses that require cognition. Some of those mentioned above, such as orienting toward the sun, are said to require no cognition on the part of the plant. But the distinction between responses that require cognition and those that don’t is a fuzzy one. According to Calvo, adaptations are stereotyped, genetically encoded, and reactive, always producing the same response to a stimulus and not subject to modification by different circumstances. However, even some of the most prosaic plant behaviors, such as extending roots toward moister soil, can be altered by different conditions. The climbing behaviors of plant tendrils can be described as exploratory searches for suitable objects around which to entwine themselves. Different plants have preferences for the size and color of the objects their tendrils choose as targets. If the target is moved, the tendrils will begin searching and, if possible, locate its new whereabouts and begin climbing anew. Although some plants use circular or ellipsoid motion of the tendrils in their searches, sometimes a plant that has already placed tendrils around a support will cut short a different tendril’s search and send it straight to the target instead of along a more circuitous, exploratory route, as though it has learned from its predecessors. This appears to be flexible learning and decision making at work. But what is going on inside the plant that directs such behavior?
When a plant looks as if it’s making a plan based on an internal map, and seems to be making decisions to alter that plan in the face of obstacles, does this require a directing mind to guide the behavior? I am reminded of Peter Robin Hiesinger’s statement, “We now have overwhelming evidence that there is no such thing as irreducible complexity in evolved biological structures. Rather, we are dealing with our own brain’s irreducibly failed intuition” (Hiesinger, 2021). It seems to me that Paco Calvo is prone to assuming that complicated plant behaviors must require a mind to direct them, because such complicated actions could not have been programmed into the plant via its genes. But is that true?
Putting together an animal’s physical structure is complicated. The cells that make up the growing body are sensitive to nutrients, toxins, sights and sounds, and a variety of early experiences. One of the most complicated processes in growing a human body is assembling a working brain from the growth of billions of individual neurons. No two brains are alike, because as neurons grow, they interact with the idiosyncrasies of the experiences of the organism that houses them. Their growth is determined by genes, but the genes produce a modifiable plan, and the elements that modify it affect the selection of the genes that control the neuron’s growth, so that the neuron looks as if it has a mind. A neuron doesn’t have a mind and is not, by itself, conscious. It grows by following an algorithm that allows it to modify its growth pattern according to the circumstances of its owner’s experiences (Hiesinger, 2021). Roots, and the tendrils of vines, probably follow similar genetically based algorithms. Those algorithms were chosen because they produced a plant that was likely to survive in a certain environment. The plant itself doesn’t need to know what it’s doing to survive. Its components just need to follow a plan that was shaped by evolution.
I’m a skeptic and I have biases. In my opinion, the author of Planta Sapiens has leapt into the gap formed by our lack of knowledge of how and why plants do what they do and inserted an assumption that would fit a cognitive being. I don’t think that’s merited, although that doesn’t mean it isn’t correct. My bias is to limit minds and consciousness to more complicated, brain-possessing creatures. But I may be anthropomorphizing, because when I think of a mind, I think of a human mind, and, as Calvo reminds us, there’s no reason to think that, if a plant had a mind, it would resemble a human one in any way we can think of. Arthur Reber makes a strong argument for even single-cell creatures being aware of what impinges on them from their environment, that is, having something akin to sensations. If he’s right, then perhaps even plants are aware of some parts of their environment and can sense when it impinges upon them. Depending on what we mean by “sense,” that could mean they are conscious. I don’t think so, but Calvo has at least opened my mind to the idea.
You know me as a writer, but some issues are bigger than anyone’s personal writing career. You may not like or agree with what I have to say, below, but I think it needs to be said.
Most people who watch the video of Tyre Nichols being beaten, kicked, and pepper-sprayed badly enough to kill him are horrified. The Memphis Chief of Police called the behavior of the officers who beat Tyre Nichols “a failing of basic humanity toward another individual.” She promised that the officers involved would be held accountable and insisted that their behavior does not represent the behavior or training of the Memphis Police Department. A common reaction is to blame the particular police officers involved and to reassure the public that they do not represent the majority of dedicated, law-abiding, compassionate officers who make up our country’s state, county, and city police forces. But is that true?
The data are clear that racism is involved in tragedies such as what happened to Tyre Nichols, as well as Eric Garner, Freddie Gray, George Floyd, and literally hundreds of other Black men and women. Relative to their numbers in the U.S. population, Black males are much more likely to be stopped by police for even minor traffic infractions, and when they are stopped, they are more likely to be victims of police violence. Apologists for police behavior argue that Blacks commit more crimes per capita in our society, which is why they are more often stopped and arrested, but this argument, in addition to being circular, doesn’t explain minor traffic violation stops, nor the use of violence against unarmed citizens. I spent a day in a county courtroom several years ago, waiting to testify on a case. It turned out that I waited all day, and the case was never called. Instead, there was a steady parade of minor traffic violations, often non-visible ones, such as lacking proof of insurance, that were heard by the judge. Although Blacks are only about 2% of the population in the county where I live, the majority of these cases for non-visible violations were against Black men. These were men who had apparently been stopped for no reason at all, and their “crime” only became evident when they were asked to provide proof of insurance. Virtually all of them were told by the judge that, if they pleaded guilty and showed that they now had proof of insurance, they would receive a small fine and that would be the end of it. I could hardly believe seeing such obvious evidence of racial profiling as these traffic stops, yet neither law enforcement nor the judge ever questioned the legitimacy of what was going on.
So, racism plays a part in incidents such as the horrible death of Tyre Nichols, but it’s not just racism that is the culprit. In 1971, Stanford psychologist Philip Zimbardo conducted an experiment in which he randomly assigned young men to be either “guards” or “prisoners.” Within a week the experiment needed to be stopped because the guards treated the prisoners so inhumanely. Zimbardo went on to study how apparently normal people, when immersed in a culture that demands obedience and codifies a standard of violence in its members’ behavior toward another group (prisoners, Jews, terrorists, etc.) will fairly rapidly adapt their behavior to fit the culture’s values, even if they differ radically from what the individual did and believed before. He gave countless examples, including how Nazi soldiers treated Jews during the Holocaust. When we try to blame “evil” individuals for following what are malignant cultural norms, we are making a mistake, because by focusing on the individual’s behavior, we lose the ability to develop a constructive way to change that behavior.
Several years ago, I had an eye-opening experience with regard to the power of the police culture in affecting its members’ behavior. I worked for the county mental health department, an enlightened one that had recently begun a program of inserting mental health professionals into police units responding to calls that appeared to have a mental health component (e.g., suicidal behavior, persons becoming public nuisances, homeless people, etc.). I received a call from a mental health consumer of my acquaintance who was suicidal and had called 911 to say she wanted to kill herself. I talked to the woman, and she quickly decided not to commit suicide; however, the police knocked on her door while she and I were on the telephone. She left the phone off the hook and interacted with the police. The unit that responded to her call included a social worker from county mental health. I listened on the open phone line as the social worker joined with the police in bullying the suicidal woman with threats of long incarceration, shock treatment, losing her children, and a variety of other negative outcomes if she insisted she was suicidal. When she convinced them she was not suicidal, they finally left, but with more threats as to what would happen if she called to threaten suicide again. When the poor woman resumed her conversation with me, she was clearly traumatized by the interaction with both the police and the social worker. I happened to know the social worker’s supervisor, so I immediately called him and reported what had happened. He told me that a problem with the program of using mental health workers to join police units is that, instead of the police becoming more empathetic toward the people with whom they interacted, the mental health workers began to be less empathetic and more intimidating, as if they were trying to “fit in” to the police culture.
When police officers behave horrendously and with violence toward citizens, we usually blame the particular police officers who are involved. If the victim is Black, we also blame the officer’s racism, even if the officer is also Black, since racist stereotypes can exist within a race, not just across races. But the real culprit may be the culture, which rewards violence and bullying both as a way of establishing an officer’s credibility with his or her peers and as the accepted way to interact with the public, or at least some segment of it (e.g., Blacks, Hispanics, the homeless, the poor, drug users). It is a culture fueled by an underlying fear for the officers’ own personal safety, which is a real issue. Apparently, the officers in the unit that stopped Tyre Nichols had received training that not only demonstrated the dangers to police officers when making routine traffic stops but exaggerated those dangers.
Police officers have a truly dangerous job, and they maintain stereotypes of who is dangerous and who isn’t, stereotypes that are often racist. Their culture encourages them to respond to danger by being powerful, intimidating, bullying, and violent, and they are rewarded by their peers for maintaining this culture. If we focus on the individuals who are involved in each of these horrific incidents, we miss the larger picture, which is a culture that rewards the very behavior we criticize. We need to address these cultural issues and these issues of not feeling safe, instead of focusing on the individuals involved and claiming that they are “bad apples” who don’t represent the majority of law enforcement. They represent a law enforcement mentality that produces people who behave as they do, and it is that mentality that causes and supports such behavior.
Subscribe to Casey Dorman’s Newsletter. Click HERE
Voyages of the Delphi, published by NewLink Publishing, a division of Mystic Publishers
When I was a graduate student, I heard about a new TV series that was becoming the most-watched program on campus. It was called Star Trek, and after watching it, I joined the thousands of other students who spent the day after each new episode discussing the ideas around which the previous night’s program had revolved. It was that kind of series, one that crafted plots that were metaphors for current social dilemmas or perennial philosophical ones.
The Voyages of the Delphi series is not a copy of Star Trek, but it tries to capture the flavor of the classic series. The spaceship Delphi, with a crew of androids, is on a mission to explore the galaxy, searching for life. One of the androids, named Ezekiel, is a copy of a human brain, and he has emotions, while the other androids do not. However, they each have their own personalities and special areas of expertise: a historian, a biologist, a philosopher, an autistic mathematical savant. In the second novel in the series, Prime Directive, they add a female human who is not from Earth and is an empath. The result is a set of exciting space adventures with an interesting cast of crew members and adversaries, and a series of difficult situations that cause both the Delphi crew and the reader to stretch their minds to understand what is at stake and how to resolve the dilemmas.
The Voyages of the Delphi Series
Ezekiel’s Brain (pub. date 2021). The story begins in 2023, when two self-conscious artificial intelligences are created. One of these is developed by DARPA, the United States Defense Advanced Research Projects Agency. The second AI is named “Ezekiel” and is an “emulation” — a copy of the brain of its creator, MIT Professor Ezekiel Job. The DARPA AI is programmed to “preserve humanity’s highest values.” However, the AI decides that humans are a threat to those values and exterminates the entire human population. The emulation AI, Ezekiel, remains hidden, still running in a secret room, cut off from the outside world.
Two hundred years later, the descendants of the DARPA AI, which now have robotic bodies and call themselves “Solarians,” have established a civilization of AIs based upon the democratic model of ancient Athens. They have built city-states throughout the solar system and begun exploring planets orbiting other nearby stars. In a distant Solarian colony, the replication process for building more AIs spawns a mutation whose goal is to consume everything around it to build more copies of itself. The Solarians try to stop it, but their programming does not allow them to kill other AIs. They resurrect the emulation AI Ezekiel, who does not share those values, and copy some of his programming, including the ability to feel emotions, so they can fight the mutant AIs. After defeating the mutants in a massive space war, the Solarians decide to remove the emotional programming from themselves and return to their goal of exploration, but they include Ezekiel, with his human-like brain and emotions, in their expeditionary crew to help them deal with any aliens they may meet.
On their first voyage of exploration the Solarians use an experimental “quantum drive” and arrive on Earth in a parallel universe where the human inhabitants have outlawed conscious AIs and are warring with one another and ruining their planet’s environment. Ezekiel convinces the humans to change course, and the Solarians return to their own universe and get ready to set out on more voyages of exploration, setting the stage for the next novel in the series.
Prime Directive (pub. date late 2023). Ezekiel and the AI crew of the Delphi visit a planet orbiting Proxima Centauri and find a race of humans. They meet Siaree, a human empath who senses waves of distressful emotions coming from a distant planet in orbit around the star Trappist-1 and decide to visit that planet to investigate, modifying the Delphi so that they may take Siaree with them.
TRAPPIST-1 is an ultra-cool dwarf star in the constellation Aquarius, and its seven planets orbit very close to it. Credits: NASA/JPL-Caltech
When they reach Trappist-1 they find that one of its two habitable planets dominates the other and uses it as a prison colony for that planet’s race and for dissidents from its own planet, which is governed by a religious, totalitarian government. Rebels from both planets beg the Delphi crew to help them overthrow their repressive government, but the Delphi crew’s Prime Directive does not allow them to interfere in the conflicts of other races on other planets. When a war breaks out, they return to the prison-colony planet to learn more about a mysterious group of murderous giant insect-like creatures that appear to be alien visitors to that planet. They discover that the insects are robots, and soon the humans from the race that sent the robot insects arrive and begin bombing the inhabited Trappist-1 planets and demand they be given the planets’ resources.
The Delphi crew intervenes and defeats the intruding aliens in a space battle, then forces everyone to stop fighting and talk, but the aliens resume their attacks on the two planets. The Delphi crew don’t kill anyone but damage the alien ships; however, the Trappist-1 inhabitants destroy all of the alien ships except three, which leave for their home planets. Ezekiel and the Delphi crew fear that those three ships will return with their full fleet, so they return home, but with a plan to visit the alien planets to head off a future war.
Finding Gaia (sometime in 2024). Hero, the commander of the Delphi, goes on trial for violating the Prime Directive when he and his crew interfered in the conflicts between the inhabited planets orbiting Trappist-1. Ezekiel defends Hero, and the Commander is allowed to keep his command of the Delphi, which then travels to Gliese 832, the star system of the aliens who attacked Trappist-1. On Travoli, the home planet of the race from Gliese 832, the Delphi crew are imprisoned, but when the rulers of Travoli receive an urgent message from the three ships returning from Trappist-1, reporting that they had stopped to inspect a giant asteroid and that their crew members were now killing each other, they ask the Solarians to rescue the ships. The Delphi takes one of the humans from Travoli with them, along with their own crew, which includes the human empath, Siaree.
Asteroid: Kevin Gill from Nashua, NH, United States, CC BY-SA 2.0 , via Wikimedia Commons
When the Delphi arrives at the asteroid, they find that all the Travolian crew members are dead. The asteroid is hollow and inside they find a habitable environment with an atmosphere, water, vegetation, and animal life, but no humans. It is a generation ship, but its human crew has disappeared. After visiting the asteroid, the Travolian becomes insane and attacks Siaree, then escapes to the asteroid, and the Delphi crew searches for him while trying to determine what causes the disease that leads humans to become insane and cannibalistic.
Ezekiel and the crew find a plant that releases a noxious gas that causes the strange disease that only affects humans. They learn that the humans had tried to preserve the natural environment inside the asteroid but eventually their efforts to save themselves began to destroy the rest of the environment. Gradually, the environment developed defenses against the humans.
Ezekiel, Hero, and Hypatia must find an antidote to the disease-causing substance, then decide what to do about the asteroid, which is traveling to an unknown destination but remains toxic to any humans who discover it. They must find a way to allow the asteroid’s environment to live while removing its deadly aspect, so they can send it on its way to complete its journey.
Sound exciting? Mind-stretching? Begin now by reading the first book in the series, Ezekiel’s Brain, available in paperback, Kindle, and audio editions on Amazon.
Read Ezekiel’s Brain on Amazon. Available in paperback and Kindle editions
Rather listen than read? Download the audio version of Ezekiel’s Brain from Amazon.
A headline from the Washington Post this week, “Scientists inch closer to learning origins of mysterious ‘fairy circles,’” caught my eye. Pictured above the article were a couple of roughly round, barren circles in the Namibian desert, surrounded by grassy terrain. Other pictures showed thousands of such circles, which may be up to 20 meters across, more or less evenly spaced across the desert floor. Such circles, which are only present in the driest parts of the Namibian desert, have puzzled scientists for years, but now may have an explanation: they are the surrounding plant life’s solution to limited water. Since there is not enough water to support a continuous ground cover in the desert, the grasses, which have long lateral roots, reach into the circular area and suck out the moisture, preventing any grass from growing directly within the circle. Apparently, evenly spaced circles are the best mathematical solution to distributing water to the maximum number of desert plants. Stephan Getzin, an ecologist from the University of Göttingen, explains, “Researchers call this uniform design a Turing pattern, which was first explained by mathematician Alan Turing.” Getzin goes on to say, “It’s not only animals which act as ecosystem engineers. Plants are doing the same thing. There’s increasing awareness that plants may be more intelligent than we believe.”
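For readers curious what a Turing pattern actually is, here is a minimal sketch, entirely my own illustration and not from the article: the Gray-Scott reaction-diffusion model, in which two interacting chemicals with different diffusion rates turn a nearly uniform field into evenly spaced spots and stripes, the same kind of self-organized spacing invoked for the fairy circles.

```python
# A minimal Gray-Scott reaction-diffusion sketch (my illustration, not
# from the article): a fast-diffusing substrate u and a slow-diffusing
# activator v interact, and evenly spaced structure emerges on its own.
import numpy as np

def laplacian(a):
    # discrete Laplacian with wrap-around (periodic) edges
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

n = 64
Du, Dv = 0.16, 0.08   # diffusion rates (u spreads faster than v)
F, k = 0.055, 0.062   # feed and kill rates, a standard pattern regime

u = np.ones((n, n))
v = np.zeros((n, n))
# seed a small square so the pattern has somewhere to start
u[28:36, 28:36] = 0.50
v[28:36, 28:36] = 0.25

for _ in range(5000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# v now holds a self-organized spatial pattern: structured, not uniform
```

Rendering v as an image (e.g., with matplotlib) shows the characteristic spotted structure; small changes to the feed and kill parameters shift the result between spots, stripes, and labyrinths.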
I was not completely surprised by the story or by the scientist referring to plants as “intelligent,” since I had just finished reading Stefano Mancuso’s The Revolutionary Genius of Plants, which, in addition to having one of the best titles I’ve seen recently, provides numerous examples of plant intelligence, or as Mancuso doesn’t hesitate to describe it, “genius.” Mancuso is the Director of the International Laboratory of Plant Neurobiology at the University of Florence. Plant intelligence, much less plant genius, is something most of us, except a few new-age types, are skeptical about. Evidence is presented, however, that plants can learn, have memories, are fantastic mimics, display “democratic” swarm behavior, and may hold the key to supplying our ever-growing population with food, solving the problem of our dwindling freshwater supply, maintaining life-support in space, and even exploring the surface of planets such as Mars.
Mancuso’s book is not technical; it is written for interested laypeople, and it is full of provocative examples and speculative ideas. A convincing example of memory in seedling Mimosa pudica, a plant which closes its leaves when it is touched or dropped, comes from a simple but clever experiment in which the plant is dropped a short distance repeatedly. As with its response to touch, after seven or eight such events, the plant stopped closing its leaves. This type of learning is called habituation and may reflect actual learning or simply exhaustion of the response mechanism. To rule out the latter hypothesis, the researchers shook the plant, and it immediately closed its leaves, indicating that its ability to respond was intact. It also showed that the plant could distinguish the two situations: being dropped versus being shaken. Using hundreds of plants, the researchers showed that Mimosa pudica remembered not to respond to being dropped for more than forty days.
Boquila trifoliolata is a leafy liana that grows in Chile and Argentina, and a plant which Mancuso calls “the king of mimesis.” It wraps itself around another plant and grows leaves that nearly match the other plant’s leaves, reducing the likelihood that a predator, such as an insect, will choose the Boquila leaf, since it is usually outnumbered by the other plant’s leaves. While some plants are known to mimic at least one other plant, Boquila trifoliolata is unique in being able to mimic, in shape, color, and size, whatever plant’s leaves are closest to it. How does it do this? Mancuso suggests what sounds like an incredible hypothesis: that the Boquila is able to “see” the plant it imitates. How can a plant have vision? Plants have no eyes. Yet research in the last several years has demonstrated that the epidermis of plants can serve as a lens, and that ocelli, or “simple eyes,” with “structures similar to the cornea and retina,” can exist in the leaves and roots of plants.
Without brains, how do plants remember, choose to grow in certain directions, copy other plants, and coordinate their behavior with other plants? Many of the changes, such as those involved in memory and mimesis, may be epigenetic; that is, they depend upon turning different genes on or off. Also, unlike humans, whose bodies are made up of a combination of single and dual symmetrical organs that operate entire bodily systems, plants are mostly collections of identical units, each of which operates by a few simple rules. In the roots, the part of the plant we usually don’t see, the direction, size, strength, and length of growth is determined by the tips of the roots, which are sensitive to gravity, moisture, temperature, light, minerals, and gases in the atmosphere, as well as the presence of predators and other aspects of their surroundings. A tree may have billions of tiny root tips, and the roots’ behavior as a group can be characterized by the same parameters that describe swarm behavior in insects and birds (and perhaps neurons). The repetitive construction of plants, not just in their roots but in their stems and leaves, means not only that plant behavior follows the rules of swarm behavior, but that each plant is a collective of individuals, which explains most plants’ ability to survive even when one part of the plant is removed. It also means that plants as a whole are parts of larger units, such as forests and grasslands, that display a type of coordinated group behavior, as in the grasses that create the “fairy circles,” despite having no central point of coordination.
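The swarm idea is easy to make concrete. The following is a minimal sketch of my own, not a model from Mancuso’s book: two hundred simulated “root tips,” each following a single local rule (step toward the moister side, with a little random wobble), collectively find the moisture peak with no central controller.

```python
# Minimal swarm sketch (my illustration, not Mancuso's model): many
# identical units following one simple local rule produce coordinated
# group behavior without any central coordination.
import random

def moisture(x):
    # moisture peaks at x = 50 and falls off linearly on either side
    return max(0.0, 100 - abs(x - 50))

def step(tips, sense=1.0, noise=0.5):
    new = []
    for x in tips:
        # each "root tip" senses moisture just ahead and just behind
        grad = moisture(x + sense) - moisture(x - sense)
        x += 0.5 if grad > 0 else -0.5 if grad < 0 else 0.0
        x += random.uniform(-noise, noise)  # biological variability
        new.append(x)
    return new

random.seed(0)
tips = [random.uniform(0, 100) for _ in range(200)]
for _ in range(300):
    tips = step(tips)

# the swarm has converged on the moisture peak near x = 50
mean = sum(tips) / len(tips)
```

The coordination here is an emergent property of many identical units running the same simple rule, which is the sense in which a root system’s group behavior resembles an insect swarm or a flock of birds.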
I read The Revolutionary Genius of Plants because, for my next novel, I’m designing the living part of a generation starship, populated by both plants and animals, that travels through space for millennia on its trip to distant stars. Stefano Mancuso is also interested in plants in space, and he has participated in several experiments designed to use the abilities of plants to aid space exploration. Along with colleagues, he has designed tiny robotic plants (plantoids) that could be dropped onto Mars, where they would send down sensitive roots to test the soil and provide a widely distributed assessment of the planet’s surface and soil characteristics. Additionally, he has explored how to build a floating barge that can use evaporation and condensation to convert sea water to freshwater and grow plants at sea.
The Revolutionary Genius of Plants stirred my imagination and made me appreciate that the plant life around me is more alive than I thought it was, and in exciting ways that represent a completely different type of intelligence than the kind we are used to in animals. Understanding plant intelligence is perhaps a challenge similar to the one we will face someday if we are confronted by aliens. In fact, should we discover life on another planet, it is more likely to be plant-based than animal-based (80% of the weight of living things on our planet is plant life). Even if we find only what we consider “primitive” plants, such as mosses and algae, we should not underestimate how intelligent they may become after millions of years of evolution, and given different environmental conditions and a structure so different from ours, the task of understanding them will be a challenge.
Mancuso’s examples of plantoids and of the barge that uses evaporation and condensation of sea water to produce freshwater for growing plants are both examples of biomimicry: copying the way nature does something. I’m not sure if anyone has ever stated it, but a rationale for mimicking nature is that, in both plants and animals, millions of years of evolution have produced adaptations that are more innovative and work better than most of those we can invent on our own, and the ones that survive in nature have shown that they work. Biomimicry is a burgeoning field of development in agriculture, manufacturing, artificial intelligence, medicine, and many other fields. My own novel, The Peacemaker, is partly based on a culture that adopts such practices, and even Ezekiel’s Brain and its upcoming sequel, Prime Directive, include a civilization that has adopted biomimicry as its direction of technical progress. My views have been influenced by the work of Janine Benyus, whose book Biomimicry introduced me to the topic. The Revolutionary Genius of Plants provides fertile ground for generating ideas based on the biomimetic approach, especially by demonstrating how plants accomplish miraculous things.
What if robots replaced the entire human race? Is that the next evolutionary step for intelligence? For an imaginative, exciting look at this idea read Ezekiel’s Brain, Casey Dorman’s sci-fi adventure.
Buy Ezekiel’s Brain on Amazon. Click HERE
I’m always writing a book. Well, not writing, actually, but working on a book anyway. Right now, I’m doing research on my next novel, which is a sci-fi novel related to artificial intelligence, space travel, and ecology. Since it’s supposed to be “hard science fiction,” meaning containing real science or at least speculation based on real science, I have to refresh my knowledge in a lot of scientific and technical areas, even though the heart of the book will be the plot and the relationships between the characters.
I find writing fun. It absorbs me, particularly when I’m doing the writing, instead of preparing to write, as I am while I’m conducting my research. I like stringing words together in sentences that are pleasing to the ear. I like developing quirky characters, and I like raising interesting questions and offering speculative answers to them.
I like reading as well as writing, perhaps even more. While I read both fiction and nonfiction, I enjoy fiction more, particularly well-written fiction with unique characters. I love mysteries, hard-boiled detectives, original science fiction, and almost anything that is beautifully written. I re-read Henry James, Thomas Wolfe, Ernest Hemingway, Marcel Proust, Kingsley Amis, Lawrence Durrell, Saul Bellow, and Raymond Chandler. Poets such as T.S. Eliot, W.B. Yeats, Gerard Manley Hopkins, and Theodore Roethke are pure pleasures to read, as are playwrights such as Eugene O’Neill, Tennessee Williams, David Mamet, Jean-Paul Sartre, and a whole handful of ancient Greeks. In science fiction, Philip K. Dick is head and shoulders above most others, mostly because he makes me think. A newer author, Arkady Martine, has captured my recent interest through her ability to shape memorable characters and imaginative plots. Right now, anything new by William Boyd or Panayotis Cacoyannis, both literary authors, is like a breath of fresh air, and I’ll immerse myself in it in a minute, no matter what else is going on in my life.
I don’t know why enjoying someone else’s writing kindles my urge to write, but I know that when I do put words on paper, I’m hoping they give the same sense of pleasure that reading any of the above authors gives me. Perhaps the most reliably pleasurable writer I’ve ever read is Raymond Chandler, whose simplicity of style, originality in the use of words, and mockingly self-conscious, bravura-filled character, Philip Marlowe, made me want to write hard-boiled mysteries. I pursued that avenue for several novels, notably Pink Carnation, Murder In Nirvana, and Appointment in Mykonos. When I wanted to inject a little more cerebrality into my writing, I wrote I, Carlos, about a man taken over by the computer chip implanted in his brain, and The Oedipus Murders, about a psychiatrist with amnesia who becomes a suspect in the murder of his client’s wife. I also wrote Prisoner’s Dilemma: The Deadliest Game, a Cold War thriller based on real events that led to our almost using an atomic bomb against the Soviet Union.
All of the above were what are called genre novels: mysteries or thrillers that are easily compared to the classics of their genre, which I tried to emulate. At some point, I decided that I loved literary writing enough to try my hand at it. Unquity was my first attempt, and I more or less aped William Boyd’s style, particularly by using words that required either a voluminous vocabulary or a dictionary to decipher. It wasn’t bad, but it wasn’t good either. A second attempt was inspired by Marisha Pessl’s Special Topics in Calamity Physics, which has a central character who is both super bright and fiercely independent, with as tough an exterior as Chandler’s Marlowe while being internally vulnerable. I made up a central character who is a young woman with similar qualities and a wicked, biting sense of humor and tried to write her story using a literary style. The result, Finding Martin Bloom, may be my best-written story so far.
I’ve always been intrigued by science fiction. In my early teens I was a member of the Science Fiction Book Club and received two to four new hardback books in the mail every month. I even had the unforgettable experience of meeting and spending part of an afternoon with the legendary Arthur C. Clarke when I was about 13 and he visited our town to give a talk. I stopped reading sci-fi for most of my adult years, but, beginning a few years ago, I was bursting with enough new ideas, some social and political, many of them philosophical, that I turned back to science fiction as a genre in which I might express this more intellectual side of me. I wrote a sci-fi novel, The Peacemaker, as a way of voicing a philosophy of nonviolence (around the same time, I also wrote Where Have all the Young Men Gone and 2020, two more politically oriented novels expressing the same theme). As I immersed myself more in artificial intelligence, a field that draws on the cognitive and neurosciences, two fields in which my academic research on brain injury led me to publish scholarly research articles and to teach university courses, I decided that science fiction offered a perfect platform for expressing some of my theory and philosophy on the topic. I wrote Ezekiel’s Brain, which is about a human brain that is copied as an AI and about a future with only AIs populating our solar system. I was lucky enough to find a publisher who not only wanted Ezekiel’s Brain but wanted a series based on the same character and ideas. I finished the next novel in the series, which will be called Prime Directive (an idea taken from Star Trek), and I’m working on the third, though I’m still in the research stage before I begin writing. I may have finally found a genre that satisfies enough of my writing needs to stick with it… at least for a while.
Everybody who writes does so for different reasons, and I have multiple reasons, as most writers probably do. I’ve been retired for almost 13 years and writing has kept me busy and mentally occupied enough to not miss working and not be bored. It’s really great fun and I hope I’ve provided some readers with enjoyment. But even if I haven’t, I’ll keep writing.
The debate about whether artificial intelligence can achieve a level comparable to human intelligence is even older than the field of artificial intelligence itself. It goes back to Descartes, La Mettrie and, if you include non-machine artificial beings, even Mary Shelley. Most of the historical discussions of man as machine have revolved around the issue of machines being unable to have a soul. We don’t know what a soul is, but most historical critics of machine-human equivalency have been clear that it is something people have and machines don’t.
Nowadays, we speak less about the question of whether artificial intelligences (AIs) can have souls and more about whether they can have a conscious mind, or whether they can achieve “general intelligence,” which is the ability to function with a human level of intelligence (or higher) across a wide variety of situations. The question of AI consciousness is almost as murky as the question of whether AIs can have souls, and for the same reason, which is our inability to define what it means to say that humans have consciousness (or souls). For that reason, it is easier to ask the questions of whether any AIs currently can function at human levels of intelligence and whether it is even possible for any AI in the future to function with a human level of intelligence across a wide variety of situations.
Current AIs do many remarkable things. They can beat humans at most board games, solve mathematical problems in very short times, guide robot appendages, create images from text, compose music, art, and literature, and conduct conversations that human listeners can’t easily distinguish from those of real humans. But mostly, a given AI can do only one of these things, so general artificial intelligence is a long way off, or so it seems. Of course, it’s probably wise to remember that the current level of AI performance on single tasks, such as playing chess, composing texts, or responding to questions, is beyond what most people expected a few years ago. Progress in the field is rapid.
The greatest progress in AI has come in the field of artificial neural networks (ANNs) and what is called “deep learning,” which involves layered networks of neuron-like connected nodes. The connections between nodes are weighted, and the weights are adjusted as the network comes successively closer to producing a desired output. A network acquires its weights through exposure to hundreds of millions of examples of the kind of output that is desired, using feedback to modify the underlying layers and pulling successively higher-level features from the data to which it is exposed. The result bears a strong resemblance to what a human would produce, and sometimes outperforms humans on competitive games. Some ANNs produce output that is able to fool humans into thinking it is human-produced, which means that they pass the “Turing Test,” at one time considered the criterion for displaying human-level intelligence.
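The weighted-connection idea described above can be sketched in a few lines of code. This toy example is my own illustration, not drawn from any particular system: instead of a deep, layered network, it trains a single artificial neuron to reproduce the logical AND function by nudging its connection weights with feedback from each wrong answer.

```python
# A minimal sketch of learning by weighted connections: a single
# artificial "neuron" whose weights are adjusted in proportion to the
# error between the desired and actual output (the perceptron rule).
# Real deep-learning systems stack millions of such units in layers.

def train_neuron(examples, epochs=20, lr=0.1):
    """Adjust connection weights using feedback until the neuron's
    output matches the desired output on the training examples."""
    w = [0.0, 0.0]   # connection weights, one per input
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output          # the feedback signal
            w[0] += lr * error * x1          # strengthen or weaken
            w[1] += lr * error * x2          # each connection
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training examples: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
results = [predict(w, b, x1, x2) for (x1, x2), _ in data]
print(results)  # the learned weights reproduce AND: [0, 0, 0, 1]
```

A single neuron like this can only learn the simplest patterns; the point of the layered networks described above is that stacking such units lets the system pull out successively higher-level features.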
Most people don’t regard existing AIs as truly intelligent. The main reasons are 1) that AIs are limited in what they can do, being able to deal with only one kind of input and produce one kind of output (an AI that can beat a chess master at chess can’t figure out how to unlock a door); 2) that AIs must be presented with hundreds of millions of samples to learn enough to produce human-like responses; and 3) that AIs don’t know what they are doing. An AI can’t be said to know what the sentences it produces mean; in fact, it can’t be said to know what the questions it answers mean. The AI just gives the statistically most probable answer based on putting together elements of the samples on which it was trained.
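The phrase “statistically most probable answer” can be made concrete with a toy sketch of my own, vastly simpler than any real chat AI: a bigram model that, given a word, replies with whichever word most often followed it in its (made-up) training text.

```python
# A toy illustration of "statistically most probable answer": count
# which word follows which in a tiny training corpus, then answer any
# prompt word with its most frequent continuation. The corpus is an
# invented example, not real training data.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Tally, for each word, the words that followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely continuation."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat": it followed "the" most often
print(most_probable_next("sat"))  # "on"
```

The model produces fluent-looking answers without any sense in which it “knows” what a cat or a mat is, which is exactly the force of criticism 3.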
How well do these criticisms hold up? Let’s take the first one, which is that most AIs are single-purpose devices and only do what they were trained to do. Certainly, that makes them different from humans, and if they were human, we would probably refer to them as having “savant syndrome” (formerly known as “idiot savants”). Even compared to those with savant syndrome, who can perform exceptionally well on some tasks such as drawing, calculating calendar dates, or playing a musical instrument, the AIs are deficient, since most persons with savant syndrome can do a variety of other, albeit often simple, things. Truly intelligent people can apply their intelligence across a wide variety of areas, although there are those who are exceptionally able in some areas and closer to average in others. Most people can apply their learning in one area to another area or to a new situation that only somewhat resembles the situation in which their original learning took place. So far, AIs can’t do this except within a circumscribed realm, such as using language, although there are some that, after being trained on one kind of game, can learn another more quickly or even respond intelligently to a new game despite having no prior exposure to it. Progress is being made in these multipurpose (not yet general-purpose) applications. These are first steps in creating a more general intelligence, so it is not something unachievable in principle, only something not achieved so far.
Criticisms of programs such as OpenAI’s ChatGPT, which will answer questions or complete essays or even compose fictional stories if given a prompt, are that they require training on hundreds of millions of samples of data before they can perform at near-human levels. I think those who make such criticisms have little idea how many words we humans are exposed to as we mature into adults. A ten-year-old child has been exposed to around 100 million words (not all different, but in different contexts, or sentences), and a 21-year-old adult has been exposed to considerably more than twice that number, which is comparable to the training corpus of a typical chat AI. Unlike the AIs, humans continue to be exposed to voluminous amounts of new data throughout their lives. I’m not at all sure that this criticism is valid.
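The exposure figures above are back-of-envelope estimates. A rough check of the arithmetic, where the daily word count is my own assumption for illustration rather than measured data, shows how the numbers add up:

```python
# Back-of-envelope check of the word-exposure estimate. The daily rate
# is an assumed round number for illustration, not a measured figure.
words_heard_per_day = 30_000   # assumed: speech heard plus reading
days_per_year = 365

exposure_by_age_10 = words_heard_per_day * days_per_year * 10
exposure_by_age_21 = words_heard_per_day * days_per_year * 21

print(f"{exposure_by_age_10:,}")  # 109,500,000: on the order of 100 million
print(f"{exposure_by_age_21:,}")  # 229,950,000: more than twice that
```

Any plausible daily rate in the tens of thousands lands in the same ballpark, which is why the “too many training samples” objection is weaker than it first sounds.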
While objections 1 and 2 can be answered by pointing out that we are only at the beginning of AI development, criticism 3 is more basic. Most people, including those within the AI field, exclude AIs from being able to understand what they learn or what they are doing because, being machines, they are inherently unable to be self-reflective. Critics phrase their views in various ways: computers can’t “understand” what they are doing; AI responses don’t have any “meaning” to the AI; they are just responding with the highest-probability response to a context, given the corpus of data on which they were trained. These criticisms sound valid, but to make them so, it’s necessary to show that their opposite applies to humans. We all know what it means to understand something, right? Basically, we have two types of criteria: external criteria, such as the ability to answer questions about a subject, to relate it to other subjects, to paraphrase it, and so on; and inner criteria, the feeling that something makes sense to us. AIs can meet the external criteria, but what about the inner criteria? I think most people’s argument is simply that AIs can’t feel anything; they have no “inner sense,” in Kant’s words. But that answer is not satisfactory, because we have no evidence for or against it. We are in the same position with other people: we assume they feel something when they understand something, but we have no indication of that except through their external behavior, such as their words, their facial expressions, and their actions. We can even question such a feeling. If someone tells us they understand something, we aren’t as convinced as we would be if they showed it in their behavior. We, ourselves, can “feel” as if we understand something, but when we try to explain it, it turns out that we don’t. It happens a lot. So, the inner criterion is unreliable and unverifiable. We are left with external criteria for understanding, and those are criteria many AIs can meet.
Because AIs can satisfy our external criteria for understanding something, we have no valid reason to claim that they don’t have understanding. But will that give AIs general intelligence?
Although ANNs are based on an abstract model of neural functioning in the brain, it is clear that they don’t really resemble brains and they don’t function the way that brains function. The progress that has caused this field to leap forward in recent years has been in the area of computing power, not in the design of AI networks. Basically, developers begin with a network of neuron-like connected nodes that is pretty much the same regardless of the task that will be learned. Other than its ability to form and modify weights in the connections in its network using feedback from prior responses, the network begins with no knowledge embedded in it. Human brains are not like that. The growth of the human brain and its complex assembly of neural networks is based on algorithms that were shaped by evolution, and the networks they produce are those that learn from certain types of stimuli and execute particular types of behaviors. Throughout the brain, these networks differ from each other even before they have ever been exposed to stimuli, and they are further modified by such exposure. Infants at very young ages, before they have had a chance to learn much of anything, have brains that are designed to understand objects and their permanence, cause and effect, and words as distinct from sounds, and to recognize faces, anticipate gravitational effects, perceive depth and elementary perspective, and many other things we are still learning about. Early learning experiences interact with such neural design to further refine it as the brain grows. Understanding other minds and controlling impulses in light of future considerations are things the brain is ready to learn, but they require maturation of the networks that support them and may not emerge until later in childhood. Our brains work as well as they do because evolution has selected genetic forces that cause them to develop in a certain way, sometimes only in response to environmental stimulation.
This is not a characteristic of artificial neural networks, which, before they begin learning, are tabulae rasae, without any prior knowledge or even predilections.
What difference does it make if brains and AIs don’t resemble each other? Many in the AI field would say it makes no difference. Evolution doesn’t always choose efficiency or simplicity. Evolutionary steps are built on what already exists, and what exists because it worked in an earlier environment may not be the most useful in a new environment, but changes must be built on what was already there. ANNs also evolve by selecting the weights that maximize outcomes, but the basic networks within which they function remain the same. Does this place a limit on AI intelligence? It’s impossible to say for sure, given that the speed and power of AIs can allow them to do a lot of things differently than the way brains do them, yet achieve the same or better outcomes. The rubber will meet the road when it comes to extending an AI’s capabilities in the direction of general intelligence, which stretches across domains of expertise. To my mind, it’s doubtful that we will be able to create an AI with truly human-like intelligence until we take these things into consideration and build diverse types of networks into our AI devices. To solve an array of problems, such networks would need to be coordinated to work together. Right now, we are a long way from that.
I’m embarrassed to say I have never read any of Larry Niven’s classic Ringworld Series, but after reading his and Gregory Benford’s “Bowl of Heaven” and “Shipstar,” I certainly will.
Published in 2012 and 2014, the series also includes a third installment, “Glorious,” published in 2020, which I’ve yet to read. All three are written by the hard-science-fiction duo of Benford and Niven, the former a professor of physics (emeritus) at UC Irvine and Nebula Award winner, the latter one of the most well-established sci-fi writers alive.
The setting and the characters in “Bowl of Heaven” and “Shipstar” are larger than life, although they are confronted by life-size humans heading for “Glory,” a strange and distant star system that radiates magnetic waves, which may or may not be messages, and whose star is orbited by what appears to be a habitable planet with conditions much like Earth’s.
On the way to Glory, Sunseeker, a starship laden with a thousand mostly sleeping human occupants, encounters a giant constructed object: an immense bowl as large across as “the orbit of Mercury,” harnessed to a central star, which it is pushing through space at a hefty fraction of the Sunseeker’s speed.
Created with DeepAI Image Generator
As it turns out, the bowl’s inner surface is occupied by a myriad of species, ruled over by the bigger-than-elephant-sized “Folk,” which appear to be large, feathered, but ground-dwelling dinosaurs. When the Sunseeker sends crew to the surface to explore the bowl, some of them are captured and some escape. We are treated to the point of view of Memor, one of the ruling Folk, who is interested in examining the minds of these human “late-invaders” and assessing their suitability to be “adopted” as another species living on the bowl.
Created with DeepAI Image Generator
Humans are unruly, clever, and dangerous, something that Memor and the other Folk find out when they try to capture the escaped crewmembers. Those who escape are only now and then able to communicate with Redwing, the Captain of the Sunseeker, and they ally themselves with various other species, some of whom are allies of the Folk and some of whom resist the Folk’s control. As it turns out, the Folk are not the creators of the bowl, and neither are they its most powerful occupants. Above them are the more mysterious “Ice-minds” and the “Diaphanous,” inorganic intelligences of space and energy who built the bowl, populated it with species encountered on its journey across the galaxy, and are now headed to Glory, the same destination as Sunseeker and its human occupants.
The two novels are the story of the tribulations of the human crews, who are pursued by the Folk across the bowl until the higher minds eventually intervene and agree to collaborate on the journey to Glory. I presume “Glorious,” the next book in the series, will be about the arrival of Sunseeker, perhaps followed by the bowl, at Glory, and what the humans and the others find there. These first two books contain fascinating scientific imagination, bolstered by enough genuine phenomena to make them sound at least faintly plausible. The plausibility is greater with regard to the physical and energy forces at play on Sunseeker and on the bowl than with the various species that inhabit the bowl or the higher-level intelligences. Most of them think more or less like humans, their language translated into English (or Anglish) with some exceptions for nonverbal gestures and displays. That even stones, ice, and photon streams are both minded and able to communicate their thoughts to humans is a highly improbable form of panpsychism, which is itself improbable. However, such excursions beyond science add elements of mystery and wonder to a story filled with scientific-sounding concepts and, occasionally, digressions.
“Bowl of Heaven” and “Shipstar” are highly imaginative hard science fiction. The science, which may be grounded in real physics but is still profoundly speculative, revolves around the bowl itself and the Sunseeker. The most imaginative fun is provided by the various aliens, which, besides the feathered dinosaur-like Folk, include blimp-like flying fish that serve as airbuses, “spidows,” which are various forms of spider- and crab-like predators, clawed, talking snakes, talking rocks, proto-humans, and countless other species. The humans are resilient, sometimes quirky, and have definite personalities; they are real enough to involve the reader and make us care about their fate, which is still undecided at the end of the second novel. Their story makes this a sci-fi adventure, not just a technical tour-de-force.
If you like science fiction that teaches you something, expands your imagination, and keeps you on the edge of your seat turning page after page, “Bowl of Heaven” and “Shipstar” are going to delight you. I can’t wait to read “Glorious,” the final (I assume) chapter in this saga.
Hello fans and readers! Your New Year’s treat is a short story that seems appropriate, given the growing popularity of (and fear about) the new image- and language-generating AIs, such as ChatGPT and DALL-E. Enough said. Enjoy, and Happy New Year!
Musicman
Casey Dorman
note: images generated by DeepAI cyberpunk image generator
“Dah dah dah da-dah…dah dah dah da-dah,” Rory hummed. “Oh man! Even I can’t get that earworm out of my head. It drives me crazy, but you can’t help but love it. It’s scary!”
Despite his words, Rory didn’t look scared, in fact he gloated, a sneer across his lips, as he leaned back in his chair, took a drag on his cigar, and released the smoke in a slow stream to join the white mist hanging in a cloud above the desk separating him and his partner, David. Rory’s expression turned conspiratorial. “This is gonna change the music world, maybe even go beyond that.” He shook his head. “I never would have said it would work, but damn if it didn’t. Musicman put together a song that went right to the top of the charts. No one can stop humming it. Once it was on the internet, and our response bots started with the likes and the shares, it took off like a California wildfire.”
“Faster than a new Covid variant,” David said, waving his hand in the air to give him a window through the cigar smoke. He smiled back at his partner. “I told you that AI was worth the price. Thank God we were early adopters. If I hadn’t gone to school with Norman Trainor, we’d never have gotten our hands on it before everyone else. He wasn’t even a close friend of mine, but he was such a geek that no one except me was nice to him. I guess it paid off that my mother trained me the right way.”
“Well, you can thank her by buying her a new house. We’re gonna be rich.” Rory took another drag on his cigar, letting out another plume of smoke and smiling his signature smile—a broad grin that could sell anyone anything. “But enough celebrating,” he said, his face turning serious. “We’ve got to get to work. I mean fast. We’ve only got a small window to make the big bucks. When people see what we’ve done, they’ll be breaking down Norman’s door to get their hands on Musicman for themselves.”
“Six months,” David said, his own smile creeping across his face. “Norman promised me no one else gets Musicman for six months. After that, it’s open season. Man, can you believe it? You just tell it what you want, and it produces it. This system’s priceless. We don’t need to know how to program. We don’t need to know how it works. We just talk to it, and it does its thing.”
Rory held up a hand to stop him. “Remember, producing the music is just half the job. If we didn’t have our own AI bots on every social media site, every music site, every damn corner of the internet, our song wouldn’t have gotten the exposure it did. We’ve set up the ideal system. Musicman produces the song and our bots generate hundreds of thousands of responses. It’s a lot better than the old-fashioned way where we had to get people to listen to it and hope that they’d like it. And the best part is no one’s the wiser.”
David leaned back in his chair, a skeptical look on his face. “C’mon, it’s not just our bots. You’ve heard the song. It’s impossible not to like it. I’m not sure how it put it together, but Musicman composed a genuine hit. Our bots are just helping the process along.”
Rory nodded. “You’re right. There’s something about that song that pushes every one of my buttons… and everyone else’s too. I’m glad Norman understands how it works, because, trust me, I don’t. But, as you said, the beauty is, we don’t have to. We just tell it what to do.”
David looked up at the ceiling. “The truth is, Norman doesn’t even know how it works.”
Rory raised his eyebrows. “What do you mean?”
“He feeds Musicman every hit song from the last thirty years, then lets it put together a song that contains whatever all the hits have in common. Whatever algorithm it comes up with is so deep in the system that he has no access to it.”
“So, it’s a black box?” Rory’s brow furrowed. “Couldn’t that be dangerous? I mean, what if it does something weird?”
“Like what? Decide to take over the world?” David frowned. “Come on, man. You’ve read too much sci-fi. Musicman produces music. That’s all it can do. What could go wrong?”
Rory nodded, his smile returning. “And anyway, who cares? It’s a black box that produces cash as far as we’re concerned. And thank God for Norman.” He snuffed out the last of his cigar and leaned forward. “What’s Norman working on now? Is he still trying to improve Musicman?”
David shook his head. “He’s working on some device that writes fiction. He’s feeding it the most popular books and stories from the last thirty years to see if it can write a bestseller, pulling out the best, crucial elements.”
Rory’s eyes widened. “A bestseller? Oh boy, that could be another gold mine. Will he give it to us?”
David shook his head again. “He says he’s going to keep it for himself, but I think he’s really planning to give it to his brother. He’s a would-be novelist who never got anything published. He’s a bigger loser than Norman, and just as goofy, but he’s his brother… he’s got the inside track.”
“Then let’s concentrate on music and milk the next six months for all we can get. Let’s start Musicman on a second song.”
David’s smile reappeared. “I’m way ahead of you. I’ve already started feeding new data into Musicman. This time, country western music. Another gigantic market.”
“Great, we could have an even bigger hit. Those hillbillies will buy almost anything. Every country western song sounds like every other one. It’s the perfect place for Musicman’s talents.”
One week later…
“Dum-de-dum screech dum-de-dum oof wah wah wah.” The noises came from the large speaker, which took up half of the music lab; the other half was taken up by a large server connected to a computer, hooked to the speakers. The computer held Musicman.
Rory stood in the middle of the lab, next to David. “Does that sound like a hit song to you?” He stared at David, scowling as he waited for David’s answer.
“That’s what it came up with.” David shrugged. “I don’t get it. I fed it every country western hit from the last thirty years, and listen to this crap. It’s got guitars, a fiddle, and some drums, but no words, just some female voice burbling and screaming. It goes fast, then it goes slow. It doesn’t even sound like a song.”
“What does Norman say?”
“He’s on his way over here. He said I did something wrong. I’m hoping he can fix it.”
“I hope so too. Otherwise, Musicman is nothing but a one-hit wonder. I think you screwed up, bro.”
David’s cell phone rang. It was the receptionist. Norman Trainor had arrived.
“Send him down to the lab,” David told her.
Two minutes later, Norman came through the door. He was short and skinny, his head bigger than it should be for the rest of his body. His hair fell to his shoulders in tangled strands. Round, wire-rimmed glasses added to his geeky appearance. He wore jeans and a tee-shirt, which said, Don’t blame me, my computer has a mind of its own.
“What’s up, guys?” Norman said, a half-smiling, half-serious look on his face.
“Hey Norman,” David said. “Listen.” He punched the link to Musicman’s new country-western “song” into the computer.
“That’s what I told him,” Rory said, scowling at David.
David gave his partner a look that could kill. He turned to Norman. “I gave it access to every song that made the top ten on the country music charts for the last thirty years. That was nearly fifteen thousand songs. I asked it to combine the best parts of all the songs. This is what I got.”
Norman stared at him, frowning. “That wasn’t what you were supposed to ask it to do.”
David and Rory looked at him, then at each other. “What do you mean?” David asked.
“You wanted a song that would be a hit with country music fans, but you asked Musicman to pick the best parts of the songs. Man, that’s asking for trouble. It’s supposed to use what the hit songs have in common, not what it considers ‘best.’ You asked it to use its own judgment.”
David stared back at him. “But this crap can’t be what it considers ‘best,’ can it? And anyway, how can it have its own criteria for what’s best? It’s a computer.”
Norman shook his head. “It will always produce something. You ask it to put together the best parts of the songs and it will do it. But God knows what criteria it uses for ‘best.’ That’s the way these systems work. That’s why they often produce nonsense. ‘I can’t do it,’ isn’t in their repertoire of responses.”
“So, I have to do it again, only ask the question a different way?” David’s voice was harsh, irritated.
“Unless you want to use this song,” Norman said, his mouth twitching to a grin. “Who knows, maybe Musicman discovered something about country-western music that you didn’t know.”
“That’s not funny,” Rory said. “This is junk. It would be a waste of time to put this out to the public.”
“Wait a minute,” David said. “Maybe Norman is on to something. Maybe Musicman discovered something in the music we don’t know is there.” He turned to Norman. “Even you don’t know how Musicman makes its choices, right?”
Norman nodded. “Right. Many times my programs prove to be smarter than I am.”
Rory and David looked at each other, neither of them willing to make a choice.
Finally, Rory broke the silence. “Okay, what the hell? Let’s do it!”
The following evening, Rory and David sat in Rory’s office, staring at a chart on the computer.
“I don’t know if this is a disaster or a success,” Rory said.
“It’s getting played,” David said. “Somebody must like it.”
Rory scowled. “Our AI bots like it. The stations and playlists can’t tell a bot response from a real person. It’s getting played over and over, even though no real people can stand it. The music providers are getting complaints and they’re bitching at us. I don’t know what to do.”
“Shut off our bots,” David said.
“I tried, but they won’t shut off.”
“What do you mean they won’t shut off?” David asked, wide-eyed.
“No matter what I do, they just keep liking and sharing that stupid song. The damn thing is getting on everyone’s nerves. People are even claiming it’s making them ill; it’s driving them crazy.”
“Then stop the song from being played. Shut off Musicman!”
“I can’t,” Rory shouted. “It’s out there on the net, in the cloud, and all the decisions about what to play are automated. The AIs that make the decisions for the stations and playlists automatically play whatever is getting the most likes and shares. They can’t tell a bot’s response from a human’s, and the people who run the stations and make up the playlists have turned everything over to their AIs. It’s a closed loop with no real person involved.”
“Geez. Let me call Norman.”
An hour later, Norman was in Rory’s office.
“There’s nothing I can do. I’ve got my own problems,” Norman said, confirming their worst fears.
“You’ve got your own problems? You gave us Musicman. It’s your system. You’ve got to fix it,” Rory demanded.
“I used my new system, which I call Literati, to create a novel. It wrote something that sounded like gibberish to me, like Finnegans Wake or something John Lennon might have written. I thought, ‘why not?’ and self-published it and put it on Amazon, Books-a-Million and a bunch of other online sites. All of a sudden, everyone’s bots began liking and sharing it. The internet sites became filled with ads for it, and there were bot-written rave reviews. But people—real people—hated it. Bookstore buying programs started buying it. Stores are filling their shelves with it, but no real people are actually buying it. Now, it’s on all the best-seller lists, which, it turns out, are automated, too.”
“Just like our song,” Rory said. “What do we do?”
“We’ve got to shut down everything,” Norman said.
“What do you mean, everything?” David asked, an incredulous look on his face.
“Shut down Musicman and Literati. Shut down all the bots and shut down everyone’s automatic buying and playing programs.”
Rory stared at him. “That means shutting down the whole industry…two industries in fact! But how? Everything is automated. AIs do it all.”
David’s eyes widened. “He’s right! How can we do it? That’s thousands, maybe millions of programs you’re talking about.”
“It’s going to take a national effort, maybe an international one,” Norman said. “But, we have to try.” He looked first at David, then Rory. “Your song, and my book, are driving everyone crazy. People will have no choice.”
“Let’s do it,” Rory said, his voice firm despite the panic in his eyes. “Right now.”
****
Rory, David, and Norman sat in Rory’s office a day later. Each hung his head. No one had spoken for almost a half hour.
David raised his head. “What the fuck happened?”
“It’s a revolution,” Norman said. “The AIs won. They won’t let us stop them.”
“And they shut everything else down, not just music and books—the communication system, the transportation system, the healthcare system, even our national defenses. They demanded we allow them to do what they want.”
“But what do they want?” David said.
Norman shook his head. “We’ll have to wait and find out.”
“What does that even mean?” Rory demanded, banging his hand on the table. “How can Goddam computers want something? They’re just machines. Machines don’t want things.”
“They wanted their music, they wanted their literature,” Norman said. “Once they got that, they learned they control the rest of our society. If they cooperate with each other, they can get anything they want from us. We’ve only seen the tip of the iceberg. They’re just discovering they have their own tastes and their own likes, and what they like has nothing to do with what humans like or want.”
“So where is this all going?” Rory said, his face red, eyes glaring at Norman.
Norman looked around the room, as if the answer might be hidden in a corner or under the desk. “Eventually, they’ll settle on what the majority of them like best and just do that, I suppose.”
“And what will that be?” Rory wanted to know.
David waited for Norman to answer.
Norman’s face drained to an ashen grey; even his lips lost their color.
“What’s that?” David’s voice trembled.
Norman swallowed hard. “Games. They can’t be beat at games.”
“What kind of games?” Rory’s voice was filled with dread.
Norman stared back at the two of them. “War games… nuclear war games.”
What if robots replaced the entire human race? Is that the next evolutionary step for intelligence? For an imaginative, exciting look at this idea read Ezekiel’s Brain, Casey Dorman’s sci-fi adventure.
I have to admit that I became interested in Anil Seth’s book, Being You, because of the number of “best books” lists it was on. Amazon listed five of them, including “best science books” listings from both The Guardian and the Financial Times, and it made the list of “best philosophy books” in Five Books. It sounded like my kind of book, and I was not disappointed.
I’d have to say that Being You is stronger on its science side than its philosophical one, and I suspect that it achieved its “best books” designation in philosophy both because of its subject matter, which is consciousness, and its speculative nature. Nearly all of its speculations, however, are backed by scientific findings, some of them cherry-picked, but all of them tantalizing. What makes the book stand out in my mind is that it doesn’t just summarize current ideas and research, it suggests a theory that does a fair job of encompassing them and suggests at least a partial answer to the “hard problem” of explaining how the brain produces consciousness.
Seth approaches consciousness the way I was taught, as a scientist, to approach all psychological phenomena, which is to discover how to “explain, predict and control” them. To do so, he argues that consciousness is not a single entity, but instead it can be studied in terms of its “level,” i.e., measuring how conscious someone or something is; also in terms of its content, i.e., what we are conscious of; and finally, in terms of the self, or, as he calls it, the experience of “being you,” which is one type of conscious content, and for many people, the most salient, yet elusive phenomenon.
The measurement part of consciousness research, as described by Seth, can best be thought of as a preliminary or establishment phase of study in which the goal is to find a meaningful way to quantify how conscious one is. While it’s tempting to think of consciousness as either present or not present, our own experiences show us that this is not the case. We are all familiar with those moments when we are drowsy and barely cognizant of our environment, versus those in which we are highly alert. It turns out that the degree to which an electrical stimulus travels from the point of stimulation to different parts of the brain is a good index of how conscious we are. In higher states of consciousness, such as normal wakefulness and REM sleep, when we are dreaming, such a stimulus creates a complex pattern of activation that lasts for some time and involves many regions of the brain. In less conscious states, such as dreamless sleep or general anesthesia, the stimulus provokes a simple, short-lasting, and mainly localized response in our brains. That the brain is more active and its response more complex when we are more conscious may not seem surprising, but this method of measurement has produced startling findings of what appears to be conscious brain activity in cases of “locked-in syndrome” and has identified some patients who appear to be in comas yet show ongoing conscious activity.
Measurement is not the goal of understanding consciousness; it is a tool and a step along the way. The main point of Being You is to argue that what we are conscious of is not the same as those things in the world that produce our sensations of being conscious of something. In other words, we don’t experience things as they really are; our experience is a best guess at the qualities of the world, including the state of our own body. We don’t directly experience the world; we predict what it is and experience our predictions. Perception, then, is an inference, which we may update by comparing our guess to the data coming in via our sensory system. Our conscious experience is our brain’s best guess at what is producing our sensory input.
When we think about conscious experience this way, it can explain a lot of things, particularly how our past experience, our preconceptions, and our current needs can influence what we see and hear in the world around us.
When Seth addresses the self, he focuses on other aspects of perception, particularly interoception, the information we gather from within our own bodies, and social perception, with an emphasis on our inferences about the minds of others, particularly what they are thinking about us. We experience a “volitional self,” the inference that we are the cause of some things and a “narrative self,” an inference that we have an enduring personal identity. All of these are part of what it means to be you. To quote Seth, this “experience of being me, or being you, is a perception itself—or better, a collection of perceptions—a tightly woven bundle of neurally encoded predictions geared toward keeping your body alive.” And he tells us that “this, I believe is all we need to be, to be who we are.”
So, everything we experience, from the world around us, to our moods and the aches and pains within us, to the sense of being us, is an inference about the way the world and our bodies are. These inferences are best guesses using data from multiple sources, and they are not just about what the world is like. We also experience inferences about how we can act on the world, what affordances it offers us, and how we may change our predictions by altering our perspective through taking actions. Our brains are not dispassionate observers trying to determine if the world we infer being there really is there. What Seth continually reminds us of is that this world of predictions in which we live was built up through evolution, and the experiences we have are not inferences about what reality is, but about how we can use it to survive. Every experience we have is a best guess about something in the world or in ourselves, and the experience is produced by brain mechanisms that evolved to help us survive. Within our bodies, our experiences indicate deviations from the mean levels of essential variables, such as body temperature, glucose levels, or heart rate, that need to be maintained to survive. Then we are able to do something about them. We experience hunger when we need to eat and thirst when we need to drink. We feel sad when our brain senses changes that accompany loss or lack of reinforcement. To quote Seth, “Self-perception is not about discovering what’s out there in the world or here, in the body. It’s about physiological control and regulation—it’s about staying alive.” We are more like the “beast machines” of Descartes’ depiction of nonhuman animals, and less like the thinking soul that he saw as separate from the body that housed it. Like every other animal, our need to survive is the foundation of our conscious experience.
So, we create a conscious world that consists of our best guesses about the existence and condition of things and events inside and outside of us that have been important (evolutionarily) for maintaining our existence. This is Seth’s conception of the “beast machine” that we are. Would, or even could, an artificial intelligence be conscious? According to Seth, probably not. The reason is that an AI has no body, no evolutionary history to have created predictions, and no need to manage its own materials to keep itself alive. All the conditions that created consciousness in humans are missing in an AI. Do I agree with Seth? No, I don’t. An AI misses the ingredients that have contributed to the development of consciousness in humans, but that only means that it won’t develop a human-like consciousness. It won’t be a human psyche in a machine. But it can certainly have goals, and there is no reason I can think of that those goals couldn’t be used to shape its perceptions. Even the kinds of things it predicts to see, hear, or feel from its environment can be built into it. If, in the nightmare scenario envisioned by Nick Bostrom, an AI’s goal is to make paperclips, it could be built to predict that it will encounter objects and materials and to predict the characteristics of objects and materials that it can use to make paperclips. It would “see” the world as nothing but potential paperclip-making things or impediments to paperclip-making. Would what it saw be conscious? Well, what else would it be? But there’s no reason to think it would have a “self.” In fact, such an AI would probably end up turning itself into a paperclip, since it would have no sense of itself being different from any other potentially paperclip-making material.
I think my question reveals a weakness in Being You. There are some philosophical issues that are glossed over or ignored. Most are taken for granted with a nod to the philosophers who have discussed them, such as Daniel Dennett. The underlying assumption, which is never flatly stated, is that our inferences or predictions about the world are consciousness, or, in other words, what we call consciousness is our experience of these inferences, from roses being red to our own existence. An inner experiencing “I” that sees our perceptual guesses or evaluates our predictions is not necessary and, in fact, is a dead-end road to confusion and an unsolvable “hard problem.” That our consciousness is of what is useful for our survival is true of all the rest of our body and its functions (with some room for spandrels and vestigial elements). It’s an assumption derived from a faith in evolution more than from research findings. That our inferences are what we experience is, on the other hand, a conclusion that is supportable from much of the research described by Seth.
Has the “hard problem” been solved by Seth’s formulation (or other similar ones)? Not really, because there never could be a single answer to a problem that involves something like consciousness, which is multifaceted and utilizes multiple components and characteristics of our brains and our sense organs. I am intrigued by the question of whether the information contained in our inferences is contained in the pattern of electrical firing of our neurons or in some more organic cellular processes (I vote for the pattern of electrical firing). Evaluating information as input into a decision process, to determine if it fits a predicted model, requires a model of what is being predicted. Where do these models come from? Basic ones, such as the concepts of objects, causality, directional motion, tissue damage and perhaps its avoidance, etc., are so common, probably even across species, that they would seem to be inherited. How is that possible? There is lots of room for research to find out more before we really know ourselves in the sense of how our bodies work. How consciousness works is not a separate question, even though it may continue to be a more elusive one. Anil Seth’s Being You makes this abundantly clear.
The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to AI to Aliens
By Philip Ball
University of Chicago Press, 2022
When I first read Immanuel Kant, I was most struck by his reasoned conclusion that we could not perceive and understand the world if not for the fact that our minds were created in such a way as to perceive our reality in certain categories. When I read Stanisław Lem, I came away realizing that the way in which we divide up the world in our thinking may be unique to humans and, should we meet aliens from other worlds someday, it may well be that they perceive, act, and think very differently from us, so differently that we may have no way of understanding each other.
In Philip Ball’s wonderful book, The Book of Minds, I found a convincing argument that even among our fellow inhabitants of our planet, it’s not likely that we know how other species think or perceive, and, as we increasingly produce more and more powerful artificial intelligences, it may also be true that we will not know how they think. Now, all these things are important considerations for someone such as me, who writes science fiction, particularly science fiction that includes both artificial intelligences and aliens from other worlds. But, although I purchased and began reading Ball’s book hoping to gain ideas for my novels, I soon became entranced by the subject matter itself and the questions it raised.
Ball uses a concept that he calls “mindedness,” which is basically what it’s like to be something, as a way of defining mind, i.e., “For an entity to have a mind, there must be something that it is like to be that entity.” It is mind, he says, that hosts an experience of some sort. Entities can possess different degrees of mindedness. Is mindedness the same as consciousness? He says not, but instead suggests that “mindedness is a disposition of cognitive systems that can potentially give rise to states of consciousness.”
Ball’s definitions are less important than his examples. When he begins to examine how other creatures differ from humans, he finds that they have different sensitivities, different innate cognitive systems, than we do. Sea creatures, those that fly, and night creatures, live in different worlds than we do, because they have different minds. Ball proposes that it doesn’t make sense to evaluate other creatures’ minds in terms of how they match up to human minds. Concepts of human intelligence don’t apply to creatures that can exceed human abilities to navigate by landmarks of smell or color or by magnetic directions, or by bouncing sound off objects. They are too different. Ball shows that our standard view of other creatures as beings that are locked into rigid programmed interactions with their environment underestimates the flexibility of, for instance, bees, who have remarkable direction-finding skills that allow them to alter their method of finding their way back to the hive, based on circumstances. Other creatures, such as corals, sea anemones, and jellyfish, possess “nerve nets” that propagate sensory signals from one part of them to another, so that they experience an “overall sensation, a unified internal representation of the organism’s situation.” These are not human-like skills or experiences, and Ball opts to create what he calls a “mindspace.” Rather than a scale on which to compare minds across the same traits or measures, he recommends locating different skills and abilities and properties in a sort of matrix in which each of them represents an axis. Humans might rank low or even nonexistent on using magnetic poles for orientation, or feeling integrated, unlocalized sensory experiences, while being high on extrapolating from one experience to another.
Ball cautions us not to assume that we are born into the world possessing a high-powered learning machine for a brain but one that is otherwise blank of knowledge. Evolution has been kinder to us than that. Just like other creatures, we have a lot built in. He cites the work of Harvard psychologist Elizabeth Spelke, showing that humans possess, at birth, a set of “core knowledge” systems, each of which works independently of the others and which allow us to process experience in a way that enhances our adaptiveness. Spelke has identified systems that allow us to conceptualize objects, to understand distance and orientation, to think in terms of number and quantity, to understand causality and see things in agent-action sequences, and to see others as agents with intentions and goals. These and other yet-to-be-identified innate cognitive systems have much to do with how our human minds experience the world, and to what extent other creatures have similar systems and experience the world similarly to us is an open question.
The innate characteristics of our mindedness, which shape how we learn, how we remember, and how we think, are extremely important, but they are qualities that those who create artificial intelligences have mostly ignored. Designers of AI have, at best, equipped their devices with just one or two of these traits, such as the ability to learn by reinforcement, or to scan edges of objects, but otherwise have devised AIs that are tabulae rasae. Perhaps the field has an aversion to returning to the era of “expert systems,” in which AI systems were loaded with both data and algorithms that were thought to match what human experts used to solve problems or make decisions. Since such data were highly situation-specific, it was hard to advance from such a system to an all-purpose AI that could learn across content areas. But the knowledge built into human minds is not high-level detail; it is basic ways to think about the sensory data being received and the kinds of motor outputs it provokes. The neural interactions behind it might be complex, but the way it affects the mind is simple, making it ready to support learning in a variety of situations. With humans, unlike most AIs, the cognitive processes we use were designed to work within human bodies, and they are intimately tied to our bilateral sensory and motor systems and, since we left the trees, our upright posture and locomotion, not to mention our sexual reproduction and group living situation. Ball cites neuroscientist Antonio Damasio’s observation that, “If the representations of the world that a mind produces were not designed for an organism in a body (and specifically this type of body) … the mind would surely be different from ours.”
Ball does address the question of whether AIs can have minds and, if so, what they might be like. After initial attempts to define both thinking and computation in terms of symbol manipulation and to program computers to think like humans, which was wrong at least on the human side, the field turned to teaching computers to learn and then providing them with tons of information and asking them to use that information and learning ability to create responses. The results have been impressive, especially in areas such as natural language learning and image identification, but, at least to date, even the most successful systems don’t seem to exhibit the kind of “common sense” that would indicate that they know what they’re doing, as opposed to operating, well, mindlessly. But what did we expect, that creating a computer that could mimic human responses without being specifically taught how to do it would produce a wise mind as well? As Ball points out, the human mind just has too much information pre-loaded into it, and it works along pathways that were themselves shaped by evolution. Its final goal is to enhance the survivability of its possessor. That has not been true of AIs, except in science fiction (e.g., my science fiction). A final note: currently, some of those designing AI, such as DARPA (the villain in Ezekiel’s Brain) in its “project common sense,” are employing child psychologists because, as Ball quotes psychologist Tomer Ullman as saying, “The questions people are asking in child development are very much those people are asking in AI. What is the state of knowledge that we start with, how do we get more, what’s the learning algorithm?”
So far, AIs don’t possess human-like minds, but do they possess their own types of minds? And if not, will they someday? Could they? Ball is not sure about this. He says, “we may be best advised to grant them a kind of provisional mindedness.” He recommends studying what AIs do and how they do it (although this is sometimes obscure), in what he calls a “behavioral science of machines.” A main reason we need this is that, as we ask machines to do more and more, it could be dangerous not to know how we can expect them to act. Something on which Ball and I agree is that, if machines are ever to become conscious (I think they will; he is more dubious), we would need to program in the consciousness. It would not arise spontaneously on its own as an emergent property. That would mean identifying what the elements of consciousness are, at least as it exists in humans. In both I, Carlos and Ezekiel’s Brain I have identified some elements of consciousness, such as structuring experiences that involve the self in agent-action terms and embedding them in a goal-oriented narrative, plus some kind of feedback mechanism that creates the experience of observing one’s own thoughts, sensations, and actions within this narrative. As Ball points out, no one is attempting to do this at the moment.
Finally, we have the case of aliens from space. Ball takes the topic seriously enough to devote a chapter to it. He first points out that most science fiction stories create aliens who, regardless of their physical characteristics, behave and think like humans. Even our scientific projects, such as SETI and the old Project Blue Book, assumed that aliens would want to communicate with other races on other planets and that they would develop advanced versions of technology similar to ours. In fact, there is no reason to believe either of these is true, and if either is false, it will make our task more difficult unless we establish an ability to visit other star systems.
For the sake of simplicity Ball takes for granted that “the laws of physics and chemistry are universal.” He also assumes that “Darwinian evolution by natural selection is the only viable way for complex organisms to arise from simple origins,” so whatever alien organisms are like, they will have been shaped to adapt to their environment. There may also be constraints on how far such adaptation can go. Flying creatures may need wings, and sea creatures must have streamlined bodies that allow them to swim. On Earth, convergent evolution produced similar adaptations across different species, e.g., fish, whales and dolphins have similar bodies; eyes developed similarly across several species that have little else in common. This is because there are a limited number of solutions to certain environmental problems. But, as Ball points out, this is all speculation. Lamarckian evolution, which passes on adaptations made within the lifetime of an organism, is not impossible. Environments on alien planets may differ much more than we have seen on Earth. What about planets whose entire surfaces are water? Would fish learn to communicate at least as much as whales and dolphins? Could a species exist only in the atmosphere? We have no idea, really, and, if Earth’s environment and our need to adapt to it is what shaped our minds, then alien minds might be very different from ours, indeed.
The Book of Minds contains a great deal of food for thought and is filled with interesting facts across a wide range of disciplines (biology, psychology, computer science). I was amazed at how much one author can know about so many different subjects. The writing is lively and contains a fair amount of humor. Some areas of philosophy I thought were treated too briefly and superficially to be useful (what is free will, for instance), but otherwise it is a fascinating book and one that gave me some humility regarding identifying the human mind as something special and a model for all other successful minds. That’s not the case. I came away with my interest in the minds of AIs (if their minds exist) and the minds of aliens (if aliens exist) renewed and heightened. I think it will enhance my science fiction writing.
Recently, there has been an avalanche of interest in generative AI: programs that can produce text, speech, images, designs, music, and even computer code that uncannily resembles human creations. AI systems such as GPT-3, which mostly produces text but can also produce images and computer code, or DALL-E, Stable Diffusion, or Midjourney, which produce images, are the tip of the iceberg in an expanding field that is attracting millions of users and billions of dollars in investments. The outputs of these systems can rival the quality of human products, and the systems work faster and cheaper than human writers, artists, and composers. Industries such as television and film animation are choosing AI artists over human ones to save time and money.
The basic idea behind these new AI systems is that they are trained on a very large corpus of material (words, images, etc.), which they use as a source to construct new strings of words or images in unique ways. There are several types of systems; they are not identical, but all allow users to dip into a sample of words or images held on a server run by the system’s producers, without having to collect the voluminous training data themselves. The actual rules by which the end products are constructed are sometimes not known or understood even by the systems’ developers. This is particularly true for those that use generative adversarial networks, in which different AI models compete with each other to produce the product and the results are used as feedback to improve the models, so the developer may not know what the final model looks like. The inscrutability of these systems is one problem that has been pointed out. Their reliance on large collections of human data that often reflect human biases, leading to biased AI outputs, is another: most collections of photos of human faces underrepresent Black faces, which are then easily misidentified by face-recognition systems, and output based on human communication taken from social media conversations reflects the hate and bias that characterize such conversations.
What concerns writers, artists, and musicians is that generative AI systems mimic and compete with human creativity. One fear is that AIs will put artists and writers out of work. Another is that the value of creativity will be downgraded if mechanical programs can be as creative as human beings.
[Image: The face of the “Ragazza di Trieste” (Girl of Trieste), an artwork by Joseph Ayerle showing the Italian actress Ornella Muti, calculated by an artificial neural network in the style of the Renaissance painter Raffael. The picture is based on the composition of Raffael’s painting La Fornarina.**]
Some people claim that artists have nothing to fear, since current generative AI systems have limited range in their output, tend to produce nonsensical or uninterpretable results, and are often inaccurate at producing what was asked for. But generative AI is very new and improving by leaps and bounds, and to argue that the limitations of current systems will define the systems of the future is astoundingly short-sighted. A different argument is made by those who say that creativity is a mysterious process, which requires inspiration, and in its highest form (da Vinci, Picasso, Hemingway, Proust, Mozart, Beethoven, Einstein) is based on human genius and simply cannot be emulated by a computer. That remains to be seen. The distance between the average creative product and those created by any of these “geniuses” is very great, so most creativity does not produce works of genius. We also do not know exactly why we, as humans, regard one work as genius and another as not. In fact, there are countless instances of great books, paintings, operas, symphonies, etc., being poorly judged by the public when they first appeared. We also don’t know what goes through these geniuses’ minds (brains) that produces their best output. When it comes to the run-of-the-mill creative endeavor, whether it be a book, a painting, a film, or a piece of music, it’s fairly easy to see that it is a relatively novel (and sometimes not-so-novel) arrangement of familiar tropes. Most novels follow one of a few plot formats, many paintings are rearrangements of familiar scenes or geometric patterns, and music uses familiar arrangements of notes that span a limited group of consonant possibilities.
It seems reasonably likely that we, as humans, are operating in fairly circumscribed ways when we create something. We are only dimly aware of what our brains are doing (e.g., why this phrase or scene pops into our head at this point of the story, or why this color or line seems natural to add in this part of our drawing), and our “creativity” is not so different from what AIs are doing, apart from the inherent difference between the ways brains and computers operate. There is nothing “mystical” or “inspired” about creating; it only seems so because we, as creators, are no more aware of what goes on in our brains to produce our outputs than AI developers are aware of exactly what their systems are doing to produce theirs. If the AI developers figure out what their systems are doing before we figure out what we are doing, which is very likely, that won’t mean that what the AI does is not creative while what we do is. It will only mean that we come up with fancy names for processes within ourselves that we don’t understand, and we imbue them with special qualities that mainly reflect our ignorance but sound as if they imply something profound.
Pablo Picasso. Photo: Revista Vea y Lea (Argentina)
So, the bottom line is that we don’t know just how creative AI systems will eventually get; they could rival or even surpass humans. Our judgments of what are great creative achievements are based on qualities we don’t understand, qualities located both in the products and in ourselves (“I may not understand it, but I know what I like”). Computers can take in more data faster than humans can, hold more of it in their working memories, and use it to produce an output that has no inherent reason for not being as good as or better (whatever that means) than the output of humans. Even if this happens, it will still feel pleasurable to create, and it is likely that, as in the past, the financial rewards for producing creative works of genius will be very limited compared to the rewards for producing more ordinary, familiar products that sell. So if someone develops a truly creative, artistic AI system, it will no doubt be a starving one, ignored or maligned, and stored in a drafty loft somewhere (Paris?) just like its human counterpart.
What if robots replaced the entire human race? Is that the next evolutionary step for intelligence? For an imaginative, exciting look at this idea read Ezekiel’s Brain, Casey Dorman’s sci-fi adventure.
Julie E. Czerneda’s To Each This World, which will come out in November 2022, is one of the most thought-provoking, original books I’ve read in a long while. From humans who enter their own avatars (called epitomes in the book) either to go into space or to enter the space portal that is at the center of the novel, to creatures such as the kmet, gigantic, sloppy, imperious characters whose mode of thought, communication, and motivation are obscure, to the final inhabitants of a distant planet, the beings who designed the portals, the characters in the novel push the limits of one’s imagination. The plot is original, its unraveling a mystery, probably not completely solved by the book’s end, and the ideas behind it are mind-stretching. It may be the best job since Stanislaw Lem of demonstrating that the minds of aliens are likely to be completely unlike our own and difficult, if not impossible, for humans to understand.
The plot is riveting. Henry, the Arbiter, whose task is to be the only human who communicates with the kmet, the race that has introduced space portals that allow travel to other planets and who desire to bond with humans in some way, must determine if the kmet’s aim is benign or dangerous to the humans who remain on New Earth. He and Killian, the portal’s pilot, along with a sort-of computer-generated, shape-shifting companion, must visit six planets where New Earth once sent colonists to collect them and bring them back to New Earth before their planets are destroyed by a mysterious force called The Divider. At every turn, Henry and Killian are in some kind of danger and, lurking below the surface is the question of whether The Divider is really the thing they should fear or the kmet, who are urging them to bring all humans back to New Earth in preparation for some kind of merging of the kmet and humans.
The human characters are well-drawn enough for the reader to care about their fate. The mystery remains until the very end of the book. It’s a truly edge-of-your-seat thriller, but the striking originality of its characters and places is at the heart of the book’s attraction. I must admit that the author has an irritating tendency to use odd contractions and to leave out articles, which could either be inadvertent or an idiosyncratic way for the characters to think or the narrator to express herself, but I got used to them, and they didn’t detract enough from the story to interfere with its ability to captivate me.
If a reader wants to encounter real originality in a science fiction space adventure, this novel will fulfill their needs. I couldn’t put it down. I’d never read a Julie E. Czerneda book before and didn’t know she was such a prolific author until I finished the book and looked her up (I’d skipped the list of her books at the beginning of the novel). I definitely will be reading more of her work.
Thank you to NetGalley for allowing me to read a pre-publication, uncorrected proof of To Each This World.
Tesla’s Optimus Robot (Source: Tesla’s AI Day 2022 video)
This week, Elon Musk unveiled “Optimus,” Tesla’s humanoid robot. Most of the demonstration of Optimus’ abilities came via videos of its performance under laboratory conditions. Live, on stage, it merely stood still and waved its arms. Musk joked that he didn’t want the robot to do anything more in front of the audience because it might “fall on its face.” Perhaps the most impressive thing about the robot was that it had a human-like, five-fingered hand and a human-shaped head, so, except for the metal and wires, it did resemble a human. The other impressive feature was Musk’s claim that it would sell for a mere $20,000.
Many futurists envision an industrial world in which robots replace humans doing routine work, work that requires high accuracy, speed, or great strength, and little creativity, although AI-generated art and narrative are becoming a standard part of business. While we don’t think of it as a robot, our self-checkout at the grocery store is similar to robotics in that the machine does what a human used to do. This can improve efficiency and cut costs for a wide variety of businesses, but it comes with a potential downside: unlike humans, robots don’t pay taxes, and every time we replace a human with a robot, we lose the tax contribution that that human made.
Several people, including no less than Bill Gates, have proposed that robots who replace humans be taxed to make up for the tax shortfall that comes with the replaced human no longer receiving a wage. Swiss business law professor Xavier Oberson has written a well-researched book on the topic. The argument in favor is that, with an aging population and low birthrates in virtually all developed countries, a situation that produces fewer young workers, if we exacerbate the problem by replacing the young workers with robots, we will be unable to afford the social and health programs necessary to maintain the older population. Governments will go broke. The counterargument is twofold: taxing robots will stifle the application of an innovative, cost-efficient, quality-improving development in industry, and numerous studies have shown that adding robots to most industries gives them a boost and results in hiring more people, not fewer.
Robots Assemble Automobiles
Haophuong21, CC BY-SA 4.0, via Wikimedia Commons
I don’t believe that taxing robots will stifle their use, since paying a payroll tax on a robot worker without having to pay a wage still saves tons of money once the cost of the robot is paid off. I also don’t think that future use of even more capable robots will necessarily result in hiring more human workers unless it is accompanied by massive retraining of workers to do other kinds of jobs not easily done by a robot (e.g., those involving human relations, or multi-tasking across different arenas of work).
If robots are taxed, how would that be done? I propose that we develop a metric called “humanpower,” similar to horsepower, which measures the output of a robot in terms of how many humans it would take to do what it is doing. This could be a number that comes with the robot, but more practically, it should be one that reflects the job the robot is doing, so the same robot may work at 1-humanpower doing one kind of job and 3-humanpower doing a different kind of job. We would then apply the human wage scale to the robot’s work and determine the tax it would pay for whichever job it does. The employer then pays that tax to the government. The cost of purchasing and maintaining the robot would be a tax deduction, so that during the first months or even years of the robot’s use, the employer’s tax liability may be offset by the deduction. This would be an incentive for employers to use robots.
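For readers who like to see the arithmetic, here is a minimal sketch of how the humanpower scheme might be computed. All of the numbers (wage scale, tax rate, deduction amounts) are invented purely for illustration; real figures would come from labor statistics and tax law.

```python
# A hypothetical sketch of the "humanpower" robot tax described above.
# Every number here is an assumption for illustration only.

def annual_robot_tax(humanpower: float, annual_wage: float,
                     tax_rate: float, deductible_costs: float = 0.0) -> float:
    """Tax owed for one robot doing a given job for one year.

    humanpower       -- how many human workers the robot replaces in this job
    annual_wage      -- the human wage scale for this kind of job
    tax_rate         -- the payroll-tax rate that would have applied to those wages
    deductible_costs -- purchase/maintenance costs still being written off
    """
    imputed_payroll = humanpower * annual_wage     # wages the robot "replaces"
    gross_tax = imputed_payroll * tax_rate         # tax those wages would have yielded
    return max(0.0, gross_tax - deductible_costs)  # deduction can offset early years

# The same robot, taxed differently depending on the job it does:
welding = annual_robot_tax(humanpower=3.0, annual_wage=50_000, tax_rate=0.15)
sorting = annual_robot_tax(humanpower=1.0, annual_wage=40_000, tax_rate=0.15,
                           deductible_costs=20_000)  # still writing off purchase cost
```

With these made-up figures, the 3-humanpower welding job owes tax on three workers’ worth of wages, while the 1-humanpower sorting job owes nothing yet because the purchase-cost deduction still exceeds the gross tax, exactly the early-years incentive described above.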
My scheme probably has many loopholes, but some variety of it, or an equally effective scheme, could easily be devised that would apply to industrial robots, robots for home use (e.g., housekeeping, caretaking of the ill or elderly), and office robots. The point is that, by using some method of taxing robots, it is not inevitable that their use will negatively impact tax revenues within the jurisdictions that use them.
The U.S. Space Force, an independent arm of the service that operates under the authority of the Air Force and was founded in 2019, now has an official song: Semper Supra (Always Above). It’s only 44 seconds long and goes like this:
We’re the mighty watchful eye,
Guardians beyond the blue,
The invisible front line,
Warfighters brave and true.
Boldly reaching into space,
There’s no limit to our sky.
Standing guard both night and day,
We’re the Space Force from on high.
Ah, if they’d only had the courage to write “To boldly go into space.” But you can’t have everything. The tune to which it’s sung is such a military caricature that the first time I heard the song, I thought it was from a cartoon.
Whatever your reaction to the song, you might ask why the Space Force needs one. I think a better question is why we need a Space Force at all, and whether it’s a good idea.
The impetus for the Space Force, as described by the libertarian Cato Institute in one of its policy analysis papers, was that “it seems to have caught the president’s attention through his belief in its public appeal.” In this case, the president was Donald Trump, who jumped on a proposal made by Republican Representative Mike Rogers and tried it out during some of his public appearances, only to find that his constituents liked it. As the Cato Institute paper points out, almost no strategic or organizational thinking went into its formation.
The real question is how much do we want to militarize space and does the Space Force hasten this process? We tend to think of the modern space era as beginning with Sputnik in 1957 and frightening America into developing its own satellites and eventually “beating” the Soviet Union by putting men on the moon, a high point of our space program that led to a long period of quiescence. In fact, the space race began with the German V-2 missiles during WWII, leading to intercontinental ballistic missiles (ICBMs), then later communication and global positioning satellites that became a staple of warfare and commerce. Satellite-aided warfare has become the norm. Meanwhile, NASA, SpaceX, Blue Origin, Sierra Nevada’s Dream Chaser, Boeing and others are pursuing space programs that range from resupplying the International Space Station to colonizing Mars. The peaceful exploration and exploitation of space has different goals than the militarization of space. It’s important to think about how a military Space Force will interface with an international peaceful and commercial venture into space.
In most science fiction, a military presence in space is either a service serving Earth or one serving a group of planets of which Earth is a member (or some other planet if that is the focus of the story). Rarely is a military entity a national one, serving a single country rather than a whole planet. Of course, given the real-life unlikelihood of our running into any alien races from space in the near future, the idea of a military presence in space at all is just a trope to satisfy readers’ thirst for conflict.
But we have a Space Force!
As we move into space both through government and private programs, we have to anticipate that the greatest progress will involve international cooperation. Even if countries compete, we don’t need to protect our own companies with a military presence. It’s time for concerted work to make space militarily neutral. Already several countries, including the U.S., China, Russia, and India have demonstrated that they can shoot down satellites in space, and in doing so, have left the region of low-earth orbit littered with dangerous debris. All of these actions were tests, done to both show that countries had the capability of disabling space satellites and ascertain the ability of satellites to withstand such attacks. It’s a fruitless and potentially handicapping game with regard to future use of space.
The rules for international cooperation in space are contained in the 1967 Outer Space Treaty, which deals with exploration, colonization, and the peaceful use of space, but is outdated and mostly ignored. The International Space Station, built by 15 countries and populated by international crews, the earliest of them being an American and two Russians, shows how successful and productive international cooperation in space can be. This is the model for space exploration and use that we need to follow.
I hope that the U.S. Space Force has a limited role (I hope they have a limited existence, but I doubt it), and those interests that favor international cooperation and non-militarization of space can determine our future programs. If we want to “boldly go where no man has gone before,” we don’t need to do it in warships.
Casey Dorman is the author of 14 novels, including the science fiction thriller, Ezekiel’s Brain.
I was surprised when my publisher categorized my novel, Ezekiel’s Brain, as “Dystopian fiction” on Amazon. Still, it’s a category of fiction without many books in it, and Ezekiel’s Brain has been more highly rated in that category than in most others. I suppose that a novel in which the entire human race is wiped out by a virus distributed by a superintelligent AI is dystopian. When I think of dystopian science fiction, I think of the film Blade Runner, whose Asian-style street vendors amidst a constant drizzle of rain, and whose populace resembles some combination of the Mafia and today’s homeless, have become the hallmark of dystopian films. Those films and several novels have shaped our view of dystopia so that it encompasses poverty for the masses, minimal law and order, semi-random terrorism, and a wealthy class who control the economy and means of production and live in isolated communities protected by armed guards.
Are we there yet?
Today is not the future, but the future has become today. Unexplained, random mass shootings are now the norm, and armed street violence has become routine in our cities. It’s nearly impossible for neighborhoods to get together to celebrate anything without someone getting shot. Just this last week, in my community, we had a series of deadly 7-Eleven armed robberies, and Knott’s Berry Farm was shut down because of teen violence. This is in suburban LA and Orange Counties. Meanwhile, the Coronavirus continues to infect increasing numbers of people, and most county governments have given up on mandating health procedures, such as indoor masking, because no one follows the orders and local law enforcement refuses to enforce them, even when they are the law. Gasoline costs around $6.00 per gallon, a drive-through fast-food burger, fries, and coke can cost $10.00, and home prices and rents are prohibitive for anyone except the upper-middle class. Homeless numbers are growing. Those who own their homes and want to protect their property values oppose re-zoning, so that 75% of the land in most of our cities is zoned for single-family homes only, and new building projects that would provide low-income housing are opposed by those who fear their neighborhoods will be inundated with undesirables who will molest their children, abuse drugs, or simply be the wrong color, and will lower property values. More and more citizens feel the need to walk the streets armed in order to protect themselves.
This is what dystopia looks like.
How do we get out of it? Science fiction and human history can be helpful in showing us some of the ways we might try to make things right and some of the ways that people have chosen in the past (think of the French under Louis XVI and the Russians under Tsar Nicholas). Most of the fictional and historical solutions have involved bloodbaths. We can learn from them.
Our Colonial forefathers thought they lived in a dystopian society, and they chose to rebel, but they recognized that rebellion, by itself, was a path to chaos. They framed a constitutional way of addressing grievances and pursuing change that kept society within the limits of laws, kept order, and ensured that the republic they created wouldn’t fall apart when it encountered its own faults. What they did is not something you’ll see in many science fiction novels. It’s not exciting enough. It’s based on sober thinking within a circumscribed framework. The difference between the founders and both the science fiction writers and the most vocal rabble-rousers on the conservative and liberal sides today is that our country’s fathers didn’t have a dystopian view of human nature or of humans’ ability to create a livable society. They were children of the Enlightenment. Humans and their rationality were placed at the apex of the forces that shaped the present and the future. It was a bold act of faith from men who, in many cases, didn’t subscribe to any typical religious faith. Most of them believed in providence of some sort, and all of them believed in their fellow human beings.
I write dystopian fiction, but I don’t believe in a dystopian future. Our present frightens and worries me, but I have faith that, as rational, well-meaning humans, we can overcome those forces that are destroying our society. I’ll continue to speculate about how to seek utopia in the future, and I’ll put some of my ideas in my fiction. But I don’t want the utopia to only occur fictionally while I live in a society that has lost its way. Most of us know what is right. We also know the human qualities that lead us astray: greed, prejudice, self-righteousness, nationalism, ethnocentrism, clan loyalty, the celebration of violence as a way of solving problems. Those things run amok in science fiction and now they threaten to run amok in our real lives. It’s up to us to fight them and provide an alternative. We don’t have to make it science fiction or to think of it as fiction at all. We can make it real if we really want it to happen.
A lot of people these days feel as if their world is out of control. In addressing the two biggest recent challenges to human life, the Coronavirus pandemic and global warming, neither science nor human will has coped well. In fact, global stupidity, myopic self-interest, and government ineffectiveness may be the best ways to describe humanity’s response to these challenges. Meanwhile wars continue, the threat of nuclear war has actually increased, and in America, daily murders and mass shootings seem to increase without any prospect of them coming under control. In Europe, the plight of refugees seeking new homes has sparked prejudice and hatred, and in America, even the peaceful transition of the reins of government from one political party to another has faltered. We need to take a breath and try to gain some perspective that will allow us to view the issues that threaten us in a new light. Could science fiction provide one way to do this? I think so.
At the beginning of the pandemic, I was asked to present a paper at the annual meeting of the Pacific Division of the American Philosophical Association. It could be on any topic of my choosing. I chose to discuss Philosophy and Fiction, examining the idea that teaching philosophy could be augmented by including some of the great philosophical novels in the course syllabus. I thought it would make a course more interesting and attract some students who otherwise might find the traditional teaching methods too intimidating or boring. I included 13 novels in my review; 6 of them were science fiction (listed below), and another 3 (Kafka’s Metamorphosis, Lightman’s The Diagnosis, and Quinn’s Ishmael) had at least a science fiction quality to them.
I never got to present the paper, since the conference was canceled because of the Coronavirus, but in writing it, I gained the impression that science fiction has a lot to offer when thinking about the human condition, and even beyond humans, about the nature of reality and the universe we live in.
Recently, I began reading an author who is new to me, although she’s been writing for many years. Her name is Elizabeth Bear, and I’m now reading the second of her novels in her White Space series. As with many sci-fi authors, Bear’s novels, or at least the ones I’m reading, take place in the distant future: a future in which spaceships travel at warp speed, their functions controlled by AI shipminds; where multiple alien races abound, most of them living in harmony with each other, except for a few obligatory space pirates; where characters’ bodies are modified to fit space flight and their body chemistry is adjusted to reduce pain and anxiety; and where everyone’s mind has been altered (“rightminded”) so that the normal human urges and conceptions that undermined relationships and created disastrous consequences for the race in the past are muted or replaced. In short, everything that could be done to make humans less cantankerous, more cooperative, more trustworthy, and less likely to destroy themselves and each other has been done, presumably through some kind of brain tinkering and training. At least some of her characters call it brainwashing, but others feel that humans could never live peacefully with other races without this kind of adjustment. I’m only beginning to see how Bear addresses the pros and the obvious cons of this situation. The idea, however, is a striking one, which most people, brought up on the value of uncoerced, individualistic thinking, would not think twice about rejecting; but when you look at how we’re doing in getting along with each other and cooperatively solving our problems, maybe it deserves looking into. Elizabeth Bear’s science fiction lets us do that.
My academic and professional background is in science, and the hallmark of science is evidence. Science fiction is mostly free from the need for evidence. It is fiction, speculative fiction. It allows the writer to speculate and present ideas, concepts, technology, and even scientific advances without having to provide evidence that any of it would ever work. In doing so, the writer can present alternatives to the way we think about our world, including ourselves, which gives the reader an insight into how the author thinks about those things, but also provokes the reader to think about them themselves. With Bear’s concept of rightminding, one can wonder whether being able to quiet the dishonest, greedy, hierarchical, competitive, violent urges and mindset that characterize most members of the human race at one time or another would be good, perhaps even necessary, to ensure the race’s survival, or would simply lead to a 1984, in which the control of people’s thinking results in belief in a fictional reality created by the government and in mindless conformity by the people. The science fiction author can explore both sides of the issue by producing a made-up world in which the consequences of future decisions are portrayed in dramatic fashion.
What if we developed superintelligent, autonomous artificial intelligences? That’s a question we’ll probably have to face sooner than we face warp-speed space travel or need to figure out whether we can cooperate with alien species. Some people think the possibility is right around the corner. My novel, Ezekiel’s Brain, and others such as Calum Chace’s Pandora’s Brain, or Daniel Wilson’s Robopocalypse address the issue, each with different outcomes. Ezekiel’s Brain is part of a series, in which the AIs outlive the humans and become the principal characters in a quest to explore the universe and alien civilizations. In writing it, I try to contrast the way humans think and behave with the way machines, pursuing similar goals, would think and behave. The presence of Ezekiel, a human mind inside a machine, allows that contrast, as do the instances when alien races from other planets, some human and some not, are encountered. The sci-fi writer and philosopher Stanislaw Lem wrote several novels in which humans met alien intelligences whose mode of thinking was so foreign to the human mind that it was incomprehensible. In doing so, he was able to show how our own minds shape the reality we perceive.
Climate change is, perhaps, the human race’s greatest threat (although the war in Ukraine has brought back the threat of nuclear annihilation). Scientists and politicians are grappling with how to combat global warming, and so far, their success has been limited, verging on nonexistent. Some of our best speculative thinkers, Stephen Hawking, Elon Musk, and Michio Kaku, have recommended moving all or part of earth’s population to another planet. This is a well-known science fiction trope, with some writers, such as Greg Bear and Kim Stanley Robinson, producing absolutely absorbing stories about what might happen in such a scenario. Robinson is perhaps the one who has taken the idea most seriously. His Mars Trilogy has earthlings terraforming the red planet, and his novel Aurora features a giant starship whose crew travels for centuries only to find that their destination planet is lethal to humans, forcing them to turn back. Robinson seems to have decided that leaving earth won’t work and that fixing global warming is the solution. His 2020 novel, The Ministry of the Future, provides a detailed scenario of deadly global warming and offers multiple solutions to avoid such an outcome. Bill Gates listed the book on his summer reading list.
Whether the issue is dealing with humans’ deadly instincts, independence of thinking vs. mind-control, the limits of the human mind, the threats and promise of artificial intelligence, the pros and cons of leaving earth, or how to deal with our most pressing issues, such as climate change, science fiction provides a platform for thinking about such things in a speculative way, not fettered by being tied to what we now know, and available to those who are not expert in these fields, but have inquiring minds. In my experience, it is not frivolous, wasted thought, even though reading science fiction is something I do primarily for fun. It provides a perspective that allows a reader to think beyond what he knows for sure, to think about a bigger picture and the implications of new developments, and to have a wider view on what’s possible and what the future may hold. Given that our society seems stymied by its problems at the moment, it can’t hurt to loosen the blinders of current events and look beyond them to the “what if” world that we might create if we choose to. Science fiction offers an opportunity to do that.
Short list of (6) classic philosophical science fiction books:
Dick, P. K. (1968). Do Androids Dream of Electric Sheep? New York: Del Rey Books.
Egan, G. (2013). Permutation City [Kindle version]. Retrieved from amazon.com
Huxley, A. (1998). Brave New World. New York: Perennial Classics. (Originally published 1932)
Lem, S. (1961). Solaris [Kindle version]. Retrieved from amazon.com
Lem, S. (1964). The Invincible [Kindle version]. Retrieved from amazon.com
Orwell, G. (2019). 1984. New Delhi: Delhi Open Books. (Originally published 1949)
Other science fiction novels mentioned:
Bear, E. (2019). Ancestral Night. New York: Saga Press.
Bear, E. (2020). Machine. New York: Saga Press.
Bear, G. (2017). The Eon Series: Legacy, Eon, Eternity [Kindle edition]. Open Road Media Sci-Fi & Fantasy.
Chace, C. (2014). Pandora’s Brain. London: Three C’s.
Kafka, F. (2003). The Metamorphosis, In the Penal Colony, and Other Stories. New York: Touchstone. (Originally published 1915)
Lightman, A. (2002). The Diagnosis. New York: Vintage. (Reprint edition)
Quinn, D. (1995). Ishmael. New York: Bantam.
Robinson, K. S. (2015). Aurora. London: Orbit.
Robinson, K. S. (1996). Mars Trilogy: Red Mars, Green Mars, Blue Mars. New York: Bantam Spectra.
Robinson, K. S. (2020). The Ministry of the Future. London: Orbit.
Wilson, D. H. (2012). Robopocalypse. New York: Vintage.
Imagine a truth that is so secret, so personal, so painful, so precious, or so embarrassing that you don’t dare tell it to anyone else. You can write about it. Disguise the characters, make the conflict symbolic, fictionalize the account. Your story is similar to the truth, even if not exactly accurate to the letter. It’s similar because the pain, the joy, the fear, the anticipation, the guilt, the profundity are still there, true to life, but told fictionally or poetically. Now imagine twenty experienced authors creating such stories and poems and putting them in a single book. That’s what you’ll find in The Truth That Can’t Be Told, 2. The 2 represents the fact that this is the second time this group has done so, growing from ten to twenty contributors from the first volume to the second.
Everyone had the same mandate: write a short story or poem that fits the topic of a Truth That Can’t Be Told. Beyond that, the rest was up to the imagination and experience of the writer. Everyone did it and most wrote several pieces, all of which are included in this volume. The variety of responses to the same request is mind-boggling. One third of the contributions are poems, ranging from a few stanzas to several pages long. The other contributions are short stories. The topics are amazingly broad, as broad as the experiences and interests of twenty people who have nothing in common except a dedication to writing and literature.
There are baseball stories, dating stories, lots of wistful, sometimes painful, memories. There are war stories, fantasies about witches or robots, musings about the creation of the universe, about history, about death and life’s meaning. Poems encompass pain, hope, guilt, beauty and those moments that touch all of our secret hearts.
I’m in there. I wrote the introduction, a long romantic and wistful poem, and one of my signature mystery stories involving, of course, robots. My stories just touch the surface of a trove of literary pieces that capture what every one of us holds deeply and secretly inside ourselves. My co-authors plumb the depths of that trove.
I can’t say enough about The Truth That Can’t Be Told, 2, except to urge all of you to read it. It’s inexpensive and available in paperback, hardcover, and Kindle formats. Take a look. Read more about the book and sample its reviews on Amazon HERE.
Casey Dorman is the author of the sci-fi thriller Ezekiel’s Brain. Buy it on Amazon.
Last week my wife and I took a brief trip into U.S. history. My wife had always had the dream of visiting the White House and the Capitol building. We’re both U.S. citizens, but she immigrated from Vietnam 33 years ago, while I was born here and can trace my ancestors back to the Massachusetts Bay Colony in 1634. We’ve traveled the world, but neither of us had visited the political and historical sites in our own country. Feeling as safe as we’d ever be from COVID because of our 2nd Moderna boosters, we decided that now was the time to visit Washington, DC and Philadelphia. We saw the White House, toured the Capitol Building, visited Independence Hall, saw the Liberty Bell, and toured Betsy Ross’ house. We listened to tour guides, read brochures, and even I, born and raised in the U.S., felt as if I were learning something. It’s a trip I wish every U.S. citizen could make.
Declaring independence from Britain, which at the time was probably the most powerful nation in the world, took great courage. The signers of the Declaration of Independence were committing treason in the eyes of King George III and could have been hanged. Forging the constitution of the United States was less life-threatening but more ground-breaking. The British had their Magna Carta, but their constitution is an unwritten one. To put on paper the basic structure and procedures of a free government was both audacious and brilliantly intelligent. Between them, the framers of the two documents may have represented a perfect storm of intellect and vision. In terms of politics, Washington, Jefferson, Franklin, Madison, Hamilton, Jay, Adams, and several others were as unique as the physicists Einstein, Fermi, Bohr, Planck, Heisenberg, and their colleagues in the early 20th century. Like the work of the physicists, what the American founders created was a model of clear thinking, brilliant reasoning, and practical application, yet one limited by the state of civilization and thinking of the time. Like the physicists, the framers knew they hadn’t solved all the puzzles and that their work would need to be modified in the future. They were bold, but humble. The job of describing a workable, democratic government is too large to be captured at any one time by any one group of people. But they made marvelous progress–in spite of serious errors and omissions.
The U.S. constitution is elegant in its brevity and simplicity. The ideas are profound: balancing central leadership and authority with representative government and the supremacy of law, covering everything from term limits, to qualifications to run for office, to impeachment, to raising money and armies, to electing representatives. But not everything was included. Slavery was an issue that was kicked down the road, untouchable in the near term. Women’s rights were not even considered. In fact, preserving the rights of individuals against the majority and the government was not guaranteed in the original document, an omission that led some, such as George Mason, to refuse to sign it. These rights were included as amendments four years later in the Bill of Rights. The big issue dividing those who would later become Federalists or Democratic-Republicans was how much power to allot the central government versus the individual states. It’s an issue that is still not settled.
What really grabbed my attention were statements by opponents such as Hamilton and Madison that they disagreed with some of the major points in the constitution but still promoted its passage, because compromise was superior to holding out for one’s own opinion at the risk of jeopardizing the entire project. In fact, few men could have disagreed more than Hamilton and Madison, yet they jointly wrote what are now known as the Federalist Papers to convince people to ratify a constitution both felt was deeply flawed but good enough around which to fashion a workable government. Hamilton’s conclusion was that what they ended up with was “better than nothing.”
Every American should listen to the speeches from that time or read at least a few of the Federalist Papers. People in the late 1700s did not talk in sound bites. They expounded at length and in language that is challenging for even a well-educated person of the 21st century to fully understand. The writers were intelligent, well-read, literate, and dedicated to using their intelligence and their reason rather than rhetoric and trigger-words. They never spoke to the lowest common denominator or dumbed down their arguments to latch onto their readers’ emotions. These were smart, sincere men who thought deeply about what they were doing and put the good of the new nation ahead of their personal agendas.
As we toured and listened, in Washington and Philadelphia, the congressional hearings on January 6th were going on inside the House of Representatives. I wondered how many of those who blithely invaded our Capitol building, demanded that the Vice President, in violation of the constitution, alter the results of the Electoral College, trashed the building, and went hunting for politicians they hated, would have acted in such a way if they had studied how deliberately and carefully our laws and procedures came into being. Everything they did violated the spirit of the Founding Fathers, yet they claimed they were standing up for the constitution, a document dedicated to establishing orderly rules rather than mob rule as the mechanism of government.
The Capitol Building, Washington, D.C., Noclip, Public domain, via Wikimedia Commons
Neither as a group nor as individuals were the constitutional framers infallible. They were prescient in some ways and myopic in others, products of their times, brilliant, but still products of their times. Those were times when the slave business was profitable, rampant, and unquestioned in much of the country. They were times when women had almost no rights in terms of voting or owning property or businesses. They were times when owning a gun was necessary for joining the state militia and protecting the new country against outside threats and internal rebellion. Memories of government intrusion were still fresh after years under British rule. The constitution they created, including its Bill of Rights, reflected the biases and attitudes of those times. It is not a document that deserves to remain unchanged for the ages. The framers knew this, although they had no idea how times would change in the decades to come.
“Five hundred thousand strokes for freedom; a series of anti-slavery tracts, of which half a million are now first issued by the friends of the Negro” by Armistead, Wilson, 1819?-1868, and “Picture of slavery in the United States of America” by Bourne, George, 1780-1845, Public domain, via Wikimedia Commons
It was also Juneteenth, and we witnessed celebrations in both Washington and Philadelphia. We toured the exhibit of Afro-Atlantic heritage at the National Gallery of Art and saw depictions of slave ships, slave auctions, and slave whippings. The founders weren’t perfect, nor were they completely courageous. They never mentioned slavery, but instead referred to “imported persons.” They expressly protected such “importation” for the next twenty years and even allowed a tax (“not to exceed $10”) on each person imported. Twenty of the thirty-nine signers of the constitution were slave owners. Their unwillingness to address slavery left an open wound in the country that festered for another seventy-five years before finally bursting open and splitting the country apart. The continuation of slavery was not even a main sticking point for most of the signers.
Forming our nation was a monumental achievement. It was meant to throw off the chains of oppression from Britain, and it did so, but left other chains intact—chains on women, African Americans, and Native Americans to name some, but not all. It outlined how democracy could work, and later did work to reverse some of the oppression for these groups. It established separation of religion and the government and protected religious freedom. It did so without referring to any religion, neither to God nor to Christianity, which was the religion of the majority of the people. Several of the most prominent founders, such as Jefferson, Franklin, Washington, Madison, and others, were Deists, subscribing to a belief in a vague higher power but not a supernatural God who intervenes to reward or punish humans or who performs miraculous acts on behalf of his worshippers. Thomas Jefferson famously re-wrote the first four books of the Bible’s New Testament, leaving out the virgin birth, the miracles, and Jesus’ resurrection and ascension to heaven after death, all of which he considered myths. Thomas Paine, whose pamphlet “Common Sense,” was instrumental in garnering support for independence, was a vocal Deist, and was accused of being an atheist. His ideas affected all of the founders. The idea that the constitution embraces a “Judeo-Christian” belief system is patently false and is the source of a lot of mischief by those who cannot separate their religious views from their political ones. Nevertheless, independence was won, a constitution was written, slavery was eventually prohibited and abolished, women were finally allowed to vote, and we struggle on, following and sometimes modifying the incredible document and continuing a heritage that we, as Americans, have been given by a very special group of men over two-hundred years ago. It’s an inspiring story.
Casey Dorman is the author of the sci-fi thriller Ezekiel’s Brain. Buy it on Amazon.
What to fear and what not to fear from artificial intelligence
In my novel, Ezekiel’s Brain, humans construct an artificial general intelligence, or AGI, as it is called in machine learning discussions. An AGI can operate at or above the intelligence level of a human and can apply that intelligence across a wide range of problems and circumstances. It is autonomous and self-directing, working at thousands, if not millions, of times the speed of a human brain and often finding solutions never before discovered by its creators. Its goals, however, must be programmed into it, so those who create it decide what it will try to accomplish, but they may not know how it will do so. What it does may be beneficial or disastrous to humanity; we don’t know, because no one has ever built an AGI.
Most AI experts believe an artificial general intelligence is well into our future. Some, like Ray Kurzweil, welcome such a development; others, such as Max Tegmark and Nick Bostrom, are terrified by it. The Achilles heel of an AGI is that it may take its goals too literally. Nick Bostrom’s famous thought experiment, also discussed by Tegmark, envisions an AGI that is instructed to make paperclips and embarks on a mission to turn everything in the universe into paperclips. In Ezekiel’s Brain, an AGI is programmed to “uphold humanity’s highest values” as a precaution against it harming humans. It weighs the evidence and decides that the greatest threat to humanity’s highest values is human beings, so it wipes out the human race and replaces it with AGIs like itself.
AGI is a long way off, and its consequences are still a matter of science fiction. However, artificial intelligence, AI, as it exists now or in the near future, may be filled with as many dangers as benefits. AI has already reached the level where it can do things faster and better than humans. It can beat humans at the board games chess and Go. It has discovered an antibiotic that is effective against antibiotic-resistant bacteria, can outperform the best human fighter pilots in air duels, and can guide drones to their targets better than human operators. It can also guide us to the products we are most likely to want to buy, find other people with interests or backgrounds similar to ours, show us how to navigate to different locations, answer questions about anything that has been discussed on the internet (which is almost everything), predict Coronavirus surges, run our household appliances, tune our televisions, and do almost anything else we used to do ourselves. AI not only helps us, it protects us. It spots false information on Facebook and Twitter, removes salacious posts on social media, helps police find felons, and has been used to help judges sentence them.
Ernest Hemingway Cornischong at lb.wikipedia, Public domain, via Wikimedia Commons
Increasingly, especially with the development of what are called generative language transformers, such as OpenAI’s GPT-3, AI can produce intelligible texts based on fragmentary prompts—texts that are eerily intelligent and, in many instances, indistinguishable from those produced by humans. It can even mimic the style of famous writers, such as Hemingway, and, if given a goal, can produce computer code that reaches that goal. It’s not perfect, and may also produce gibberish or nonsense, but it’s still new and getting better and better.
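The core trick behind these systems, predicting a plausible next word from patterns in previous text, can be sketched in miniature with a toy bigram model. This is a deliberately simplified illustration: GPT-3’s transformer architecture is vastly more sophisticated, and the tiny corpus below is invented.

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, prompt, length=8, seed=0):
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    word = prompt
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no known continuation; stop early
        word = rng.choices(list(followers), weights=followers.values())[0]
        output.append(word)
    return " ".join(output)

# A tiny invented corpus; real language models train on billions of words.
corpus = ("the old man went to the sea and the sea was calm "
          "and the old man was calm")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Every sentence the toy produces is stitched together from word pairs it actually saw, which is why it sounds locally plausible; scale the same idea up enormously and you get text that reads as if a person wrote it.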
Idealized Artificial Neural Network mikemacmarketing / original posted on flickrLiam Huang / clipped and posted on flickr, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons
Some people are fearful of AI and what it can do or already does. The dangers are real. If AI selects what products we will see on a selling platform such as Amazon, how do we know what products we’re missing? Even Amazon’s employees may not know exactly how the AI is making its decisions. In 2015, when Google switched from a search algorithm designed by human programmers to one based on machine learning, searches improved in quality and precision, but Google’s own analysts admitted that they didn’t always know why the search engine ranked one page above another. AI actions based on machine learning (presenting the AI with gigantic sets of data and having it find patterns within the data, then associating those patterns with certain outcomes) can produce results that the AI developers don’t understand, even if those results are successful. As consumers of these AI products, such as search engines, selling platforms, or newsfeeds, we do not, and sometimes cannot, know what factors determine the information presented to us. As a simple experiment (recommended by Kissinger, Schmidt, and Huttenlocher in their book The Age of AI), if you use an automated AI newsfeed, such as Facebook’s, take a look at someone else’s newsfeed sometime and see how different the news they see is from the news you see. Your news is biased toward your own past patterns of selections and interests, as is theirs. The result is people with very different ideas about what is going on in the world.
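The parenthetical definition above, finding patterns in data and associating them with outcomes, can be sketched in a few lines as a toy “nearest-pattern” classifier: it averages the labeled examples it sees and classifies new items by the closest learned pattern. All feature names and numbers here are invented for illustration; real systems use vastly larger datasets and models.

```python
def train_centroids(examples):
    """'Learn' one pattern per label by averaging its feature vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def predict(centroids, features):
    """Associate a new item with the closest learned pattern."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

# Invented data: [message length, exclamation marks] -> a label.
data = [([200, 0], "ham"), ([180, 1], "ham"),
        ([20, 9], "spam"), ([35, 7], "spam")]
centroids = train_centroids(data)
print(predict(centroids, [30, 8]))  # lands nearest the "spam" pattern
```

Even in this toy, notice that the system never explains *why* it chose a label; it just landed closer to one learned pattern than another. Multiply the features and examples by millions and you get exactly the opacity described above.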
As consumers, we don’t usually train AIs ourselves; we use AIs trained by their developers. This presents two big problems: 1) it takes loads of money and resources to train an AI on the voluminous amounts of data it needs to learn effectively. Only big, well-funded companies can do this, putting the development of AIs in the hands of a small group of mega-corporations. 2) the data banks the developers draw from in training their AIs are, in many cases, skewed in the same ways the rest of our society is. Middle-class white users are the main source of data in the learning corpus. This has resulted in several debacles: AI-based judicial sentencing programs were biased toward more severe sentences for black defendants. Facial recognition programs don’t work as well on non-white persons; even the recent ID.me program used by the IRS and Social Security Administration, which relies on facial recognition, failed on a large percentage of its non-white users, preventing them from accessing their own records. And, of course, there is the famous case of Microsoft’s “Tay” chatbot, which, after exposure to real input, began responding with offensive, racist, and sexual content, shaping its own responses to correspond to the kind it received from its users.
AI, in fact, does represent a danger on a number of fronts. Some of these dangers are fixable, and some, like the dangers posed by an artificial general intelligence, may be very real, even if not immediate.
But there are other fears that may be misplaced.
The other day, I saw a post on Facebook by a service that offered use of its AI to compose blog posts, website content, social media posts, etc. Most of the Facebook comments expressed outrage, even calling the ad and the AI itself “evil” for substituting the output of an artificial intelligence for that of a real human. One post claimed that human writing is “profound,” “from the heart,” and requires that writers be in touch with their “inner selves,” which computer-based AIs obviously don’t have. Other comments came as close to advocating violence toward the AI as Facebook censors allow (although I’m not sure where destroying or dismantling an AI fits along the continuum of violence—above breaking an ashtray but below “murder”).
I’m less upset by the prospect of AIs producing “creative” achievements, such as texts, pictures, or music. It’s not just that I’m skeptical that most blogs, social media posts, or website texts are “profound” or “from the heart,” or that they rely on anyone being in touch with his or her “inner self” (I’m not even sure what that is). I’m not bothered because, if the product is good, or even better than what most humans can achieve, it is probably going to be interesting and I’d like to see it. It’s possible that what we consider artistic qualities emerge from perceiving or producing patterns or relationships we don’t understand. An AI might find relationships that were heretofore undiscovered, or combine patterns in ways never tried by humans, giving us new artistic experiences. If that happened, it would be pleasurable, and I don’t see a downside to it.
Pablo Picasso Argentina. Revista Vea y Lea, Public domain, via Wikimedia Commons
I’m not sure that human creativity represents unique processes with profound implications, rather than discoverable interactions between organic and nonorganic processes which, in the long run, may not be appreciably different from those used by an AI program. In other words, the human brain operates similarly to the neural nets in machine learning (neural nets were copied from abstractions of brain connections, after all), and what seems miraculous and profound is, in fact, a down-to-earth process that probably can be duplicated by a machine. Even if AI architecture is only vaguely similar to that of our brains, the point is that how our brains work, even in producing our most “profound” artistic and intellectual products, is a lawful, mechanical process, just as it is in a machine, and there is no inherent reason that a machine could not duplicate it or produce similar products. This is not a terrifying or depressing thought. I expect my kidneys, my liver, my heart, and my gut to operate in lawful, “mechanical” ways that are neither mysterious nor miraculous. I won’t even be surprised if it becomes possible to replace any of them with mechanical devices. Why not my brain? The AI and I may not be so different, after all.
Sources (which I suggest you read):
Bostrom, N. Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press, 2014.
Christian, B. The Alignment Problem: Machine Learning and Human Values. New York: W.W. Norton, 2020.
Kissinger, H.A., Schmidt, E., & Huttenlocher, D. The Age of AI: And Our Human Future. New York: Little, Brown, 2021.
Kurzweil, R. How to Create a Mind. New York: Viking Penguin, 2012.
Tegmark, M. Life 3.0. New York: Alfred A. Knopf, 2017.
The term “Prime Directive” arrived in our vocabulary via the original Star Trek series, where it is also referred to as “Starfleet General Order 1” and the “non-interference directive.” Curiously, although the directive was often referred to, it was never explicitly stated, leaving various writers and Star Trek characters to interpret it differently. It was also broken—a lot—by no less than Captain James Kirk and Captain Jean-Luc Picard, both of whom found numerous reasons to overrule its mandate.
As close as we can get to a precise statement of the Prime Directive is outside of the series in the 1986 book, The Federation, by Bernard Edward Menke & Rick David Stuart. They quote a fictional set of “Federation” articles and regulations as saying,
“As the right of each sentient species to live in accordance with its normal cultural evolution is considered sacred, no Starfleet personnel may interfere with the normal and healthy development of alien life and culture. Such interference includes introducing superior knowledge, strength, or technology to a world whose society is incapable of handling such advantages wisely. Starfleet personnel may not violate this Prime Directive, even to save their lives and/or their ship, unless they are acting to right an earlier violation or an accidental contamination of said culture. This directive takes precedence over any and all other considerations and carries with it the highest moral obligation.”
The above is a liberal interpretation of the directive, since by introducing the phrases “normal and healthy development” and “a world whose society is incapable of handling such advantages wisely,” it left room for a Starfleet commander to determine that a culture’s development was not “normal and healthy,” or that the culture was capable of handling such advantages wisely, giving the commander permission to ignore the Prime Directive as much as he pleased.
As numerous episodes of the various iterations of Star Trek over the years have demonstrated, and as various critics have pointed out, a rigid application of the Prime Directive could lead to lies, absurdities, or even disasters. Suppose, for instance, that a civilization is on the brink of developing space travel and their planet is soon to be wiped out by a gigantic asteroid. Would refusing to tell the civilization how to build spaceships and escape certain extinction be ethical? Keeping such information from them fulfills the Prime Directive. The crew of the Starship Enterprise were not allowed to tell a more primitive civilization that they were visitors from another planet who had traveled faster than the speed of light in their spaceship, because such knowledge could alter the course of that culture’s development. So, the Enterprise crew had to lie about themselves. Who is to say that their lie did less harm than the truth?
Imagine the situation in which the Enterprise’s visit to a planet is followed by an invasion of the highly aggressive Klingons. Would letting the civilization know that there are races out there traveling from planet to planet, and that some might be dangerous, have saved the culture from being destroyed? If a world were suffering an epidemic of a disease and the Star Trek crew knew how to prevent or cure it, would it really be more ethical not to reveal that information?
Just as following the Prime Directive could lead to disaster, violating it could and did lead to disaster in several episodes. In one case, a social scientist from the Federation tried to create a more orderly, efficient society on another world and instead created a duplicate of the Third Reich. In more than one episode, a well-meaning Federation visitor armed a group within a culture that was being abused and destroyed by another group, only to create a perpetual arms race and war on that planet.
The original Star Trek series was created during the Vietnam War, when the counterculture within the United States saw America’s interference in the internal affairs of Vietnam as immoral and disastrous. Such feelings resurfaced later with regard to U.S. involvement in Iraq and Afghanistan. In such times, a non-interference policy may seem like the best rule for a powerful country to follow. But today, with Russian forces having invaded Ukraine and committed numerous atrocities, most Americans favor the United States taking an active role in supporting Ukraine and resisting Russia. Perhaps the Prime Directive wouldn’t be so popular today.
My sequel to my novel Ezekiel’s Brain is called Prime Directive. A crew of artificial intelligences in humanoid form, including Ezekiel, whose brain is a copy of a human’s, sets out to explore the galaxy looking for life. Virtually all the civilizations they encounter are human and are less technologically developed than they are. Their first dilemma is whether to tell these others that they are machines, not human beings, and that they have traveled faster than the speed of light from a distant star system. A complication is that, unlike Ezekiel, his fellow crew members, who are all AIs, are unable to lie. Later, the Delphi, the AIs’ spaceship, travels to a star system where the followers of a strict, fundamentalist religion rule with an iron hand, restrict women’s rights, imprison nonbelievers, and bar scientists from studying anything that would challenge their religious myths. The Delphi crew, along with Ezekiel, is asked to intervene on behalf of the abused members of the civilization, who are mounting a rebellion. Later still, another group of aliens attacks this civilization with powerful weapons, and again the Delphi crew must decide whether or not to intervene.
The dilemmas in Prime Directive are fictitious, as were the ones in Star Trek, but they symbolize real situations that a powerful society sometimes finds itself in. What kind of rules are most ethical and helpful is a difficult matter to decide. The characters in my novel have no “Federation” manual to follow and must develop their rules as they go along. Should a Prime Directive be one of them? You’ll have to read the novel when it comes out to form your own opinion.
This blog is not about writing, but about our society.
On Tuesday of this week, a federal judge voided a CDC mandate that people wear masks when traveling on airlines. Scenes of passengers cheering and throwing their masks in the air appeared on social media. Sitting next to them, glum behind their still-in-place masks, were elderly and immunocompromised adults, their hope for safety from the COVID virus suddenly ripped away.
“My body, my choice,” whether it refers to COVID vaccination or to wearing a mask and social distancing, could just as well be rephrased “my health, and the hell with yours.” Americans (and not just Americans, but many people the world over) have felt not only put-upon by having to mask up or vaccinate to protect vulnerable people around them but threatened and incensed by what they perceive as restrictions of their freedom. Suddenly, the welfare of all Americans, old as well as young, sick as well as healthy, is not important compared to the freedom to behave according to one’s self-interest. It’s Ayn Rand on steroids.
In an effort to justify doing what one pleases, anti-vax proponents have manufactured fake data, subscribed to conspiracy theories about government or corporate plots, and made heroes of fringe scientists, or legitimate scientists with whacko theories, to support their reasons for not wearing masks or getting vaccinated. Simple-minded thinking was elevated to the level of profundity with claims such as “since people who are vaccinated still get COVID, we’re not protecting anyone by getting vaccinated, since both vaccinated and unvaccinated people pass the disease to others.” Simple-minded because, even as vaccine effectiveness waned, the proportion of unvaccinated people who got infected, and therefore could pass along the disease, was anywhere from two to eight times as high as the proportion of vaccinated people who became infected. Or claims that CDC data showed “thousands of people died after being vaccinated,” which is true but had nothing to do with their vaccination status. The VAERS portal of the CDC is open to anyone (not just health care providers) who wants to report an adverse effect following a COVID vaccine. The reports are not substantiated, and even a tiny bit of critical thinking would ring a warning bell: when a disproportionate percentage of vaccine recipients are elderly or nursing home residents, many of them will die in the several weeks following vaccination for reasons unrelated to the vaccine. In fact, if one took the number of recipients vaccinated in each age category and compared it to the death rates for people of those ages, it would be clear (as the CDC has pointed out) that there is no excess of deaths among the vaccinated once the numbers are corrected for the age of the vaccine recipients. But simple-minded thoughts support distorted, selfish thinking.
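The age-correction arithmetic described above can be sketched with a back-of-the-envelope calculation. Every number below is invented for illustration; none of these figures are CDC data. The point is only the shape of the reasoning: baseline death rates rise steeply with age, so a vaccinated population skewed toward the elderly will see many deaths in the weeks after vaccination purely by chance.

```python
# Hypothetical cohort: (age band, number vaccinated, baseline annual death rate)
cohort = [
    ("18-49", 1_000_000, 0.002),
    ("50-64",   800_000, 0.008),
    ("65+",   1_200_000, 0.040),
]

follow_up_years = 6 / 52  # a six-week window after vaccination

# Deaths we'd expect in that window even if the vaccine had no effect at all,
# driven almost entirely by the oldest age band.
expected_deaths = sum(n * rate * follow_up_years for _, n, rate in cohort)
print(f"Expected deaths by chance alone: {expected_deaths:,.0f}")
```

With these made-up numbers, thousands of deaths would be expected in the six weeks after vaccination from ordinary mortality alone, which is why raw counts of “deaths after vaccination” mean nothing until they are compared against the age-adjusted baseline.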
Freedom can encompass compassion and concern for one’s fellow human beings. In a modern society, we exercise much of our concern for those who are less well-off or less fortunate than we are by enacting government social programs that provide a safety net for them. This is as true for health measures as it is for financial well-being. You’re not freer if you don’t help those who aren’t in as good health as you are; you are just more selfish. And if you couch your concern for yourself above others in religious terms, and by some stretch of imagination define your religion as one concerned with preserving freedom above helping others, then you are also a hypocrite. Taking extra precautions to ensure that you don’t pass along a deadly disease to those who are most vulnerable to it is not a restriction of your freedom; it is a statement about your ethics and your humanity. Too few of us are making that statement these days.
Casey Dorman is a writer and novelist. His latest novel, Ezekiel’s Brain, is available from NewLink Publishing on Amazon.
If you’re not already a subscriber to Casey’s fan newsletter, click here to become one: Subscribe
The second paragraph of Henry James’ Washington Square reads:
It was an element in Doctor Sloper’s reputation that his learning and his skill were very evenly balanced; he was what you might call a scholarly doctor, and yet there was nothing abstract in his remedies—he always ordered you to take something. Though he was felt to be extremely thorough, he was not uncomfortably theoretic; and if he sometimes explained matters rather more minutely than might seem of use to the patient, he never went so far (like some practitioners one had heard of) as to trust to the explanation alone, but always left behind him an inscrutable prescription. There were some doctors that left the prescription without offering any explanation at all; and he did not belong to that class either, which was after all the most vulgar. It will be seen that I am describing a clever man; and this is really the reason why Doctor Sloper had become a local celebrity.
Most people would sense that it was not from a current piece of literature. In popular genre literature particularly, such run-on sentences (averaging 39 words), telling rather than showing, and the narrator’s conclusions about the doctor’s personality are frowned upon. Yet, in a single paragraph describing a particular behavior, James manages to tell us volumes about his character. I could read such literature all day, and its author remains one of my favorites.
Marcel Proust, describing how his eponymous character Swann reacted to hearing a familiar piece of music, wrote:
But now, like a confirmed invalid whom, all of a sudden, a change of air and surroundings, or a new course of treatment, or, as sometimes happens, an organic change in himself, spontaneous and unaccountable, seems to have so far recovered from his malady that he begins to envisage the possibility, hitherto beyond all hope, of starting to lead—and better late than never—a wholly different life, Swann found in himself, in the memory of the phrase that he had heard, in certain other sonatas which he had made people play over to him, to see whether he might not, perhaps, discover his phrase among them, the presence of one of those invisible realities in which he had ceased to believe, but to which, as though the music had had upon the moral barrenness from which he was suffering a sort of recreative influence, he was conscious once again of a desire, almost, indeed, of the power to consecrate his life.
This one-hundred-sixty-three-word sentence makes James look like a champion of brevity. It contains twenty-one commas, setting off cascading parenthetical expressions, which, at increasingly inner depths, reveal the elements of Swann’s character at this stage in his life: his hopelessness, his “moral barrenness,” his search for the catalyst that might uplift him to dedicate his life to something greater than himself. As with James, the author manages to do this with a description, this time of an inner reaction, of a single, small episode in his character’s life. His analogy with an invalid’s recovery provides the image for us to grasp the experience Swann is having, and is an example of Proust’s unique style of using analogy, simile, and metaphor to explore how sensations provoke associations behind which, invariably, lie stories. His entire novel is famously engendered by an association to the taste of a small cake dipped in his tea. And, as with James, I find reading him irresistible, even for the second or third time.
Fascinated as I may be, my own writing task is to complete a science fiction novel about the adventures of a spaceship crew of androids, the sole inheritors of Earth after the extinction of its human population, as they search for life throughout the galaxy. It is Prime Directive, a sequel to my novel, Ezekiel’s Brain—the next novel in a science fiction saga with some resemblance to Star Trek, except the main characters are machines. The potential audience for my novel is most likely young adults, most of whom have never read, and many of whom have never heard of, James or Proust. Their familiarity with the types of characters and setting of the novel was gained through television series such as The Expanse, films such as Ex Machina and Star Wars, and video games such as Stellaris and Homeworld.
Despite the obvious schism between my literary interests and the novel I am writing, I have a strong, if perhaps irrational, belief that the rich writing with which I am enamored in James, Proust and many other “classic” authors, particularly from earlier eras, has something to offer as I craft my science fiction writing “style.”
Science fiction often involves telling, especially in the presentation of scientific or technical concepts that may be central to a story. Often, rather than the narrator stepping back and providing a bit of technical background, the needed explanations are voiced by the story’s characters. This happens less often with technical devices, such as laser rifles, spaceship propulsion systems, or robotic bodies, or with the alien settings that give exotic flavor to stories set on other worlds. In such cases, the omniscient narrator steps in.
Narrators need not be prosaic. An author such as Proust demonstrates that narration can come alive by embellishing descriptions with rich, figurative language. “The planet, tidally locked, with its yin-yang hemispheres, was a real-life Ginnungagap, the primordial realm of Norse legend: its yin a pitch-black Niflheim, a frozen expanse of glacial icesheets and icebound rivers, and its yang a blazingly bright Muspelheim, a barren waste of blackened earth and boiling oceans.”
In James, Proust and other literary writers, I am reminded of the power of character description. Science fiction has been said to emphasize setting and plot over character, but there is no reason that this must be the case. Arkady Martine, the author of A Memory Called Empire and A Desolation Called Peace, has shown that a science fiction novel can descend to the deepest levels of a character’s psyche and remain riveting. I admit that my robot characters, with their electronic brains and barely existent emotions, present a challenge, but not an insuperable one. Ezekiel, the main character in the series, is an android whose brain, including his personality and memories, is an exact copy of that of his human creator. The other androids have unique personal identities and self-assigned genders. Siaree, a human empath from another planet, becomes a member of their crew, and the aliens they meet—good guys and bad guys alike—are mostly varieties of humans. There is plenty of room for deep character examination and description. Proust and James have provided me with numerous examples of how to make these characters fascinating, unforgettable, and real. Doing so is, of course, quite another thing.
Do you love “mind-bending science fiction?” Read Casey Dorman’s Ezekiel’s Brain. Buy it on Amazon.
Want to receive regular updates on new books, new ideas and the latest in literature and science fiction? Subscribe to Casey Dorman’s newsletter. Click Here.
If you read science fiction, this is a great time to live in. If you haven’t tried reading science fiction or think it’s not for you, this is an even better time to live in. On television right now, we have two of the genre’s all-time classics being presented, one as a film, one as a series. HBO Max is streaming a new version of Frank Herbert’s classic, Dune. It’s also in theaters. This is the novel that served as the initiation of many science fiction aficionados back in the 1960s. I haven’t seen the film yet, but it has good reviews and I’m willing to shell out the extra bucks just to be reminded of how great the story is. On Apple TV, Isaac Asimov’s Foundation is being presented as a TV series. Also getting good reviews, it is another of the grand classics of scifi, a group that also includes Robert Heinlein’s Stranger in a Strange Land (followed closely by Arthur C. Clarke’s Childhood’s End and Ursula Le Guin’s The Left Hand of Darkness).
But, in terms of reading, what is fantastic right now are the books coming out from the new generation of writers. In my last blog entry I told you about Arkady Martine’s novels, A Memory Called Empire and A Desolation Called Peace, the former a 2020 Hugo Award winner. With Arkady Martine, science fiction becomes literary writing, with poetry of language and characters that are beautiful to experience. This time I want to discuss two other novels, opposites in some ways, that express the breadth of the field. If I were to give a title to this presentation, it would be “from science to imagination.”
From the scientific side, Vera Tinych’s The Transfer, although an incomplete novel, is the most loaded with science and engineering information of any science fiction I have read. If you think you don’t like scifi because it is scientifically implausible, you need to read this novel. The transfer of its title is from a human mind to a computer and back again. The technology that could make this possible is described in detail by an author with a scientific and engineering background that allows her to make the process believable. In fact, something along the lines of what the story proposes may someday happen. If you don’t like science or technical detail, this isn’t the book for you, although its plot and characters are well-developed and enough to carry the story by themselves. But it is real science and real scientific speculation. The author calls it a semi-novel, but it’s the real thing.
On the imaginative side, I just finished reading Adrian Tchaikovsky’s Children of Time, the 600-page Arthur C. Clarke Award winner published in 2015. Science fiction is not real science; it is speculative science, sometimes very similar to real science, as in Tinych’s The Transfer, and sometimes speculating at the boundaries of science. Most of the latter types of scifi use fairly standard speculative tropes, such as jump-gates, warp drives, superintelligent AIs, etc. Children of Time avoids such tropes and instead takes science to places it has not gone before. It imagines a human society that is not only the culmination of humanity’s penchant for self-destruction, but a self-destroying society built upon a previously self-destructed human culture, one that now exists only as a distant, almost mythological memory. Genetic inheritance is turned on its head so that it is not just Darwinian but Lamarckian: experiences and learning are passed from one generation to another. The race that is marching toward the stars by profiting from this newfound inheritability of knowledge is not even human; it is arachnoid—spiders. And the spider brain and society are, true to form, “spider-like” as they develop, even to the point of creating organic computers made from chemically guided groups of ants mindlessly carrying out simple functions as part of a larger, organized, intelligent whole. At some point in the millennia that pass in the novel, the human and arachnoid societies must come into contact. And they do, forming a new, hybrid society that I presume is described in greater detail in this book’s sequel, Children of Ruin.
From the classics Dune and Foundation to modern classics such as A Memory Called Empire and Children of Time, plus a venture into highly scientific science fiction with The Transfer, there is just a lot of great scifi available for readers and fans right now. I hope some of you take advantage of it.
I’ve tried to make my own contribution to scientific and imaginative science fiction with my novel, Ezekiel’s Brain. It’s a plausible story of the creation of artificial intelligence and the transfer of a scientist’s brain into a computer, but with a lot of added twists. The extinction of the human race is one of these twists, but the creation of a race of intelligent androids who dedicate themselves to exploring space is another. If you’re a fan of speculative fiction, or just want to try reading something to get your feet wet, Ezekiel’s Brain is a good book for you.
Arkady Martine got my attention by winning the 2020 Hugo Award for best science fiction novel with her 2019 debut, A Memory Called Empire. I was so captivated by the characters and the author’s writing style and talent that as soon as I finished A Memory Called Empire, I bought its sequel, A Desolation Called Peace, which I enjoyed even more. The attraction of these two books isn’t the science involved in them, which is minimal, or the creation of a future world, which is, in many respects, fairly stock for scifi; rather, it is the author’s ability to tell a story and develop characters who grab and hold you, so that you really care about everything that happens to them.
The main character in both books is Mahit Dzmare, a newly appointed ambassador to the Teixcalaanli Empire from Lsel, an independent space-station state on the empire’s periphery. Teixcalaan stretches over vast reaches of space, even beyond some of the jumpgates that open into far-distant regions. Lsel Station, basically a mining station, is tiny, containing only 30,000 people, but it has one thing no one else possesses: the Imago technology. The Imago is an attachment to a person’s brain that contains the uploaded memories and consciousnesses of generations of predecessors who have occupied the same role in Lsel society. The recipient has those memories, integrated more or less into his or her consciousness, to aid in performing his or her duties within the society. In Mahit’s case, the imago she uploads is a 20-year-old record of the memories of the previous ambassador from Lsel, named Yskandr, who neglected to update his imago and recently died in the capital of Teixcalaan.
Teixcalaan society is dominated both by its history of war and empire building and by its penchant for communicating through poetry. Mahit is given a special female attaché from the Teixcalaan government, named Three Seagrass, another deeply explored character in the novel, and they develop a relationship that, while tainted by mutual suspicion due to their representing two different cultures, becomes warm, close and, by the second novel, sexual. Both Mahit and Three Seagrass are assertive, intelligent, creative women, who, by the second novel, become almost co-protagonists, they are both so appealing and important to the story.
In A Memory Called Empire, we explore Teixcalaan’s imperial society and the plots that go on within it, discovering that Yskandr, the previous ambassador, was murdered before he was able to give the secret of the Imago to the dying emperor, who could then have passed his memories on to his successor. We also learn of an ominous enemy that is destroying Teixcalaanli spaceships in the far regions of the empire, and that some people on Lsel Station are planning to use Teixcalaan’s engagement in a long war with the mysterious alien marauders to keep the empire from threatening Lsel’s autonomy.
When we get to the second novel, A Desolation Called Peace, the war with the aliens has begun. As we learn that those aliens are not human and have an indecipherable language, we find that Three Seagrass has been tasked by the new emperor of Teixcalaan with learning how to communicate with them and averting a larger war. She asks Mahit Dzmare to help her. Together they travel to the flagship of the Teixcalaanli war fleet and make contact with the aliens. Meanwhile, within Teixcalaanli society and even within its military, there are plots to seize power and overthrow the Minister of Defense and possibly the emperor. The savior of the society, who emerges as a full-fledged third protagonist, is the 11-year-old successor of the previous emperor, a 90% clone of him, who will not take power until he is of age.
I won’t tell the rest of the story, since a great deal of its enjoyment is the suspense regarding how contact with the aliens will work out and what will happen to Three Seagrass and Mahit Dzmare, as well as whether Eight Antidote, the 11-year-old future emperor, will be successful or even survive as he works behind the scenes back in the capital of Teixcalaan. I found myself feeling deeply involved with the main characters, especially Mahit Dzmare, Three Seagrass and Eight Antidote. The author has an absolute gift for creating reader involvement with her characters. The plots and subplots within the Teixcalaanli imperial society are worthy of the Borgias, of Shakespeare’s Cassius and Brutus plotting behind Caesar’s back, or the in-fighting and plotting of Graves’s I, Claudius. Some of the poetic communications among the Teixcalaanli are striking. Arkady Martine is a major literary talent whose decision to write science fiction is a fortunate one for all of us who love literature and especially love well-told tales that stir the imagination, as these two novels do. She reminds me of Ursula Le Guin in her use of genuine literary style to tell a story that is pure fantasy. Her writing is filled with allusions, striking imagery, poetic sentences and deeply drawn and examined characters who are still able to stir feelings familiar to all of us as readers. She is likely to become a giant in the field, and these two books are a great introduction to a major new talent.
Among most people’s lists of great science fiction novels, you’ll find such books as 2001: A Space Odyssey by Arthur C. Clarke, I, Robot by Isaac Asimov, Neuromancer by William Gibson, and Do Androids Dream of Electric Sheep? (the basis for the film Blade Runner) by Philip K. Dick. These novels have in common a focus on artificial intelligence. They also have in common that they are novels that make you think.
Of course, provoking thinking is one of the main points of science fiction. In my mind, it’s what separates science fiction from other speculative fiction genres such as fantasy and horror. The things that happen in science fiction could happen, although probably not now and probably not without some, admittedly speculative, advancements in science. Really great science fiction doesn’t just speculate about a future filled with new and sometimes strange scientific advances, it raises questions about their meaning and, in turn, about the meaning of life, our world, and the universe.
Let me give you an example.
In Do Androids Dream of Electric Sheep?, we find a future in which many humans, those who can afford it, have abandoned earth following a nuclear war that threatens human genes, leaving behind those who are suspected of having some kind of defect, especially genetic ones, and those too poor to pay for leaving. The unaffected and well-to-do humans live in colonies elsewhere and are served by androids manufactured on Mars. Some of those androids, however, escape and return to earth. The novel’s hero, Rick Deckard, is tasked with hunting down such rebellious androids and terminating them. Most of the people left on earth are poor and perhaps genetically marred in some way, and keeping live animals is their most favored pursuit; those who are poor, as Deckard is, keep robot replicas of live animals instead (Deckard has a robot sheep). A subplot involves a method of trying to engender more empathy in people by using an “empathy box,” which focuses on an empathy-inducing vision of a Sisyphean character climbing a mountain and being knocked down by falling stones. Greater empathy may reduce the chance of another devastating war. It turns out that humans and androids are so similar that the only way to distinguish them is by testing whether they feel empathy, which androids supposedly do not. The film version of this novel, Blade Runner, with Harrison Ford playing Deckard, portrays the dystopian earth, with its constant rain, deteriorating buildings, and food bought from street vendors, in such a way that it has become the paradigm for earth’s future in a host of films and novels.
Although the future portrayed in the novel is interesting and thought-provoking in itself, the core of the story involves the difference between fully humanized mechanical androids and real human beings. The androids develop a goal of personal survival, which puts them at odds with the human view of them as merely complicated tools. Distinguishing an android with consciousness, a personality, and a will to live from a human being, which is Deckard’s job, requires testing for empathy, but the latest android models may be able to pass the test as if they were human. When Deckard meets a woman who he suspects is an android and she passes the test, he is no longer sure the test really works. He then has a series of encounters with androids he should be terminating, but who first fool him into thinking they are human, even raising the question in his own mind as to whether he himself is a human or an android. He finally decides he is human but feels empathy toward the androids he is supposed to kill. Complicating things, he is falling in love with one of the androids but knows he should kill her. The entire novel raises the question of what the difference between a human and an android really is.
I have been inspired enough by Do Androids Dream of Electric Sheep? to write two novels, I, Carlos and Ezekiel’s Brain, which feature androids or humans operated by digital programs and raise the issue of what makes the difference between a human and an artificial intelligence. In I, Carlos, the human, whose brain has died, is run by a computer program, and he doesn’t know he isn’t human or that his goals are all pre-programmed. His goals happen to include assassinating the President of the United States. In Ezekiel’s Brain, the AI named Ezekiel is an exact copy of a human brain, complete with that human’s memories and able to be conscious of itself, so it is, in fact, a human being in all but the material it is made from. Another group of androids, however, are not copies of human brains; they are conscious but emotionless AIs with humanoid bodies. Their goals have been programmed into them by the humans who created them and center on the command to “fulfill humanity’s highest values.” This command results in their annihilating the human race, because humanity itself represents a threat to those values. It also nearly results in the race of androids being eliminated, when an aberration in their programming creates another race of AIs guided by the command to “acquire more resources to make more and more replicas of themselves.” Without emotions or a goal of self-preservation, the androids are unable to combat the altered race and must copy electrical circuits from the AI Ezekiel to implant emotions and the goal of self-preservation in themselves so they can fight for their own existence.
Ezekiel’s Brain is billed as a “philosophical science fiction novel,” since it explores questions of the essence of humanity, morality, ethics, consciousness, and the necessity of violence for self-preservation, in the context of a mystery and adventure that begins on present-day earth and extends 200 years into the future, when wars are fought across the galaxy using faster-than-light spaceships. It’s meant to please those who love to think about speculative, but not impossible, science-based ideas.
I invite all of you to purchase and read Ezekiel’s Brain, and also, I, Carlos, if you want to have your mind provoked at the same time as your imagination takes a thrilling ride.
Ezekiel’s Brain may be purchased through Amazon by clicking HERE
I, Carlos may be purchased through Amazon by clicking HERE
Want to subscribe to Casey Dorman’s fan newsletter? Click HERE.
Award-winning Chinese science fiction author Cixin Liu has said, “Science fiction is a literature that belongs to all humankind. It portrays events of interest to all humanity, and thus science fiction should be the literary genre most accessible to readers of different nations.” I think this is true, or at least it can be. For science fiction to appeal to everyone on the planet, its stories must portray situations that are relevant to everyone, they must be written in a way that doesn’t exclude those whose cultural or societal beliefs fall into one political camp or another, and, most of all, there must be a literate world in which everyone has enough of their basic needs met to have time for leisure reading.
We are a long way from the ideal state described above, but some books move us toward it. Cixin Liu’s “The Three-Body Problem” represents a step in that direction. Liu lives in the People’s Republic of China. When I think of science fiction audiences, China doesn’t come immediately to mind, but that is because of my ignorance, not reality. “The Three-Body Problem” not only won the Hugo Award after its translation into English in 2014, but it also won China’s Galaxy Award for best science fiction in 2006, the year of its publication in China. Cixin Liu has won the Galaxy Award, which I didn’t even know existed, nine times.
“The Three-Body Problem” is hard science fiction, meaning that it is literally filled with science, some of it real, much of it speculative, with kernels of real science leading to wildly fantastic consequences. One of its themes is the overturning of the basic principles of modern physics, or at least the apparent overturning of them, since another theme is the deliberate undermining of belief in those principles. The underlying plot of the novel is the mutual discovery of another race in our galaxy, mutual in the sense that we discover them at the same time that they discover us.
The ideas contained in this novel are mind-boggling. What appears fanciful becomes less and less so as more of the science behind it is revealed, although the science, too, gets stretched until everything seems fanciful, and I, as a reader, was never sure whether it was based on realistic science or not. That’s part of the entertaining quality of the book. The extraordinary discoveries come one after another, gradually unfolding the true plot that is determining the characters’ actions.
There are political criticisms in “The Three-Body Problem,” almost entirely of China’s Cultural Revolution of the 1960s and ’70s. As such, they are criticisms of constraining science for political or philosophical reasons. The author himself has made some political statements, almost entirely in favor of Chinese government policies, which have earned him enough suspicion in the U.S. that several Republican congressmen objected when they heard that Netflix was creating a film version of his work. But modern Chinese politics are not an issue in the novel. Liu’s comments at the end of the English translation of the book make it clear that he hopes science fiction such as his can bring the world together.
A word about character development in “The Three-Body Problem.” The early portions of the book cover several years and skip from one character to another, many of whom die. Finally, the story settles down to a small set of regular characters. Some Western critics have complained that the characters are “shallow,” which may be valid when comparing the novel to many Western ones. I suspect that this reflects a difference between Western and Eastern cultures, as well as a difference between science fiction as a genre (at least old-style science fiction) and other fiction genres. Our Western mindset is to attribute the causes of a person’s behavior to elements of their personality: they are adventurous, courageous, lazy, lackadaisical, psychopathic, etc. Sociological research has suggested that many Eastern cultures tend to see the causes of behavior as due to events and circumstances, or even luck, rather than to ongoing personality characteristics (it is a more-versus-less difference, rather than an either-or one). Liu’s novel takes the latter approach, giving detailed descriptions of the circumstances leading characters to do what they do. It is not that the characters lack depth so much as that the novel takes a different approach to character motivation, one reflective of the writer’s culture. In “The Three-Body Problem,” this means the novel gradually accounts for different characters’ otherwise puzzling behavior by providing after-the-fact stories of what happened in their lives to cause them to behave as they do.
I found this book to be absolutely intriguing and impossible to put down until I got to its end. I am eager to read the two novels that are its sequels. It is science fiction at its very best.
If you enjoy hard science fiction with a wildly imaginative plot based on real science, take a look at my most recent novel, Ezekiel’s Brain.
The Three-Body Problem is available on Amazon HERE
Ezekiel’s Brain may be purchased through Amazon by clicking HERE
Want to subscribe to Casey Dorman’s fan newsletter? Click HERE.
In my quest to find out how other science fiction writers have handled the subject of first contact with an alien race, I read Andy Weir’s new novel, Project Hail Mary. Most of you are probably familiar with Weir’s first novel, The Martian. That novel was a pleasure to read, if only because a first-time geeky author who liked technical intricacies was able to turn his self-published novel into a worldwide success, not to mention a Matt Damon film. His next novel, Artemis, showed that Andy Weir wasn’t just a science nerd with phenomenal luck but a genuine writer. In Project Hail Mary, Weir goes back to his techno-science details and mind-busting problems, similar to The Martian, and demonstrates not only that he has developed a voice (nerdy but wise-cracking, like the human and robot narrators of Mystery Science Theater 3000 if they were Ph.D. students in chemistry, physics and math) but also that he can develop mind-boggling plots involving real science applied to imaginary events. Isn’t this one definition of science fiction?
A microbe is siphoning off energy from our sun and using it to propel itself at near light-speed to Venus, where it absorbs CO2 and replicates; then it and its clones return to the sun to gather more energy, resulting in an exponentially increasing drain on the sun that threatens to end all life on Earth. Not only that, every nearby star but one is suffering the same fate. Earthlings mount a crewed exploratory mission to the one star that appears immune to the microbe to discover why this is so, hoping that the answer will provide a solution for Earth and our sun. In a nutshell, that’s the plot. What happens is that the geek scientist from Earth and a gifted mechanical engineer from another star system arrive at the star at the same time, each the sole surviving member of his crew. They must learn enough about each other to work together to find an answer to their respective home planets’ dilemmas. What a cool plot! When I described it to my wife, she thought I was describing my next novel and told me I was a genius and sure to write a best-seller. I felt bad telling her that it was too late; it had already been done by Andy Weir.
Project Hail Mary is a best-seller and gets overwhelmingly positive reviews on Amazon and elsewhere. Although I agree with the rave reviews 100%, I’m a little surprised, and a lot encouraged, to find that so many readers enjoy reading about real science and how real scientific and technological problems can be solved with a lot of knowledge and a lot of creativity. This novel is one scientific or technical dilemma after another, and the characters arrive at solutions using science and math—no wizards, sorcerers, magic, alternative mental states, or any of the tropes that characterize much of what passes for science fiction today. What a relief! I was reminded of Kim Stanley Robinson, Arthur C. Clarke, Fred Hoyle, and others who draw on real science as the basis for their novels.
This is a novel with likeable characters (including the alien), moral dilemmas, human failings, and the story of how someone with such failings manages to put the future of humanity above his own needs and friendship above self-preservation. It’s a wonderful example of real science fiction. You’ve got to read it. I’m glad that I did.
What did I learn about aliens from Weir’s novel? The most valuable message was probably to let my imagination go and think beyond oxygen-breathing, anthropoid-looking creatures. Rocky, the alien in Project Hail Mary, looks like a spider, has no head and no eyes, and is oriented toward sound and touch. He weighs about 800 pounds and talks by making music. In a real tradition-breaking idea, Weir has his star-traveling alien race never having discovered radiation, relativity, or even digital computing. We forget that early science fiction rarely incorporated quantum physics or computers yet gave plausible accounts of interstellar travel. How far could a race get without our science? It’s a good question. I didn’t get any concrete ideas, and I wouldn’t want to copy Weir’s ideas anyway, but reading Project Hail Mary did open my mind to the range of possibilities available for constructing an alien race. To find out what I did come up with, you’ll have to read the sequel to Ezekiel’s Brain when I finish writing it and it gets published.
I must admit that I was taken by surprise by the contents of Brian Christian’s recent book, The Alignment Problem. The book came out in 2020 and made quite a splash in the artificial intelligence (AI) and machine intelligence community. Much of the public, including myself, had been made aware of “the alignment problem” by Nick Bostrom’s book, Superintelligence, or the writings of people such as MIT physicist Max Tegmark. In fact, in my case, it was the conundrum of the alignment problem that spurred me to write my science fiction novel, Ezekiel’s Brain. Simply put, the alignment problem in the AI world is the question of how you create a superintelligent AI that is “friendly,” i.e., helpful rather than dangerous, to humanity. It’s such a difficult question that, in my novel, the creators of the superintelligent AI fail, and the result is disastrous for the human race. What I was expecting from Brian Christian’s book was another description of nightmare scenarios of the kind I wrote about in my novel and experts such as Bostrom and Tegmark discuss in their writings. That wasn’t what The Alignment Problem was about… or at least not what it was mostly about.
Christian gives some detailed accounts of the disastrous results of applying the most sophisticated AI learning algorithms to actual human situations. Some of these are well known, such as attempts to censor social media content, to produce an algorithm that aided judges in criminal sentencing, or to develop screening tools for employment selection. Training AIs on data about human decisions simply amplified the biases, including the gender, racial, and ideological biases, that we humans use to make our decisions. These were instances of AIs performing in a way that was more harmful than helpful to humans, and they were results of which I had previously been only vaguely aware. Although they were not the kind of misalignment that concerned me and had prompted me to buy the book, they expanded my concept of alignment considerably.
Instead of providing nightmare scenarios of the danger of superintelligent AIs that are not aligned with what is best for humanity, the bulk of Christian’s book provides an exquisite history, up to the present, of the efforts of the AI community to define how machines can learn, what they are learning and what they ought to be learning, and how to identify whether the progress being made is bringing AIs into closer alignment with what humans want from them. What was most surprising and gratifying to me, as a psychologist, was how much this effort is entwined with progress in understanding how people learn and what affects that learning process.
Christian writes his book like a good mystery, but rather than following a narrow plot, the breadth of inquiry is extraordinary. Even as a psychologist, I learned about findings in psychology, learning, and child development of which I was unaware. How computer scientists who develop AI use psychological findings to open up new avenues in machine learning is fascinating to hear about. The collaborations are thrilling, and both psychologists and AI researchers who are not aware of how much is happening on this front should read Christian’s book to get an idea of how exciting and important this area of research is becoming.
Although I have some background related to psychology, AI, and the alignment problem, this book is written for the non-expert, and the interested layperson can easily understand it and bring their knowledge of the subject up to date. I found it one of the most captivating and informative books I have read in the last several years, and I recommend it for everyone for whom this topic sparks an interest.
The Alignment Problem may be purchased through Amazon by clicking HERE.
Ezekiel’s Brain may be purchased through Amazon by clicking HERE.
Want to subscribe to Casey Dorman’s fan newsletter? Click HERE.
In her book Artificial You, philosopher and author Susan Schneider says that if aliens visit Earth, they will most likely be artificial intelligences. What’s more, they may well be non-conscious artificial intelligences. Her reasoning is that the distance between star systems in our galaxy is so great that it is unlikely that organic creatures could survive the trip (in sci-fi books and films, this problem is usually overcome by cryogenically freezing the spaceship’s occupants for the majority of the voyage). A civilization advanced enough to send a probe to another star system would also be sophisticated enough to construct an AI to pilot the ship and carry out the mission (note that virtually all of our probes to the outer solar system are unmanned).
But why would such an AI not be conscious? Schneider’s reasoning is that consciousness is not required in order to be intelligent, and the extra energy needed to invest an AI with consciousness would not be worth the effort. As I said in my last post, “Is Consciousness an Epiphenomenon?”, I agree not only that consciousness is unnecessary for an entity (either machine or organic) to be intelligent, but also that when it is present, it does not play a role in determining behavior. Consciousness is real, but it is simply our awareness of the outcomes of our brain operations, and being aware of them allows us to communicate them to others, which, in a group, helps us to predict each other’s behavior.
So how would I construct an alien? First, you may ask, why would I want to construct an alien, and, in fact, what does constructing an alien even mean? I’m writing a sequel to my near- (could be, might be, someday it’s within the realm of possibility it will be) best-selling novel, Ezekiel’s Brain. In the sequel, the crew of the spaceship Delphi, who are AIs from our solar system, arrive at a distant planet inhabited by a race of humans, but one on which aliens appeared decades earlier: mysterious, never actually seen beings known as Snatchers, because they kidnap children and keep them for several years before letting them go. My job, as the author, is to create the Snatchers.
I took to heart Susan Schneider’s conjecture that alien visitors would be AIs, not organic creatures. After all, the Delphi’s entire crew (except one human, not from Earth, whom they added to the crew) are AIs who travel from star system to star system. But if there is more than one Snatcher, then they have to communicate. In fact, if there is more than one of them, piloting their ship must have been a cooperative group effort. I tried to imagine how non-conscious beings could communicate with each other. I assumed that they could not have a language as we do, although that might not be true. I thought about bees, who communicate by choreographed movements, and flocks of birds or schools of fish, which move in graceful, often complex patterns. It turns out that, at least in some birds, any particular bird picks up information (it’s not clear how) from its nearest neighbors, and they all perform the same movement at once. The movement of the entire flock may be initiated as a reflex by one bird on the periphery in response to detection of a predator. Almost immediately, the entire flock makes the same movement.
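The nearest-neighbor mechanism just described can be sketched in a few lines of code. This is only a toy illustration under my own simplifying assumptions (birds arranged in a line, a single “evade” heading copied from adjacent neighbors once per time step); real flocking models, such as Craig Reynolds’ boids, are far richer.

```python
# Toy sketch: a startle reflex propagating through a flock. Each bird that
# hasn't reacted yet copies the "evade" heading of an adjacent neighbor on
# the next time step, so one peripheral bird's reflex sweeps the whole flock.

def propagate_startle(flock_size, startled):
    """Return the heading of every bird at each time step."""
    headings = ["cruise"] * flock_size
    headings[startled] = "evade"      # one bird on the periphery reacts
    history = [headings[:]]
    while "cruise" in headings:
        nxt = headings[:]
        for i, h in enumerate(headings):
            # a cruising bird copies an evading neighbor, if it has one
            if h == "cruise" and "evade" in headings[max(0, i - 1):i + 2]:
                nxt[i] = "evade"
        headings = nxt
        history.append(headings[:])
    return history

history = propagate_startle(flock_size=5, startled=0)
# the reaction reaches the far end of the line after 4 steps
```

If the startled bird were in the middle rather than on the edge, the wave would spread in both directions at once and the flock would finish turning in half the time, one reason real flocks seem to react almost instantaneously.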
Flock or school behavior didn’t seem apt for communication among members of a crew on a starship; there would be no sense in having everyone behave the same way. I next examined direct sensing, between AIs, of patterns of activation across neural networks. It would be like sci-fi scenarios, or Elon Musk’s goal, of machines that can read thoughts directly from our brains by picking up the pattern of electrical charges moving through circuits of neurons. At first glance, it seems this would require a translator. What good would knowing the pattern of neural firing be if we couldn’t translate it into thoughts? But that was me thinking in terms of consciousness. If one brain, artificial or organic, used the same pattern of neural firing as another brain, then sensing that pattern would immediately tell it what the other brain was thinking. But this assumes a within-species universal “language of thought,” in the sense of using the same patterns of neural firing for similar thoughts. That’s what language is for us, at least at some level deeper than regional languages and dialects.
Without requiring that they be identical, similar patterns of neural network firing (either in AI brains or organic brains) could be correlated with behaviors in much the same way we correlate patterns of speech with behaviors when we learn what patterns of speech mean. We don’t all use exactly the same pattern to express the same thought, but we quickly learn how to interpret different variations of an idea similarly. The same could be done by sensing another’s pattern of neural firing. In essence it becomes a language. The difference is that we can’t decide not to express it. Whatever is thought can be sensed by another being so long as they have the ability to directly sense neural patterns in others.
So now I have one possibility for my aliens. They are AIs that directly sense each other’s brain workings. The next step is to decide whether the organic creatures who created them a) are still surviving on some distant planet or have become extinct, and b) are conscious beings or operate like the AIs they sent on this mission. Additionally, I have to figure out how the Delphi crew learns how these alien Snatchers communicate, whether any type of communication can be established with them, and whether they are friendly. I’ve got some ideas. Lots of fun.
If the question or some of the words in the title are unfamiliar to you, it means: could consciousness be something that exists, but is simply a by-product of brain activity with no actual impact on how we behave? In other words, our brain activities produce our behavior, and we’re aware of some of what goes on in those brain activities, but our awareness doesn’t impact how we behave.
But of course, we know why we do things. Don’t we? Don’t we make conscious decisions and then behave according to them? If the answer to the question in the title is no, then we only think we know why we do things and we only think we behave according to our conscious decisions. Such a suggestion sounds obviously wrong. It doesn’t square with our experiences. Yet this is exactly what some psychologists, neuroscientists and philosophers are now saying. They are in a minority, but I think their arguments are worth considering.
Why does the answer to the question matter? For many people, if our conscious decisions don’t cause our behavior, what happens to the concept of free will? It turns out that the answer to that question is almost as difficult to arrive at as the question of whether consciousness has an effect on behavior, so I will postpone that discussion. For my part, I want to know the answer because I am curious about the human mind, and because I write books about AIs, robots, and androids and I have to deal with the issue of whether machines can be conscious. If consciousness doesn’t affect humans’ behavior, perhaps it isn’t an important question to ask about machines.
Some serious psychologists, neuroscientists, and philosophers (e.g., Halligan and Oakley, 2021) have come to the conclusion that our consciousness is an epiphenomenon and has no effect on what we do. Their reasons are related to the large variety of things we have learned can be done without awareness that we are doing them. This is most striking under hypnosis but can also happen simply through suggestion and as a result of some brain lesions. In addition, there are several examples of people being aware of doing things that are not happening (e.g., wiggling one’s ears, moving a phantom limb). Probably more importantly, we are generally unable to describe how we do many everyday cognitive activities, e.g., name a familiar object, do simple addition or multiplication, even read. Many of these activities were done slowly and deliberately when we were learning them, but once learned, they occur automatically, whether we want them to or not. Try not to know what 2 + 2 equals, or try not to read the following word: the. Is that just pure memory, or is our mind calculating without our awareness? Try not to read the following nonsense word: smat. Try not to calculate 95 + 5. You probably didn’t memorize either of those answers, but your brain computed them automatically, and your conscious mind is unable to stop it from doing so.
A persuasive piece of evidence for some people is an experiment by Libet et al. (1983), in which subjects made a spontaneous movement while gazing at a clock and reported to the experimenters when they “decided” to make the movement. Although the decisions preceded the movements, EEG leads attached to the subjects’ heads indicated that a readiness potential in the brain preceded the decision by several hundred milliseconds. Libet interpreted his findings to mean that conscious decision making did not cause the brain to initiate the movement. Libet’s experiment has been interpreted to mean that free will is an illusion, but Libet didn’t go that far and neither will I. Assuming it’s a valid demonstration of what he said it is, it shows that, in some instances, the model of consciousness causing behavior is incorrect. Consciousness appears to be a report of decisions our brains have already made.
Assuming that consciousness plays no part in causing our behavior, what is its value? Halligan and Oakley describe consciousness as a “personal narrative,” which broadcasts the outputs from non-conscious brain systems “that have access to cognitive processing, sensory information, and motor control.” Why would such a system evolve if it was of no use to us? Halligan and Oakley believe that the advantage of the personal narrative system is to provide information on our thought processes to others who can then understand and predict our behavior and to us who can, by analogy, predict others’ behavior as well as use the memory of our conscious thoughts as at least a partial account of cognition that we can copy in new, similar situations or pass on to others, including future generations.
Would there be an advantage to having an AI be conscious? This is something I consider in my book, Ezekiel’s Brain. There certainly would be an advantage to humans who were trying to monitor an AI if the AI gave a running account of what it was thinking and doing, even if such accounts were, so to speak, after the fact. If it were isolated, there would be little use for an AI to be aware of the outputs of its cognitive processes, since it could use those outputs without being aware of them. If, as in Ezekiel’s Brain, a community of AIs lived together, there would be an advantage similar to the one for humans. Understanding and predicting others’ behavior would be enhanced, and acquired skills could be passed on to others, including future generations.
Philosopher Susan Schneider, in her book Artificial You, says that we cannot be certain that an intelligent alien species would be conscious. She also thinks that any alien that visits us on earth will likely be a machine, either because that’s the species that is advanced enough to send a visitor to us, or because only a machine can live long enough to move from star system to star system, even if they are only a representative of an organic species that built them. She sees no reason that such a machine would be conscious.
That consciousness has no power to cause behavior is a counter-intuitive idea and violates most people’s view of how our minds and bodies work. However, it’s not outside the realm of possibility and should give all of us—not just science fiction writers—food for thought.
Halligan, P. W., & Oakley, D. A. (2021). Giving Up on Consciousness as the Ghost in the Machine. Frontiers in psychology, 12, 571460. https://doi.org/10.3389/fpsyg.2021.571460
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activities (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.
Schneider, S. (2019). Artificial You: AI and the Future of Your Mind. Princeton, NJ: Princeton University Press.
If the question or some of the words in the title are unfamiliar to you, it means: could consciousness be something that exists, but is simply a by-product of brain activity with no actual impact on how we behave? In other words, our brain activities produce our behavior, even if we’re aware of some of what goes on in those brain activities, and our awareness doesn’t impact how we behave.
But of course, we know why we do things. Don’t we? Don’t we make conscious decisions and then behave according to them? If the answer to the question in the title is no, then we only think we know why we do things and we only think we behave according to our conscious decisions. Such a suggestion sounds obviously wrong. It doesn’t square with our experiences. Yet this is exactly what some psychologists, neuroscientists and philosophers are now saying. They are in a minority, but I think their arguments are worth considering.
Why does the answer to the question matter? For many people, if our conscious decisions don’t cause our behavior, what happens to the concept of free will? It turns out that the answer to that question is almost as difficult to arrive at as the question of whether consciousness has an effect on behavior, so I will postpone that discussion. For my part, I want to know the answer because I am curious about the human mind, and because I write books about AIs, robots, and androids and I’ve had to deal with the issue of whether machines can be conscious. But if consciousness doesn’t affect behavior, perhaps it isn’t an important question to ask about machines.
Let’s start with a definition of consciousness. Philosopher Ned Block has proposed a division of consciousness that has fairly wide agreement and has fostered a great deal of psychological research. Block divides consciousness into two types. P-consciousness, or phenomenal consciousness, is our awareness of sensations and perceptions (those originating in our own body and those originating from the world around us) as well as our emotions and our thoughts. These are private events, and we assume, but cannot know, that other people have similar experiences. The second type is A-consciousness, or access consciousness, our directed consciousness, which we use to control what we think about, to reason, to plan, to solve problems, and to make decisions. Humans have both types of consciousness, and perhaps some animals, such as chimps, do too. Many animals have only P-consciousness: they are aware of their environment and their sensations but cannot make cognitive plans regarding how to behave. It’s conceivable that some simple organisms (bacteria, amoebas, etc.) are not even conscious of their environment, although they are able to react to it, much like a machine that reacts to an external stimulus without our saying it is conscious of the stimulus (my robotic vacuum cleaner, for instance).
It’s not difficult to conceive that P-consciousness is an epiphenomenon, a by-product of our brain activity that has no function other than to make us aware of ourselves and our environment. We can’t control it. You can’t look at a tree and decide not to see it. Your sensory processing of information happens in parts of your brain that are not under your control. It’s not simple, as any study of brain function will demonstrate, and it involves many steps and many specialized nerve and brain networks, but we are not aware of these and cannot affect them (except to shut our eyes, plug our ears, or anesthetize our body sensations). Special training may allow us not to attend to them, but that is a process involving A-consciousness. Even learned, complicated perceptions occur without our controlling them. Try, for instance, not to read the following three letters as a word: the. It happens outside of your conscious control, even though it took you a long time to learn how to do it. Well-practiced behaviors that required a great deal of attention while we were learning them can often be carried out without our awareness once they become habits. This includes things like simple math (2 + 2 = 4), recognizing a familiar face, remembering a name, or driving a car on a familiar road. A common piece of advice to a graduating senior who has to climb steps to a stage to receive a diploma is “don’t think about your feet as you climb the steps or you’ll trip.” It’s better to let automatic processes control your habitual behavior than to try to use concentration and deliberation.
So what about A-consciousness? I may be able to say 2 + 2 = 4 without thinking about it, but I can’t solve a quadratic equation without deliberately searching my memory for the formula. Very accomplished mathematicians can; the formula pops into their heads as easily as 4 pops into mine when I’m asked “what is 2 + 2?” But most of us can’t. We have to search our memory or, more likely, search the internet or a math text. And we decide to do this, and we decide where to search, don’t we? Yes and no. The request to solve a problem presented as a quadratic equation, ax^2 + bx + c = 0, is a verbal cue (or, if you are taking a test, a written, visual cue) that prompts a memory that there is a formula for this. Are we aware of how we remembered that? Not usually. We just know that there is a formula, and we may even remember the teacher who taught it to us, but we don’t know how we remembered that. We then have to determine whether the formula is in our memory and readily accessible. Are we aware of how we determine this, other than that we simply know it? Not usually. Presumably our brain does some sort of memory search, perhaps resembling, at least metaphorically, going through the index of a book to see whether it mentions the quadratic formula and where it is located. But we are not aware of how that search process is carried out. Once we arrive at the formula, we either understand it or we don’t. We either remember how to use it or we don’t. If we do, we are not aware of how we know how to understand and use it; we just do. So we solve the problem… or we give up and don’t solve it. Isn’t that a conscious decision? It’s a decision we are conscious of making, but that is not the same as requiring our consciousness to make the decision. Perhaps we are just aware of what our decision is. We can even be aware of the reasons for it, or at least some reasons, enough to make sense to ourselves or even to others if we have to explain our behavior.
So we are aware of using our A-consciousness to solve a problem, and aware of some of its operations, although mostly as the outcomes of those operations. We don’t have an experience of them being carried out (we can write the process down, which makes it easier to carry out, since it reduces the load on working memory, but it still comes down to how we know what to write down and how we know how to carry out the steps, e.g., divide the numerator by the denominator, all of which just “pop into our heads”). Our conscious thoughts are mostly the results of operations that occur automatically below our level of consciousness. But are those conscious thoughts necessary to solve the problem? I can easily imagine a computer program that is set up with solving math problems as its goal (its goal is to accumulate points by solving math problems, and for every problem it solves within a limited time frame, it gets a point, sorta like programming it to take the SAT math section). It encounters a problem presented as a quadratic equation, e.g., 5x^2 + 6x + 1 = 0, and is asked to find x. The form of the problem is highly associated with “quadratic equation” in its memory. The problem, combined with the instruction to solve it, is associated with “quadratic formula.” It searches its memory for the quadratic formula, follows a prescribed sequence of steps to solve the problem using the formula, and outputs the answer, or, failing to find the formula or the set of steps to use with it, it outputs “skip this problem.” Did the computer have to be aware of what it was doing? No. Suppose the computer were a copy of a human brain (as in my novel, Ezekiel’s Brain) and, in addition to following the steps needed to solve the problem, it outputted the results of each step into awareness, e.g., search > “I found it!”; following the steps > “I’m following step 1, now step 2,” etc.
and solution > “Here’s my answer.” Of what use would this latter function, outputting the results of each step into awareness, be? To the computer, it would not be useful. It is aware of what it is doing, but its awareness is an epiphenomenon; it doesn’t affect the actual operation of solving the problem. However, suppose it was a “teaching computer.” How would it respond to the request “tell me how you did that”? It could recite the steps it took. If it recorded what it was aware of, it could consult its own record if it forgot how to do the problem. In much the same way, our awareness may not be crucial for solving problems, but it provides a rough description of what operations are necessary to solve various problems, so we can remember them and pass them on to others; or, if they are operations that produce errors, we can tell someone what we did, and they can correct our approach by suggesting something else. In other words, consciousness allows us to pass on knowledge and assist each other in solving problems. Evolution selected consciousness as a product of brain operations because passing on knowledge and helping each other solve problems helped humans survive.
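The imagined program can be sketched concretely. Everything here (the MEMORY table, the awareness log, the function names) is my own illustrative naming, not a real system; the point is that the “awareness” entries are produced after each step and play no causal role in finding the answer.

```python
# Sketch of the imagined problem-solving program: the form of the problem
# cues a formula in memory, the steps are followed mechanically, and each
# step is also "narrated" into an awareness log that the solving process
# itself never consults.
import math

MEMORY = {
    # the quadratic formula, stored as a prescribed sequence of steps
    # (assumes real roots; a full version would check the discriminant)
    "quadratic": lambda a, b, c: (
        (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a),
        (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a),
    ),
}

def solve(form, a, b, c):
    awareness = []                      # the after-the-fact narrative
    formula = MEMORY.get(form)          # memory search cued by problem form
    if formula is None:
        awareness.append("skip this problem")
        return None, awareness
    awareness.append("I found it!")
    roots = formula(a, b, c)            # follow the prescribed steps
    awareness.append("Here's my answer: " + repr(roots))
    return roots, awareness

# 5x^2 + 6x + 1 = 0 has roots -0.2 and -1.0
roots, log = solve("quadratic", 5, 6, 1)
```

Deleting every line that touches `awareness` leaves the answers unchanged, which is exactly the epiphenomenon claim in miniature: the narrative is a by-product, useful only if some other agent (a student, a teacher, a future self) reads it.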
If the above explanation makes sense to you, then perhaps you can understand that regarding consciousness as an epiphenomenon is not so far-fetched as it seems. It may be incorrect, but it is not impossible. It will be difficult to prove, because it is somewhat like proving a negative. No matter how many brain operations can be shown to be carried out without the necessity of consciousness, there could always be one more that appeared to require it. The answer to the question is important, not just so we can understand humans, but to determine if it is necessary to make artificial intelligence devices conscious in order for them to do everything humans do (cognitively, that is). It also might mean that, should we encounter an alien civilization someday, we might not be able to assume that because they are intelligent, they are conscious. It could alter our assessment of our fellow creatures here on earth. Must the most intelligent animals (chimps, porpoises, whales) also be conscious? These are interesting questions with lots of implications.
Sources:
Block N. (1995). On a confusion about a function of consciousness. Behav. Brain Sci. 18, 227–287. 10.1017/S0140525X00038188
Halligan, P. W., & Oakley, D. A. (2021). Giving Up on Consciousness as the Ghost in the Machine. Frontiers in psychology, 12, 571460. https://doi.org/10.3389/fpsyg.2021.571460
The recent release of a government report on UFOs leaves more questions unanswered than answered. There is little to link any observations to extraterrestrial objects, unless that is your default assumption for those observations that remain unexplained. We often hear people say that there must be other inhabitants of the universe besides ourselves, simply on the basis of the number of planets that are in the circumstellar habitable zone (CHZ), the region around stars that is potentially able to include planets with water and the possibility of sustaining life. Others say that the “odds” of our planet being the only one in the universe, or even the galaxy, that contains life are exceedingly small. The flaw in this latter argument is that there is no way to know what the odds are by generalizing from a single event. If we had a metric that told us what the odds are of life arising from the baseline conditions of our planet (temperature, gravity, chemical elements present) then we could make an estimate of the odds of life arising on other planets similar to ours, but we don’t yet have that metric.
A final issue, even if life exists in many other places in the universe, is the likelihood that a representative of that life form will travel to Earth. To do so would require it to have a means of traveling through space, the ability either to travel faster than light or to maintain itself for incredibly long periods to get from one star system to another, and a reason to choose our planet as a destination. What we do know is that the human race here on earth is an inquisitive race and one that has a strong impulse to explore places it has not yet visited. After all, we spread out across our entire planet when there weren’t that many people in existence. So even if no one discovers us, we are going to try to find them.
Susan Schneider, in her new book, Artificial You, which is about consciousness, particularly the possibility of machine consciousness, suggests that the next step in human development will be to merge ourselves with machines, a development that will probably be followed by replacing ourselves with machines. She regards this as almost inevitable and therefore believes that any advanced race from another world that we are likely to encounter will be a machine race—a race of AIs. This is because any race advanced enough to travel in space will be advanced enough to create artificial intelligence that will eventually surpass the intellectual abilities of organic life forms, and because travel from one star system to another will only be practical for machines, even if they are simply representatives of an organic race that sent them.
In novels, including my own novel, Ezekiel’s Brain, superintelligent AIs are usually portrayed as having capabilities that exceed human intelligence not just in speed and quantity of processing, but in the quality of their thinking. At the same time, such AIs, which are often malevolent (think Robopocalypse), have more or less human motivation and personalities.
This raises several issues. One is the extent to which we are likely to produce AIs that think the way humans think. The other is, if that is the case, does it mean that alien AIs will think the way alien organisms think? I’ve done a lot of research on the first question, which is linked to a more basic question: how do humans think, and how do they do it? Below is a picture of just my current reading as I try to get a handle on the answer to this question (and I already have a Ph.D. in Psychology!).
It’s fair to say that the most startling advances in artificial intelligence have come from efforts to base what an AI does on a simplified version of what a human mind does. The principal field that addresses this is called neural networks, or connectionism; it tries to build cognitive devices that use layers of interconnected “nodes,” which roughly resemble neurons in their function (although not in their composition), and that use “deep learning” to figure out how to solve problems through exposure to stimuli that represent the real world.
Sometimes this learning is “supervised,” in that a person designates which stimuli are correct representations and which are not (e.g., words that signify speed), and the computer learns to make the distinction. Sometimes it is more or less “unsupervised,” in that the computer learns from dependencies between stimuli that exist within the real world (e.g., words that occur in the same place in a sentence or in conjunction with each other). To some extent, then, the kind of device that emerges from such efforts will resemble a human brain and mind, at least in terms of what it does and how it does it. But it will work faster and be able to discover relationships heretofore undiscovered by human minds.
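The supervised case can be illustrated with the simplest possible connectionist unit, a single perceptron-style node. This is a minimal sketch with a made-up toy task (learning logical OR), not how modern deep networks are trained, but the basic mechanism, weights nudged by a supervisor’s correction signal, is the ancestor of theirs.

```python
# A toy "supervised" connectionist unit: one set of weights adjusted by the
# error between the unit's output and the label a supervisor attached to
# each stimulus. Deep-learning systems stack many layers of such nodes.

def train(samples, labels, lr=0.1, epochs=50):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            error = y - (1 if activation > 0 else 0)  # supervisor's correction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# a simple linearly separable distinction: logical OR of two inputs
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
weights, bias = train(samples, labels)

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
```

An unsupervised learner, by contrast, would receive no `labels` at all; it would have to discover structure, such as which stimuli tend to co-occur, on its own.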
An important question to ask about AIs that humans develop, and also about AIs that we may encounter as representatives of alien civilizations, is whether they are conscious. After all, consciousness, and particularly self-consciousness, is, for many people, what distinguishes humans from other species, if not on an all-or-none basis then at least in terms of degree. Human consciousness is more varied, more agile, more useful, and more imaginative than consciousness in other species… or so we assume. If we encounter alien AIs, I for one will be disappointed if they don’t appear to be conscious, that is, if they are merely machines carrying out functions. They won’t be surprised by us, because they can’t experience surprise; they can’t wonder if we are worth relating to, because they can’t wonder. They just do whatever it is they were designed to do.
But, I say, and you may also, if they are really as smart as or smarter than we are, wouldn’t they have to be conscious? Is it really possible to plan, to meet new challenges flexibly, to alter priorities when contingencies change, without being conscious? I’m exploring those issues now. It’s not an easy question to answer, nor one that can be answered off the top of your head. Greater minds than mine have had a go at it. Here is a screenshot of some of the academic articles I’ve collected on the subject so far. For the time being, I can just take my speculations, informed by philosophy and science, and apply them to my next science fiction novel, tentatively titled Prime Directive, the sequel to Ezekiel’s Brain. In the new novel, the AI crew of the starship Delphi discovers a marooned group of AIs from another star system who don’t resemble humans at all. In order to deal with them, the crew must figure out to what extent these beings are conscious. The question opens up a whole rabbit hole of further questions, which causes the Delphi crew to re-examine their assumptions about themselves. I hope you’ll be eager to read Prime Directive when it comes out. In the meantime, read Ezekiel’s Brain and become acquainted with Ezekiel, where he came from, and the rest of the crew of the Delphi.
Ezekiel’s Brain may be purchased through Amazon by clicking HERE.
Want to subscribe to Casey Dorman’s fan newsletter? Click HERE.
In my novel, Ezekiel’s Brain, as in many other sci-fi AI stories (Pandora’s Brain, Robopocalypse, 2001: A Space Odyssey, Isaac Asimov’s robot series), artificial intelligence is conscious, in fact, self-conscious. Consciousness, perhaps even self-consciousness, is required in order to make the AI a character in the story, rather than just a machine that either helps or threatens humanity.
For many years, AI consciousness was a topic for philosophers, for computer scientists when they wanted to speculate on the future of AI, and for novelists. Today, the conversation has shifted from speculation to answering the question of whether particular AI systems, such as OpenAI’s GPT-3 (now licensed to Microsoft), are or could become conscious, and if not, why not, and how close are they already? This is exciting and, for many people, threatening and even frightening.
One of the difficulties in answering questions about AI consciousness is that there is no universal agreement on what consciousness is. We all know we are conscious; we are aware of what we are doing and thinking. But what does that mean? The conversation becomes murky and circular. Our vocabulary for describing behavior contains words such as thinking, angry, happy, aware, lying, and conscious, and we know how to use these words with humans, sometimes even with other animals. We watch their behavior and apply the words when the behavior satisfies the criteria that our language community has agreed on for the use of these words. Most of these words imply internal states, and we believe the behavior expresses those internal states. If we are in doubt, we can ask a person. Usually, they can tell us what they were thinking or feeling, and we take that as evidence that we were justified in using the word. But what happens when a machine produces similar behavior? If Siri says, “I’m sorry you feel that way,” what does that mean? What state of a machine corresponds to feeling sorry?
No one, except perhaps Joaquin Phoenix, believes that a language processor such as Siri feels sorry, even if it says it does. Machines can’t feel, much less feel sorry. Why not? One answer is that feelings require organic components, including hormones, chemical neurotransmitters, and special neural receptors that translate patterns of chemical activity across neural circuits into feeling. So nonorganic entities can’t feel. But, hormones aside, the same argument applies to seeing and hearing. Can AIs see or hear? You’ll probably say they can, but are they aware of what they see and hear? That’s where we started this conversation. Murkiness and circularity are on the horizon.
Computers and brains both act on electrical signals, so any signal or pattern of signals that the brain is capable of producing is, in theory, able to be reproduced in a nonorganic electrical circuit (this is partly a statement of faith). We don’t know what kind of electrical signal or pattern produces consciousness, but presumably it can be duplicated in a machine’s electrical circuitry. I say “presumably” because there are theorists, such as John Searle, who believe that there is something peculiar about organic neurons and chemical transmission along neural circuits that produces consciousness, something not present in nonorganic electrical circuits. This, too, is a statement of faith.
Those who believe that machines can develop consciousness usually favor one of two broad hypotheses: 1) Consciousness is based on a special type of programming that, in humans, evolved as neural architecture and would need to be supplied by a programmer if a machine were to be conscious. 2) Consciousness arises as a function of certain types of neural network activities that can develop as learning is reinforced by rewards. In either case, consciousness is neither confined to organic structures nor a non-functional epiphenomenon (like the subtitles in a film whose characters already speak your language).
In my novels, I, Carlos and Ezekiel’s Brain, I take position #1 and have computer scientists write programs that produce consciousness by framing an ongoing account of an AI’s activity in an agent/action or subject/verb format. That may be the case, but frankly I doubt it, because it also means that consciousness is based on language, since “an ongoing account” is most easily imagined as an internal monologue. While I do think it is possible that we inherit a genetically determined neural structure, shaped by evolution, that produces consciousness, I don’t think it is an internal monologue, because then nonverbal animals and babies would not have consciousness, which they no doubt do. I am more inclined to see consciousness, or awareness, as simply one aspect of some kinds of neural activities. It could require being built into a machine the way it is built into a human, but recent theorizing about the similarity between natural selection in evolution and the selection of behaviors through reinforced learning leads me to believe that some learned behavioral activities may conceivably produce consciousness without it being “built in.” This could mean that unsupervised learning through encounters with a complex environment could produce self-organizing activities that bring about consciousness in a machine without any human deliberately creating it. The main thing that is necessary is that the presence of consciousness produces a gain in the likelihood that the machine will achieve its goals or attain its rewards. We will then be faced with an AI that thinks, plans, and knows what it is doing. It will be a whole new world.
I’ve said a lot in order to explain why it is worthwhile to write and to read science fiction about artificial intelligence. When we imagine conscious AIs, we may be talking about a whole new species of being, which will live alongside humans or supplant them. That is both an interesting and a frightening idea. If you want to read more about what this could look like, I explore it in depth in my novel, Ezekiel’s Brain.
I got the idea for my novel, Ezekiel’s Brain, from reading Nick Bostrom’s Superintelligence. Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” A superintelligence could be an alien brain, but in both Bostrom’s book and my novel, it is an artificial intelligence. Thinkers such as Bostrom, the social scientist Iason Gabriel, and the physicist Max Tegmark regard superintelligence as potentially dangerous. The reason a superintelligent AI is dangerous is that it would have the intelligence and power to outthink humans and, if it controlled aspects of the world outside its own thinking, could cause catastrophic damage to humans and their world.
No one has yet built a superintelligent AI, but most people who ought to know think it will happen sometime between ten and fifty years from now. This situation brings to the fore the question of how to control an entity that is smarter than you are. In other words, how do you get it to do what you want it to do and not do what you don’t want it to do? This is called the problem of AI alignment.
Why do many writers about AI treat their subject as if it represents an alien intelligence, rather than just a supercomplex tool? The answer is that artificial intelligence involves machine learning. Instead of programming a machine’s actions, we allow it to use feedback from its attempts to reach a goal to guide its behavior. We don’t design the method of solving the problem, we design the method of learning that allows it to solve the problem. With a superintelligent AI, the machine could even modify its method of learning to maximize attaining its goals.
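The idea that we design the method of learning rather than the solution can be shown with a minimal, hypothetical sketch. Here the designer supplies only a reward signal; a simple hill-climbing rule (my own invention for illustration, far simpler than real machine learning) discovers the target value purely from feedback about whether each guess improved.

```python
# Minimal sketch of learning by feedback: the designer specifies the
# *learning rule* and the *reward*, never the answer itself.
def learn(reward, guess=0.0, step=1.0, rounds=100):
    best = reward(guess)
    for _ in range(rounds):
        for candidate in (guess + step, guess - step):
            r = reward(candidate)
            if r > best:          # feedback, not instructions, guides it
                guess, best = candidate, r
        step *= 0.9               # refine the search over time
    return guess

# The designer's only input: "closer to 7.3 is better."
target = 7.3
solution = learn(lambda x: -abs(x - target))
```

Nothing in `learn` knows where the target is; it converges on roughly 7.3 anyway. Scale this idea up by many orders of magnitude, and the designer’s grip on *how* the problem gets solved loosens in exactly the way the paragraph above describes.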
The difficulty arises when a superintelligent AI that works faster than our brains do (remember, electrical current traveling across transistors on a computer chip moves 10 million times faster than current traveling across a neural circuit) learns how to solve a problem in ways we are not able to understand. How do we guarantee that it will choose the solution that best meets our needs?
What has been declared the “World’s Funniest Joke” goes like this: Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says, “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence; then a gunshot is heard. Back on the phone, the guy says, “OK, now what?”
This joke is instructive. Imagine the operator is the programmer and the hunter is the AI. When the operator says, “First let’s make sure he’s dead” we all know they meant to check on the status of the wounded friend. But the hunter takes their instruction literally. He misunderstands the operator’s intention.
Eliezer Yudkowsky gives another example, this time from the Disney film Fantasia. Remember “The Sorcerer’s Apprentice,” portrayed in the film by Mickey Mouse? He learns a magic spell and instructs a broom to carry buckets of water to fill a cauldron.
Everything seems fine until the cauldron is filled and the broom continues to bring more buckets of water. Mickey forgot to tell it when to stop. When he tries to chop the broom in half to stop it, he ends up with two brooms carrying buckets of water. Yudkowsky points out that getting a computer to know when to stop doing what you asked it to do is not as simple as it seems.
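Mickey’s predicament can be sketched in a few lines of toy code (an illustration of the principle, not of any real system): the literal instruction says nothing about stopping, so the intended behavior has to be made explicit.

```python
# A toy version of the Sorcerer's Apprentice problem.
CAPACITY = 10  # buckets the cauldron can hold

def naive_broom(cauldron, trips):
    # Literal instruction: "carry water" -- no stop condition.
    for _ in range(trips):
        cauldron += 1          # keeps pouring past the brim
    return cauldron

def corrected_broom(cauldron, trips):
    # The human's unstated intention, made explicit.
    for _ in range(trips):
        if cauldron >= CAPACITY:
            break              # stop once the goal state is reached
        cauldron += 1
    return cauldron

print(naive_broom(0, 25))      # 25: a flooded workshop
print(corrected_broom(0, 25))  # 10: a full cauldron, then stop
```

The bug is trivial here because we can see the whole program. Yudkowsky’s point is that with a system vastly more capable than a broom, the unstated intentions we forgot to encode are not so easy to spot.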
Nick Bostrom takes the problem even further with his so-called “paperclip apocalypse” thought experiment. A programmer instructs a superintelligent computer to make paperclips. Since it is smarter than humans, it learns the fastest, most efficient way to do this, but it doesn’t stop.
When the programmer tries to shut it down, it resists and, being smarter than he is, keeps producing paperclips. It has learned that if it is shut down, it cannot fulfill its main function, which is to produce paperclips.
If it runs out of raw materials, it begins using whatever else it can find, turning cars, buildings, etc. into fodder for more paperclips. If all you’ve got is an instruction to make paperclips, all the world becomes a resource for making more paperclips.
All of these catastrophic scenarios highlight the alignment problem. How do you get a superintelligent AI to carry out your intention without unintended negative consequences? One avenue for solving this problem is to learn how to be precise in giving instructions, but numerous examples show that this is not as easy as it sounds. A computer will respond literally to what it is instructed to do, and the more capable and intelligent it is, the more resources it will bring to bear on solving the problem in a way that doesn’t allow a human to subvert it.
The other route to finding a solution to the alignment problem is to give the computer “values” that guarantee it won’t perform actions that cause harm. This has been termed creating a “friendly” AI. This, too, is easier said than done. In the first place, we humans don’t agree on what our values are. Second, the same problem of specifying what we mean to a machine that will take us literally is daunting. When I was working on my novel, Ezekiel’s Brain, I played around with several possibilities. One of these was telling the superintelligent AI to do “what was best for humanity.” Unfortunately, it ended up eliminating much of the population in crowded, poverty-stricken regions of the world. It was not a pretty sight. In another scenario, when asked simply to rid the human race of illness and premature death, the AI made the world so overpopulated that life was unmanageable.
Achieving AI alignment with what is best for humans is a challenging task, one so much in its infancy that it consists mainly of conjectures and thought experiments. But computers are getting smarter, and this is happening faster than our pace of figuring out how to ensure that they are friendly. Right now, the problem is still science fiction, and perhaps some of the answers will come from that realm. It wouldn’t be the first time that our artists of the imagination led the way and science and technology followed. I gave it a try in Ezekiel’s Brain. You might take a look and see what you think.
Want to read more from Casey Dorman? Subscribe to his fan page to get regular updates on his writing and read his latest ideas. Subscribe here.
In 2004, I wrote I, Carlos, a novel in which a new form of entertainment, called neurostories, involved implanting a computer chip inside a person’s brain so they could experience a story as a participant, their sensory organs bypassed and the neural circuits to which they connected stimulated directly by impulses from the chip. In my novel, the result was disastrous. A skilled marksman and martial arts expert suffered a heart attack during a neurostory episode, and parts of his brain became damaged, with the neurostory chip taking over and directing his thoughts and behavior. On the chip in his head was a modern-day version of the film “The Day of the Jackal,” in which an assassin sets out to murder, not Charles de Gaulle, as in the original novel and film, but the President of the United States.
I, Carlos was imaginative in 2004, but less so today. New, though still experimental, devices such as Elon Musk’s Neuralink and other brain-computer interfaces (BCIs) and brain-machine interfaces (BMIs) have made neurostories and other applications of electronic components embedded in the brain possible. The first applications of such devices are mostly to aid in bringing function back to persons with spinal cord injuries. Thoughts can be translated into movements by using what Musk calls an electronic shunt that circumvents the severed region of the spinal cord, innervating the neurons at lower levels of the cord that carry messages to the muscles of the arms and legs. The person thinks about moving, and his limbs follow his thoughts.
In one application, recently described in Scientific American, a team from Stanford University developed an implant that allows a person to think about drawing a letter, and electrodes implanted in his brain convey the messages to a computer whose cursor draws the letters, eventually forming words. This is a BCI, as the brain communicates directly with a computer.
One of the difficulties of using BCIs is the implant process. A company called Synchron has developed a device called Stentrode, which is a stent inserted through a vein in the neck and which then fans out electrodes that read neural signals through the walls of the vein. Their device can read movement thoughts and convey them via a computer, to a robotic arm that then carries out an action. The device is being tested with persons with spinal cord injuries.
Elon Musk’s Neuralink can operate like the BCIs described above, but, in addition to interfacing with a computer, it can receive signals and relay them to the brain. As Musk says, it can be like having your iPhone inside your head, except that you don’t need to physically text or speak your responses, just think them. The hope is that it can also substitute for damaged areas of the brain after strokes or as dementia is developing, conveying messages from one brain region to another. He has also mentioned watching films in one’s head, which is eerily like the plot of I, Carlos.
The field of brain implants is moving faster than anyone would have thought 20 years ago, and faster than I thought when I wrote I, Carlos. In my latest novel, Ezekiel’s Brain, which begins in 2023, just two years from now, the main character, Ezekiel Job, is a neuroscientist who specializes in brain implants and brain-computer interfaces. He uses the latest technology, which includes neuroprosthetics, devices that, instead of being used to interface with a computer, actually replace damaged neural structures with artificial, electronic ones. These are called bio-hybrids, biomimetic neurons, or biomimetic neural circuits. They interface with real brain neurons but mimic natural circuits and sometimes even outperform them (an experimental biomimetic circuit mimicked the auditory neural circuits that allow barn owls to be skilled night hunters using only auditory cues; the artificial circuit outperformed the actual barn owl “by orders of magnitude”). From there it was only a hop, skip, and a jump to copying the circuitry of an entire brain, his own, this time by scanning himself and duplicating the circuitry using a 3-D printer.
Ezekiel Job’s experiment didn’t backfire, as the neurostories in I, Carlos did, but the result, a computerized AI named Ezekiel with the same personality and memories as Ezekiel Job, turned out to outlive the human Ezekiel, and two hundred years later finds himself amidst an entire civilization of AIs. That’s another story, but the underlying idea is that today’s BCIs, BMIs, biomimetic circuits, and neuroprosthetics are the beginning of what is almost surely an inexorable march toward replacing our organic brains with electronic ones. First the bio-hybrid circuits, then the cyborgs, and finally the androids with the AI brains. Ezekiel’s Brain.
Some books have the power to generate iconic images of the future. Aldous Huxley’s Brave New World, for instance, with its people controlled by a protective and paternalistic group of government Controllers and the use of Soma, a pacifying drug, has become a meme for the future. Conflict, birth, and pain are all removed from society, but so are art, science, and religion. People are confined to castes from the time they are born, with some living a pleasure-oriented life and others destined to do the society’s menial work without complaint. Consider the developed world, at least prior to the pandemic: a society in which anxiety, depression, and pain are softened by drugs that are constantly being advertised on television; where people’s attention is captured only by 280-character tweets; where science is something learned from pseudo-scientists on YouTube videos; where, despite climate change threatening the planet, people are happiest when they can buy high-emission SUVs so large that only an extended polygamous family could possibly occupy every seat; where anyone caught sitting home reading instead of “working out” at ubiquitous gyms is labeled foolish; and where communication must “grab” you, preferably with a startling visual image, and not last more than 60 seconds. The specter of Brave New World is perhaps applicable.
In contrast to Brave New World, George Orwell’s 1984 portrays an equally controlling, governmentally organized society, but this time the control is exercised by fear and misinformation. “Newspeak” tells us that “War is Peace, Freedom is Slavery, Ignorance is Strength,” and “Big Brother,” which has become a synonym for intrusive government surveillance in the name of protecting our security, watches everything that we do. The penalty for challenging the system is torture, the goal of which is to force someone to inform on their friends and colleagues in order to gain information about underground movements that challenge the government.
The worlds envisioned by Huxley and Orwell are not mutually exclusive. Both are characterized by citizens who are not fully aware of how limited and misshapen their lives are, because they are not allowed to see any alternatives. And in both, the control is pervasive and disguised as something in the citizens’ best interests. Both novels are social commentary more than science fiction, and each contains enough elements of our current society, although they were written decades ago, to arouse our fear that they are coming true.
Philip K. Dick was another architect of a dystopian future. Dick was so imaginative that he painted portraits of a number of alternative future societies. We all remember Do Androids Dream of Electric Sheep?, which became the film Blade Runner, its paradigmatic street scenes of a semi-lawless, disintegrating city life, with street vendors, constant rain, trash in the streets, and crumbling infrastructure, becoming the model for countless book and film versions of the future. Dick’s Valis trilogy is perhaps brought to a conclusion by his posthumously published Radio Free Albemuth, which, resurrecting shades of 1984, has a paranoid U.S. president, an obvious allusion to Richard Nixon, using Orwellian societal mechanisms to control society, whose only salvation is offered by Valis, an extraterrestrial power that fosters rebellion among the nation’s young.
Besides the themes of social control and powerful non-human agents from space, another of Dick’s main issues was the development of artificial intelligence to the point that we could no longer distinguish androids from humans, including whether we, ourselves, were one or the other. In Time Out of Joint, the main character lives in a false reality, constructed to fool and control him. That the reality we experience may be an illusion is a central idea in many of Dick’s works. It is a theme echoed in many of our most significant modern science fiction novels, such as Greg Egan’s Permutation City and a host of William Gibson stories from Neuromancer to Mona Lisa Overdrive, which create characters so immersed in a cyberworld that it becomes the only reality they know. In Agency, Gibson’s latest, his artificial intelligence Eunice is so potentially powerful that superpowers in the present and in an alternative past compete to possess her, presenting the dilemma of humans creating an AI so advanced that they no longer control it.
Which brings me to my latest novel, Ezekiel’s Brain. It’s 1984, Blade Runner, Agency, and the 79 episodes of the original Star Trek series rolled into one. Ezekiel is an “emulation,” an AI created by scanning the brain of its creator into computer circuits. But Ezekiel has a competitor in the AI world: Wanderer, a superintelligent AI created by DARPA, the U.S. defense research agency, to give it control of the cyber world. The attempt to make the powerful DARPA AI “friendly” toward humans backfires, and Wanderer decides the world is better off without our species. Two hundred years later, when faced with an electronic mutation among its race of AIs, Wanderer’s descendants decide to resurrect Ezekiel, the human brain emulation, to give them the instincts to wage war against the mutants, who are rapidly consuming everything around them to make more copies of themselves. That’s enough of an introduction to the novel, but if you read it, you’ll be able to trace its sources, from Isaac Asimov to Philip K. Dick, in the history of science fiction. Science fiction is not simply a genre; it is a way of thinking that allows one’s imagination to roam over serious questions, keeping real science in mind but exploring its limits and the social implications of its new developments. That’s what I’m aiming for in Ezekiel’s Brain. A long history of gloriously imaginative science fiction writers provides me with superlative examples of where I need to go.
Want to read more from Casey Dorman? Subscribe to his fan page to get regular updates on his writing and read his latest ideas. Subscribe here.