Our human minds were constructed over hundreds of thousands of years (millions, counting pre-human evolution), cellular circuit by cellular circuit, to allow us to survive and reproduce in Earth’s environment. As amateur philosophers are prone to point out, the world we see and how we interpret it are not one and the same with the world that is “out there,” surrounding us. Like the sounds dogs hear but we don’t, the nonvisible wavelengths of light, or the magnetic fields we cannot detect without instruments, we see, hear, and sense only what was necessary for our survival. Dogs, insects, birds, and ocean creatures may sense what we do not, so what natural selection settled on as necessary for human survival was not the only option.
Even more important than what we sense is how we interpret and understand it. Psychologists use illusions such as the Müller-Lyer and Ebbinghaus illusions to show us that our brains make decisions below the level of consciousness about how to interpret our environment. Kahneman and Tversky similarly demonstrated that our logical thinking is riddled with “errors,” mostly the result of assumptions and shortcuts built into our way of thinking because they allowed faster, albeit less accurate, decision making, which probably aided our survival in the past.
We have only a vague idea how many of our fellow species on Earth share our sensory and cognitive biases and see the world the way we do. Hive insects probably don’t. Whales and porpoises are intelligent, but what reason is there to think that creatures with no hands, living in water, would share the perceptual and intellectual processes we possess? As the philosopher Thomas Nagel famously asked, “What Is It Like to Be a Bat?” His point was that human understanding is limited. As Donald O. Hebb observed decades ago, basic physical properties of the world place limits on what perceptual and cognitive systems can do, because there is a common world to which all species must adapt. But what about creatures from another world?
The great Polish philosopher and writer Stanislaw Lem addressed this issue in his science fiction novels The Invincible and Solaris. In The Invincible, the aliens are tiny automata, descended from the small robotic assistants of an alien race whose ship crash-landed on a planet. Over eons, the automata evolved into swarms of tiny “flies,” which, although not individually conscious or capable of reasoning, use evolved swarm behaviors to destroy their former masters and all other living creatures on the planet’s surface, including the humans who come to visit.
In Solaris, humans discover a planet with only a single creature on it: a massive, living ocean, which obeys higher-order mathematical principles and has the ability to create copies drawn from the humans’ most intimate memories. Its purpose, if it has one, and its way of thinking are incomprehensible to humans, including the main character, who learns that there are ways of being that are simply beyond human comprehension, because the concepts by which we think and perceive set a limit on our understanding.
Lem’s conception of aliens is a minority view among science fiction accounts of the other species humans might encounter. Most writers create aliens that resemble humans.
Without resorting to alien encounters, humans may right now be developing artificial intelligences that think differently than we do. There is certainly no reason to build human perceptual or cognitive biases into the way a machine perceives or thinks. And because machine learning allows feedback-based modification of the thinking mechanisms themselves, and such modifications don’t require long intervals from one “generation” to the next, machine intellectual evolution can proceed rapidly, at breakneck speed compared with the evolution of the human brain. We could create an AI creature that soon operates as differently from our way of thinking as any of Lem’s aliens. Obviously, if we don’t understand how it thinks, we have little chance of controlling such a machine.
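The speed claim above can be made concrete with a toy sketch. The loop below is a minimal, purely illustrative evolutionary algorithm (not any real AI system, and the fitness goal is made up for the demonstration): each “generation” is just one pass of mutate, evaluate, and select, so hundreds of generations run in a fraction of a second, where biological evolution would need millennia.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=200, seed=0):
    """Toy evolutionary loop: mutate, evaluate, select, repeat.

    Each pass of the loop is one 'generation'; on a modern machine
    hundreds of them complete in well under a second.
    """
    rng = random.Random(seed)
    # Start from random genomes (lists of numbers standing in for 'designs').
    population = [[rng.uniform(-1.0, 1.0) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate: feedback from the environment (the fitness function),
        # then select the fitter half as parents.
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]
        # Reproduce with mutation: small random tweaks to a parent's genome.
        children = [[g + rng.gauss(0.0, 0.05) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Made-up goal: evolve genomes toward all ones (fitness = negative squared
# distance from the all-ones vector).
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

After 200 of these near-instant generations, `best` lies far closer to the target than any member of the random starting population, which is the whole point of the comparison with brain evolution.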
Some of these ideas are approached in my new novel, Ezekiel’s Brain. Approached, rather than embraced, because, as a novel, its purpose was to introduce a species of AI that the reader could understand. But in later books in the “Voyages of the Delphi” series, of which Ezekiel’s Brain is the first, encounters with other alien species will raise these issues, much as Lem did in his writing. It is a difficult idea to contain in a novel’s plot because of the paradox: how do you describe to the reader something that, by its very nature, the human mind is not built to comprehend? I have a lot of work to do.
For the time being, I urge you to begin the journey by reading Ezekiel’s Brain, available at Amazon in Kindle and paperback editions.
Want to read more from Casey Dorman? Subscribe to his fan page for regular updates on his writing and his latest ideas.