The recent release of a government report on UFOs raises more questions than it answers. There is little to link any of the observations to extraterrestrial objects, unless that is your default assumption for the observations that remain unexplained. We often hear people say that there must be other inhabitants of the universe besides ourselves, simply based on the number of planets that lie in the circumstellar habitable zone (CHZ), the region around a star where a planet could retain liquid water and potentially sustain life. Others say that the “odds” of our planet being the only one in the universe, or even the galaxy, that contains life are exceedingly small. The flaw in this latter argument is that there is no way to know what the odds are by generalizing from a single event. If we had a metric that told us the odds of life arising from the baseline conditions of our planet (temperature, gravity, chemical elements present), then we could estimate the odds of life arising on other planets similar to ours, but we don’t yet have that metric.
A final issue, even if life exists in many other places in the universe, is how likely it is that a representative of that life form will travel to Earth. To do so, it would need a means of traveling through space, the ability either to travel faster than light or to maintain itself for incredibly long periods to get from one star system to another, and a reason to choose our planet as a destination. What we do know is that the human race here on Earth is an inquisitive one, with a strong impulse to explore places it has not yet visited. After all, we spread across our entire planet when there weren’t that many people in existence. So even if no one discovers us, we are going to try to find them.
Susan Schneider, in her new book, Artificial You, which is about consciousness, particularly the possibility of machine consciousness, suggests that the next step in human development will be to merge ourselves with machines, a development that will probably be followed by replacing ourselves with machines. She regards this as almost inevitable and therefore believes that any advanced race from another world that we are likely to encounter will be a machine race—a race of AIs. This is because any race advanced enough to travel in space will be advanced enough to create artificial intelligence that will eventually surpass the intellectual abilities of organic life forms, and because travel from one star system to another will only be practical for machines, even if they are simply representatives of an organic race that sent them.
In novels, including my own novel, Ezekiel’s Brain, superintelligent AIs are usually portrayed as having capabilities that exceed human intelligence not just in speed and quantity of processing, but in the quality of their thinking. At the same time, such AIs, which are often malevolent (think Robopocalypse), have more or less human motivation and personalities.
This raises several issues. One is the extent to which we are likely to produce AIs that think the way humans think. Another is, if that is the case, does it mean that alien AIs will think the way alien organisms think? I’ve done a lot of research on the first question, which is linked to a more basic question: how do humans think, and by what mechanisms? Below is a picture of just my current reading as I try to get a picture of the answer to this question (and I already have a Ph.D. in Psychology!).
It’s fair to say that the most startling advances in artificial intelligence have come from efforts to base what an AI does on a simplified version of what a human mind does. The principal field that addresses this, called neural networks or connectionism, tries to build cognitive devices that use layers of interconnected “nodes,” which roughly resemble neurons in their function (although not in their composition), and uses “deep learning” to figure out how to solve problems through exposure to stimuli that represent the real world.
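The layered-nodes idea can be sketched in a few lines of Python. This is a hedged illustration of the general principle, not any particular library’s API: the weights and biases below are hypothetical hand-picked numbers, whereas a real network would learn them from exposure to data.

```python
import math

def sigmoid(x):
    # Squashing activation: a rough analogue of how strongly a "neuron" fires.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node sums its weighted inputs, adds a bias, and passes the
    # result through the activation function.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical toy network: two inputs -> two hidden nodes -> one output.
hidden = layer([1.0, 0.0],
               weights=[[0.5, -0.4], [0.9, 0.1]],
               biases=[0.0, -0.2])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])
print(output)  # a single activation between 0 and 1
```

Stacking many such layers, and adjusting the weights automatically to reduce errors, is what “deep learning” amounts to.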
Sometimes this learning is “supervised,” in that a person designates which stimuli are correct representations and which are not (e.g., words that signify speed), and the computer learns to make the distinction. Sometimes it is more or less “unsupervised,” in that the computer learns from dependencies among stimuli as they exist in the real world (e.g., words that occur in the same place in a sentence or in conjunction with each other). To some extent, then, the kind of device that emerges from such efforts will resemble a human brain and mind, at least in terms of what it does and how it does it. But it will work faster and be able to discover relationships heretofore undiscovered by human minds.
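The supervised/unsupervised distinction can be made concrete with a toy sketch. The word lists and labels here are invented examples chosen for illustration, and the “models” are deliberately trivial:

```python
from collections import Counter
from itertools import combinations

# --- Supervised: a person provides the labels up front ---
labeled = {"fast": 1, "quick": 1, "rapid": 1, "red": 0, "loud": 0}
# A trivial stand-in for a trained model: remember which words a
# human marked as speed-related.
speed_words = {word for word, label in labeled.items() if label == 1}
print("quick" in speed_words)  # True

# --- Unsupervised: structure is inferred from co-occurrence alone ---
sentences = [["the", "fast", "car"], ["the", "quick", "car"],
             ["the", "red", "car"], ["a", "loud", "noise"]]
pair_counts = Counter()
for sentence in sentences:
    # Count how often two words appear in the same sentence;
    # no human tells the program what any word means.
    for a, b in combinations(sorted(set(sentence)), 2):
        pair_counts[(a, b)] += 1
print(pair_counts[("car", "fast")])  # 1
```

Real systems replace the memorized lookup with a trained classifier and the raw co-occurrence counts with learned word embeddings, but the division of labor (human-provided labels versus structure discovered in the data itself) is the same.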
An important question to ask about AIs that humans develop, and also about AIs that we may encounter as representatives of alien civilizations, is whether they are conscious. After all, consciousness, and particularly self-consciousness, is, for many people, what distinguishes humans from other species, if not on an all-or-none basis then at least in degree. Human consciousness is more varied, more agile, more useful and more imaginative than consciousness in other species… or so we assume. If we encounter alien AIs, I for one will be disappointed if they don’t appear to be conscious. That is, if they are merely machines carrying out functions. They won’t be surprised by us, because they can’t experience surprise; they can’t wonder whether we are worth relating to, because they can’t wonder. They just do whatever they were designed to do.
But, I say, and you may also, if they are really as smart as or smarter than we are, wouldn’t they have to be conscious? Is it really possible to plan, to meet new challenges flexibly, to alter priorities when contingencies change, without being conscious? I’m exploring those issues now. It’s not an easy question to answer, nor one that can be answered off the top of your head. Greater minds than mine have had a go at it. Here is a screenshot of some of the academic articles I’ve collected on the subject so far. For the time being, I can just take my speculations, informed by the philosophy and science, and apply them to my next science fiction novel, tentatively titled Prime Directive, the sequel to Ezekiel’s Brain. In the new novel, the AI crew of the starship Delphi discovers a marooned group of AIs from another star system who don’t resemble humans at all. In order to deal with them, the crew must figure out to what extent these alien AIs are conscious. The question opens up a whole rabbit hole of further questions, which causes the Delphi crew to re-examine their assumptions about themselves. I hope you’ll be eager to read Prime Directive when it comes out. In the meantime, read Ezekiel’s Brain and become acquainted with Ezekiel, where he came from, and the rest of the crew of the Delphi.
Ezekiel’s Brain may be purchased through Amazon by clicking HERE.
Want to subscribe to Casey Dorman’s fan newsletter? Click HERE.