Book Review

The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to AI to Aliens

By Philip Ball

University of Chicago Press, 2022

When I first read Immanuel Kant, I was most struck by his reasoned conclusion that we could not perceive and understand the world if our minds had not been constituted to perceive reality in certain categories. When I read Stanisław Lem, I came away realizing that the way we divide up the world in our thinking may be unique to humans, and that, should we meet aliens from other worlds someday, they may perceive, act, and think so differently from us that we have no way of understanding each other.

In Philip Ball’s wonderful book, The Book of Minds, I found a convincing argument that even among our fellow inhabitants of this planet, we are unlikely to know how other species think or perceive, and, as we produce increasingly powerful artificial intelligences, we may not know how they think either. All of these are important considerations for someone such as me, who writes science fiction, particularly science fiction that includes both artificial intelligences and aliens from other worlds. But although I purchased and began reading Ball’s book hoping to gain ideas for my novels, I soon became entranced by the subject matter itself and the questions it raised.

Ball defines mind through a concept he calls “mindedness,” which is essentially what it is like to be something: “For an entity to have a mind, there must be something that it is like to be that entity.” It is mind, he says, that hosts an experience of some sort, and entities can possess different degrees of mindedness. Is mindedness the same as consciousness? He says not, suggesting instead that “mindedness is a disposition of cognitive systems that can potentially give rise to states of consciousness.”

Ball’s definitions are less important than his examples. When he examines how other creatures differ from humans, he finds that they have different sensitivities, different innate cognitive systems, than we do. Sea creatures, flying creatures, and nocturnal creatures live in different worlds than we do, because they have different minds. Ball argues that it makes no sense to evaluate other creatures’ minds by how well they match up to human minds. Concepts of human intelligence don’t apply to creatures that exceed human abilities by navigating with landmarks of smell or color, by the Earth’s magnetic field, or by bouncing sound off objects. They are too different. Ball shows that our standard view of other creatures as beings locked into rigid, programmed interactions with their environment underestimates their flexibility: bees, for instance, have remarkable direction-finding skills that allow them to alter their method of finding their way back to the hive based on circumstances. Other creatures, such as corals, sea anemones, and jellyfish, possess “nerve nets” that propagate sensory signals from one part of the body to another, so that they experience an “overall sensation, a unified internal representation of the organism’s situation.” These are not human-like skills or experiences, so Ball constructs what he calls a “mindspace.” Rather than a single scale on which to compare minds across the same traits or measures, he recommends locating different skills, abilities, and properties in a sort of matrix in which each of them represents an axis. Humans might rank low, or not at all, on using the Earth’s magnetic field for orientation or on feeling integrated, unlocalized sensory experiences, while ranking high on extrapolating from one experience to another.
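The mindspace idea lends itself to a simple computational picture: each mind is a point in a space whose axes are cognitive traits, and minds can only be compared by how far apart they sit along many axes at once, never by rank on a single scale. Here is a minimal sketch of that idea in Python; the trait names and scores are my own invented illustrations, not Ball’s.

```python
# A toy "mindspace" in the spirit of Ball's proposal: each mind is a
# point in a space whose axes are cognitive traits, scored here from
# 0.0 (absent) to 1.0 (highly developed). Trait names and scores are
# invented for illustration only.
import math

AXES = ["magnetoreception", "echolocation", "nerve-net sensing",
        "landmark navigation", "cross-domain extrapolation"]

MINDS = {
    "human":     [0.0, 0.1, 0.0, 0.7, 0.9],
    "bee":       [0.8, 0.0, 0.0, 0.9, 0.2],
    "bat":       [0.3, 1.0, 0.0, 0.6, 0.2],
    "jellyfish": [0.0, 0.0, 1.0, 0.0, 0.0],
}

def distance(a: str, b: str) -> float:
    """Euclidean distance between two minds across all trait axes."""
    return math.dist(MINDS[a], MINDS[b])

# There is no single scale on which to rank these minds, only
# separations along many axes at once.
print("axes:", ", ".join(AXES))
for other in ("bee", "bat", "jellyfish"):
    print(f"human <-> {other}: {distance('human', other):.2f}")
```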

Ball cautions us not to assume that we are born into the world possessing a high-powered learning machine for a brain but one that is otherwise blank of knowledge. Evolution has been kinder to us than that. Just like other creatures, we have a lot built in. He cites the work of Harvard psychologist Elizabeth Spelke showing that humans possess at birth a set of “core knowledge” systems, each of which works independently of the others and which allow us to process experience in a way that enhances our adaptiveness. Spelke has identified systems that allow us to conceptualize objects, to understand distance and orientation, to think in terms of number and quantity, to understand causality and see events as agent-action sequences, and to see others as agents with intentions and goals. These and other yet-to-be-identified innate cognitive systems have much to do with how our human minds experience the world; to what extent other creatures have similar systems and experience the world similarly to us is an open question.

The innate characteristics of our mindedness, which shape how we learn, how we remember, and how we think, are extremely important, but they are qualities that those who create artificial intelligences have mostly ignored. Designers of AI have, at best, equipped their devices with one or two of these traits, such as the ability to learn by reinforcement or to detect the edges of objects, but have otherwise devised AIs that are blank slates. Perhaps the field has an aversion to returning to the era of “expert systems,” in which systems were loaded with both data and algorithms thought to match what human experts used to solve problems or make decisions. Since such data were highly situation-specific, it was hard to advance from such a system to an all-purpose AI that could learn across content areas. But the knowledge built into human minds is not high-level detail; it consists of basic ways to think about the sensory data being received and the kinds of motor outputs it provokes. The neural interactions behind it may be complex, but the way it affects the mind is simple, making it ready to support learning in a variety of situations. With humans, unlike most AIs, our cognitive processes evolved to work within human bodies: they are intimately tied to our bilateral sensory and motor systems and, since we left the trees, to our upright posture and locomotion, not to mention our sexual reproduction and group living. Ball cites neuroscientist Antonio Damasio’s observation that, “If the representations of the world that a mind produces were not designed for an organism in a body (and specifically this type of body) … the mind would surely be different from ours.”

Ball does address the question of whether AIs can have minds and, if so, what they might be like. After initial attempts to define thinking in terms of computational symbol manipulation and to program computers to think like humans (which was wrong at least on the human side), the field turned to teaching computers to learn, providing them with enormous amounts of information, and asking them to use that information and learning ability to create responses. The results have been impressive, especially in areas such as natural language and image identification, but, at least to date, even the most successful systems don’t seem to exhibit the kind of “common sense” that would indicate they know what they’re doing, as opposed to operating, well, mindlessly. But what did we expect: that creating a computer that could mimic human responses without being specifically taught how to do it would produce a wise mind as well? As Ball points out, the human mind simply has too much information pre-loaded into it, and it works along pathways that were themselves shaped by evolution. Its final goal is to enhance the survivability of its possessor. That has not been true of AIs, except in science fiction (e.g., my science fiction). A final note: some of those designing AI, such as DARPA (the villain in Ezekiel’s Brain) in its common-sense project, are employing child psychologists because, as Ball quotes psychologist Tomer Ullman, “The questions people are asking in child development are very much those people are asking in AI. What is the state of knowledge that we start with, how do we get more, what’s the learning algorithm?”

So far, AIs don’t possess human-like minds, but do they possess their own types of minds? And if not, will they someday? Could they? Ball is not sure about this. He says “we may be best advised to grant them a kind of provisional mindedness.” He recommends studying what AIs do and how they do it (although this is sometimes obscure) in what he calls a “behavioral science of machines.” A main reason we need this is that, as we ask machines to do more and more, it could be dangerous not to know how we can expect them to act. One thing on which Ball and I agree is that if machines are ever to become conscious (I think they will; he is more dubious), we would need to program the consciousness in. It would not arise spontaneously as an emergent property. That would mean identifying the elements of consciousness, at least as it exists in humans. In both I, Carlos and Ezekiel’s Brain, I identified some such elements: structuring experiences that involve the self in agent-action terms, embedding them in a goal-oriented narrative, and adding some kind of feedback mechanism that creates the experience of observing one’s own thoughts, sensations, and actions within this narrative. As Ball points out, no one is attempting to do this at the moment.
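To make those elements concrete, here is a toy sketch of what “programming consciousness in” might look like, in the loosest possible sense. It is my own speculative illustration, not anything Ball describes and not any existing AI system: an agent that records its experience as agent-action events tied to a goal, plus a feedback step that reads back its own record.

```python
# A speculative toy sketch of the elements named above: (1) experience
# structured as agent-action events, (2) embedded in a goal-oriented
# narrative, (3) a feedback step in which the system observes its own
# record. Purely illustrative; not a claim about real AI systems.
from dataclasses import dataclass, field

@dataclass
class Event:
    agent: str    # who acted ("self" for the agent's own acts)
    action: str   # what was done
    goal: str     # the goal the action served

@dataclass
class NarrativeSelf:
    goal: str
    narrative: list = field(default_factory=list)

    def act(self, action: str) -> None:
        # (1) and (2): record the act in agent-action terms, tied to a goal.
        self.narrative.append(Event("self", action, self.goal))

    def reflect(self) -> list:
        # (3): feedback; the system reads back its own narrative,
        # producing observations about its own prior actions.
        report = [f"I noticed that I {e.action}, pursuing '{e.goal}'"
                  for e in self.narrative]
        self.narrative.append(Event("self", "reflected on my own record",
                                    self.goal))
        return report

agent = NarrativeSelf(goal="reach the charging station")
agent.act("scanned the room")
agent.act("moved toward the doorway")
print("\n".join(agent.reflect()))
```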

Finally, we have the case of aliens from space. Ball takes the topic seriously enough to devote a chapter to it. He first points out that most science fiction stories create aliens who, regardless of their physical characteristics, behave and think like humans. Even our scientific projects, such as SETI and the old Project Blue Book, assumed that aliens would want to communicate with other races on other planets and that they would develop advanced versions of technology similar to ours. In fact, there is no reason to believe either of these assumptions, and if either is false, our task will be far more difficult unless we develop the ability to visit other star systems.

For the sake of simplicity, Ball takes for granted that “the laws of physics and chemistry are universal.” He also assumes that “Darwinian evolution by natural selection is the only viable way for complex organisms to arise from simple origins,” so whatever alien organisms are like, they will have been shaped to adapt to their environment. There may also be constraints on how far such adaptation can go: flying creatures may need wings, and sea creatures must have streamlined bodies that allow them to swim. On Earth, convergent evolution produced similar adaptations across different species; fish, whales, and dolphins have similar bodies, and eyes developed similarly across several species that have little else in common. This is because there are a limited number of solutions to certain environmental problems. But, as Ball points out, this is all speculation. Lamarckian evolution, which passes on adaptations made within the lifetime of an organism, is not impossible. Environments on alien planets may differ far more than anything we have seen on Earth. What about planets whose entire surfaces are water? Would fish learn to communicate at least as much as whales and dolphins have? Could a species exist only in the atmosphere? We have no idea, really, and if Earth’s environment and our need to adapt to it are what shaped our minds, then alien minds might be very different from ours indeed.

The Book of Minds contains a great deal of food for thought and is filled with interesting facts across a wide range of disciplines (biology, psychology, computer science). I was amazed at how much one author can know about so many subjects. The writing is lively and contains a fair amount of humor. Some of the philosophical discussions (what free will is, for instance) I thought too brief and superficial to be useful, but otherwise it is a fascinating book, and one that gave me some humility about identifying the human mind as something special, a model for all other successful minds. It is not. I came away with my interest in the minds of AIs (if their minds exist) and the minds of aliens (if aliens exist) renewed and heightened. I think it will enhance my science fiction writing.
