Being You by Anil Seth: Book Review

Being You: A New Science of Consciousness

By Anil Seth

Dutton, 2021

I have to admit that I became interested in Anil Seth’s book, Being You, because of the number of “best books” lists it was on. Amazon listed five of them, including “best science books” listings from both The Guardian and the Financial Times, and it also made Five Books’ list of “best philosophy books.” It sounded like my kind of book, and I was not disappointed.

I’d have to say that Being You is stronger on its science side than its philosophical one, and I suspect that it earned its “best books” designation in philosophy both because of its subject matter, which is consciousness, and because of its speculative nature. Nearly all of its speculations, however, are backed by scientific findings, some of them cherry-picked, but all of them tantalizing. What makes the book stand out in my mind is that it doesn’t just summarize current ideas and research; it proposes a theory that does a fair job of encompassing them and suggests at least a partial answer to the “hard problem” of explaining how the brain produces consciousness.

Seth approaches consciousness the way I was taught, as a scientist, to approach all psychological phenomena: by discovering how to “explain, predict and control” them. To do so, he argues that consciousness is not a single entity but can be studied in terms of its “level,” i.e., measuring how conscious someone or something is; in terms of its content, i.e., what we are conscious of; and finally, in terms of the self, or, as he calls it, the experience of “being you,” which is one type of conscious content and, for many people, the most salient yet elusive phenomenon.

The measurement part of consciousness research, as described by Seth, can best be thought of as a preliminary or establishment phase of study in which the goal is to find a meaningful way to quantify how conscious one is. While it’s tempting to think of consciousness as either present or absent, our own experience shows us that this is not the case. We are all familiar with those moments when we are drowsy and barely cognizant of our environment, versus those in which we are highly alert. It turns out that the degree to which an electrical stimulus travels from the point of stimulation to different parts of the brain is a good index of how conscious we are. In higher states of consciousness, such as normal wakefulness and REM sleep, when we are dreaming, such a stimulus creates a complex pattern of activation that lasts for some time and involves many regions of the brain. In less conscious states, such as dreamless sleep or general anesthesia, the stimulus provokes a simple, short-lasting, and mainly localized response. That the brain is more active and its response more complex when we are more conscious may not seem surprising, but this method of measurement has produced startling findings: it has detected what appears to be conscious brain activity in cases of “locked-in syndrome” and has identified some people who appear to be in comas yet have active conscious activity going on.
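The measure Seth describes works, roughly, by scoring how compressible the brain’s evoked response is: widespread, varied activation compresses poorly and so scores high, while a brief, localized response scores low. A toy illustration of that underlying idea, using Lempel-Ziv complexity on a binarized activity string (this sketch is my own, not code from the book):

```python
def lempel_ziv_complexity(signal: str) -> int:
    """Count the phrases in an LZ76 parsing of a binary string.

    Rich, non-repetitive activity yields many phrases (high complexity);
    repetitive or silent activity yields few (low complexity).
    """
    i, phrases = 0, 0
    n = len(signal)
    while i < n:
        length = 1
        # extend the current phrase while it has already appeared earlier
        while i + length <= n and signal[i:i + length] in signal[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# a flat, repetitive "response" vs. a varied, widespread one
print(lempel_ziv_complexity("0" * 32))                # low
print(lempel_ziv_complexity("0110100110010110" * 2))  # higher
```

The contrast, not the absolute numbers, is the point: a stimulus that echoes through many brain regions in a non-repeating pattern produces a less compressible, higher-complexity record than one that dies out locally.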

Measurement is not the goal of understanding consciousness; it is a tool and a step along the way. The main point of Being You is to argue that what we are conscious of is not the same as those things in the world that produce our sensations of being conscious of something. In other words, we don’t experience things as they really are; our experience is a best guess at the qualities of the world, including the state of our own body. We don’t directly experience the world; we predict what it is and experience our predictions. Perception, then, is an inference, which we may update by comparing our guess to the data coming in via our sensory system. Our conscious experience is our brain’s best guess at what is producing our sensory input.
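In Bayesian terms, the formal framework behind this predictive-processing view (Seth keeps the book largely equation-free), the “best guess” is a precision-weighted blend of the prior prediction and the incoming sensory evidence. A minimal sketch, with illustrative numbers of my own choosing:

```python
def perceive(prior_mean, prior_var, sense_mean, sense_var):
    """Combine a prediction with sensory evidence, weighted by reliability.

    Low prior_var = a confident prediction; low sense_var = trustworthy data.
    Returns the posterior "experience" and its remaining uncertainty.
    """
    gain = prior_var / (prior_var + sense_var)  # how far to move toward the data
    post_mean = prior_mean + gain * (sense_mean - prior_mean)
    post_var = (1 - gain) * prior_var
    return post_mean, post_var

# equally reliable prediction and data: experience lands halfway
print(perceive(0.0, 1.0, 10.0, 1.0))   # (5.0, 0.5)

# confident prediction, noisy data: experience stays near the prediction
print(perceive(0.0, 0.1, 10.0, 10.0))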

When we think about conscious experience this way, it can explain a lot of things, particularly how our past experience, our preconceptions, and our current needs can influence what we see and hear in the world around us.

When Seth addresses the self, he focuses on other aspects of perception, particularly interoception, the information we gather from within our own bodies, and social perception, with an emphasis on our inferences about the minds of others, particularly what they are thinking about us. We experience a “volitional self,” the inference that we are the cause of some things, and a “narrative self,” the inference that we have an enduring personal identity. All of these are part of what it means to be you. To quote Seth, this “experience of being me, or being you, is a perception itself—or better, a collection of perceptions—a tightly woven bundle of neurally encoded predictions geared toward keeping your body alive.” And he tells us that “this, I believe is all we need to be, to be who we are.”

So everything we experience, from the world around us to our moods, the aches and pains within us, and the sense of being us, consists of inferences about the way the world and our bodies are. These inferences are best guesses using data from multiple sources, and they are not just about what the world is like. We also experience inferences about how we can act on the world and what affordances it offers us, and about how we may change our predictions by taking actions that alter our perspective. Our brains are not dispassionate observers trying to determine whether the world we infer to be there really is there. What Seth continually reminds us of is that this world of predictions in which we live was built up through evolution, and the experiences we have are not inferences about what reality is but about how we can use it to survive. Every experience we have is a best guess about something in the world or in ourselves, and the experience is produced by brain mechanisms that evolved to help us survive. Within our bodies, our experiences indicate deviations from mean levels of essential variables, such as body temperature, glucose level, or heart rate, that must be maintained if we are to survive, so that we can then do something about them. We experience hunger when we need to eat and thirst when we need to drink. We feel sad when our brain senses changes that accompany loss or lack of reinforcement. To quote Seth, “Self-perception is not about discovering what’s out there in the world or here, in the body. It’s about physiological control and regulation—it’s about staying alive.” We are more like the “beast machines” of Descartes’ depiction of nonhuman animals and less like the thinking soul that he saw as separate from the body that housed it. As in every other animal, our need to survive is the foundation of our conscious experience.
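Seth’s point that perception serves regulation rather than accuracy can be caricatured as a simple control loop: the brain “predicts” a set point for an essential variable and acts to cancel the deviation it senses. A toy sketch (my own analogy, with made-up numbers, not code from the book):

```python
def regulate(sensed, setpoint, gain=0.5):
    """One step of homeostatic control: act to shrink the predicted-vs-sensed error."""
    error = setpoint - sensed        # e.g., the body is colder than predicted
    return sensed + gain * error     # act (shiver, seek warmth) and update the state

temperature = 30.0                   # degrees C, far below the 37.0 set point
for _ in range(20):
    temperature = regulate(temperature, 37.0)
print(round(temperature, 3))         # converges toward 37.0
```

The loop never asks what temperature “really is” in any deeper sense; it only senses a deviation and acts to remove it, which is the sense in which Seth says self-perception is about control, not discovery.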

So, we create a conscious world that consists of our best guesses about the existence and condition of things and events inside and outside of us that have been important (evolutionarily) for maintaining our existence. This is Seth’s conception of the “beast machine” that we are. Would, or even could, an artificial intelligence be conscious? According to Seth, probably not. The reason is that an AI has no body, no evolutionary history to have created predictions, and no need to manage its own materials to keep itself alive. All the conditions that created consciousness in humans are missing in an AI. Do I agree with Seth? No, I don’t. An AI misses the ingredients that have contributed to the development of consciousness in humans, but that only means that it won’t develop a human-like consciousness. It won’t be a human psyche in a machine. But it can certainly have goals, and there is no reason I can think of that those goals couldn’t be used to shape its perceptions. Even the kinds of things it predicts it will see, hear, or feel from its environment can be built into it. If, in the nightmare scenario envisioned by Nick Bostrom, an AI’s goal is to make paperclips, it could be built to predict that it will encounter objects and materials and to predict the characteristics of the objects and materials that it can use to make paperclips. It would “see” the world as nothing but potential paperclip-making things or impediments to paperclip-making. Would what it saw be conscious? Well, what else would it be? But there’s no reason to think it would have a “self.” In fact, such an AI would probably end up turning itself into a paperclip, since it would have no sense of itself as being different from any other potentially paperclip-making material.

I think my question reveals a weakness in Being You. There are some philosophical issues that are skirted or ignored. Most are taken for granted with a nod to the philosophers who have discussed them, such as Daniel Dennett. The underlying assumption, which is never flatly stated, is that our inferences or predictions about the world are consciousness, or, in other words, that what we call consciousness is our experience of these inferences, from roses being red to our own existence. An inner experiencing “I” that sees our perceptual guesses or evaluates our predictions is not necessary and, in fact, is a dead-end road to confusion and an unsolvable “hard problem.” That our consciousness is of what is useful for our survival, as is true for the rest of our body and its functions (with some room for spandrels and vestigial elements), is an assumption derived from a faith in evolution more than from research findings. That our inferences are what we experience is, on the other hand, a conclusion that is supportable from much of the research described by Seth.

Has the “hard problem” been solved by Seth’s formulation (or other similar ones)? Not really, because there could never be a single answer to a problem involving something like consciousness, which is multifaceted and utilizes multiple components and characteristics of our brains and our sense organs. I am intrigued by the question of whether the information contained in our inferences resides in the pattern of electrical firing of our neurons or in some more organic cellular process (I vote for the pattern of electrical firing). Evaluating information as input to a decision process, to determine whether it fits a predicted model, requires a model of what is being predicted. Where do these models come from? Basic ones, such as the concepts of objects, causality, directional motion, and tissue damage and perhaps its avoidance, are so common, probably even across species, that they would seem to be inherited. How is that possible? There is a lot of room for research to find out more before we really know ourselves in the sense of how our bodies work. How consciousness works is not a separate question, even though it may continue to be a more elusive one. Anil Seth’s Being You makes this abundantly clear.

What if robots replaced the entire human race? Is that the next evolutionary step for intelligence? For an imaginative, exciting look at this idea, read Ezekiel’s Brain, Casey Dorman’s sci-fi adventure.
