Can Artificial Intelligence Become Conscious?

Posted in Artificial Intelligence, Casey Dorman's Writer's Blog/Fan Page, Science Fiction, Uncategorized

In my novel, Ezekiel’s Brain, as well as in many other sci-fi AI stories (Pandora’s Brain, Robopocalypse, 2001: A Space Odyssey, Isaac Asimov’s robot series), artificial intelligence is conscious, in fact self-conscious. Consciousness, perhaps even self-consciousness, is required to make the AI a character in the story rather than just a machine that either helps or threatens humanity.

For many years, AI consciousness was a topic for philosophers, for computer scientists when they wanted to speculate on the future of AI, and for novelists. Today, the conversation has shifted from speculation to answering concrete questions: is a system such as OpenAI’s GPT-3 (now exclusively licensed to Microsoft) conscious, or could it become conscious? If not, why not? And how close is it already? This is exciting and, for many people, threatening and even frightening.

One of the difficulties in answering questions about AI consciousness is that there is no universal agreement on what consciousness is. We all know we are conscious; we are aware of what we are doing and thinking. But what does that mean? The conversation quickly becomes murky and circular. Our vocabulary for describing behavior contains words such as thinking, angry, happy, aware, lying, and conscious, and we know how to use these words with humans, sometimes even with other animals. We watch their behavior and apply the words when that behavior satisfies the criteria our language community has agreed on for their use. Most of these words imply internal states. We believe the behavior expresses the internal states. If we are in doubt, we can ask a person. Usually, they can tell us what they were thinking or feeling, and we take that as evidence that we were justified in using the word. But what happens when a machine produces similar behavior? If Siri says, “I’m sorry you feel that way,” what does that mean? What state of a machine corresponds to feeling sorry?

No one, except perhaps Joaquin Phoenix, believes that a language processor, such as Siri, feels sorry, even if it says it does. Machines can’t feel, much less feel sorry. Why not? One answer is that feelings require organic components, which include hormones, chemical neurotransmitters, and special neural receptors that translate patterns of chemical activity across neural circuits into feeling. So nonorganic entities can’t feel. But, except for the hormones, the same organic components underlie seeing and hearing. Can AIs see or hear? You’ll probably say they can, but are they aware of what they see and hear? That’s where we started this conversation. Murkiness and circularity are on the horizon.

Computers and brains both act on electrical signals, so any signal or pattern of signals that the brain is capable of producing can, in theory, be reproduced in a nonorganic electrical circuit (this is partly a statement of faith). We don’t know what kind of electrical signal or pattern produces consciousness, but presumably it can be duplicated in a machine’s electrical circuitry. I say presumably, because there are theorists, such as John Searle, who believe that there is something peculiar about organic neurons and chemical transmission along neural circuits that produces consciousness—something not present in nonorganic electrical circuits. This is also a statement of faith.

Those who believe that machines can develop consciousness usually favor one of two broad hypotheses: 1) Consciousness is based on a special type of programming that, in humans, evolved as neural architecture and would need to be supplied by a programmer if a machine were to be conscious. 2) Consciousness arises as a function of certain types of neural network activities that can develop as learning is reinforced by rewards. In either case, consciousness is neither confined to organic structures nor is it a non-functional epiphenomenon (like the subtitles in a film where the characters already speak your language).

In my novels I, Carlos and Ezekiel’s Brain, I take position #1 and have computer scientists write programs that produce consciousness by framing an ongoing account of an AI’s activity in an agent/action or subject/verb format. That may be the case, but frankly I doubt it, because it implies that consciousness is based on language, since “an ongoing account” is most easily imagined as an internal monologue. While I do think it is possible that we inherit a genetically determined neural structure, shaped by evolution, that produces consciousness, I don’t think it is an internal monologue, because that would mean nonverbal animals and babies are not conscious, which they surely are.

I am more inclined to see consciousness, or awareness, as simply one aspect of some kinds of neural activity. That aspect might need to be built into a machine the way it is built into a human, but recent theorizing about the similarity between natural selection in evolution and the selection of behaviors through reinforced learning leads me to believe that some learned behavioral activities may conceivably produce consciousness without its being “built in.” This could mean that unsupervised learning through encounters with a complex environment could produce self-organizing activities that bring about consciousness in a machine without any human deliberately creating it. The main requirement is that the presence of consciousness increase the likelihood that the machine will achieve its goals or attain its rewards. We would then be faced with an AI that thinks, plans, and knows what it is doing. It will be a whole new world.
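For readers curious what “selection of behaviors through reinforced learning” looks like in its simplest computational form, here is a minimal, illustrative sketch: tabular Q-learning on a toy five-state corridor. The environment, names, and numbers are my own invention, not anything from the novels or from DeepMind’s work; the point is only that reward alone, over repeated episodes, selects the goal-reaching behavior without anyone programming it in.

```python
import random

# Illustrative sketch only: a five-state corridor. The agent starts at state 0
# and receives a reward only upon reaching state 4. Behaviors that lead to
# reward are "selected" over repeated episodes of experience.

N_STATES = 5                # states 0..4; state 4 is the rewarded goal
ACTIONS = [-1, +1]          # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index]: learned estimate of each action's long-run value
q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action_idx):
    """Apply an action; reward 1.0 only for reaching the rightmost state."""
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):        # episodes of experience
    s = 0
    for _ in range(100):    # cap the episode length
        # epsilon-greedy: usually exploit the best-known action, sometimes
        # explore (and break ties randomly early on, when all values are 0)
        if random.random() < EPSILON or q[s][0] == q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best follow-up action
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2
        if s == N_STATES - 1:
            break

# Reward alone has "selected" a behavior: every non-goal state prefers "right".
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Nothing here is conscious, of course; the sketch only shows the selection mechanism the paragraph above appeals to, in which rewarded behavior comes to dominate without being explicitly built in.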

I’ve said a lot in order to explain why it is worthwhile to write and to read science fiction about artificial intelligence. When we imagine conscious AIs, we may be talking about a whole new species of being, one that may live alongside humans or supplant them. That is both an interesting and a frightening idea. If you want to read more about what this could look like, I explore it in depth in my novel, Ezekiel’s Brain.

If you want to read more technical speculation on how reinforcement learning can produce both what evolves and what is learned, read DeepMind’s paper, “Reward is Enough,” at: https://deepmind.com/research/publications/Reward-is-Enough

Ezekiel’s Brain may be purchased through Amazon by clicking HERE.

Want to subscribe to Casey Dorman’s fan newsletter? Click HERE.
