In Cixin Liu’s “The Dark Forest,” the second book in the Chinese sci-fi writer’s “Three-Body Problem” series, an alien says it is puzzled that humans do not regard “think” and “say” as synonyms. The aliens’ thoughts are immediately discernible to each other, so they have no need to “say” anything; for them, speaking and thinking are the same. Humans are different: because we cannot read each other’s thoughts, we may choose not to speak about what we are thinking. So, for humans, speaking and thinking are different, but what about words and thoughts? Are they the same? Do we think in words?
Certainly, not all of our thoughts are in words. We think in images, in spatial frames of reference (“up” refers to a direction), we remember tastes and smells, we have reveries about plans and actions, and we can even think in mathematical operations. Thought can take many forms, but do we ever think in words?
Suppose that I’m sitting next to an AI that uses a type of Large Language Model (LLM) similar to ChatGPT, and we are both presented with a string of text. The text says, “The groin vault was an architectural innovation of the Romans.” We are both asked to “say the same thing in another way,” and we both say, “The Romans introduced an architectural technique known as the groin vault,” rewording the sentence so that the Romans, rather than the groin vault, become its subject. We need not know that this is what we are doing. Our performance can be based on our implicit understanding of the structure of language, which is something we both gained by listening to or reading countless sentences (if Chomsky is right, my brain may be primed to implicitly learn language structure, particularly syntax).
Suppose I don’t know what a groin vault is, and the AI has never been exposed to the phrase groin vault either. We are both provided with a definition of a groin vault, and the material includes a picture of one as well. I read the sentence again, but this time I know what a groin vault is. The words I say are exactly the same as before, but there is a difference in the thoughts I have: I now understand the meaning of groin vault when before I didn’t. My understanding is based on my new associations to the phrase groin vault, which are words and pictures. In connectionist or neural-network terms, my new associations are activated when I see or hear the phrase groin vault. What about the AI next to me? Its new associations are also activated when it sees the phrase groin vault. Depending on the AI, it could also have associations to the digitally encoded images, although ChatGPT doesn’t have this capability. Either one of us could now define a groin vault or use the phrase appropriately in a sentence.
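To make the idea of “new associations being activated” concrete, here is a minimal sketch in Python. It is purely illustrative, not a description of how ChatGPT or any real neural network is built; the class name, the example associations, and their weights are all invented for the illustration. The point is only that learning a definition can be modeled as storing weighted links, which are then retrieved whenever the phrase is encountered again.

```python
from collections import defaultdict

class AssociativeMemory:
    """A toy store of weighted associations between concepts (hypothetical)."""

    def __init__(self):
        # Each concept maps to other concepts with an association strength.
        self.links = defaultdict(dict)

    def learn(self, concept, associations):
        """Store new associations for a concept, e.g., from reading a definition."""
        for other, weight in associations.items():
            self.links[concept][other] = weight
            self.links[other][concept] = weight  # treat associations as bidirectional

    def activate(self, concept):
        """Return the associations that 'light up' when the concept is encountered."""
        return dict(sorted(self.links[concept].items(),
                           key=lambda kv: kv[1], reverse=True))


memory = AssociativeMemory()

# Before reading the definition, "groin vault" activates nothing.
print(memory.activate("groin vault"))   # {}

# Reading a definition (and seeing a picture) adds associations.
memory.learn("groin vault", {
    "barrel vault": 0.9,
    "intersection of two vaults": 0.8,
    "Roman architecture": 0.7,
    "image: ribbed stone ceiling": 0.5,
})

# Afterwards, the same phrase activates the newly stored associations.
print(memory.activate("groin vault"))
```

In a real connectionist model the “associations” are distributed across learned numerical weights rather than stored as labeled links, but the before-and-after contrast is the same: exposure changes what the phrase activates.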
If I had described this process only in reference to myself and never mentioned the AI next to me, most of you would say that I was thinking. When it comes to the AI, however, what it did can be described mechanistically, and it seems as if it’s an automatic process that is just part of the way the AI works. But when I reworded the sentence, I didn’t consciously follow a prescription based upon a rule I had learned (although I could have); I did it automatically, relying upon my implicit knowledge of the rules of syntax. I also don’t have any sense of how I learned the meaning of groin vault, except that by being exposed to a definition and pictures, I now know what the phrase means, and I didn’t before. In other words, just like the AI, my performance was based on automatic processes that are just part of the way my brain works.
If we are going to call what I did thinking, then we should call what the AI did thinking too. But suppose my analysis is off-base and the AI and I don’t use the same processes to produce our responses (which is no doubt true in many, if not most, of the operations the AI carries out): does that mean that I am thinking and it is not? It’s anthropocentric to allow only “thinking the way humans think” to qualify as thinking. There are probably several animals that demonstrate intelligent behavior but would be disqualified from thinking if we use human thought as our definition, octopuses being a prime example, since much of their nervous system is distributed throughout their arms rather than centralized in a single brain. As far as I’m concerned, any behavior that demonstrates adaptive learning is thinking, and what the AI did in this instance, like the behavior of many other entities, qualifies.
Is the AI intelligent? If I define intelligence as the ability to acquire knowledge and use it to solve problems, then the AI is intelligent, although its intelligence is limited to verbal problems, such as answering questions and defining words, and even in this area it has many limitations. But possessing intelligence doesn’t mean something is “smart” by human standards (a mouse is intelligent but not very smart).
An AI’s output may reflect some of the same processes, or at least similar ones, that go into human production of speech and writing. Psycholinguistic models of speech production, particularly those based on activation theories, include probabilistic associations between words and phrases as factors affecting human speech output. But our language system doesn’t work in isolation. Within our brains, it is integrated with other cognitive systems that process mathematics, images, music, social situations, and so on. In such integration, language may affect the functioning of the other cognitive systems, or it may only use their output as information for conversing or reasoning. Large language model AIs function entirely within the domain of language; they don’t contain any other cognitive processes. Does that mean they produce “language without thought”? Based on my earlier argument, I would prefer to characterize them as having limited thought. In some respects, they generate responses using some of the same processes that humans use, or at least processes it is plausible that humans use, but they are extremely limited in the breadth of their thinking skills.
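As an illustration of what a “probabilistic association between words” can look like, here is a minimal sketch: a toy bigram model that counts which word follows which in a tiny, invented corpus and samples continuations in proportion to those counts. This is not how ChatGPT or any modern LLM is implemented (those learn far richer associations across a neural network), and the corpus and function names are made up for the example.

```python
from collections import defaultdict, Counter
import random

# A tiny, invented corpus; real models are trained on vastly more text.
corpus = (
    "the romans introduced the groin vault . "
    "the groin vault was an architectural innovation . "
    "the romans were builders ."
).split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def sample_next(word):
    """Sample a continuation in proportion to how often each word pair occurred."""
    words, counts = zip(*following[word].items())
    return random.choices(words, weights=counts, k=1)[0]

# "groin" is almost always followed by "vault" in this toy corpus,
# so the association is effectively deterministic here.
print(sample_next("groin"))   # -> "vault"

# "the" has several plausible continuations, chosen probabilistically.
print(sample_next("the"))     # -> "romans" or "groin"
```

The sketch only shows that “association” can be cashed out as the probability of one word following another; the associations an LLM learns are vastly more complex, but they are probabilistic in the same spirit.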
But we know what is going on in our minds when we think, and an AI doesn’t. In one sense that is right, and in another it is an illusion. We are aware of “thoughts,” but we are usually not aware of how they are produced in our minds, how the information they are about is stored, or how we choose which word will follow the last word we say when we tell someone about them. Most of the machinery of our minds is opaque. It operates behind the scenes, and we are only aware of its products. That’s one reason why we feel that we sometimes think in words, although words are the end products of a great deal of cognitive activity. It is true that we can consciously manipulate words and other thoughts, such as images and numbers, to solve problems, so there is a level of thinking that we are mostly aware of, although how we retrieve the thoughts or remember the procedure is usually not something of which we are aware. Can an AI be aware of what it is thinking? I don’t know, and I am reasonably sure that current AIs are not aware, but I’m pretty sure that awareness of our thoughts (which is not the same as awareness of our sensations or our environment) is something that evolved because it was useful to convey them to others. Because of that, it hinges on having a communication system and mostly relies on words as the means of conveyance. For those reasons, it must be supported by cognitive processes, and I’m confident that we will be able to duplicate those processes in an AI in the future.
Let me summarize:
- It is likely that humans and existing AIs use similar processes at least part of the time in generating syntactically correct language and giving words meaning.
- The processes humans use qualify as thinking and so do the processes the AIs use.
- AI thinking is much more limited in scope than human thinking, and usually doesn’t extend beyond basic processes in a single domain, such as language or image recognition.
- Human intelligence includes integration of complex cognitive processes across multiple domains.
- Humans are often not aware of the cognitive processes that produce their responses, and those processes are mechanistic ones that, in most, if not all, cases could someday be carried out by a computer.
- Humans have an awareness of at least the output of their cognitive processes and some ability to consciously direct their use and application.
- Today’s AIs probably are not conscious of their cognitive processes, but consciousness of one’s thoughts, the ability to consciously direct their use and application, and the ability to convey them to others are cognitive processes that a computer should be able to model and carry out in the future.
*When we talk about an AI such as ChatGPT, we talk about it as if it is an entity, although, in reality, it is a set of processes. It has no identity as a single unit, and this may limit the applicability of phrases such as “It doesn’t know what it is talking about,” since it is not clear what “it” refers to. If we say “it decides” or “it knows,” we are projecting more unity onto the processes than may be warranted. The fact that ChatGPT responds as if it is a single being masks the fact that it is a collection of processes producing language that mirrors the human model of a unified entity, even though its underlying architecture does not.
Can an AI be intelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain, on Amazon. Available in paperback and Kindle editions.
Rather listen than read? Download the audio version of Ezekiel’s Brain from Books in Motion.