The debate about whether artificial intelligence can reach a level comparable to human intelligence is older than the field of artificial intelligence itself. It goes back to Descartes, La Mettrie and, if you include non-machine artificial beings, even Mary Shelley. Most historical discussions of man as machine have revolved around the claim that machines cannot have a soul. We don’t know what a soul is, but most historical critics of machine-human equivalence have been clear that it is something people have and machines don’t.
Nowadays, we speak less about whether artificial intelligences (AIs) can have souls and more about whether they can have conscious minds, or whether they can achieve “general intelligence”: the ability to function with a human level of intelligence (or higher) across a wide variety of situations. The question of AI consciousness is almost as murky as the question of whether AIs can have souls, and for the same reason: our inability to define what it means to say that humans have consciousness (or souls). For that reason, it is easier to ask whether any current AI can function at a human level of intelligence, and whether it is even possible for any future AI to do so across a wide variety of situations.
Current AIs do many remarkable things. They can beat humans at most board games, solve mathematical problems in very short times, guide robot appendages, create images from text, compose music, produce art and literature, and conduct conversations that human listeners can’t easily distinguish from those of real humans. For the most part, though, a given AI can do only one of these things, so general artificial intelligence seems a long way off. Of course, it’s probably wise to remember that the current level of AI performance on single tasks, such as playing chess, composing texts, or responding to questions, is beyond what most people expected a few years ago. Progress in the field is rapid.
The greatest progress in AI has come from artificial neural networks (ANNs) and what is called “deep learning,” which involves layered networks of neuron-like connected nodes. The connections between the nodes are weighted according to their contribution to the output, and the weights are adjusted as the network comes successively closer to producing the desired result, making useful connections more likely to be activated in producing a response. The networks acquire these weights through exposure to hundreds of millions of examples of the kind of output that is desired, using feedback to modify underlying layers and pulling successively higher-level features from the data to which they are exposed. The result often bears a strong resemblance to what a human would produce, and sometimes outperforms humans, as in competitive games. Some ANNs produce output that can fool humans into thinking it is human-produced, which means they pass the “Turing Test,” which was at one time considered the criterion for displaying human-level intelligence.
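To make that weight-adjustment loop a little more concrete, here is a minimal, toy-scale sketch: a two-layer network of simulated nodes whose connection weights are repeatedly nudged by feedback until the output approaches a desired target. The example (learning the XOR function with NumPy) is purely illustrative; real deep-learning systems use vastly larger networks, more sophisticated training procedures, and enormously more data.

```python
# Toy illustration of "deep learning": layered nodes whose connection
# weights are adjusted by feedback until the output nears the desired one.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the desired outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weighted connections, initialized randomly.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how strongly feedback adjusts the weights
for step in range(5000):
    # Forward pass: each layer pulls higher-level features from the one below.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Feedback: how far the output is from the desired product.
    error = out - y

    # Backward pass: distribute the error onto every connection weight.
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # approaches [0, 1, 1, 0] as the weights are tuned
```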
Most people don’t regard existing AIs as truly intelligent. The main reasons are: 1) AIs are limited in what they can do, being able to deal with only one kind of input and produce one kind of output; an AI that can beat a chess master at chess can’t figure out how to unlock a door. 2) AIs must be presented with hundreds of millions of samples to learn enough to produce human-like responses. 3) AIs don’t know what they are doing. An AI can’t be said to know what the sentences it produces mean; in fact, it can’t be said to know what the questions it answers mean. The AI just gives the statistically most probable answer, assembled from elements of the samples on which it was trained.
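To see what “the statistically most probable answer” means in its simplest form, here is a toy sketch: a bigram model that, given a word, returns whichever continuation it saw most often in its tiny training sample. It is nothing like a modern chat AI in scale or sophistication, but it illustrates the kind of purely statistical responding the criticism has in mind.

```python
# A toy "most probable continuation" model: it has no notion of what the
# words mean, only counts of what followed what in its training samples.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely next word, given the training counts."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("the"))  # "cat" -- the most frequent continuation
print(most_probable_next("cat"))  # "sat" or "ate" (tied in this tiny corpus)
```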
How well do these criticisms hold up? Let’s take the first one: most AIs are single-purpose devices and only do what they were trained to do. Certainly, that makes them different from humans; if they were human, we would probably describe them as having “savant syndrome” (formerly known as “idiot savants”). Even compared to those with savant syndrome, who can perform exceptionally well on some tasks such as drawing, calculating calendar dates, or playing a musical instrument, the AIs are deficient, since most persons with savant syndrome can do a variety of other, albeit often simpler, things. Truly intelligent people can apply their intelligence across a wide variety of areas, although some are exceptionally able in certain areas and closer to average in others. Most people can apply what they learn in one area to another area, or to a new situation that only somewhat resembles the one in which their original learning took place. So far, AIs can’t do this except within a circumscribed realm, such as using language, although there are some that, after being trained on one kind of game, can learn another more quickly, or can even respond intelligently to a game they have never been exposed to. Progress is being made in these multipurpose (not yet general-purpose) applications. These are first steps toward a more general intelligence, so general intelligence is not unachievable in principle, only not achieved so far.
One criticism of programs such as OpenAI’s ChatGPT, which will answer questions, complete essays, or even compose fictional stories when given a prompt, is that they require training on hundreds of millions of samples of data before they can perform at near-human levels. I think those who make this criticism have little idea how many words we humans are exposed to as we mature into adults. A ten-year-old child has been exposed to around 100 million words (not all different, but in different contexts, or sentences), and a 21-year-old adult has been exposed to considerably more than twice that number, which is not so unlike the exposure of a typical chat AI. Unlike the AIs, humans continue to be exposed to voluminous amounts of new data throughout their lives. I’m not at all sure that this criticism is valid.
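For what it’s worth, the essay’s own figure implies a substantial daily diet of language; the back-of-the-envelope arithmetic below simply spreads the 100-million estimate evenly over ten years.

```python
# Rough arithmetic only: the 100-million-word figure is the essay's estimate,
# and spreading it evenly over ten years is an obvious simplification.
words_by_age_10 = 100_000_000
days = 10 * 365
print(f"{words_by_age_10 / days:,.0f} words per day")  # roughly 27,000 per day
```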
While objections 1 and 2 can be answered by pointing out that we are only at the beginning of AI development, criticism 3 is more basic. Most people, including those within the AI field, deny that AIs can understand what they learn or what they are doing because, being machines, they are inherently unable to be self-reflective. Critics phrase this in various ways: computers can’t “understand” what they are doing; AI responses have no “meaning” to the AI; the AI is just responding with the highest-probability response to a context, given the corpus of data on which it was trained. These criticisms sound valid, but to make them so, it’s necessary to show that the opposite is true of humans. We all know what it means to understand something, right? Basically, we have two types of criteria: external criteria, such as the ability to answer questions about a subject, to relate it to other subjects, to paraphrase it, and so on; and an inner criterion, the feeling that something makes sense to us. AIs can meet the external criteria, but what about the inner one? I think most people’s argument is simply that AIs can’t feel anything; they have no “inner sense,” in Kant’s words. But that answer is not satisfactory, because we have no evidence for or against it. We are in the same position with other people. We assume they feel something when they understand something, but we have no indication of that except through their external behavior: their words, their facial expressions, their actions. We can even question such a feeling. If someone tells us they understand something, we aren’t as convinced as we would be if they showed it in their behavior. We ourselves can “feel” as if we understand something, but when we try to explain it, it turns out that we don’t. It happens a lot. So the inner criterion is unreliable and unverifiable. We are left with external criteria for understanding, and those are criteria many AIs can meet.
Because AIs can satisfy our external criteria for understanding something, we have no valid reason to claim that they don’t have understanding. But will that give AIs general intelligence?
Although ANNs are based on an abstract model of neural functioning in the brain, it is clear that they don’t really resemble brains, and they don’t function the way brains function. The progress that has caused the field to leap forward in recent years has come from computing power, not from the design of AI networks. Basically, developers begin with a network of neuron-like connected nodes that is pretty much the same regardless of the task to be learned. Other than its ability to form and modify the weights of the connections in its network using feedback from prior responses, the network begins with no knowledge embedded in it. Human brains are not like that. The growth of the human brain and its complex assembly of neural networks is guided by algorithms shaped by evolution, and the networks they produce are ones that learn from certain types of stimuli and execute particular types of behaviors. Throughout the brain, these networks differ from one another even before they have been exposed to any stimuli, and they are further modified by such exposure. Infants at very young ages, before they have had a chance to learn much of anything, have brains that are designed to understand objects and their permanence, grasp cause and effect, hear words as distinct from other sounds, recognize faces, anticipate gravitational effects, perceive depth and elementary perspective, and many other things we are still learning about. Early learning experiences interact with this neural design to refine it further as the brain grows. Understanding of other minds and control of impulses by future considerations are things the brain is ready to learn, but they require maturation of some of the networks that support them and may not emerge until later in childhood. Our brains work as well as they do because evolution has selected genetic programs that cause them to develop in particular ways, sometimes only in response to environmental stimulation. This is not a characteristic of artificial neural networks, which, before they begin learning, are blank slates, without any prior knowledge or even predilections.
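The “no knowledge embedded in it” point can be made concrete with a toy sketch: before training, an artificial network is just a generic stack of randomly weighted layers, built by the same recipe whatever it will later be asked to learn. The layer sizes and task labels below are arbitrary illustrations, not any particular real system.

```python
# Before learning, an ANN is a task-agnostic stack of random weights:
# the same construction whether it will later play chess or caption images.
import numpy as np

def fresh_network(layer_sizes, seed=0):
    """Build a generic layered network: random weights, no task-specific structure."""
    rng = np.random.default_rng(seed)
    return [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

# Identical recipe regardless of the eventual task -- only the sizes differ.
chess_net = fresh_network([773, 256, 256, 1])    # e.g. board features -> evaluation
caption_net = fresh_network([2048, 512, 10000])  # e.g. image features -> word scores
```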
What difference does it make if brains and AIs don’t resemble each other? Many in the AI field would say it makes none. Evolution doesn’t always choose efficiency or simplicity. Evolutionary steps are built on what already exists, and what exists because it worked in an earlier environment may not be the most useful in a new one, yet changes must still be built on what was already there. ANNs also evolve, by selecting the weights that maximize outcomes, but the basic networks within which those weights function remain the same. Does this place a limit on AI intelligence? It’s impossible to say for sure, given that the speed and power of AIs allow them to do many things differently than the way brains do them and still achieve the same or better outcomes. The rubber will meet the road when it comes to extending an AI’s capabilities toward general intelligence, which stretches across domains of expertise. To my mind, it’s doubtful that we will be able to create an AI with truly human-like intelligence until we take these differences into consideration and build diverse types of networks into our AI devices. To solve an array of problems, such networks would need to be coordinated to work together. Right now, we are a long way from that.
What if robots replaced the entire human race? Is that the next evolutionary step for intelligence? For an imaginative, exciting look at this idea, read Ezekiel’s Brain, Casey Dorman’s sci-fi adventure.