The Most Important AI Paper You’ll Read This Year

(Unless otherwise indicated, all quotations are from the paper listed as #1 in the reference section.)

In a recent paper with the long but unassuming title “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” nineteen computer scientists, neuroscientists, cognitive psychologists, and philosophers examined scientific theories of consciousness and assembled a list of “indicators”: properties or processes that, if present in an artificial intelligence (AI), might suggest that it could achieve, or already had achieved, consciousness. They then used that list to assess current AI models and systems to determine which, if any, possessed such properties or processes. Their hope was that future research would pursue AI designs that possess these indicators, assess future AIs against them, and spur further work in the science of consciousness that expands the list and deepens our understanding of the processes involved.

Assumptions

Consciousness is notoriously hard to define, and the authors were comfortable adopting a rather vague concept of “phenomenal consciousness” or “subjective consciousness.” Following Nagel (1974), in his famous paper “What is it like to be a bat?”2, they said, “A system is having a conscious experience when there is ‘something it is like’ for the system to be the subject of that experience.”

While we each have some idea of what it is like to be ourselves, and can imagine, by analogy, what it is like to be someone else, it is different, as Nagel pointed out, when the creature we’re talking about is very different from us. A bat is quite different, given that bats fly, hang upside down, and use echolocation, and an AI might be more different still. Yet if the AI is conscious, in the way we use that word to describe a mental state in humans, there must be something it is like to be it, and that “something” will be based on its subjective experience.

The authors reject behavioral criteria for consciousness because, as is well known by now, LLMs such as ChatGPT can mimic many of the behaviors we associate with consciousness, and there are no generally agreed-upon behavioral criteria anyway. They do, however, examine research on whether processes purported to be associated with consciousness can be shown to be absent when information is processed unconsciously, for example in priming effects produced by backward masking of a stimulus, in which subjects do not report sensing the stimulus although it affects their behavior. They also look to physiological and neuroimaging effects known to accompany reports of conscious experience. They use such evidence to gauge the support for the neurocognitive processes specified by different psychological or philosophical theories of consciousness. They call their approach “theory-heavy,” in the sense that they are asking whether an AI system meets “functional or architectural conditions drawn from scientific theories,” rather than whether it meets behavioral criteria for consciousness.

In addition to using processes or properties delineated by scientific theories of consciousness to ascertain whether an AI could be conscious, the authors make another assumption, which is at the heart of their endeavor. They call this the assumption of “computational functionalism.” This means that implementing computations of a certain type is necessary and sufficient for consciousness, so it is possible, in principle, for non-organic artificial systems to be conscious. If this were not true, there would be no point in pursuing their inquiry.

While their assumptions may be necessary in order to develop a set of indicators of consciousness in AIs, it is useful to keep them in mind, and some are on better footing than others. There are several reasonable theories of consciousness that limit it to organic brains. Any theory that claims there is something unique about biological neurons, chemical synapses, or the interactions of “brain substances” at the atomic or subatomic level precludes machines from being conscious. Even some of the theories these authors use to identify processes involved in consciousness, such as Anil Seth’s predictive processing theory,3 are limited by their proponents to organic brains, although the authors of this paper still use such processes to characterize what an inorganic brain might do.

A limitation not mentioned by the authors is that the scientific theories of consciousness they examine are all theories about how human brains produce human consciousness. Short of creating an emulation of a human brain, no AI is going to duplicate the brain’s roughly 86 billion neurons and 1,000 trillion synapses, its modular structure of regions and networks that process different kinds of information, its patterns of network connections that are pre-wired genetically rather than learned (or pre-wired but activated only by experience), or the brain’s connections to the rest of the body. Which of these factors influences the development of consciousness of the type experienced by humans is unknown, and they play little role in most of the mechanisms hypothesized to be necessary for consciousness.

The authors recognize some of these issues and limitations. They also point out that there may be degrees of consciousness and that consciousness may have components, not all of which are necessary for an entity to be conscious. In addition, only humans express themselves in words, and it is difficult to separate human thought from language. Language may not be a necessary component of consciousness; if it were, then among organic creatures consciousness would be limited to humans. But since none of the theories they consider gives language a central or unique role, they may be leaving out a factor that figures heavily in human consciousness as we often experience it.

The indicators

A total of 14 indicators are listed, and the text discusses each, as well as a number of potential indicators that were rejected. The discussion addresses the theories first and then develops indicators for each theory that shows sufficient evidence of its usefulness to merit inclusion. Some of these indicators pertain to algorithmic qualities, some to the processing of information within modules, some to attention and integration functions at a level above modules, and some to the behavior of the system as a whole. They are presented below in the order in which they appear in the paper, although I have collapsed the description of some indicators that together make up a single functional system as described by the theory. In each case I have indicated the theory from which the indicator was taken; the indicators themselves are listed under each theory below, and a toy code sketch after the list illustrates how a few of these properties might look in software.

  • From Recurrent Processing Theory:
    • Perceptual input modules that use recurrent processing (i.e., feedback or re-entrant connections) rather than operating in a purely feed-forward manner. Some of these modules generate integrated perceptual representations of organized, coherent scenes.
  • From Global Workspace Theory:
    • There must be multiple specialized modules capable of operating both sequentially and in parallel. These modules’ operations are unconscious, and they process “specific perceptual, motor, memory and evaluative information.”
    • There is a limited-capacity selective attention mechanism that constitutes a conscious workspace, which is capable of broadcasting information to all the modules and is sensitive to the state of the system, so that it can query the modules successively to perform complex tasks.
  • From Computational Higher-Order Theories:
    • Generative, top-down, or noisy perception modules, together with monitoring mechanisms that can discriminate between different sources of activity in the system (e.g., self-generated imaginings vs. information from outside the system), and a metacognitive monitoring mechanism that can label representations as “real” or not and can “identify which perceptual states are accurate enough to be relied on” in planning actions.
    • The system must have the quality of agency, which is “guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring.” “[T]his implies a system for reasoning and action selection with a holistic character, in which any belief can in principle be called on in examining any other or in reasoning about what to do.”
    • Phenomenal experience of subjective qualities may require “sparse and smooth coding in perceptual systems—that is, on qualities being represented by relatively few neurons, and represented according to a continuous coding scheme rather than one which divides stimuli into absolute categories,” allowing discriminations along a continuum of similarity.
  • From Attention Schema Theory:
    • An attention mechanism that represents the current state of attention and predicts how it will be affected by changes in its focus or in the environment.
  • From Predictive Processing:
    • A predictive model that determines coding within modules, so that the module works to reduce prediction errors based on feedback.
  • From work on Agency and Embodiment:
    • Agency, which requires an ability to learn from feedback and select outputs (actions) to pursue goals, including being able to respond flexibly to competing goals.
    • Embodiment, i.e., modeling output-input contingencies (how the system’s actions affect its subsequent perceptions) and using this model in perception and in controlling effects on the environment.
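To make the flavor of these indicators concrete, here is a toy sketch, in Python, of how a few of them might look as software properties: recurrent (re-entrant) processing within a module, prediction-error-driven updating, and a limited-capacity workspace that broadcasts to specialized modules. Everything in it (the class names RecurrentModule and GlobalWorkspace, the update rules, the coefficients) is my own illustrative invention, not the paper’s specification or a description of any real AI system, and it obviously says nothing about whether such a program would be conscious.

```python
# Toy, purely illustrative sketch of three of the indicator ideas discussed above:
# recurrent processing inside a module, prediction-error-driven updating, and a
# limited-capacity global workspace that broadcasts to specialized modules.
# All names and numbers are hypothetical; nothing here comes from the paper.

import random


class RecurrentModule:
    """A specialized module whose representation is refined by re-entrant feedback
    (Recurrent Processing Theory) and updated to reduce prediction error
    (Predictive Processing)."""

    def __init__(self, name):
        self.name = name
        self.state = 0.0        # the module's current representation (one number, for illustration)
        self.prediction = 0.0   # what the module expects its next input to be

    def process(self, stimulus, steps=3):
        # Recurrent refinement: the representation is fed back and re-processed
        # over several steps rather than computed in a single feed-forward pass.
        for _ in range(steps):
            error = stimulus - self.prediction           # prediction error
            self.state += 0.5 * error                    # re-entrant update of the representation
            self.prediction += 0.3 * error               # adjust expectations to reduce future error
        return self.state


class GlobalWorkspace:
    """A limited-capacity workspace: it admits one module's output per cycle
    (a crude attention bottleneck) and broadcasts it back to all modules
    (Global Workspace Theory)."""

    def __init__(self, modules):
        self.modules = modules

    def cycle(self, inputs):
        # Each specialized module processes its own input (conceptually in parallel).
        outputs = {m.name: m.process(inputs[m.name]) for m in self.modules}

        # Selective attention: only the most "salient" (here, largest) output
        # enters the limited-capacity workspace.
        winner = max(outputs, key=lambda name: abs(outputs[name]))
        broadcast = outputs[winner]

        # Broadcast: every module receives the workspace content and can use it
        # to bias its next round of processing.
        for m in self.modules:
            m.prediction += 0.1 * broadcast
        return winner, broadcast


if __name__ == "__main__":
    modules = [RecurrentModule("vision"), RecurrentModule("audition"), RecurrentModule("memory")]
    workspace = GlobalWorkspace(modules)
    for t in range(5):
        stimuli = {m.name: random.uniform(-1, 1) for m in modules}
        winner, content = workspace.cycle(stimuli)
        print(f"cycle {t}: workspace broadcasts {winner} content {content:+.3f}")
```

The point of the sketch is only that these indicators are functional and architectural properties, the kind of thing one could in principle check for in a system’s design, which is exactly how the authors use them when assessing current AI systems.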

The 14 indicators, in the authors’ words, “jointly amount to a rubric… for assessing the likelihood of consciousness in particular AI systems.” None of them is claimed to be necessary, but “systems that have more of these features are better candidates for consciousness.” So far as I can see, none of them is incompatible with the others. If the rubric is deficient, it may be so by omitting qualities that should be included (e.g., time-binding or narrative structure), or by being overinclusive and listing indicators that are not in fact related to consciousness.
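As a purely mechanical illustration of what a “rubric” of this kind amounts to, one could imagine scoring a system by counting which indicator properties it appears to satisfy. The indicator labels below loosely paraphrase my collapsed list above, and the example assessment is invented; neither comes from the paper, which does not reduce its rubric to a simple count.

```python
# Hypothetical illustration of rubric-style scoring: the more indicator
# properties a system is judged to satisfy, the "better candidate" it is,
# in the authors' framing. Labels and the example assessment are placeholders.

INDICATORS = [
    "algorithmic recurrence",
    "integrated perceptual representations",
    "specialized parallel modules",
    "limited-capacity workspace with global broadcast",
    "metacognitive monitoring of perceptual reliability",
    "belief-formation and action selection (agency)",
    "attention schema (model of its own attention)",
    "predictive coding of inputs",
    "embodiment (model of action-perception contingencies)",
]

def rubric_score(assessment: dict) -> tuple:
    """Count how many indicator properties a system is judged to satisfy."""
    satisfied = sum(bool(assessment.get(name)) for name in INDICATORS)
    return satisfied, len(INDICATORS)

# A made-up assessment of a made-up system:
example = {"specialized parallel modules": True, "predictive coding of inputs": True}
print(rubric_score(example))  # (2, 9): a weak candidate on this toy rubric
```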

The article goes on to examine various current AI systems in terms of the extent to which they operate in ways similar to the indicators listed above. I won’t go into those assessments, but their conclusion was, “There are some properties in the list which are already clearly met by existing AI systems,” and, “In most cases, standard machine learning methods could be used to build systems that possess individual properties from this list, although experimentation would be needed to learn how to build and train functional systems which combine multiple properties.”

Finally, they conclude that “there is a strong case that most or all of the conditions for consciousness suggested by current computational theories can be met by existing tech,” and “If it [is] possible at all to build conscious AI systems without radically new hardware, it may well be possible now.” And ominously, they warn, “We may create conscious AI systems long before we recognize we have done so.”

The article discusses implications of AI consciousness and brings up some interesting possibilities and dilemmas. In my opinion, this article is not only well worth reading, but also necessary reading for those who work in this field or think seriously about the issue of consciousness in artificial intelligence.

References

  1. Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., … & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint arXiv:2308.08708.
  2. Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83, 435–450.
  3. Seth, A. (2021). Being You: A New Science of Consciousness. Penguin.
