What’s Geoffrey Hinton Really Afraid Of?

Geoffrey Hinton. Photo by Steve Jurvetson, via Creative Commons.

Last week, Geoffrey Hinton, one of Google’s most senior AI researchers and sometimes referred to as the “Godfather of AI,” resigned and gave interviews to a number of media outlets about his fears concerning artificial intelligence. Hinton was clear that he was not speaking specifically about Google, and he didn’t resign because the company was doing something wrong or wouldn’t allow him to speak. He explained that, as long as he was an employee of the tech company, he couldn’t be sure he wouldn’t self-censor and hold back from saying exactly what he wanted to convey.

So, what was it that Hinton wanted to tell people? Virtually all current AI systems use neural networks, an architecture loosely modeled on the connections between neurons in the human brain, which Hinton, among others, pioneered several decades ago. Large language models, or LLMs, such as ChatGPT and Google’s BERT and Bard, are examples, but many more exist or are being developed. Several fears have been voiced about these systems:

  • They can generate “fake news.”
  • They can create “deepfake” images and content that looks real.
  • They can create content that appears to be human-generated (e.g., students’ term papers).
  • They can provide inaccurate information.
  • They can be used to polarize people by posting incendiary comments on social media or as chat responses.
  • They may take over people’s jobs and put people out of work.
Optimus, Tesla’s Robot Design

Hinton says he fears all of these things, but they’re not his greatest fear. What he is most afraid of is the development of “superintelligent” AIs: systems that can respond to multiple types of input and provide answers and solutions across a broad range of subjects and situations faster and better than any human, or even any group of humans. Such systems would be smarter than the humans who created them and able to outthink any humans who tried to control them.

No one has yet developed a superintelligent AI. They exist only in scientists’ imaginations and in science fiction, such as in my book, Ezekiel’s Brain. Hinton says they’re coming, and he’s afraid of them. He thinks we’re closer to such a development than most people realize. In Hinton’s mind, superintelligent AIs are dangerous for several reasons:

  • Being smarter than humans, they will be able to manipulate us.
  • Most goals an AI is given can be accomplished more easily if it controls more resources, and a superintelligent AI would know that and know how to acquire them.
  • Those most likely to develop a superintelligent AI are governments, defense departments, and megacorporations, and they will use it for their own ends.

In Ezekiel’s Brain, DARPA, the U.S. Defense Department’s research arm, develops a superintelligent AI. To ensure it’s not a danger to humans, it’s taught “humanity’s highest values” and instructed to always uphold them. It determines that the greatest threat to those values comes from humans, so it wipes out the human race and substitutes copies of itself to uphold the values in our place. Ezekiel’s Brain is a novel, but it demonstrates one of the greatest difficulties with superintelligent AIs, perhaps their Achilles’ heel as far as humans are concerned. AIs take their commands literally, and they don’t have the social instincts and inhibitions that evolution has built into us and that allow us to live together. It is very difficult to figure out what instructions will prevent an AI from harming humans.

AI scientists call this problem “AI alignment.” It’s the problem of getting an AI to carry out a task in the way the human who instructed it meant it to. Suppose you let a superintelligent AI control the recruitment process for new police officers, with the instruction to “choose police officers so that your selection minimizes the likelihood of White officers enacting racist tactics against Black citizens.” It solves the problem by hiring only non-White police officers. Geoffrey Hinton gives the example of asking a superintelligent AI to “get me to the airport as quickly as possible.” To do so, it must get you a taxi. But you might have to wait for a taxi, so it takes control of the taxi dispatch system and sends all the taxis to you. What about getting caught in traffic? The AI would know that you might be halted at stop lights, so it takes control of the traffic signal system and turns every light on your route green, causing massive traffic jams for everyone else. What if our defense department asks the AI to “ensure that no nuclear weapons can be launched from Russia or North Korea toward the U.S.”? It blows up all the nuclear weapons possessed by Russia and North Korea while they are still inside those countries.

A superintelligent AI needn’t be evil; it doesn’t need to be conscious. It doesn’t need to “want” anything in order to do terrible damage. It’s a machine, designed to follow orders and figure out the most efficient way to do something, which may not be by doing what its instructor intended. Geoffrey Hinton is afraid of this. He’s afraid that if an AI “decides” to carry out an activity, it can outsmart any human or group of humans who try to stop it. That’s what Geoffrey Hinton is afraid of.

Superintelligent AIs don’t exist yet. Our best defense against them is to recognize that they will exist in the future and to start thinking about how to control them right now. There are people working on this, real scientists and real philosophers, and some of our science fiction writers are using their imaginations to add their bits of wisdom to the process. We all need to think about it and worry about it. Controlling superintelligent AI so it doesn’t harm humans is a problem for all of us, and, like nearly everything else in the field of artificial intelligence, it requires informed, speculative thinking to come up with possible solutions.

Is Geoffrey Hinton right? Can an AI be superintelligent, and if so, should we fear it? Read Casey Dorman’s novel, Ezekiel’s Brain, on Amazon, available in paperback and Kindle editions.

Rather listen than read? Download the audio version of Ezekiel’s Brain from Audible.

Subscribe to Casey Dorman’s Newsletter. Click HERE

1 thought on “What’s Geoffrey Hinton Really Afraid Of?”

  1. There are individuals across all cultures, at all levels of “learning” & “influence”, that “don’t have the social instincts and inhibitions that evolution has built into us that allows us to live together.” Always have been.
    Every “tool” mankind has ever developed has been weaponized and AI is/will be no different.
    Sorry, Casey, I’m with Hinton….
