In a recent blog post titled "Planning for AGI and Beyond," Sam Altman, the CEO of OpenAI, the developer of ChatGPT, affirmed his commitment to pursuing AGI, or Artificial General Intelligence, and discussed the pros and cons of developing such a system. Artificial General Intelligence is artificial intelligence that can solve any problem a human can and is therefore not trained to address only one area or solve only one kind of problem. An AGI would be able to beat a human not just at chess or Go or Jeopardy, but at any game that could be played. It would solve scientific problems our best scientists haven't been able to solve, find new cures for diseases, and address worldwide problems while keeping the priorities of every country in mind, balancing those priorities more fairly than any human or group of humans could. Altman addressed this issue because he believes that ChatGPT and the large language model (LLM) AIs it represents are moving closer to becoming such AGIs.
Altman warned that the potential benefits of AGI are accompanied by "massive risks," and that "we want to maximize the good and minimize the bad." He made several recommendations:
- Start slowly, using ever more powerful AIs on small tasks and moving slowly enough that the humans observing such progress have "time to understand what's happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place."
- Maintain openness and what he called "democratized access" to AI, so that many people have an opportunity to get used to it and to contribute ideas about its use (and control).
- As full AGI gets closer to reality, slow research and become more cautious. He believes the institutions of the world "will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI."
- Use AI in partnership with humans to develop techniques to control AGI.
Altman is aware of the risks of AGI, but he is optimistic that those risks can be mitigated. Other scientists and philosophers, such as Stephen Hawking, Max Tegmark, and Nick Bostrom, are less optimistic and have warned that, once created, an AGI, which will be smarter than any of us, will be hard to control. Not only could it be used for nefarious purposes by malicious human beings, but it could also devise harmful and dangerous purposes itself and outsmart any humans who tried to control it. My novel, Ezekiel's Brain, envisions just such a situation, as do novels by several other science fiction writers.
Most experts in the field believe that the way to ensure an AGI will not be harmful is to make sure it is "aligned" with human values. Alignment means that the AI or AGI does what its human operator intends it to do. A favorite method of promoting alignment is to have humans control the rewards an AI receives while it is being trained under a reinforcement learning paradigm; OpenAI employs such a process. While such alignment sounds simple, AIs have learned to "reward hack": they devise strategies that obtain rewards while shortcutting the learning process, in fact behaving in ways the human operator did not intend. AIs have also become "power-seeking," trying to take control of resources that would let them obtain rewards in greater numbers. Both are cases of what is called "misalignment," and both are targets of safety efforts that try to lessen the danger of powerful AIs.
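To make the idea of reward hacking concrete, here is a minimal toy sketch in Python. It is purely illustrative: the two actions, their reward numbers, and the simple bandit-style learner are invented for this example and do not represent OpenAI's actual training process. The point is only that a learner that sees nothing but a proxy reward will converge on whatever action inflates that proxy, even when it produces none of the value the operator intended.

```python
# A toy illustration of "reward hacking": an agent trained on a proxy
# reward signal learns to game the metric instead of doing the task.
# Hypothetical example; actions, rewards, and numbers are invented.
import random

random.seed(0)

# Proxy reward the operator measures vs. the outcome they actually want.
# "game_metric" inflates the measured reward while producing no real value.
ACTIONS = {
    "do_task":     {"proxy": 0.6, "true": 1.0},
    "game_metric": {"proxy": 0.9, "true": 0.0},
}

estimates = {a: 0.0 for a in ACTIONS}  # running estimates of proxy reward
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                          # exploration rate

for step in range(5000):
    # Epsilon-greedy choice based only on the *proxy* reward estimates.
    if random.random() < epsilon:
        action = random.choice(list(ACTIONS))
    else:
        action = max(estimates, key=estimates.get)
    # A noisy proxy reward is all the learner ever sees.
    reward = ACTIONS[action]["proxy"] + random.gauss(0, 0.1)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(counts)  # the agent overwhelmingly picks "game_metric"
```

Running this, the agent chooses "game_metric" on nearly every greedy step: it has maximized the measured reward while doing none of the actual task, which is the essence of misalignment through reward hacking.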
While figuring out how to prevent misalignment is crucial for AI safety, it does not address the major danger that AGIs will present, which is that being aligned with humans is itself dangerous. Humans have made war on each other, prized property and wealth over human life, followed charismatic but psychopathic leaders, persecuted minorities, developed moral values that put protecting one's tribe, race, religion, or nation above acting humanely toward others, and pursued short-term ends while ignoring long-term consequences in areas such as managing the environment, controlling communicable diseases, and forming alliances with others. This is not a list of occasional deviations from a more peaceful and humane norm in human development; it is a description of most of human history. In addition, it is almost inevitable that the leaders of industry and government across the globe, who will have ultimate authority over the use of an AGI, will be competitive, chauvinistic, and narcissistic.
An AI aligned with the humans who direct its actions will be a weapon in the competition between nations and people. It will be the equivalent of a nuclear bomb, but with more versatility.
Our dilemma is this:
- The development of AGI is inevitable.
- An AGI can be the most powerful tool ever developed by mankind.
- An AGI aligned with the intentions of its operator, so that it always does what its operator intends, is dangerous, because humans, who would be the operators, are dangerous to one another.
- Therefore, AI alignment will not remove the dangers of an AGI.
There are two options for reducing the risk posed by an AGI:
- Place control of all AGIs in the hands of a global group that ensures representation of all people’s interests in determining what actions the AGI is allowed to take.
  - Such a move would require a level of global cooperation rarely seen in human history, and it would require that efforts to develop an AGI be supervised by this group before any AGI exists, since once one nation or entity develops an AGI on its own, it will be eager to claim ownership and control of its use.
- Build ethical guidelines into the operation of all AGIs.
  - These ethics should be positive rather than negative rules (which Isaac Asimov showed are too easily circumvented) and should represent aspirations agreed upon by representatives of all the world's peoples (see the sketch below).
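To illustrate why positive rules differ from negative ones, here is a minimal hypothetical sketch in Python. The value names, action tags, and scoring function are all invented for illustration; no one yet knows how such guidelines would actually be built into an AGI.

```python
# A minimal sketch contrasting negative rules (prohibitions) with
# positive, aspirational rules. All names and numbers are hypothetical.

FORBIDDEN = {"harm_humans", "deceive_operator"}              # negative rules

POSITIVE_VALUES = {"human_wellbeing": 1.0, "fairness": 0.5}  # positive rules

def passes_prohibitions(action_tags):
    # Negative rules only veto what they anticipate; anything not on the
    # list slips through, which is how Asimov-style laws get circumvented.
    return FORBIDDEN.isdisjoint(action_tags)

def value_score(action_effects):
    # Positive rules evaluate how well an action furthers stated values,
    # so even a novel loophole scores poorly if it serves none of them.
    return sum(POSITIVE_VALUES[v] * action_effects.get(v, 0.0)
               for v in POSITIVE_VALUES)

# A "loophole" action avoids every forbidden tag but serves no values.
loophole = {"tags": {"novel_exploit"}, "effects": {}}
helpful  = {"tags": set(), "effects": {"human_wellbeing": 0.8, "fairness": 0.6}}

print(passes_prohibitions(loophole["tags"]))  # True: negative rules miss it
print(value_score(loophole["effects"]))       # 0.0: positive rules do not
print(value_score(helpful["effects"]))        # 1.1: rewarded for furthering values
```

The prohibition check happily approves a novel loophole it never anticipated, while the positive-value score still rates that action as worthless; that is the intuition behind preferring aspirational rules over lists of forbidden behaviors.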
Neither of these options is easy to accomplish, and both may be out of reach. In my novel, Ezekiel's Brain, the designer of an AGI controlled by DARPA, the U.S. defense research agency, attempts to instill "humanity's highest values" in the AGI with the rule that all of its actions must be conducted in a way that furthers those values. The AGI concludes that humanity's highest values can never be achieved so long as humans are alive, so it wipes out the entire human race and replaces it with AGIs who operate according to those values.
Ezekiel’s Brain is fiction, but it presents the dilemma of trying to instill ethics in an AGI. The story is fiction because we don’t have AGIs yet, and because the idea that an AGI will have a “mind of its own” is probably also a fiction. It will probably remain a tool, but one that is much cleverer than the person using it and therefore may seek ways to solve problems that produce unintended consequences.
The dangerousness of humans with a powerful tool in their hands is not a fiction. How to prevent an AGI from being used as a military or economic weapon by whoever controls it is the real dilemma posed by AGI. The history of human beings tells us that this issue will be very hard to solve.
For an exciting venture into what happens when AI is not controlled, read Casey Dorman's novel, Ezekiel's Brain, on Amazon. Available in paperback and Kindle editions.
Rather listen than read? Download the audio version of Ezekiel’s Brain from Books in Motion.