Google LLC has proposed the world’s first classification framework for artificial general intelligence (AGI), at a time when OpenAI is seeing internal conflict over balancing AI safety against service expansion.
AGI is a concept introduced in 1997 by Professor Mark Gubrud of the University of North Carolina, who predicted the emergence of military AI with self-replicating systems. Once considered a term confined to science fiction, it is now closer than ever to becoming a reality.
This is similar to the way autonomous vehicles, which were once merely an imaginary concept, gained momentum in their development after the establishment of systematic standards.
Autonomous vehicles are structured into six levels from 0 to 5 according to the SAE International standards.
On Tuesday, Google’s DeepMind research team posted a paper on the levels of AGI to the arXiv preprint server ahead of formal publication. DeepMind stated that the concept of AGI, which has evolved over time, is crucial in computing research, while acknowledging that it remains controversial at times.
The researchers, however, argued that AGI has shifted from a subject of philosophical debate to a practical concept.
Google DeepMind broadly classified AGI into six levels, from 0 to 5. Level 0 is “No AI”; Level 1 is “Emerging,” comparable to an unskilled adult; Level 2 is “Competent,” exceeding at least 50 percent of skilled adults; Level 3 is “Expert,” reaching the top 10 percent of skilled adults; Level 4 is “Virtuoso,” reaching the top 1 percent of skilled adults; and Level 5 is “Superhuman,” surpassing the abilities of all skilled adults.
They also distinguished between general AI, which spans the full range of tasks, and specialized AI, which handles only a single field.
Level 0 is represented by human-in-the-loop crowdsourcing web services such as Amazon Mechanical Turk, while Level 1 includes OpenAI’s ChatGPT, Google Bard, and Meta’s Llama 2.
While no general AI system has reached Level 2 or above, specialized AI has already hit Level 5. One example is AlphaFold, which predicts protein structures: determining a protein structure usually takes months to years, but AlphaFold analyzes one in two to three hours.
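The level thresholds described above can be sketched as a toy classifier. This is an illustrative sketch only: the function name and structure are our own, not from the DeepMind paper, and the percentile cutoffs simply follow the article’s description of each level.

```python
# Illustrative sketch: map a skill percentile (the fraction of skilled
# adults a system outperforms) to the AGI performance levels described
# in the article. Hypothetical helper, not from the DeepMind paper.

def agi_level(percentile):
    """Return the AGI performance level for a given skill percentile.

    percentile: fraction of skilled adults outperformed (0.0 to 1.0),
    or None for a system with no AI capability at all.
    """
    if percentile is None:
        return "Level 0: No AI"
    if percentile >= 1.0:
        return "Level 5: Superhuman"   # outperforms all skilled adults
    if percentile >= 0.99:
        return "Level 4: Virtuoso"     # top 1 percent of skilled adults
    if percentile >= 0.90:
        return "Level 3: Expert"       # top 10 percent of skilled adults
    if percentile >= 0.50:
        return "Level 2: Competent"    # above the median skilled adult
    return "Level 1: Emerging"         # comparable to an unskilled adult

print(agi_level(0.55))  # Level 2: Competent
print(agi_level(1.0))   # Level 5: Superhuman
```

A chatbot at roughly the median of skilled adults would sit at Level 2, while a system such as AlphaFold, which outperforms every human expert at its task, would classify as Level 5 in its specialized domain.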
Google DeepMind also listed the criteria such an AGI standard must address, including functionality, generality, performance, potential, metacognition, ecological validity, and directionality.
AGI should be defined by its capabilities, not by the process behind them, and it must have the ability to learn new tasks, the team added. Even an unfinished system should be judged on its potential, and it should be able to deliver the value that humans prioritize.
The concept and terminology of AGI have been around for a long time, but this is the first time they have been systematically organized.
There are growing calls in the industry to prepare safety guidelines ahead of AGI’s arrival.
AGI systems with capabilities beyond human imagination, such as Google’s AlphaZero or AlphaFold, could produce output that is difficult for humans to detect when misused for deepfakes, deception, and manipulation.
There is also a need to prepare for the possibility that AGI becomes uncontrollable once it surpasses human abilities.
An example is OpenAI’s Superalignment project, which aims to align AI goals with human values to ensure that AGI, when it does emerge, does not harm humans.
Traditional safety guidelines assume that tasks performed by AI systems can be evaluated and checked by humans with superior abilities. But the emergence of AGI that surpasses humans could make such safety checks impossible, industry insiders noted.
By Lee Sang-duk, Won Ho-sup, and Lee Eun-joo
[ⓒ Pulse by Maeil Business Newspaper & mk.co.kr, All rights reserved]