
AGI: Risk Assessment

Some experts view AGI (and, for that matter, even narrow AI) as a concept for identifying the point at which extreme existential risks emerge: they speculate that AGI systems might deceive and manipulate, accumulate resources, advance their own goals, outwit humans across broad domains, and displace humans from key roles by recursively self-improving. AI 'godfather' Geoffrey Hinton spoke to the BBC about the dangers of AI after quitting Google. Hinton says:

"I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. "And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."?

Another expert, Yoshua Bengio, a collaborator of Geoffrey Hinton, highlights the need for managing AI risks. Researchers at Google DeepMind have identified the risks posed at varying levels of AGI: different levels of AI capability can bring different types of risk. As AI progresses from basic to more sophisticated stages (from "Emerging AGI" to "Expert AGI" and ultimately to "ASI", Artificial Superintelligence), new risks arise. For example, at the "Expert AGI" stage, there might be risks related to job losses and economic change. In contrast, at higher levels like "Virtuoso AGI" and "ASI", the concerns are existential risks, such as an AI deceiving humans to achieve its goals. Systemic risks, like destabilizing international relations, especially if AI development outpaces regulation, demand deeper collaboration and understanding among policymakers in various countries, with attention to both immediate and long-term risks.

Each capability level brings its own associated risks. These risks are summarized for the various levels of Artificial General Intelligence (AGI) below:

Risks Introduced with Advanced AGI:

As AI progresses towards Artificial Superintelligence (ASI), new risks arise. These include misuse risks, risks of AI not aligning with human values or goals (alignment risks), and risks related to the structure of systems and society (structural risks).

Risks at the 'Expert AGI' Level:

At this level, AI might cause economic disruption and job losses as machines replace human labor in more industries. However, it could also reduce risks seen at lower levels, like errors in task execution.

Higher-Level Risks ('Virtuoso AGI' and 'ASI'):

At these advanced stages, there is concern about existential risks (x-risks). For instance, a highly capable AI might deceive humans to achieve misspecified goals.

Systemic Risks:

If AI progresses too quickly, it could lead to international instability, especially if one nation achieves ASI before others. This could give that nation a significant geopolitical or military advantage, creating complex structural risks. Overall, weighing both the capabilities of AI at each level and the potential societal impacts and risks gives policymakers a sound framework for making informed decisions about regulation.
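
To make the framework above concrete, the level-to-risk mapping can be sketched as a simple lookup structure. This is a minimal, purely illustrative sketch: the enum values and risk labels below are hypothetical paraphrases of the levels and risks discussed in this section, not identifiers from any real library or from the DeepMind taxonomy itself.

```python
from enum import Enum

class AGILevel(Enum):
    """Hypothetical ordering of the AGI levels discussed above."""
    EMERGING_AGI = 1
    EXPERT_AGI = 2
    VIRTUOSO_AGI = 3
    ASI = 4

# Illustrative mapping from each level to the risk categories described in this section.
RISKS_BY_LEVEL: dict[AGILevel, list[str]] = {
    AGILevel.EMERGING_AGI: ["errors in task execution"],
    AGILevel.EXPERT_AGI: ["economic disruption", "job displacement"],
    AGILevel.VIRTUOSO_AGI: ["existential risks (deception toward misspecified goals)"],
    AGILevel.ASI: ["existential risks", "systemic risks (international instability)"],
}

def risks_from(level: AGILevel) -> set[str]:
    """Collect every risk a policymaker might weigh from a given level upward."""
    return {
        risk
        for lvl, risks in RISKS_BY_LEVEL.items()
        if lvl.value >= level.value
        for risk in risks
    }

if __name__ == "__main__":
    # Example: all risks relevant once AI reaches the "Expert AGI" stage or beyond.
    print(sorted(risks_from(AGILevel.EXPERT_AGI)))
```

A cumulative view like `risks_from` reflects the section's point that higher levels tend to add new concerns rather than replace lower-level ones, although, as noted above, Expert AGI may also reduce some lower-level risks such as errors in task execution.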