The paper “Levels of AGI: Operationalizing Progress on the Path to AGI” by Meredith Ringel Morris and colleagues at Google DeepMind, posted to arXiv on November 4, 2023, proposes a shared framework for classifying the capabilities of AI systems on the path to Artificial General Intelligence (AGI). The aim is to give researchers, developers, and policymakers a standardized way to discuss and evaluate progress toward AGI, much like the levels used to classify the autonomy of self-driving cars.

What is AGI? AGI refers to a hypothetical AI system that can match or exceed human performance across a wide range of tasks, rather than excelling at just one. It’s a step beyond the narrow, specialized AI that powers most current technology, which is designed to perform specific tasks.

The Six Principles of AGI The authors propose six principles that any useful definition of AGI should satisfy:

  1. Capabilities, Not Processes: The focus should be on what a system can do, not on how it does it. This means looking at demonstrated results rather than requiring human-like underlying mechanisms such as consciousness or understanding.
  2. Generality and Performance: AGI should be broadly capable (generality) and perform tasks well (performance). The framework treats these as separate dimensions and considers both.
  3. Cognitive and Metacognitive Tasks: The definition centers on cognitive tasks (like problem-solving) and metacognitive tasks (like learning new skills or knowing when to ask for help), rather than requiring physical, robotic abilities.
  4. Potential, Not Deployment: A system’s level should reflect the capability it has demonstrated, not whether it has actually been deployed in the real world.
  5. Ecological Validity: The tasks used to measure progress should be ones that people genuinely value in the real world, not artificial proxies.
  6. The Path, Not a Single Endpoint: Progress toward AGI should be seen as a series of stages or levels, not a single end goal.

Understanding the Framework The framework introduces levels of AGI based on two dimensions: depth (performance) and breadth (generality) of capabilities. Along the performance axis, the paper names levels running from Emerging (comparable to an unskilled human) through Competent, Expert, and Virtuoso up to Superhuman (outperforming all humans), and each level can apply to either narrow or general systems. This two-dimensional approach allows for a more nuanced understanding of where an AI system stands in its journey towards AGI.
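The two-dimensional matrix can be sketched as a small data structure. This is a minimal illustration, not code from the paper: the level names and percentile annotations follow the paper’s taxonomy, but the class and method names (`SystemRating`, `label`) are invented here for illustration.

```python
from dataclasses import dataclass
from enum import IntEnum

class Performance(IntEnum):
    """Performance (depth) levels, keyed to percentile of skilled adults."""
    NO_AI = 0
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile
    VIRTUOSO = 4    # at least 99th percentile
    SUPERHUMAN = 5  # outperforms 100% of humans

class Generality(IntEnum):
    """Generality (breadth) of capabilities."""
    NARROW = 0   # a clearly scoped task or set of tasks
    GENERAL = 1  # a wide range of non-physical tasks, incl. metacognitive ones

@dataclass
class SystemRating:
    """A system's position in the two-dimensional levels matrix."""
    performance: Performance
    generality: Generality

    def label(self) -> str:
        breadth = "Narrow" if self.generality == Generality.NARROW else "General"
        return f"{self.performance.name.title()} {breadth} AI"

# The paper places current frontier chatbots around this cell of the matrix:
print(SystemRating(Performance.EMERGING, Generality.GENERAL).label())
# → Emerging General AI
```

Modeling the two axes as independent enums mirrors the paper’s point that depth and breadth vary separately: a system can be Superhuman yet Narrow, or General yet only Emerging.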

The Role of Benchmarks Just as students are tested to measure their understanding and progress, AI systems need benchmarks to assess their level of intelligence. The paper discusses the need for future benchmarks that can accurately quantify the capabilities of AGI models.
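To make the benchmarking idea concrete, here is a toy sketch of how percentile scores on a benchmark suite might be mapped to a performance level. The percentile cutoffs follow the paper’s tiers, but everything else is an assumption made for illustration: the function names and the aggregation rule (rating a system by its weakest task, since generality demands breadth) are choices of this sketch, not prescriptions from the paper.

```python
def level_from_percentile(p: float) -> int:
    """Map a percentile score vs. skilled adults to a numeric level 1-4."""
    for level, cutoff in ((4, 99.0), (3, 90.0), (2, 50.0)):
        if p >= cutoff:
            return level  # 4=Virtuoso, 3=Expert, 2=Competent
    return 1  # Emerging

def overall_level(task_percentiles: dict[str, float]) -> int:
    """Rate a system by its weakest benchmarked task (illustrative rule)."""
    return min(level_from_percentile(p) for p in task_percentiles.values())

# Hypothetical per-task percentiles for some system under evaluation:
scores = {"coding": 95.0, "translation": 88.0, "planning": 60.0}
print(overall_level(scores))  # → 2 (Competent on its weakest task)
```

A real benchmark in the paper’s sense would also need to satisfy the ecological-validity principle, drawing its tasks from work people actually value rather than convenient proxies.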

The Importance of This Work Creating a common language and understanding around AGI helps everyone from AI researchers to policymakers to better grasp the capabilities and potential of AI systems. It also aids in setting realistic goals and expectations for the future of AI technology.

Conclusion The “Levels of AGI” paper is a significant step towards a more structured and clear conversation about AGI. By proposing a framework with specific levels, it helps to clarify the often abstract and theoretical discussions about AI’s future capabilities.

For a deeper dive into the paper’s content and the full scope of the authors’ work, the complete paper is available on arXiv.

Authors: Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg.