DeepMind Seeks Research Scientist for ‘Post-AGI’ Initiatives

Google Prepares for a Future Beyond Artificial General Intelligence
Understanding Artificial General Intelligence (AGI)
Artificial General Intelligence (AGI) refers to AI that can understand, learn, and apply knowledge across a wide range of tasks, much as a human does. No research lab has yet demonstrated evidence that AGI is imminent. Even so, Google has begun laying the groundwork for a scenario in which AGI becomes reality.
Google’s Forward-Thinking Approach
Google’s DeepMind AI lab has recently advertised a position specifically aimed at studying the potential societal impacts of AGI. The responsibilities highlighted in the job listing involve spearheading research into AGI’s effects on various fields, including:
- Economics
- Law
- Health and wellbeing
- Education
- Transitioning from AGI to ASI (Artificial Superintelligence)
Machine Consciousness
One intriguing responsibility is research into "machine consciousness," a concept long associated with science fiction: the idea of a sentient machine capable of independent thought and understanding.
Notable Voices in the AI Arena
Prominent figures such as OpenAI’s CEO Sam Altman and DeepMind’s CEO Demis Hassabis are deeply engaged in discussions about AGI. They have shared their views on the likelihood and timing of achieving AGI, as well as its potential consequences for humanity. Despite the ongoing debates, the job listing from Google indicates that companies are not only considering the creation of AGI but also actively preparing for its societal implications.
The Loose Definition of AGI
AGI has no universally accepted definition, leading to varied interpretations across the tech industry. For instance, a reported agreement between OpenAI and Microsoft defines AGI as an AI capable of generating $100 billion in profit, a definition disconnected from any scientific metric. Sam Altman has expressed confidence that OpenAI can build AGI as it has traditionally been understood, predicting that AI agents will begin joining the workforce in 2025.
Microsoft CEO Satya Nadella, by contrast, has dismissed such milestone claims, characterizing self-proclaimed AGI achievements as "nonsensical benchmark hacking."
Marketing AGI: A Double-Edged Sword
Critics often point out that the concept of AGI can serve as an effective marketing tool for tech firms. By generating excitement around the potential of AGI, companies can elevate their perceived value, sometimes diverting attention from pressing issues caused by existing AI technologies.
Google’s Job Responsibilities
In light of the above discussions, Google’s recent job opening reflects an ambition to delve into AGI’s most dramatic implications. The key responsibilities outlined in the job listing include:
- Conducting research on AGI’s influence across various sectors
- Developing comprehensive studies to assess AGI’s societal effects
- Establishing evaluation frameworks to systematically examine the consequences of AI technologies
The job posting closely follows a DeepMind report on the need to pursue a responsible path to AGI. The report predicts that AI capable of matching human performance on most cognitive tasks could emerge within the coming years, and it identifies four primary risk areas: misuse, misalignment, accidents, and structural risks, with particular scrutiny on misuse and misalignment.
While Google has not commented specifically on its plans, the proactive stance reflected in the job listing suggests a serious commitment to understanding and preparing for a future shaped by AGI. Anticipating these societal impacts is an essential consideration for AI companies as they continue to advance this transformative technology.