Demis Hassabis, CEO of Google DeepMind, Sounds Alarm on Unpreparedness for AGI’s Arrival

The Urgency of Preparing for Artificial General Intelligence (AGI)
Demis Hassabis on AGI’s Impending Arrival
Demis Hassabis, the CEO of Google DeepMind, has issued a stark warning about the imminent arrival of Artificial General Intelligence (AGI), emphasizing that while the technology may be just around the corner, society is not adequately prepared for its ramifications.
In a recent interview with Time magazine, Hassabis, who shared the 2024 Nobel Prize in Chemistry, said that AGI, a form of artificial intelligence with human-like cognitive capabilities, could emerge within the next five to ten years; some experts suggest it might arrive even sooner.
“It’s coming… and I’m not sure society’s ready,” remarked Hassabis, expressing concerns about global coordination among countries, businesses, and researchers ahead of AGI’s arrival.
"It’s Coming Very Soon"
Hassabis candidly explained what worries him most: the lack of international cooperation and standards during the final stages of AGI development. He stated, “Maybe we are five to ten years out. Some people say shorter; I wouldn’t be surprised.”
He noted that although AGI has been discussed for years, it is no longer a matter of speculation. “It’s coming, and I don’t think society is fully prepared for that yet,” he said, pointing to open questions about how such systems will be controlled and who will have access to the technology.
The Need for a Global Framework on AGI Safety
A strong advocate of international collaboration on AGI, Hassabis stressed that safety and transparency must be built into its development. He proposed creating a global governing body to oversee AGI advances and flag potential misuse.
Specifically, Hassabis suggested establishing an organization akin to CERN, dedicated to international research and collaboration on AGI development, with safety at the center of its mission.
Alongside this collaborative body, he said, an institution similar to the International Atomic Energy Agency (IAEA) should be set up to monitor potentially hazardous projects. In essence, he envisions a United Nations-like framework tailored to the emerging challenges of AGI.
Real Risks Associated with AGI
The concern surrounding AGI is not merely theoretical. A recent DeepMind paper outlined the severe risks of improperly managed AGI, including catastrophic outcomes that could threaten humanity’s existence.
According to the paper, AGI’s immense potential impact carries a corresponding risk of serious harm, up to and including existential threats, making it critical to handle the technology with the utmost care.
Differences Between AGI and Current AI
Current artificial intelligence systems excel at narrow, well-defined tasks. AGI, by contrast, aims to replicate the broad, flexible intelligence of humans: a system able to learn, reason, and apply knowledge across many domains, making it far more powerful and, therefore, harder to predict.
As AI technology evolves rapidly, Hassabis’s insights serve as a crucial reminder: the benefits of technological advancements are tied directly to our capacity to govern and manage them effectively. Proper frameworks and preparations for AGI are essential to minimizing risks and ensuring safe development in this next frontier of artificial intelligence.