AI Pioneers Grounding the AGI Discussion

The Debate on AI and Human-Like Intelligence
The Controversial Nature of AI Discussions
During a recent dinner in San Francisco with business leaders, I sparked a conversation that made the room tense. I simply asked whether they believed current artificial intelligence (AI) could eventually reach human-level intelligence, a milestone known as artificial general intelligence (AGI), or even surpass it.
The Current Perspective on AI Development
As of 2025, many tech CEOs are optimistic about the future of large language models (LLMs), the technology driving popular chatbots such as ChatGPT and Gemini. These leaders often suggest that highly advanced AI could lead to significant societal benefits in a relatively short time. Dario Amodei, CEO of Anthropic, stated in an essay that exceptionally advanced AI could emerge as soon as 2026 and potentially be "smarter than a Nobel Prize winner in most fields." Furthermore, Sam Altman, the CEO of OpenAI, claimed his organization has insights on building "superintelligent" AI, which could revolutionize scientific progress.
Skepticism about AI’s Capabilities
Despite these encouraging declarations, not everyone is on board with this optimistic vision. A growing number of AI experts express doubts about current LLMs reaching AGI without major breakthroughs in technology. Thomas Wolf, co-founder and chief scientist of Hugging Face, criticized aspects of Amodei’s assessments as “wishful thinking.” Wolf, who holds a PhD in statistical and quantum physics, believes that genuine breakthroughs stem from asking new questions rather than answering existing ones, a domain where AI still falls short.
In an interview, Wolf emphasized the need for realistic discussions on the journey to AGI rather than excessive hype. He asserts that while AI can certainly transform society, true human-level intelligence may not be achievable under the current technological framework.
The Optimism vs. Realism Divide in AI
The enthusiasm surrounding AGI often leads those who are skeptical to be labeled as "anti-technology" or uninformed. Wolf refers to himself as an "informed optimist" who believes in AI’s potential while remaining grounded in reality. Other AI leaders, such as Demis Hassabis, CEO of Google DeepMind, and Yann LeCun, Meta’s Chief AI Scientist, also share similar cautious views. Hassabis has pointed out that the industry might still be a decade away from achieving AGI, noting the limitations of current AI technology.
Exploring New Frontiers in AI Research
Kenneth Stanley, a former researcher at OpenAI now leading Lila Sciences, is exploring how to foster creativity within AI models. He aims to build systems that can automate scientific innovation, including the crucial step of formulating insightful questions and hypotheses. Stanley notes that being knowledgeable does not necessarily produce original ideas, a point on which he agrees with Wolf's article.
Stanley is focused on developing AI that embodies creativity, a goal that is difficult to realize with current models. Optimists like Amodei point to AI reasoning models, which use increased computational power to improve fact-checking and answer accuracy. However, Stanley believes generating original ideas requires a different kind of intelligence, and he argues that current reasoning frameworks may actually constrain creativity.
Addressing the Role of Subjectivity in AI
According to Stanley, building truly intelligent AI requires replicating human-like judgment about which new ideas are promising. While AI excels in well-defined domains like mathematics or programming, it struggles with subjective tasks where there is no single correct answer. He urges the research community to embrace subjectivity as a viable area for algorithmic exploration, viewing it as integral to the data AI processes.
The Future of Open-Endedness in AI Research
The field focused on fostering creativity within AI, known as open-endedness, is gaining attention. Research initiatives are now being launched by organizations like Lila Sciences and Google DeepMind. More discussions around AI creativity are emerging, but Stanley believes there is much more ground to cover.
Ultimately, leaders like Wolf and LeCun represent a pragmatic perspective on AGI and superintelligence, advocating for realistic assessments of AI’s inherent limitations. Their aim is not to undermine advancements in the field but to inspire comprehensive conversations about bridging the gap between current AI capabilities and the potential for AGI.