At Google I/O this week, amid the usual parade of dazzling product demos and AI-powered announcements, something unusual happened: Google quietly declared war in the race to build artificial general intelligence (AGI).
“We fully intend that Gemini will be the very first AGI,” said Google co-founder Sergey Brin, who made a surprise, unscheduled appearance at what was originally planned as a solo fireside chat with Demis Hassabis, CEO of DeepMind, Google’s AI research powerhouse. The conversation, hosted by Big Technology founder Alex Kantrowitz, pressed both men on the future of intelligence, scale, and the evolving definition of what it means for a machine to think.
The moment was fleeting, but unmistakable. In a field where most players hedge their talk of AGI with caveats, or avoid the term altogether, Brin’s comment stood out. It marked the first time a Google executive has explicitly stated an intent to win the AGI race, a contest more often associated with Silicon Valley rivals like OpenAI and Elon Musk than with the search giant.
Yet Brin’s boldness contrasted sharply with the caution expressed by Hassabis, a former neuroscientist and game developer whose vision has long steered DeepMind’s approach to AI. While Brin framed AGI as an imminent milestone and competitive objective, Hassabis called for clarity, restraint, and scientific precision.
“What I’m interested in, and what I would call AGI, is really a more theoretical construct, which is, what is the human brain as an architecture able to do?” Hassabis explained. “It’s clear to me today, systems don’t have that. And then the other thing, the reason why I think the hype today on AGI is kind of overblown, is that our systems aren’t consistent enough to be considered fully general. Yet they’re quite general.”
This philosophical tension between Brin and Hassabis, one chasing scale and first-mover advantage, the other warning of overreach, may define Google’s future as much as any product launch.
Inside Google’s AGI timeline: Why Brin and Hassabis disagree on when superintelligence will arrive
The contrast between the two executives became even more apparent when Kantrowitz posed a simple question: AGI before or after 2030?
“Before,” Brin answered without hesitation.
“Just after,” Hassabis countered with a smile, prompting Brin to joke that Hassabis was “sandbagging.”
This five-second exchange encapsulates the subtle but significant tension in Google’s AGI strategy. While both men clearly believe powerful AI systems are coming this decade, their different timelines reflect fundamentally different approaches to the technology’s development.
Hassabis took pains throughout the conversation to establish a more rigorous definition of AGI than is typically used in industry discussions. For him, the human brain serves as “an important reference point, because it’s the only evidence we have, maybe in the universe, that general intelligence is possible.”
True AGI, in his view, would require showing “your system was capable of doing the range of things even the best humans in history were able to do with the same brain architecture. It’s not one brain, but the same brain architecture. So what Einstein did, what Mozart was able to do, what Marie Curie and so on.”
By contrast, Brin’s focus appeared more oriented toward competitive positioning than scientific precision. When asked about his return to day-to-day technical work at Google, Brin explained: “As a computer scientist, it’s a very unique time in history. Like, honestly, anybody who’s a computer scientist should not be retired right now. Should be working on AI.”
DeepMind’s scientific roadmap clashes with Google’s competitive AGI strategy
Despite their different emphases, both leaders outlined similar technical challenges that must be solved on the path to more advanced AI.
Hassabis identified several specific barriers, noting that “to get all the way to something like AGI, I think, may require one or two more new breakthroughs.” He pointed to limitations in current systems’ reasoning abilities, creative invention, and the accuracy of their “world models.”
“For me, for something to be called AGI, it would need to be consistent, much more consistent across the board than it is today,” Hassabis explained. “It should take, like, a couple of months for maybe a team of experts to find a hole in it, an obvious hole in it, whereas today, it takes an individual minutes to find that.”
Both executives agreed on the importance of “thinking” capabilities in AI systems. Google’s newly announced “Deep Think” feature, which allows AI models to engage in parallel reasoning processes that check one another, represents a step in this direction.
“We’ve always been big believers in what we’re now calling this thinking paradigm,” Hassabis said, referencing DeepMind’s early work on systems like AlphaGo. “If you look at a game like chess or Go… we had versions of AlphaGo and AlphaZero with the thinking turned off. So it was just the model telling you its first idea. And, you know, it’s not bad. It’s maybe like master level… But then if you turn the thinking on, it’s been way beyond world champion level.”
Brin concurred, adding: “Most of us, we get some benefit by thinking before we speak. And although not always, I was reminded to do that, but I think that the AIs, obviously, are much stronger when you add that capability.”
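Conceptually, this “thinking” paradigm is close to self-consistency sampling: draw several candidate answers in parallel, then let a checking step keep the answer the samples agree on. Here is a minimal sketch of that idea, with a hypothetical `generate` stub in place of a real model call; it illustrates the general technique, not Google’s Deep Think implementation:

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a model call: a noisy 'reasoner' that
    answers correctly about 60% of the time. Any real LLM API could
    replace this stub."""
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def think_then_answer(prompt: str, n_samples: int = 16) -> str:
    """Sample several independent reasoning paths, then return the answer
    they most often agree on. Majority voting is a crude stand-in for the
    mutual checking that parallel 'thinking' enables."""
    candidates = [generate(prompt) for _ in range(n_samples)]
    return Counter(candidates).most_common(1)[0][0]

print(think_then_answer("What is 6 x 7?"))  # agreement usually lands on "42"
```

The payoff is statistical: a sampler that is right 60% of the time fails often on a single draw, but a majority over 16 independent draws fails far more rarely, which is the basic case for spending extra inference-time compute on “thinking.”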
Beyond scale: How Google is betting on algorithmic breakthroughs to win the AGI race
When pressed on whether scaling current models or developing new algorithmic approaches would drive progress, both leaders emphasized the need for both, though with slightly different emphases.
“I’ve always been of the opinion you need both,” Hassabis said. “You need to scale to the maximum the techniques that you know about. You want to use them to the limit, whether that’s data or compute scale, and at the same time, you want to spend a bunch of effort on what’s coming next.”
Brin agreed but added a notable historical perspective: “If you look at things like the N-body problem and simulating just gravitational bodies… as you plot it, the algorithmic advances have actually beaten out the computational advances, even with Moore’s law. If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances.”
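Brin’s example is easy to make concrete. Direct N-body summation costs O(N²) per timestep because every pair of bodies interacts, while tree methods such as Barnes-Hut cut the cost to O(N log N) and fast multipole methods approach O(N), algorithmic gains that outpace hardware improvements as N grows. A minimal sketch of the naive baseline (an illustration of his general point, not anything shown at I/O):

```python
import numpy as np

def direct_accelerations(pos: np.ndarray, mass: np.ndarray, G: float = 1.0) -> np.ndarray:
    """Naive O(N^2) gravitational accelerations: every pair of bodies
    interacts directly. Tree codes (e.g. Barnes-Hut) approximate distant
    clusters as single points, cutting the cost to O(N log N)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        delta = pos - pos[i]                    # vectors to every other body
        dist3 = np.linalg.norm(delta, axis=1) ** 3
        dist3[i] = np.inf                       # exclude self-interaction
        acc[i] = G * np.sum(mass[:, None] * delta / dist3[:, None], axis=0)
    return acc

# 1,000 bodies -> roughly 500,000 unique pairs per step under direct summation.
pos, mass = np.random.rand(1000, 3), np.ones(1000)
acc = direct_accelerations(pos, mass)
```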
This emphasis on algorithmic innovation over pure computational scale aligns with Google’s recent research focus, including the AlphaEvolve system announced last week, which uses AI to improve AI algorithms.
Google’s multimodal vision: Why camera-first AI gives Gemini a strategic advantage
One area of clear alignment between the two executives was the importance of AI systems that can process and generate multiple modalities, particularly visual information.
Unlike competitors whose AI demos often emphasize voice assistants or text-based interactions, Google’s vision for AI heavily incorporates cameras and visual processing. This was evident in the company’s announcement of new smart glasses and the emphasis on computer vision throughout its I/O presentations.
“Gemini was built from the start, even the earliest versions, to be multimodal,” Hassabis explained. “That made it harder at the beginning… but in the long run, I think we’re reaping the benefits of those decisions now.”
Hassabis identified two key applications for vision-capable AI: “a very useful assistant that can come around with you in your daily life, not just stuck on your computer or one device,” and robotics, where he believes the bottleneck has always been the “software intelligence” rather than hardware.
“I’ve always felt that the universal assistant is the killer app for smart glasses,” Hassabis added, a statement that positions Google’s newly announced device as central to its AI strategy.
Navigating AI safety: How Google plans to build AGI without breaking the internet
Both executives acknowledged the risks that come with rapid AI development, particularly with generative capabilities.
When asked about video generation and the potential for model degradation from training on AI-generated content, a phenomenon some researchers call “model collapse,” Hassabis outlined Google’s approach to responsible development.
“We’re very rigorous with our data quality management and curation,” he said. “For all of our generative models, we attach SynthID to them, so there’s this invisible AI-generated watermark that’s pretty, very robust, has held up now for a year, 18 months since we released it.”
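The collapse phenomenon Kantrowitz raised is easy to reproduce at toy scale: fit a simple model to data, replace the data with the model’s own samples, and repeat. In the sketch below, a Gaussian stands in for a generative model; this is a toy illustration of the failure mode, not a claim about any production system:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=30)   # the original "human" data

initial_std = data.std()
for _ in range(500):
    mu, sigma = data.mean(), data.std()          # "train" on the current corpus
    data = rng.normal(mu, sigma, size=30)        # next corpus: model output only

print(f"std at gen 0: {initial_std:.3f} -> std at gen 500: {data.std():.3f}")
# Each refit loses a little of the distribution's tails, and the losses
# compound across generations until the samples barely vary at all.
```

Watermarking addresses this upstream: if AI-generated content can be reliably identified, it can be filtered out of future training corpora before the feedback loop starts.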
The concern about responsible development extends to AGI itself. When asked whether one company would dominate the landscape, Hassabis suggested that once the first systems are built, “we can imagine using them to shard off many systems that have safe architectures, sort of built under… provably underneath them.”
From simulation theory to AGI: The philosophical divide between Google’s AI leaders
Perhaps the most revealing moment came at the end of the conversation, when Kantrowitz asked a lighthearted question about whether we live in a simulation, inspired by a cryptic tweet from Hassabis.
Nature to simulation at the press of a button, does make you wonder… https://t.co/lU77WHio4L
— Demis Hassabis (@demishassabis) May 7, 2025
Even here, the philosophical differences between the two executives were apparent. Hassabis offered a nuanced perspective: “I don’t think this is some kind of game, even though I wrote a lot of games. I do think that ultimately, underlying physics is information theory. So I do think we’re in a computational universe, but it’s not just a simple simulation.”
Brin, meanwhile, approached the question with logical precision: “If we’re in a simulation, then by the same argument, whatever beings are making the simulation are themselves in a simulation, for roughly the same reasons, and so on and so forth. So I think you’re going to have to either accept that we’re in an infinite stack of simulations or that there’s got to be some stopping criteria.”
The exchange captured the essential dynamic between the two: Hassabis the philosopher-scientist, approaching questions with nuance and from first principles; Brin the pragmatic engineer, breaking problems down into logical components.
Brin’s declaration during his Google I/O appearance marks a seismic shift in the AGI race. By explicitly stating Google’s intent to win, he has abandoned the company’s previous restraint and directly challenged OpenAI’s position as the perceived AGI frontrunner.
This is no small matter. For years, OpenAI has owned the AGI narrative while Google carefully avoided such bold proclamations. Sam Altman has relentlessly positioned OpenAI’s entire existence around the pursuit of artificial general intelligence, turning what was once an esoteric technical concept into both a corporate mission and a cultural touchstone. His frequent hints about GPT-5’s capabilities and vague but tantalizing comments about artificial superintelligence have kept OpenAI in headlines and investor decks.
OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5:
We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings.
We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten.
We hate…
— Sam Altman (@sama) February 12, 2025
By deploying Brin, not just any executive but a founder with near-mythic status in Silicon Valley, Google has effectively announced it won’t cede this territory without a fight. The move carries special weight coming from Brin, who rarely makes public appearances but commands extraordinary respect among engineers and investors alike.
The timing couldn’t be more significant. With Microsoft’s backing giving OpenAI seemingly limitless resources, and Meta’s aggressive open-source strategy threatening to commoditize certain aspects of AI development, Google needed to reassert its position at the forefront of AI research. Brin’s statement does exactly that, serving as both a rallying cry for Google’s AI talent and a shot across the bow to competitors.
What makes this three-way contest particularly fascinating is how differently each company approaches the AGI challenge. OpenAI has bet on tight secrecy around training methods paired with splashy consumer products. Meta emphasizes open research and democratized access. Google, with this new positioning, appears to be staking out middle ground: the scientific rigor of DeepMind combined with the competitive urgency embodied by Brin’s return.
What Google’s AGI gambit means for the future of AI innovation
As Google continues its push toward more powerful AI systems, the balance between these approaches will likely determine its success in what has become an increasingly competitive field.
Google’s decision to bring Brin back into day-to-day operations while maintaining Hassabis’s leadership at DeepMind suggests an understanding that both competitive drive and scientific rigor are essential components of its AI strategy.
Whether Gemini will indeed become “the very first AGI,” as Brin confidently predicted, remains to be seen. But the conversation at I/O made clear that Google is now openly competing in a race it had previously approached with more caution.
For an industry watching every signal from AI’s major players, Brin’s declaration represents a significant shift in tone, one that may pressure competitors to accelerate their own timelines, even as voices like Hassabis’s continue to advocate for careful definitions and responsible development.
In this tension between speed and science, Google may have found its unique position in the AGI race: ambitious enough to compete, cautious enough to do it right.