The Phone Call

On a January morning in 2026, Demis Hassabis picked up his phone and called Sundar Pichai. Again.

This had become routine. The CEO of Google DeepMind and the CEO of Alphabet were now speaking every day — sometimes multiple times — about everything from model architecture decisions to competitive intelligence on OpenAI’s latest moves. It was a rhythm that would have been unimaginable three years earlier, when Hassabis ran a semi-autonomous research lab in London that published papers but shipped nothing, and Pichai ran a search engine company that happened to own the lab.

“We’ve got into our groove,” Hassabis told CNBC in January 2026, describing the relationship with an understatement that belied the scale of what had changed. The groove he referred to was a $2.4 trillion company reorganizing its entire identity around a research laboratory that, until 2023, had never shipped a product to consumers.

Thirty-three months after the merger, Alphabet was announcing $175 billion to $185 billion in 2026 capital expenditure — nearly double the $91.4 billion spent in 2025, 50% above Wall Street consensus, more than the GDP of 140 countries — and the vast majority of it was going to AI compute capacity for Hassabis’s division.

Pichai told analysts the company would remain “supply constrained” throughout 2026. He could not build data centers fast enough to feed the appetite of his AI lab.

What follows is the story of that transformation — and its costs. The researchers who left. The culture that bent. The competitive gap that narrowed but never closed. And the underlying question that Google DeepMind’s existence forces on the entire industry: can a 7,600-person division inside a 180,000-person corporation out-innovate a startup?

The Merger: What Actually Happened

The public announcement came on April 20, 2023. Sundar Pichai sent an internal memo describing the combination of Google Brain and DeepMind into a single unit called Google DeepMind. The stated purpose was to “significantly accelerate our progress in AI.”

The private reality was messier.

Google Brain and DeepMind had coexisted for nearly a decade in a state of productive rivalry that frequently tipped into dysfunction. They worked on the same problems, published at the same conferences, recruited from the same PhD programs, and regularly duplicated each other’s work without knowing it.

The rivalry had roots. DeepMind, founded in London in 2010 by Hassabis, Shane Legg, and Mustafa Suleyman, was acquired by Google for approximately $500 million in 2014. Google granted it unusual autonomy — an ethics board, independent governance, the latitude to pursue fundamental research without commercial constraints. The arrangement worked until it didn’t. By 2019, Suleyman had been moved out of DeepMind after internal complaints about his management style, eventually landing at Google as a VP before leaving in 2022 to co-found Inflection AI. In 2024, Microsoft hired Suleyman to lead its consumer AI division — meaning one of DeepMind’s three co-founders was now running AI products at Google’s primary competitor. The departure underscored a recurring pattern: DeepMind produced extraordinary talent that it could not always retain.

Google Brain, by contrast, was born inside Google. Founded by Andrew Ng and Jeff Dean in 2011, it operated as a division of Google Research and was deeply integrated with the company’s product infrastructure. Brain researchers worked on TensorFlow, contributed to Google Translate, and developed the attention-based transformer architecture — the foundation of every frontier AI model in existence.

The two teams produced complementary breakthroughs. DeepMind gave the world AlphaGo, AlphaFold, and reinforcement learning techniques that expanded the boundaries of what AI could do. Brain gave the world transformers, sequence-to-sequence models, and the infrastructure that made large-scale AI training possible. But they also produced redundancy, internal competition for compute resources, and a cultural divide between DeepMind’s research-first ethos and Brain’s product-integration orientation.

The catalyst for merger was ChatGPT.

When OpenAI launched ChatGPT in November 2022, the internal reaction at Google was described by multiple accounts as a “code red.” The company had the research talent, the data, the compute, and the foundational technology — the transformer was literally invented at Google Brain — but it had failed to ship a competitive consumer AI product. The gap between Google’s research capabilities and its product execution became, overnight, the most embarrassing strategic failure in the company’s history.

Sergey Brin, who had been largely absent from day-to-day operations, returned to assist in the development of what would become Gemini. Hundreds of engineers from both Brain and DeepMind were conscripted into the effort. The merger, when it came, was presented as a strategic consolidation. Internally, it was understood as an emergency response.

Hassabis was named CEO of the combined entity. Jeff Dean, who had led Brain, was given the title of Chief Scientist — a role that was prestigious but, by many accounts, less operationally central. The arrangement was telling: Google chose the research leader over the infrastructure leader to run its most important division. It was a bet on scientific vision over engineering execution, made at a moment when the company’s most urgent need was to ship a product.

Google’s first attempt to respond to ChatGPT — Bard, launched in February 2023 — had already failed in spectacular fashion. During its debut in a promotional ad, Bard confidently stated that the James Webb Space Telescope had taken the first pictures of a planet outside our solar system. The claim was wrong. The error was spotted within hours. Alphabet’s market capitalization dropped by approximately $100 billion in a single trading session. The lesson was seared into the organization: in AI, a single factual mistake could cost more than most companies are worth.

The Culture Collision

The first year was difficult.

Former employees describe a period of adjustment that ranged from productive friction to genuine frustration. DeepMind’s London culture — academic, deliberate, oriented toward multi-year research programs — collided with Brain’s Mountain View culture, which was faster-paced, more product-oriented, and accustomed to the rhythms of a public technology company.

Researchers who had joined DeepMind specifically because of its research independence found themselves subject to new constraints. Projects were evaluated not just on scientific merit but on their relevance to Gemini and Google’s product roadmap. Publication timelines were scrutinized. The latitude to pursue curiosity-driven research, which had been central to DeepMind’s identity, was compressed.

“Some researchers felt frustration with having to stick to guidelines from leadership,” one account noted. “This pressure has created a sense of fatigue.”

The fatigue was compounded by Gemini’s early stumbles. The initial release of Gemini 1.0 in December 2023 was accompanied by a controversy over a misleading demo video that suggested real-time multimodal interaction but was actually a carefully edited sequence. The subsequent launch of Gemini’s image generation feature produced historically inaccurate images — including racially diverse depictions of Nazi-era German soldiers — that became a national news story and a political flashpoint. Google paused the feature and issued a public apology.

Neither incident reflected a fundamental technical failure. But both crystallized a narrative that Google could not seem to shake: the company that invented the transformer could not figure out how to ship AI without embarrassing itself. Inside DeepMind, the lesson landed differently. Ship fast — but understand that you are shipping inside a panopticon where every mistake becomes a front-page story about whether Google has lost the AI race.

The talent bleed accelerated. In 2025 alone, Google lost at least eleven AI and cloud executives, mostly to Microsoft, which had hired roughly two dozen DeepMind employees earlier that year. The departures included distinguished engineers, senior directors, and research scientists who had been at Google for over a decade.

The pattern extended further back. Over the past eight years, twenty top researchers who contributed to milestone papers at DeepMind or Brain had left to found companies including Character.AI, Cohere, and Adept, or to work at Meta, Hugging Face, and Anthropic. The transformer paper alone — “Attention Is All You Need” — has produced an extraordinary diaspora: virtually all eight authors have left Google to start or join competing organizations.

Google’s response to the talent drain was revealing. In the United Kingdom, where DeepMind is headquartered, the company began enforcing noncompete agreements that could prevent departing employees from joining competitors for up to a year. Senior researchers faced twelve-month restrictions. Individual contributors on high-profile projects like Gemini faced six-month bars. The restrictions came with continued salary — effectively paying people to not work — but the approach generated significant controversy.

In March 2025, a Microsoft VP posted publicly about DeepMind staff reaching out to him “in despair” over their inability to escape the noncompete clauses. A former DeepMind director, now at Microsoft, publicly advised researchers not to sign such agreements. Google defended the practice as industry-standard protection of commercial interests.

The irony was precise. The urgency to ship products and compete with OpenAI made talent retention critical. That same urgency — the compression of research freedom, the product deadlines, the scrutiny — was exactly why researchers wanted to leave. Google was paying people not to work at competitors while simultaneously creating the conditions that made people want to work at competitors.

There was a counterpoint, though, buried in Google’s own hiring data: 20% of the AI software engineers the company hired in 2025 were former employees who had left and returned. Some of them had tried the startup life and discovered that research freedom at a startup meant research freedom with a fraction of the compute.

The Gemini Comeback

Then the models got good.

Gemini 2.5, launched in March 2025, was the turning point. The model represented a genuine leap in capability, particularly in reasoning tasks. It topped the Chatbot Arena leaderboard — a crowdsourced benchmark where users evaluate models head-to-head — and held the position for months. For the first time since the merger, Google’s AI model was not playing catch-up. It was leading.
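Leaderboards like Chatbot Arena turn millions of pairwise human votes into a single rating per model. As a rough sketch of the underlying idea, the classic Elo formulation below shows how a rating gap maps to an expected head-to-head win rate. The sequential update rule and K-factor are simplifying assumptions for intuition; the production leaderboard fits a statistical (Bradley–Terry) model over all votes at once.

```python
# Illustrative sketch only: Arena-style leaderboards rank models from
# pairwise human votes. The K-factor and one-vote-at-a-time update here
# are simplifying assumptions, not the leaderboard's actual method.

def expected_score(r_a: float, r_b: float) -> float:
    """Win probability for model A over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 16.0):
    """Apply one head-to-head vote and return the two updated ratings."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# A 1501-rated model against a hypothetical 1470-rated rival: a slim edge.
print(f"{expected_score(1501, 1470):.0%}")  # → 54%
```

The point of the arithmetic: a lead of a few dozen Elo points, the kind that separated the frontier models through 2025, translates to winning only slightly more than half of head-to-head matchups.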

Then came Gemini 3 in November 2025, and the reception broke from the pattern of cautious praise that had greeted every previous release. Developers switched mid-project. Tech CEOs posted unsolicited endorsements. The numbers backed the enthusiasm: a 1501 Elo rating on the Chatbot Arena leaderboard, 91.9% on GPQA, 81% on MMMU-Pro for multimodal understanding, 72.1% on SimpleQA Verified for factual accuracy — state-of-the-art on every metric that mattered.

What mattered more than the scores was the deployment. Gemini 3 shipped simultaneously across the Gemini app, AI Studio, Vertex AI, Google Search, and Google Workspace — the first time a Gemini model launched across multiple products on day one. This was the thing the merger was supposed to make possible. For two years, it hadn’t. Now it did.

By December 2025, Google launched Gemini 3 Flash — a smaller, faster variant optimized for cost efficiency — and the company’s competitive position had materially improved. The model was priced at $2 per million input tokens for prompts under 200,000 tokens, with a free tier available for experimentation. The pricing undercut OpenAI significantly on several dimensions.
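At per-token prices, budgeting a workload is straightforward arithmetic. A back-of-envelope sketch, assuming the $2-per-million input rate quoted above holds; the output-token rate below is a purely hypothetical placeholder, since only the input price is quoted.

```python
# Back-of-envelope API cost at the quoted Gemini 3 Flash input rate of
# $2 per million tokens (prompts under 200K tokens). The output rate is
# a hypothetical placeholder for illustration, not a published price.

INPUT_USD_PER_MTOK = 2.00    # quoted input price
OUTPUT_USD_PER_MTOK = 12.00  # hypothetical, for illustration only

def monthly_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimated monthly bill for a workload of identical requests."""
    total_in = requests * in_tokens / 1e6    # millions of input tokens
    total_out = requests * out_tokens / 1e6  # millions of output tokens
    return total_in * INPUT_USD_PER_MTOK + total_out * OUTPUT_USD_PER_MTOK

# e.g. 100K requests/month, 3K tokens in, 500 tokens out per request:
print(f"${monthly_cost(100_000, 3_000, 500):,.2f}")  # → $1,200.00
```

Even at hypothetical output rates several times the input price, a substantial production workload lands in the low thousands of dollars per month, which is the economics behind the undercutting.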

The user numbers reflected the shift. Google’s Gemini app surpassed 750 million monthly active users by the end of Q4 2025, up from approximately 450 million at the beginning of the year. In India, Gemini captured 52% of the AI chatbot download market, surpassing ChatGPT’s 32% share. Enterprise adoption also accelerated: 27 million enterprise users were on Gemini Pro as of June 2025, with healthcare and finance as the fastest-growing sectors at 3.4x growth.

The overall market share picture remained humbling — ChatGPT at 64.5% of tracked usage in January 2026, Gemini at 21.5% — but the trajectory had reversed. Eighteen months earlier, the question was whether Google could compete. Now the question was whether Google could catch up. The distinction mattered.

The Gemini 3 Deep Think mode, released alongside the base model, pushed capabilities further. It achieved gold-medal level performance on the 2025 International Mathematical Olympiad, solving five of six problems perfectly for a total of 35 points — matching OpenAI’s score exactly. On the International Collegiate Programming Contest (ICPC), Gemini solved 10 of 12 problems at gold-medal level, trailing only OpenAI’s perfect 12-of-12 score. On the International Physics and Chemistry Olympiads, Deep Think achieved gold-medal results on the written portions of both competitions.

The research applications were even more remarkable. An evaluation against 700 open problems on the Bloom–Erdős conjectures database resulted in the autonomous solution of four open mathematical questions, including Erdős problem 1051 — a problem that had been open for decades. In one case, the model’s solution led to a generalization that was subsequently reported in a peer-reviewed research paper.

These were not marketing benchmarks. A machine solved a problem that mathematicians had failed to solve for decades. Whatever else the merger produced, it produced that.

The Science Machine

Gemini is the product. AlphaFold is the argument.

The argument goes like this: Google DeepMind is not just another AI lab competing on chatbot benchmarks. It is a scientific institution that happens to be inside a corporation. The evidence is hard to dismiss. Over 200 million protein structures predicted. Over three million researchers in 190 countries using the database. The AlphaFold Protein Structure Database — freely available, no paywall, no API key — has been described by working biologists as the most significant new tool in structural biology since X-ray crystallography.

In October 2024, Demis Hassabis and John Jumper shared the Nobel Prize in Chemistry for their work on AlphaFold. It was the first Nobel Prize awarded for a specific AI system and among the most broadly celebrated in recent memory. The Nobel committee recognized not just the technical achievement but the downstream impact: AlphaFold’s predictions were already being used to develop new drugs, understand diseases, design enzymes for industrial applications, and study the molecular basis of honeybee immunity for conservation purposes.

But the Nobel Prize, for all its significance, also highlighted a strategic question. AlphaFold was a product of the old DeepMind — the research lab that operated with minimal commercial pressure, published freely, and measured success in scientific impact rather than revenue. The new Google DeepMind needed to translate that kind of breakthrough into commercial value.

The answer was Isomorphic Labs.

Founded in 2021 as a DeepMind spinoff, Isomorphic Labs was established to apply AI to drug discovery. Hassabis served as CEO of both Google DeepMind and Isomorphic — an unusual arrangement that concentrated an extraordinary amount of strategic authority in a single individual.

By 2025, Isomorphic had raised $600 million in its first external funding round, led by Thrive Capital, and had grown to more than 200 employees. The company established partnerships with Eli Lilly and Novartis worth a potential $3 billion combined. Novartis initially agreed to work on three drug targets; by early 2026, the collaboration had expanded to six.

In February 2026, Isomorphic released a technical report on its Drug Design Engine (IsoDDE), which demonstrated capabilities that went significantly beyond AlphaFold 3. The engine doubled AlphaFold 3’s accuracy in predicting protein-ligand structures that diverged from training data — a critical capability for practical drug design, where the most interesting molecular interactions are often the ones least represented in existing datasets.

Scientists who reviewed the report described the system as “an AlphaFold 4” in terms of capability. The company was preparing to dose its first patients in clinical trials of AI-designed drugs — four years from founding to human trials, in an industry where that timeline typically spans a decade or more.

Isomorphic represented a new model for how Google DeepMind could create value. Rather than force fundamental research through the product pipeline of a consumer technology company, the spinoff structure allowed a focused team to pursue deep scientific applications while leveraging Google DeepMind’s research infrastructure and talent. If successful, Isomorphic could validate the thesis that pure scientific breakthroughs at DeepMind were not just prestigious curiosities but precursors to transformative commercial businesses.

The strategic importance extended beyond drug discovery. Hassabis had been explicit about his vision: AI as a tool for accelerating the entire scientific method. Weather prediction with GraphCast. Materials discovery with GNoME, which identified 2.2 million new crystal structures. Plasma control for nuclear fusion, in collaboration with the Swiss Plasma Center and its TCV tokamak at EPFL. Each application followed a pattern: take a hard scientific problem, apply DeepMind’s research capabilities, and demonstrate results that exceeded human baselines.

In a February 2026 Fortune interview, Hassabis predicted a “new golden era of discovery” within ten to fifteen years — “a kind of new renaissance” driven by AI-accelerated science. It was the kind of statement that would sound grandiose from anyone who hadn’t won a Nobel Prize for doing exactly what he was describing. From Hassabis, it landed as a research agenda with a budget behind it. A very large budget.

The Infrastructure War

There is a dimension of the AI race that rarely makes headlines but may ultimately decide it: who controls the silicon.

Google designs its own chips. OpenAI does not. Anthropic does not. This fact, more than any benchmark score, may be Google DeepMind’s most durable advantage.

While OpenAI relied primarily on Nvidia GPUs and Anthropic built on Amazon Web Services with a mix of Nvidia and custom hardware, Google had been developing Tensor Processing Units (TPUs) since 2015. The sixth generation, called Trillium, became generally available in late 2024 with a 4.7x improvement in peak compute performance per chip over the previous generation, doubled HBM capacity and bandwidth, and 67% better energy efficiency.

But Trillium was merely the bridge to what came next.

At Google Cloud Next in April 2025, Google unveiled Ironwood — the seventh-generation TPU. Each chip delivered 4,614 teraflops of compute with 192 GB of HBM memory and 7.2 TB/s of memory bandwidth. In a 9,216-chip pod configuration, Ironwood delivered 42.5 exaflops of peak compute with 1.2 Tbps of bidirectional interconnect bandwidth. For context, Google claimed a single Ironwood pod offered more than 24 times the compute of the world’s most powerful supercomputer, though the comparison sets low-precision AI arithmetic against the 64-bit math used in official supercomputer rankings.
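The pod-level headline is simply the per-chip figure multiplied out, which makes the numbers easy to sanity-check:

```python
# Sanity check on the reported Ironwood figures: per-chip peak compute
# times chips per pod should reproduce the quoted pod-level number.
CHIP_TFLOPS = 4_614   # reported peak teraflops per Ironwood chip
POD_CHIPS = 9_216     # reported chips in a full Ironwood pod

pod_exaflops = CHIP_TFLOPS * 1e12 * POD_CHIPS / 1e18  # teraflops -> exaflops
print(f"{pod_exaflops:.1f} exaflops")  # → 42.5 exaflops
```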

Ironwood was the first Google TPU designed with native FP8 support and was optimized specifically for the massive key-value caches required by long-context reasoning models — the kind of models that Google DeepMind was building with Gemini 3 Deep Think. The chip was designed to support context windows stretching into the millions of tokens, a capability that was becoming central to the most advanced reasoning architectures.

The strategic implication was straightforward. OpenAI, Anthropic, and every other lab that relies on Nvidia pays whatever Nvidia charges, receives chips whenever Nvidia ships them, and designs models around whatever architecture Nvidia builds. Google does not. The TPU team and the Gemini team sit in the same organization and co-design hardware and software together. When Gemini 3 Deep Think needed massive key-value caches for long-context reasoning, Ironwood was designed with exactly that workload in mind. Try getting that from a vendor.

The $175 billion to $185 billion capital expenditure budget for 2026 reflected the scale of this ambition. Approximately 60% of technical infrastructure investment in Q4 2025 went to servers, with 40% going to data centers and networking equipment. The spending was so large that Pichai publicly acknowledged it would still not be enough — the company expected to remain supply-constrained for the entire year.

Google Cloud’s financial results validated the investment thesis. Cloud revenue increased 48% year over year to $17.7 billion in Q4 2025, driven primarily by AI infrastructure and AI solutions. The cloud backlog surged 55% sequentially and more than doubled year over year, reaching $240 billion at the end of Q4. Google Cloud ended 2025 at an annual run rate exceeding $70 billion.

The broader Alphabet financial picture was equally strong. Revenue grew 18% year over year in Q4, and net income reached $34.46 billion, up nearly 30% from the prior year. The market rewarded the results: Alphabet’s stock posted its best annual performance since 2009.

But the spending also raised questions. Could any company sustain capital expenditure at this level indefinitely? The $185 billion earmarked for 2026 was more than Alphabet’s total revenue in any single year before 2024. If AI demand plateaued or competition compressed margins, the infrastructure would become an albatross rather than a moat.

Amazon was projecting $200 billion. Microsoft and Meta were adding their own hundreds of billions. Combined, the four hyperscalers were expected to spend nearly $700 billion in 2026 on AI infrastructure. Every dollar of it was a sunk cost the moment the concrete was poured.

The OpenAI Question

Every discussion of Google DeepMind eventually arrives at the same question: is it beating OpenAI? The answer depends on what you measure, and the measurements tell contradictory stories.

The models are now nearly indistinguishable on hard benchmarks. At the 2025 IMO, both Gemini Deep Think and OpenAI’s model solved five of six problems for identical scores of 35 points. At ICPC, OpenAI led 12-to-10. On the Chatbot Arena leaderboard, the top position changed hands between the two companies roughly every month through late 2025.

The market tells a different story. ChatGPT held 64.5% of tracked AI chatbot usage in January 2026. Gemini held 21.5%. OpenAI’s first-mover advantage and its deep integration with Microsoft’s enterprise stack created a commercial position that better models alone could not displace. When a developer needs an AI API, the default choice is still OpenAI. That default is worth more than any benchmark.

Google’s counter-argument is infrastructure. OpenAI rents its compute from Microsoft. Google builds its own chips, operates its own data centers, and can co-design hardware and software in a way that no competitor can replicate. The Ironwood TPU represents a hardware lead that would take years to close — if competitors even tried.

And then there is the dimension where no comparison is possible. OpenAI has not won a Nobel Prize. It has not solved open mathematical conjectures. It does not have a drug discovery spinoff entering human clinical trials. AlphaFold remains the single most impactful real-world application of AI, full stop. OpenAI’s research output, once the envy of the field, has shifted toward model scaling and commercial applications. Google DeepMind still publishes fundamental science.

The revenue picture adds another layer. Google does not need AI to be a standalone business. AI is already embedded in Search (8.5 billion queries per day), Workspace (over a billion users), Android (two billion devices), and Cloud ($70 billion annual run rate). OpenAI is building its commercial infrastructure from scratch. The revenue potential of AI-enhanced Google Search alone exceeds anything in OpenAI’s current portfolio.

There is also the hedging strategy that few discuss publicly but everyone in the industry recognizes. Google invested $2 billion in Anthropic — the company founded by former OpenAI executives that is now valued at $380 billion and is, in some product categories, Google DeepMind’s direct competitor. The investment gives Google a financial interest in the success of a company building rival AI models. It is the kind of move that a corporation makes when it is not entirely sure its internal lab will win: bet on yourself, but also bet on someone else.

The strategic contest is really between two theories of value creation. OpenAI is building an AI company that competes with Google. Google is embedding AI into everything it already does. The first approach concentrates risk and reward. The second distributes it across products, infrastructure, and — via Anthropic — even across competitors.

What the Numbers Cannot Answer

The data makes the case for a successful merger. The data also makes the case for a troubled one. The resolution depends on questions that numbers cannot settle.

The first is whether research and product development can coexist at the same intensity within a single organization. Gemini’s improvement has been dramatic, but the researchers who produced AlphaFold did so under conditions that no longer exist. They had years of uninterrupted focus, minimal commercial pressure, and the freedom to pursue ideas that had no obvious product application. The current structure rewards speed, relevance to the product roadmap, and quarterly impact. These are not the conditions under which scientific breakthroughs typically occur. The question is whether Deep Think’s mathematical results and Isomorphic’s drug design engine represent the continuation of that research culture — or its last echoes.

The second is whether distribution trumps brand. Google has two billion Android devices, a billion Gmail users, 8.5 billion daily search queries. If Gemini becomes the default intelligence layer across all of them, the adoption curve could be steeper than anything a standalone AI company can achieve. The Gemini app already has 750 million monthly users and is closing the gap with ChatGPT. But ChatGPT owns the verb. People “ChatGPT it.” Nobody “Geminis” anything. Engagement depth and willingness to pay may matter more than raw user counts, and on those metrics, Google has not published comparisons.

The third is whether $185 billion per year in capital expenditure is an investment or a trap. Google Cloud’s backlog is $240 billion. The revenue growth is 48%. But the spend implies that AI compute demand will continue to grow faster than it can be built for at least the next two years. If that assumption holds, the infrastructure becomes an unassailable moat. If it doesn’t — if open-source models become good enough, if inference costs drop faster than expected, if enterprises decide they need less compute than the hyperscalers are projecting — the concrete will have already been poured.

The fourth is organizational. OpenAI has 3,000 employees and the focus of a company whose survival depends on every launch. Anthropic has 1,500 and maintains intense research discipline. Google DeepMind has 7,600 — larger than both combined — embedded in a corporate structure that includes legal, compliance, policy, public affairs, and communications functions that add friction to every decision. When Gemini’s image generator produced historically inaccurate images, the fallout consumed weeks of executive attention across the entire company. At a startup, it would have been a patch and a blog post. At Google, it became a Senate hearing topic.

The Verdict, For Now

The simplest way to evaluate Google DeepMind is to ask what it has produced in the thirty-three months since the merger. A Nobel Prize. A model that matches the best in the world on the hardest benchmarks. A drug discovery engine entering human trials. Custom silicon that outperforms everything on the market. A cloud business growing at 48% with a $240 billion backlog. Seven hundred and fifty million monthly active users on the Gemini app.

Set against those numbers is a list of losses. At least twenty senior researchers departed for competitors or startups over the past eight years. The eight authors of “Attention Is All You Need” — the paper that created the transformer — are nearly all gone. The noncompete controversy turned a talent retention strategy into a PR liability. And OpenAI still controls 64.5% of the chatbot market, three-to-one against Gemini’s 21.5%.

The merger has produced a competitive frontier lab inside a corporation that can fund it at a scale no startup can match. It has not produced clear AI leadership. The gap is closing, but it has not closed.

Hassabis still calls Pichai every day. The calls are about model architecture, competitive positioning, compute allocation, product timelines. Three years ago, Hassabis ran a lab that published papers. Now he runs the engine room of a $185 billion annual investment program. The transformation is complete. Whether it was worth it — whether the research culture that produced AlphaFold can survive inside a machine optimized for quarterly product launches — is a question that only the next generation of breakthroughs can answer.

Gemini 4 is in development. Ironwood is ramping to general availability. Isomorphic is dosing its first patients. The $185 billion is flowing into concrete and copper and silicon across four continents.

The researchers who stayed are watching. The researchers who left are watching too.


Published March 6, 2026. This investigation covers Google DeepMind’s evolution from the April 2023 merger through early 2026.

About the Author

Gene Dai is the co-founder of OpenJobs AI, focusing on AI-powered recruitment technology and the intersection of artificial intelligence with enterprise software.