$40 Billion Anthropic Deal Trades Equity for 1M Google TPUs
Anthropic will receive $10 billion in upfront cash and up to 1 million Ironwood TPUs in a $40 billion infrastructure agreement with Google.
Google’s latest $40 billion agreement with Anthropic provides the startup with $10 billion in immediate cash and access to up to one million seventh-generation Ironwood TPUs. The transaction values Anthropic at $350 billion and brings Google’s total commitment to $43 billion, making it the largest single investment in an AI provider to date.
Beyond the $10 billion upfront payment, the remaining $30 billion is contingent on undisclosed performance targets, tying the majority of the capital directly to hardware consumption and milestone delivery. Alongside the investment, Google commits to provisioning five gigawatts of computing capacity over the next five years.
This infrastructure requirement follows Anthropic’s rapid revenue growth, which reached a $30 billion annualized run rate as of April 2026. The capital influx allows Anthropic to maintain its trajectory after its internal Claude Code agent became the dominant tool for software engineering, placing intense pressure on competing AI coding assistants across the industry. Secondary market estimates for Anthropic’s valuation have reportedly reached $1 trillion.
The Mythos Catalyst
The investment directly follows the April 7 preview of Anthropic’s Mythos model. The company withheld public release of the weights or API access due to the model’s autonomous offensive cybersecurity capabilities.
In technical benchmarks, Mythos demonstrated a distinct capability jump in complex problem-solving.
| Metric | Claude Opus 4.6 | Claude Mythos |
|---|---|---|
| SWE-bench Verified | 80.8% | 93.9% |
| Cybench | Not specified | 100% |
During testing, Mythos autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD (CVE-2026-4747) without any human guidance. The model successfully found vulnerabilities across Linux, OpenBSD, Firefox, and Chrome.
To manage these risks, Anthropic launched Project Glasswing. The initiative provides restricted access to Mythos for defensive applications. Critical infrastructure partners and open-source developers can use the system to identify and patch vulnerabilities before malicious actors can exploit them, addressing concerns raised after the model surfaced zero-days across major operating systems and browsers.
Market Implications
The scale of the hardware requirement underscores the raw capital needed to train and deploy autonomous systems at the frontier. Just four days before the Google announcement, Amazon committed $25 billion to Anthropic in exchange for Anthropic's commitment to purchase $100 billion in compute over the following decade.
While Google develops competing models, this structure ensures that Anthropic’s heavy inference workloads generate revenue for Google Cloud infrastructure. The arrangement operates essentially as a cloud consumption deal structured as equity financing.
For developers building multi-agent systems, these capital injections guarantee the sustained availability of frontier models capable of agentic execution. If you rely on Claude models for complex reasoning tasks, you can expect Anthropic to have the physical infrastructure required to scale their operations globally without throttling API access.
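For context on what "agentic execution" means in practice, the core of such a system is usually a tool-use loop like the sketch below. It assumes a Messages-API-style response shape (a stop_reason of "tool_use" and tool_result blocks) and uses plain dicts in place of SDK objects; run_agent, call_model, and the dict keys are illustrative, not a specific vendor API.

```python
# Minimal agentic tool-use loop, sketched against a Messages-API-style
# response shape (stop_reason "tool_use", tool_result blocks). Plain dicts
# stand in for SDK objects; wire call_model to a real client in practice.
def run_agent(call_model, tools, user_prompt, max_turns=5):
    """Drive a model through tool calls until it produces a final answer."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        response = call_model(messages=messages)
        if response["stop_reason"] != "tool_use":
            return response  # model finished without requesting a tool
        # Execute each requested tool and feed the results back.
        results = []
        for block in response["content"]:
            if block["type"] == "tool_use":
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block["id"],
                    "content": tools[block["name"]](**block["input"]),
                })
        messages.append({"role": "assistant", "content": response["content"]})
        messages.append({"role": "user", "content": results})
    raise RuntimeError("agent exceeded max_turns without finishing")
```

In a real deployment, call_model would wrap the provider's SDK and tools would map tool names to sandboxed implementations; the compute deals above matter precisely because each loop iteration is a fresh inference call.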