
Google Inks Multibillion GB300 Deal With Thinking Machines Lab

Google signed a multibillion-dollar agreement to provide Thinking Machines Lab with access to Nvidia GB300 infrastructure for reinforcement learning.

On April 22, 2026, Google announced a multibillion-dollar agreement to provide Mira Murati’s Thinking Machines Lab (TML) with high-priority access to its AI infrastructure. The non-exclusive partnership, revealed at the Google Cloud Next conference, centers on providing the startup with Nvidia GB300 “Blackwell Ultra” chips to train custom frontier AI models. This secures massive compute capacity for TML while marking a major design win for Google Cloud’s high-margin services.

Hardware and Cloud Integration

TML will deploy Google’s A4X Max virtual machines, which are built on Nvidia GB300 NVL72 rack-scale systems. These systems deliver a 2x speedup in training and serving compared to previous GPU generations. At this cluster scale, specialized networking is required to prevent bottlenecks during weight updates.

The lab’s reinforcement learning workloads rely heavily on high-bandwidth transfers, which Google will support using its Jupiter network. TML will integrate its operations deeply into the Google Cloud ecosystem. The lab will utilize Spanner for database operations, Google Kubernetes Engine (GKE) for orchestration, and Cluster Director to manage distributed AI inference and training workloads across nodes.
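For context on what a GKE-based GPU stack looks like at a much smaller scale, here is a hedged sketch of provisioning an accelerator node pool. The machine type and accelerator identifiers below are placeholders: the article does not specify them, and GB300-backed A4X Max capacity may be exposed under different names, so check Google Cloud’s current machine-type catalog before relying on any of these values.

```shell
# Hedged sketch: adding a GPU node pool to an existing GKE cluster.
# "a4x-highgpu-4g" and "nvidia-gb300" are assumed placeholder names,
# not identifiers confirmed by the article or Google documentation.
gcloud container node-pools create rl-training-pool \
  --cluster=example-cluster \
  --region=us-central1 \
  --machine-type=a4x-highgpu-4g \
  --accelerator=type=nvidia-gb300,count=4 \
  --num-nodes=2

# Workloads then request the GPUs through the standard Kubernetes
# resource API in their pod spec:
#   resources:
#     limits:
#       nvidia.com/gpu: 4
```

At frontier-lab scale, tools like Cluster Director sit above this layer, scheduling training and inference jobs across many such node pools rather than a single one.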

Strategic Workloads and Hardware Roadmap

The dedicated compute capacity is earmarked for scaling Tinker, TML’s internal tool for automating the development of custom AI agents and frontier models using reinforcement learning. TML is now the third major AI lab to secure massive compute from Google this month, following multibillion-dollar TPU and Blackwell agreements with Anthropic and Meta.

The Google deal gives TML immediate access to Blackwell silicon, bridging a critical hardware gap until its separate one-gigawatt data center partnership with Nvidia bears fruit. That agreement, signed in March 2026, targets the deployment of Nvidia Vera Rubin systems in early 2027.

Corporate Valuation and Talent Attrition

Thinking Machines Lab was founded as a public benefit corporation in February 2025. The company raised a $2 billion seed round led by Andreessen Horowitz at a $12 billion valuation in April 2025. The leadership team includes CEO Mira Murati, CTO Barret Zoph, and Chief Scientist John Schulman.

The infrastructure announcement arrives alongside a high-profile talent battle. Meta recently poached seven founding members of TML. The departures included the lead engineer for Tinker and foundational researchers Joshua Gross, Andrew Tulloch, Mark Jen, and Yinghai Lu. Following the official Cloud Next announcement, Alphabet shares rose approximately 2% as investors favored the locked-in revenue from another frontier lab.

If you build systems that rely on complex multi-agent coordination, expect major cloud providers to keep prioritizing these large-scale lab partnerships for early access to next-generation silicon. Smaller deployments will face longer lead times for GB300 capacity until these multibillion-dollar anchor-tenant contracts are fulfilled. Plan your infrastructure scaling and model training timelines accordingly.
