AI Agents · 3 min read

Google Launches Gemini Enterprise Agent Platform for AI Fleets

Google has replaced Vertex AI with the Gemini Enterprise Agent Platform, a centralized control plane for building and managing autonomous AI agent fleets.

At Google Cloud Next ‘26, Google introduced the Gemini Enterprise Agent Platform, a unified environment for managing autonomous AI workflows. The platform serves as the direct successor to Vertex AI, consolidating all future AI model building and tuning services into a single control plane. The release targets technical users and IT departments building production fleets of long-running software agents.

Architecture and Model Support

The platform provides access to over 200 models, establishing a flexible base for different execution tasks. This includes Google’s first-party Gemini 3.1 Pro and Gemini 3.1 Flash Image, internally codenamed Nano Banana 2. Support extends to Lyria 3 for multimodal tasks and the open-source Gemma 4. The environment also offers first-class integration with third-party models, including Anthropic’s Claude Opus, Sonnet, and Haiku.

Developers build these systems using an updated Agent Development Kit (ADK). The ADK utilizes a new graph-based framework to organize individual models into multi-agent systems for complex reasoning tasks. This allows distinct agents to handle specific sub-tasks within a broader workflow.
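The announcement does not show the ADK's graph API, but the idea can be sketched in plain Python as a small directed graph of agents, each owning one sub-task and sharing state with its downstream peers. All names here (`Agent`, `AgentGraph`) are illustrative stand-ins, not the real ADK interface:

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative stand-ins; NOT the actual ADK graph API.
@dataclass
class Agent:
    name: str
    run: Callable[[dict], dict]  # reads shared state, returns updates

@dataclass
class AgentGraph:
    agents: dict = field(default_factory=dict)
    edges: dict = field(default_factory=dict)  # parent name -> downstream names

    def add(self, agent, after=()):
        self.agents[agent.name] = agent
        for parent in after:
            self.edges.setdefault(parent, []).append(agent.name)
        return self

    def execute(self, root, state):
        # Breadth-first walk: each agent merges its updates into shared state.
        queue = [root]
        while queue:
            name = queue.pop(0)
            state.update(self.agents[name].run(state))
            queue.extend(self.edges.get(name, []))
        return state

# One research agent feeds two specialists -- sub-task decomposition in miniature.
graph = (AgentGraph()
         .add(Agent("research", lambda s: {"facts": ["A", "B"]}))
         .add(Agent("summarize", lambda s: {"summary": "+".join(s["facts"])}), after=["research"]))

result = graph.execute("research", {})
print(result["summary"])  # A+B
```

In a production system each `run` callable would wrap a model call routed to one of the platform's hosted models; the graph structure is what lets distinct agents own distinct sub-tasks.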

Agent Runtime and Governance

Google engineered a custom runtime specifically for agentic execution. The engine delivers sub-second cold starts and natively supports long-running processes that operate autonomously for days. To safely execute model-generated code and browser automation, the platform includes an Agent Sandbox. This hardened environment prevents autonomous operations from accessing or compromising host systems.
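The Agent Sandbox's internals are not public, but its core contract — model-generated code runs isolated from the host, with bounded execution time — can be approximated with a fresh interpreter process. This is a deliberately crude stand-in; a hardened sandbox would add syscall filtering plus filesystem and network isolation:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute model-generated code in a separate, isolated interpreter.

    The child process cannot touch the parent's memory, and the timeout
    bounds runaway autonomous loops. A real sandbox layers on far stronger
    isolation; this only illustrates the execution contract.
    """
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env hooks
        capture_output=True, text=True, timeout=timeout,
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip())
    return proc.stdout.strip()

print(run_untrusted("print(sum(range(10)))"))  # 45
```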

The control plane introduces centralized governance tools designed to prevent unauthorized or untracked execution. Administrators use the Agent Registry to maintain a library of approved tools and skills. The Agent Identity service assigns unique, trackable identifiers to every deployed instance. All execution requests route through the Agent Gateway, which enforces security policies and applies Model Armor protections.
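The interplay of the three governance services can be sketched in a few lines — an approved-tools set standing in for the Agent Registry, unique IDs for Agent Identity, and a policy check for the Agent Gateway. The names mirror the article's components, not any published API:

```python
import uuid

# Illustrative control-plane state; not a real Google API.
APPROVED_TOOLS = {"web_search", "sql_query"}   # Agent Registry: approved tools/skills
AGENT_IDS: dict = {}                           # Agent Identity: id -> agent name

def register_agent(name: str) -> str:
    """Assign a unique, trackable identifier to a deployed agent instance."""
    agent_id = str(uuid.uuid4())
    AGENT_IDS[agent_id] = name
    return agent_id

def gateway(agent_id: str, tool: str) -> bool:
    """Agent Gateway: every execution request passes a policy check."""
    if agent_id not in AGENT_IDS:
        raise PermissionError("unknown agent identity")
    return tool in APPROVED_TOOLS

aid = register_agent("billing-assistant")
print(gateway(aid, "sql_query"), gateway(aid, "shell_exec"))  # True False
```

The point of the design is that unregistered identities and unapproved tools fail closed: nothing executes without passing through the gateway.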

Developers rely on the Optimization Suite to evaluate and test AI agents before production deployment. The suite includes Agent Simulation for testing against synthetic workloads, alongside Agent Observability tools that generate visual execution traces for debugging reasoning failures.
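Execution tracing of the kind Agent Observability provides can be mimicked with a decorator that records each agent step's inputs, outputs, and latency — a minimal sketch of the trace data a debugger would visualize, not the suite's actual output format:

```python
import time
from functools import wraps

TRACE = []  # one record per agent step; a stand-in for a visual execution trace

def traced(step_name):
    """Record inputs, output, and latency for each agent step."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            out = fn(*args, **kwargs)
            TRACE.append({
                "step": step_name,
                "args": args,
                "output": out,
                "ms": round((time.perf_counter() - start) * 1000, 2),
            })
            return out
        return wrapper
    return decorator

@traced("plan")
def plan(goal):
    return [f"search:{goal}", f"summarize:{goal}"]

@traced("act")
def act(step):
    return f"done:{step}"

for s in plan("tpu pricing"):
    act(s)

print([r["step"] for r in TRACE])  # ['plan', 'act', 'act']
```

Replaying a trace like this against synthetic inputs is, in essence, what an agent-simulation harness automates before production deployment.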

Hardware and Infrastructure

The platform runs on Google’s 8th-generation Tensor Processing Units. The infrastructure uses TPU 8t for training and TPU 8i for inference, providing optimized hardware acceleration for the underlying models.

To support enterprise data requirements, Google launched the Agentic Data Cloud alongside the platform. This AI-native data architecture integrates directly with the agent orchestration layer. Early enterprise adoption includes a $1 billion agreement with Merck to deploy the system across drug research and manufacturing workflows.

If you maintain enterprise AI infrastructure, the deprecation of Vertex AI means migrating your orchestration layer to the new control plane. Teams should audit their existing prompts and routing logic against the new ADK graph framework to take advantage of the sub-second cold starts and sandboxed execution environments.
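One concrete way to start such an audit is to diff legacy routing rules against the nodes planned for the new graph and flag anything with no home yet. Both the route table and node set below are hypothetical examples:

```python
# Hypothetical legacy routing config: task name -> model it was hard-wired to.
legacy_routes = {"triage": "gemini-pro", "draft": "gemini-pro", "ocr": "vision-v1"}

# Tasks already assigned a node in the planned ADK graph (illustrative).
graph_nodes = {"triage", "draft", "review"}

# Routes with no corresponding graph node need a migration decision.
unmapped = sorted(set(legacy_routes) - graph_nodes)
print(unmapped)  # ['ocr']
```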

Get Insanely Good at AI

The book for developers who want to understand how AI actually works. LLMs, prompt engineering, RAG, AI agents, and production systems.
