
How to Build Long-Running AI Agents With Google ADK 1.0

Google's Agent Development Kit 1.0 enables multi-day workflows that survive restarts. Learn to configure durable state machines and persistent session storage.

Google Developers AI recently launched Agent Development Kit (ADK) 1.0, a framework designed to manage complex, multi-day enterprise workflows. Instead of stateless single-turn chatbot interactions, ADK provides durable state machines and persistent storage. This architecture allows an agent handling HR onboarding or sales prospecting to completely shut down while waiting for external triggers, and later resume from the exact state of its last execution. Here is how to configure the toolkit, implement database sessions, and handle human-in-the-loop approvals.

Installation and Setup

ADK 1.0 is officially stable across Python, Go, and Java, with TypeScript support also available. You can initialize new agent projects using the newly released CLI tool.

Install the CLI globally via uv:

```bash
uv tool install google-agents-cli
```

The kit operates as part of the broader Gemini Enterprise Agent Platform. It leverages the Agent Runtime, which is specifically optimized for the Gemini 3.x model line, including Gemini 3 Pro and Gemini 3 Flash. This runtime provides sub-second cold starts, which is critical when agents are frequently waking up from an idle state to process asynchronous events.
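With the CLI installed, defining an agent is mostly declarative configuration. The sketch below assumes the Python distribution of ADK and its `Agent` class; the model identifier, name, and instruction text are illustrative placeholders, not values from the ADK documentation:

```python
# Minimal agent configuration sketch (assumes the Python ADK package).
# The model string, name, and instruction are illustrative placeholders.
from google.adk.agents import Agent

root_agent = Agent(
    name="onboarding_agent",
    model="gemini-3-flash",  # placeholder model identifier
    description="Handles multi-day HR onboarding workflows.",
    instruction=(
        "Guide new hires through onboarding. Pause and wait for "
        "external triggers rather than polling for updates."
    ),
)
```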

Configuring Persistent Session Storage

Stateless agents lose their execution context when the underlying application restarts or scales down. ADK solves this by replacing local, in-memory storage with the DatabaseSessionService.

This service natively integrates with Google Cloud SQL for PostgreSQL and Firestore to serialize and store the entire session state. When an agent wakes up, the runtime hydrates the state machine from the database, ensuring context survives cluster evictions or scheduled downtime. You can configure the backend during initialization by passing your database credentials and specifying the persistence strategy in the agent configuration.
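As a configuration sketch, wiring the service to a PostgreSQL backend might look like the following. This assumes the Python ADK's `DatabaseSessionService` accepts a SQLAlchemy-style connection URL; the host, database, and credentials are placeholders:

```python
# Configuration sketch: persistent sessions backed by Cloud SQL for PostgreSQL.
# Assumes DatabaseSessionService takes a SQLAlchemy-style connection URL;
# the host, user, password, and database name are placeholders.
from google.adk.sessions import DatabaseSessionService

session_service = DatabaseSessionService(
    db_url="postgresql://agent_user:secret@10.0.0.5:5432/agent_sessions"
)

# Pass session_service to your runner so state is serialized to the
# database and rehydrated on wake-up, surviving restarts and evictions.
```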

Managing Context with Event Compaction

Long-running workflows inevitably accumulate massive execution histories. Appending all historical interactions into a single prompt increases latency, drives up token costs, and pushes models past their effective context windows.

ADK addresses this via Event Compaction. Instead of treating history as a raw text log, the framework maintains ordered lists of processors. These processors dynamically filter and summarize context before it reaches the language model. When designing your agent, you define which variables and outcomes are critical for the current execution step, allowing the compaction engine to prune irrelevant conversational turns or repetitive tool outputs.
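The processor pipeline can be pictured in plain Python. The sketch below is conceptual, not the ADK compaction API: each processor takes the event list and returns a smaller one, and the framework runs them in order before building the prompt.

```python
# Conceptual sketch of event compaction: an ordered list of processors
# prunes and summarizes history before it reaches the model.
# Function names and event shapes are illustrative, not the ADK API.

def drop_stale_tool_output(events, keep_last=2):
    """Keep only the most recent tool-output events; prune the rest."""
    tool_events = [e for e in events if e["type"] == "tool_output"]
    stale = {id(e) for e in tool_events[:-keep_last]}
    return [e for e in events if id(e) not in stale]

def summarize_old_turns(events, window=4):
    """Collapse everything before the recent window into one summary event."""
    if len(events) <= window:
        return events
    old, recent = events[:-window], events[-window:]
    summary = {"type": "summary",
               "text": f"[{len(old)} earlier events compacted]"}
    return [summary] + recent

def compact(events, processors):
    """Run each processor in order over the event history."""
    for process in processors:
        events = process(events)
    return events
```

Chaining `compact(history, [drop_stale_tool_output, summarize_old_turns])` first prunes repetitive tool outputs, then folds the remaining old turns into a single summary, keeping the prompt short regardless of how long the workflow has run.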

Implementing Human-in-the-Loop Workflows

Enterprise processes frequently require human authorization before executing destructive actions or finalizing financial transactions. ADK introduces the ToolConfirmation component to handle these scenarios natively.

When a tool requires approval, the agent triggers the requestConfirmation() method. This immediately pauses the LLM flow and serializes the current reasoning chain. The agent enters a dormant state until the user supplies the required payload or approves the action. Once authorized, the framework resumes execution precisely where it left off, with no need to reconstruct the prompt or ask the model to re-evaluate previous steps.
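The pause-and-resume behavior amounts to a small state machine. The plain-Python sketch below models it conceptually; the class and method names are illustrative and do not reproduce ADK's actual requestConfirmation() implementation:

```python
import json

# Conceptual sketch of human-in-the-loop pause/resume. The real framework
# handles serialization for you; these names are illustrative.

class AgentRun:
    def __init__(self):
        self.state = "running"
        self.steps = []          # reasoning chain so far
        self.pending_tool = None

    def request_confirmation(self, tool_name, args):
        """Pause the run and serialize the reasoning chain for storage."""
        self.state = "awaiting_approval"
        self.pending_tool = {"tool": tool_name, "args": args}
        return json.dumps({"steps": self.steps, "pending": self.pending_tool})

    @classmethod
    def resume(cls, snapshot, approved):
        """Rehydrate from the snapshot; no prompt reconstruction needed."""
        data = json.loads(snapshot)
        run = cls()
        run.steps = data["steps"]
        if approved:
            run.steps.append(f"executed {data['pending']['tool']}")
            run.state = "running"
        else:
            run.state = "rejected"
        return run
```

Because the snapshot captures the full reasoning chain, the resumed run continues from the exact step that triggered the approval request rather than replaying the conversation from the start.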

Agent Memory Bank and Profiles

Beyond basic session state, ADK includes a Memory Bank feature designed to recall high-accuracy details from past interactions. This system utilizes Memory Profiles to separate transient session data from persistent user preferences.

For example, if a user mentions a specific dietary restriction during a travel booking workflow, that detail is stored in their long-term memory profile. Weeks later, during a completely separate dining reservation workflow, the agent can retrieve the profile with low latency and apply the constraint automatically. This removes the need to repeatedly query a vector database for basic user facts.
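The split between transient session data and durable preferences can be sketched in plain Python. The class and method names below are illustrative, not the Memory Bank API:

```python
# Conceptual sketch: transient session state vs. a long-term memory profile.
# Class and method names are illustrative, not the ADK Memory Bank API.

class MemoryProfile:
    """Durable user facts that outlive any single session."""
    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value

    def recall(self, key, default=None):
        return self.facts.get(key, default)

class Session:
    """Transient workflow state, discarded when the workflow ends."""
    def __init__(self, profile):
        self.profile = profile
        self.scratch = {}

# Travel-booking workflow: a durable preference is promoted to the profile.
profile = MemoryProfile()
booking = Session(profile)
booking.scratch["destination"] = "Lisbon"              # transient
profile.remember("dietary_restriction", "vegetarian")  # durable

# Weeks later, a separate dining workflow reads the profile directly,
# with no vector-database lookup for a basic user fact.
dining = Session(profile)
constraint = dining.profile.recall("dietary_restriction")
```

The dining session starts with empty scratch state but still sees the dietary restriction, which is the separation the Memory Profiles feature describes.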

Structuring Multi-Agent Networks

For complex domains, you can organize networks of specialized sub-agents using ADK’s graph-based framework. The toolkit includes native support for the Agent-to-Agent (A2A) Protocol, meaning your ADK workflows can collaborate seamlessly with agents built in other frameworks like LangGraph, CrewAI, or Semantic Kernel.

When connecting disparate systems, ensure you map out the specific capabilities of each node to avoid circular dependencies. If you are migrating an older system, you may want to start by refactoring your monolithic agents into specialized functional blocks before networking them together.
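Mapping each node's capabilities before wiring agents together can be as simple as a dependency-graph check. This plain-Python sketch (an illustration, not an ADK utility) detects circular delegation between agents; the agent names are placeholders:

```python
# Sketch: detect circular dependencies in a network of agents before
# deployment. The graph maps each agent to the agents it delegates to.
# Agent names are illustrative placeholders.

def find_cycle(graph):
    """Return one delegation cycle as a list of nodes, or None if acyclic."""
    state = {}   # node -> "visiting" (on current path) or "done"
    stack = []   # current delegation path

    def visit(node):
        state[node] = "visiting"
        stack.append(node)
        for nxt in graph.get(node, []):
            if state.get(nxt) == "visiting":        # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if state.get(nxt) is None:
                found = visit(nxt)
                if found:
                    return found
        stack.pop()
        state[node] = "done"
        return None

    for node in graph:
        if state.get(node) is None:
            found = visit(node)
            if found:
                return found
    return None
```

Running this over the delegation map before deployment surfaces loops like a router that delegates to a specialist which eventually delegates back to the router, a common failure mode when federating ADK agents with LangGraph or CrewAI nodes over A2A.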

To move these multi-agent systems into production, you can deploy them via the GKE Agent Sandbox. This hardened environment within Google Kubernetes Engine allows you to safely execute model-generated code, spinning up to 300 sandboxes per second per cluster.
