AI Agents · 2 min read

Claude Managed Agents: Built-In Memory Is Now Live

Anthropic released a built-in memory layer for Claude Managed Agents, enabling cross-session persistence via a mounted filesystem.

Anthropic has launched built-in memory for Claude Managed Agents in public beta. The update provides a persistent, cross-session memory layer integrated directly into the Claude platform. Developers building autonomous systems no longer need to provision custom vector databases to maintain state across long-running workflows.

Filesystem Architecture

The new memory system operates as a directly mounted filesystem rather than a simple context window appendage. Memories are stored as files within an agent’s /memories directory. Claude interacts with this storage using its native code execution capabilities. The model reads, writes, updates, and deletes memory files programmatically.
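The read/write/delete loop described above can be pictured as a small handler that maps the model's file operations onto a memory root. This is an illustrative sketch, not the platform's actual tool schema; the class name and methods are assumptions:

```python
from pathlib import Path

class MemoryStore:
    """Toy filesystem-backed memory store, mimicking an agent's /memories
    directory. Command names (read/write/delete) are illustrative only."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, name: str) -> Path:
        # Resolve inside the root to prevent path traversal out of the store.
        p = (self.root / name).resolve()
        if self.root.resolve() not in p.parents:
            raise ValueError(f"path escapes memory root: {name}")
        return p

    def write(self, name: str, content: str) -> None:
        p = self._path(name)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(content)

    def read(self, name: str) -> str:
        return self._path(name).read_text()

    def delete(self, name: str) -> None:
        self._path(name).unlink()
```

The path-traversal check matters in any real implementation: an agent that writes its own file paths should never be able to reach outside its mounted memory directory.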

By mounting memory directly onto the agent sandbox, the model determines what to retain for specific tasks. This prevents context bloat and allows the agent to be discerning about data retention. Developers manage these files via the API, which includes support for memory exports and audit trails. The feature requires the managed-agents-2026-04-01 beta header. For teams adding memory to agents natively, this architecture simplifies state management.
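Opting in is a matter of sending the beta header with each request. A hedged sketch using only the standard library: the endpoint and version header follow Anthropic's public Messages API conventions, while the model id and payload shape here are assumptions, not documented values:

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"  # standard Messages endpoint

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) a request carrying the managed-agents
    beta header named in the announcement. Body fields are assumptions."""
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "anthropic-beta": "managed-agents-2026-04-01",  # beta flag from the post
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": "claude-opus-4-7",  # hypothetical model id
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(API_URL, data=body, headers=headers)
```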

Enterprise Controls

The implementation includes controls designed for organizational scale. Development teams can establish shared memory repositories accessible by multiple agents simultaneously. A global organizational store can be configured as read-only while individual user stores remain writable.
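The two-tier layout can be modeled as a layered lookup: reads fall through from the writable user store to the read-only organizational store, and writes only ever touch the user layer. This is an illustrative sketch of the semantics, not the platform's configuration API:

```python
from pathlib import Path

class LayeredMemory:
    """Reads check the user layer first, then the org layer; writes go to
    the user layer only, so the org store stays effectively read-only."""

    def __init__(self, org_root: str, user_root: str):
        self.org = Path(org_root)
        self.user = Path(user_root)
        self.user.mkdir(parents=True, exist_ok=True)

    def read(self, name: str) -> str:
        for root in (self.user, self.org):
            p = root / name
            if p.exists():
                return p.read_text()
        raise FileNotFoundError(name)

    def write(self, name: str, content: str) -> None:
        p = self.user / name
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(content)
```

A user-level write shadows the organizational default without mutating it, which is the behavior you want when a global store is shared across many agents.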

Multiple agents operate against the same memory store concurrently without data collisions. The memory tool is also eligible for Zero Data Retention (ZDR). This satisfies compliance requirements for enterprise deployments.
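The announcement does not detail how collisions are avoided, but the standard mechanism for this is write-to-temp-then-atomic-rename, so concurrent readers always observe either the old file or the new one, never a partial write. A minimal sketch of that pattern, offered as an assumption about the approach:

```python
import os
import tempfile
from pathlib import Path

def atomic_write(path: str, content: str) -> None:
    """Write to a temp file in the target directory, then swap it into
    place with os.replace, which is atomic on POSIX filesystems."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=p.parent)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.replace(tmp, p)  # readers never see a half-written file
    except BaseException:
        os.unlink(tmp)
        raise
```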

Benchmarks and Platform Context

Early users reported significant efficiency improvements. Rakuten achieved a 97 percent reduction in first-time errors and cut costs by 27 percent by using the native memory layer to avoid redundant context processing. Wisedocs reported faster validation speeds for complex documentation, and Netflix used the system to retain conversational insights across extended sessions.

The memory release coincides with the introduction of new connectors for the Claude ecosystem and follows the recent release of Claude Opus 4.7. The underlying Managed Agents platform, which entered public beta earlier in April, incurs a runtime fee of $0.08 per session-hour in addition to standard token rates. As agents take on more complex tasks, evaluating and testing them over long horizons depends heavily on predictable state retention.
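The runtime fee composes simply with token charges, which makes budgeting straightforward. A quick helper for back-of-envelope estimates; the per-token prices are placeholders you would fill in from the current pricing page, only the $0.08/session-hour rate comes from the announcement:

```python
def session_cost(hours: float, input_tokens: int, output_tokens: int,
                 in_price_per_mtok: float, out_price_per_mtok: float,
                 runtime_rate: float = 0.08) -> float:
    """Total cost = runtime fee ($0.08/session-hour, per the post) plus
    token charges at the given per-million-token prices (placeholders)."""
    runtime = hours * runtime_rate
    tokens = (input_tokens / 1e6) * in_price_per_mtok \
           + (output_tokens / 1e6) * out_price_per_mtok
    return round(runtime + tokens, 4)
```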

When you configure your next long-horizon workflow, evaluate whether filesystem-based memory can replace your external retrieval pipelines. Offloading state management to the Claude runtime can reduce your total token expenditure and simplify infrastructure overhead.

Get Insanely Good at AI

The book for developers who want to understand how AI actually works. LLMs, prompt engineering, RAG, AI agents, and production systems.
