
Anthropic Launches Claude Code Auto Mode

Anthropic launched Auto mode for Claude Code, a research-preview permissions feature that lets coding agents run longer tasks with fewer approvals.

Anthropic launched Auto mode for Claude Code on March 24, giving coding agents a new middle ground between per-action approval prompts and full permission bypass. For developers using Claude Code on longer tasks, the change matters because it reduces interruption during file edits and shell execution while still inserting a safety check before each tool call.

Execution model

Auto mode is a new permission mode, not a new model. In Claude Code’s default mode, every file write and bash command requires approval. In Auto mode, a classifier evaluates each tool call before execution and screens for potentially destructive actions.

Anthropic’s examples are concrete: mass deletion, sensitive data exfiltration, and malicious code execution. Calls classified as safe run automatically. Risky calls are blocked, and repeated attempts to push through blocked actions eventually trigger a user permission prompt.

This is the operational distinction that matters. Claude Code can keep moving through routine implementation work without pausing on every edit, but it does not receive unrestricted shell access the way --dangerously-skip-permissions does.
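The gating behavior described above can be sketched as a small state machine. This is an illustrative Python sketch, not Anthropic's implementation: the pattern-matching classifier, the escalation threshold, and every name here are assumptions standing in for the real, model-based classifier.

```python
from dataclasses import dataclass, field

@dataclass
class AutoModeGate:
    """Illustrative per-tool-call permission gate: safe calls run,
    risky calls are blocked, and repeated blocked attempts escalate
    to a user permission prompt."""
    escalate_after: int = 3  # hypothetical threshold, not documented
    blocked_counts: dict = field(default_factory=dict)

    def classify(self, tool: str, args: str) -> str:
        # Stand-in for the real classifier: flag obviously destructive
        # patterns (mass deletion, data exfiltration, malicious code).
        risky_patterns = ("rm -rf", "curl --upload-file", "DROP TABLE")
        return "risky" if any(p in args for p in risky_patterns) else "safe"

    def check(self, tool: str, args: str) -> str:
        if self.classify(tool, args) == "safe":
            return "run"  # executes without interrupting the user
        key = (tool, args)
        self.blocked_counts[key] = self.blocked_counts.get(key, 0) + 1
        if self.blocked_counts[key] >= self.escalate_after:
            return "ask_user"  # repeated blocks trigger a permission prompt
        return "block"

gate = AutoModeGate()
print(gate.check("bash", "ls -la"))        # safe call runs
print(gate.check("bash", "rm -rf /data"))  # risky call is blocked
```

The key property is the asymmetry: routine calls never interrupt the user, while a risky call is blocked first and only becomes a prompt if the agent keeps retrying it.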

Availability and supported models

Auto mode launched as a research preview on the Team plan. Enterprise plan and API support are slated to follow in the coming days.

It works with Claude Sonnet 4.6 and Claude Opus 4.6.

For CLI users, enablement is straightforward:

```shell
claude --enable-auto-mode
```

Inside an active session, you can cycle permission modes with Shift+Tab. Existing Claude Code controls such as mode switching and team policy now matter more, because permissioning is becoming part of agent design, not just a convenience setting. If you are already standardizing coding workflows or Claude Code channels, this fits into the same governance layer.

Admin controls

In the Claude desktop app and the VS Code extension, Auto mode is disabled by default; admins must enable it in Organization Settings → Claude Code before users can turn it on.

Managed environments also get an explicit policy control:

| Control | Value |
| --- | --- |
| Disable Auto mode policy | "disableAutoMode": "disable" |
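In a managed settings file, that policy might look like the fragment below. Only the key-value pair comes from the launch notes; the surrounding file structure and its location are assumptions that vary by platform and deployment.

```json
{
  "disableAutoMode": "disable"
}
```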

This is a useful signal about intended deployment. Anthropic is treating Auto mode as an organization-level feature, not just an individual productivity toggle.

Safety tradeoffs

Auto mode adds automation by inserting a classifier in front of tool execution. It does not guarantee safe execution.

Anthropic explicitly states the classifier can make both categories of error. It can allow risky actions when intent is ambiguous or environment context is incomplete. It can also block benign actions. For teams building internal coding agents, this means the failure mode shifts from frequent human interruption to classifier judgment error.

Anthropic also recommends using Auto mode in isolated environments. If your agent can touch production credentials, shared network drives, customer data, or deployment tooling, isolated execution should be your default anyway. This is consistent with the broader security direction across coding agents, where runtime controls increasingly matter as much as prompt controls, especially for systems using shell tools and external integrations such as function calling or MCP-style access patterns in agent architectures.

Cost and latency impact

Each tool call in Auto mode passes through an additional classifier step. Anthropic says this can slightly increase token consumption, cost, and latency.

No benchmark numbers or pricing deltas are attached to the launch, so the practical implication is architectural rather than financial. If your workflow triggers many short shell calls, frequent file operations, or iterative search-edit loops, the classifier becomes part of your tool-call budget. Teams already tracking LLM observability and agent evaluation should account for this in throughput and cost baselines.
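A back-of-envelope sketch shows how a per-call step compounds over a session. Every number below is a hypothetical placeholder, since Anthropic has published no overhead figures; the point is the multiplication, not the values.

```python
# Hypothetical per-call classifier overhead compounded over a session.
tool_calls = 200            # e.g. an iterative search-edit loop
classifier_tokens = 300     # assumed tokens consumed per classification
classifier_latency_s = 0.4  # assumed added latency per call

added_tokens = tool_calls * classifier_tokens
added_latency_s = tool_calls * classifier_latency_s

print(f"extra tokens: {added_tokens}")          # 60000
print(f"extra latency: {added_latency_s:.0f}s")  # 80s
```

Workloads with many short tool calls pay the overhead most often, which is why it belongs in throughput and cost baselines rather than being treated as noise.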

Where Auto mode fits in the coding-agent stack

Claude Code already had multiple permission modes, including conservative approval-first settings and full bypass. Auto mode fills the missing middle tier: higher autonomy than default mode, lower trust than unrestricted execution.

That makes it relevant beyond Claude users. Most coding agents are converging on the same product problem: how to let agents work unattended for longer stretches without handing them an open-ended execution surface. Anthropic’s answer is a per-tool classifier gate. Other vendors are taking adjacent paths through monitoring, policy enforcement, or environment isolation. If you are comparing AI coding assistants, permission architecture is becoming a first-class differentiator alongside model quality and editor UX.

Use Auto mode for bounded repository work in disposable or isolated environments, and keep explicit approval gates around anything that can delete broadly, exfiltrate data, or touch production systems.
