
Agent View Brings Parallel Task Orchestration to Claude Code

The May 2026 update to Claude Code introduces Agent view, a centralized dashboard for backgrounding, monitoring, and interacting with parallel agent workflows.

Anthropic has released Agent view in Claude Code, providing a centralized interface for managing parallel agentic workflows. Included in the Claude Code v2.1.139 update, the dashboard eliminates the need for multiple terminal tabs or external multiplexers by allowing developers to launch, background, and monitor multiple concurrent sessions natively.

Session Management and Command Routing

The dashboard organizes active processes into three categories: running, blocked on you, and done. Users launch the interface via the claude agents command. Active tasks can be moved to the background using the /bg command, or spawned directly into a detached state with claude --bg [task].
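Conceptually, the dashboard is a grouping of sessions by state. A minimal sketch of how such bucketing might work — the `Session` class and grouping function are illustrative, not Claude Code's actual data model; only the three category names come from the release:

```python
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    RUNNING = "running"
    BLOCKED = "blocked on you"
    DONE = "done"

@dataclass
class Session:
    name: str
    status: Status

def group_sessions(sessions: list[Session]) -> dict[Status, list[str]]:
    """Bucket sessions into the three categories the dashboard displays."""
    groups: dict[Status, list[str]] = defaultdict(list)
    for s in sessions:
        groups[s.status].append(s.name)
    return dict(groups)

sessions = [
    Session("generate-tests", Status.RUNNING),
    Session("refactor-auth", Status.BLOCKED),
    Session("review-pr", Status.DONE),
]
grouped = group_sessions(sessions)
```

A real dashboard would additionally poll each backgrounded process for state transitions, but the running / blocked / done partition is the core organizing idea.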

When an agent requires human intervention or approval, developers can provide input inline directly from the dashboard, without fully attaching to the session. Full attachment remains supported: pressing Enter on a session opens the complete transcript and an interactive terminal.

Autonomous Goals and Orchestration

The release introduces the /goal command for extended operations. Developers can define a specific completion condition, instructing the agent to run autonomously across multiple turns until the objective is met. This mechanism functions natively across interactive mode, -p (prompt) mode, and Remote Control.
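At its core, a goal-driven run is a loop that keeps taking agent turns until a completion condition passes or a turn budget runs out. A minimal sketch of that control flow — `run_turn` and the goal predicate are hypothetical stand-ins, not Claude Code internals:

```python
def run_until_goal(run_turn, goal_met, max_turns: int = 50):
    """Take agent turns autonomously until the completion condition holds."""
    for turn in range(1, max_turns + 1):
        state = run_turn()          # one agent turn; returns observable state
        if goal_met(state):         # the developer-defined completion condition
            return turn, state
    raise TimeoutError("goal not met within max_turns")

# Toy stand-in: the "agent" increments a counter; the goal is reaching 3.
counter = {"n": 0}
def fake_turn():
    counter["n"] += 1
    return counter["n"]

turns_used, final_state = run_until_goal(fake_turn, lambda n: n >= 3)
```

The turn cap matters in practice: an autonomous loop with a condition that never becomes true should fail loudly rather than run indefinitely.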

To support observability across multi-agent systems, Anthropic updated the underlying API request structure. Requests initiated by subagents now carry x-claude-code-agent-id and x-claude-code-parent-agent-id headers, ensuring telemetry and token usage can be traced back through the parent hierarchy.
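The two header names come straight from the release notes; everything else below is an assumed sketch of how a client might attach them so telemetry rolls up through the parent hierarchy (the ID scheme is illustrative):

```python
import uuid

def subagent_headers(parent_agent_id: str) -> dict[str, str]:
    """Build the tracing headers a spawned subagent would send with each request.

    Header names are from the Claude Code v2.1.139 release; generating IDs
    with uuid4 is an assumption for illustration.
    """
    return {
        "x-claude-code-agent-id": str(uuid.uuid4()),
        "x-claude-code-parent-agent-id": parent_agent_id,
    }

root_id = str(uuid.uuid4())
headers = subagent_headers(root_id)
```

Because every subagent request carries its parent's ID, token usage recorded per request can be aggregated back up the spawning tree.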

Agent view ships alongside experimental orchestration features from the recent “Code with Claude” event. A new Dreaming capability allows backgrounded agents to evaluate past sessions, extracting patterns to optimize their local agent memory between active runs. Additionally, an Outcomes feature provides rubric-based grading, letting developers define explicit success criteria for automated tasks. This grading system integrates directly with parallel sessions, automating quality checks before a backgrounded agent marks itself as complete.
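Rubric-based grading of the kind Outcomes describes can be pictured as evaluating a task's output against a set of named predicates, with completion gated on all of them passing. A hypothetical sketch — the rubric format and field names are invented for illustration, not the Outcomes API:

```python
def grade(output: dict, rubric: dict) -> tuple[bool, dict]:
    """Check a task's output against explicit success criteria.

    Each rubric entry maps a criterion name to a predicate over the output;
    the task only counts as complete when every criterion passes.
    """
    results = {name: bool(check(output)) for name, check in rubric.items()}
    return all(results.values()), results

# Hypothetical criteria for an automated test-generation task.
rubric = {
    "tests_pass": lambda o: o["failed_tests"] == 0,
    "coverage_ok": lambda o: o["coverage"] >= 0.80,
}
output = {"failed_tests": 0, "coverage": 0.91}
passed, detail = grade(output, rubric)
```

Gating on `passed` is what lets a backgrounded agent verify its own work against developer-defined criteria before marking itself done.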

Infrastructure Limits and Scale

Running many agents in parallel demands substantial compute headroom. Anthropic increased hourly usage limits for Claude Code users by leveraging Colossus 1, a 220,000-GPU data center built in partnership with SpaceX.

The infrastructure expansion coincides with a broader push for enterprise adoption, including the general availability of the Claude Platform on AWS. API volume grew 80x year-over-year in the first quarter of 2026. Developer usage metrics reflect this scale, with Claude Code users now averaging 20 hours per week in the tool. Companies operating at this scale report significant output shifts; Airbnb currently generates approximately 20% of its new code using Claude agents.

Agent view is currently in Research Preview for users on Claude Pro, Max, Team, and Enterprise plans, as well as API subscribers. For developers maintaining large codebases, the /bg and /goal commands make it possible to run test generation, refactoring, and code review simultaneously without blocking the primary development terminal.
