
Claude Opus 4.7: Better Coding, 3x Vision, Cyber Controls

Anthropic releases Claude Opus 4.7 with major software engineering gains, 3x higher image resolution, automated cybersecurity safeguards, and a new xhigh effort level.

Anthropic released Claude Opus 4.7 on April 16, 2026, a direct upgrade to Opus 4.6 with meaningful gains in software engineering, vision resolution, and a new set of automated cybersecurity safeguards. The model is generally available across all Claude products, the API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry at the same pricing as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can access it via claude-opus-4-7 in the Claude API.
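At those rates, per-request cost is easy to estimate. A minimal sketch using the pricing figures above (the example token counts are illustrative):

```python
# Estimate the cost of a single Opus 4.7 request at the published rates:
# $5 per million input tokens, $25 per million output tokens.
INPUT_PER_MTOK = 5.00
OUTPUT_PER_MTOK = 25.00

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in USD at Opus 4.7 list pricing."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + (output_tokens / 1e6) * OUTPUT_PER_MTOK

# e.g. a 200k-token context producing an 8k-token response:
print(f"${request_cost_usd(200_000, 8_000):.2f}")  # $1.20
```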

Software Engineering Improvements

Opus 4.7 targets the kind of complex, long-running coding work that previously needed close human supervision. It follows instructions more precisely and verifies its own outputs before reporting back. On CursorBench, Opus 4.7 scores 70% versus 58% for Opus 4.6. A 93-task coding benchmark from one early tester showed a 13% improvement in task resolution rate, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve.

Instruction following is substantially better, which has a practical trade-off: prompts written for earlier models may produce unexpected results because Opus 4.7 takes instructions literally instead of interpreting them loosely. Anthropic recommends re-tuning prompts and harnesses when migrating.

The model is also better at using file system-based memory, retaining important notes across long, multi-session work. This reduces the amount of up-front context needed when returning to a project. If you work with AI coding assistants on multi-day tasks, this is the specific capability gap the improvement addresses.

3x Vision Resolution

Opus 4.7 can process images up to 2,576 pixels on the long edge, approximately 3.75 megapixels. That is more than three times the resolution of prior Claude models. This is a model-level change, not an API parameter, so images sent to Claude are automatically processed at higher fidelity.
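If you preprocess images before upload, a small helper can scale dimensions down to the new long-edge cap. A minimal sketch assuming only the 2,576 px long-edge limit applies (the ~3.75 MP figure corresponds to a typical aspect ratio at that edge length):

```python
def fit_to_long_edge(width: int, height: int, max_long_edge: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most max_long_edge."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height          # already within the cap; no resize needed
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

print(fit_to_long_edge(4000, 3000))   # (2576, 1932)
print(fit_to_long_edge(1920, 1080))   # unchanged: (1920, 1080)
```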

XBOW reported 98.5% on their visual-acuity benchmark versus 54.5% for Opus 4.6. The resolution increase supports use cases like computer-use agents reading dense screenshots, data extraction from complex diagrams, and work requiring pixel-perfect references.

Cybersecurity Safeguards

Anthropic deliberately reduced Opus 4.7’s cyber capabilities compared to Claude Mythos Preview and added automated safeguards that detect and block requests indicating prohibited or high-risk cybersecurity uses. This is part of the approach outlined in Project Glasswing, where Anthropic tests safeguards on less capable models before broadly releasing Mythos-class models.

Security professionals who need Opus 4.7 for legitimate purposes like vulnerability research, penetration testing, and red-teaming can join the Cyber Verification Program.

New Platform Features

Three additional features ship alongside the model:

  • xhigh effort level: A new effort level between high and max, giving finer control over the reasoning-latency trade-off. Claude Code now defaults to xhigh for all plans.
  • Task budgets (public beta): Developers can guide Claude’s token spend so it can prioritize work across longer runs.
  • /ultrareview in Claude Code: A dedicated slash command that produces a review session flagging bugs and design issues. Pro and Max users get three free ultrareviews. Auto mode also extends to Max users, allowing longer tasks with fewer permission interruptions.
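For API users, selecting the new effort level would look something like the request payload below. This is a hypothetical sketch: the announcement names the xhigh level but not the request field, so the "effort" parameter name here is an assumption; check the API reference for the parameter your SDK version actually exposes.

```python
# Sketch of a Claude API request payload selecting the new effort level.
# The "effort" field name is an ASSUMPTION for illustration; only the model
# ID and the level name "xhigh" come from the announcement.
payload = {
    "model": "claude-opus-4-7",
    "max_tokens": 4096,
    "effort": "xhigh",  # hypothetical parameter; new level between high and max
    "messages": [
        {"role": "user", "content": "Review this diff for bugs."},
    ],
}

print(payload["model"], payload["effort"])
```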

Migration Notes

Two changes affect token usage. Opus 4.7 uses an updated tokenizer in which the same input maps to roughly 1.0x to 1.35x as many tokens, depending on content type. The model also thinks more at higher effort levels in agentic settings, producing more output tokens. Anthropic’s migration guide provides details on tuning effort levels.
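For budget planning, the tokenizer change can be turned into a worst-case cost projection. A minimal sketch: the 1.35x figure is the upper end of the quoted range, and your actual inflation depends on content mix, so treat this as a ceiling rather than a forecast.

```python
import math

# Worst-case projection of input-token inflation under the new tokenizer.
# The 1.0x-1.35x range comes from the migration notes; measure on real
# traffic before relying on any single multiplier.
INPUT_PRICE_PER_MTOK = 5.00   # USD, Opus 4.7 input pricing

def projected_input_tokens(old_tokens: int, inflation: float = 1.35) -> int:
    """Project Opus 4.6 input-token counts onto the Opus 4.7 tokenizer."""
    return math.ceil(old_tokens * inflation)

old = 2_000_000                      # monthly input tokens under Opus 4.6
new = projected_input_tokens(old)    # 2,700,000 worst case
extra_cost = (new - old) / 1e6 * INPUT_PRICE_PER_MTOK
print(new, f"+${extra_cost:.2f}")    # 2700000 +$3.50
```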

If you are upgrading from Opus 4.6, start by measuring the token usage difference on your actual traffic before adjusting effort levels or prompting for conciseness.

Get Insanely Good at AI

The book for developers who want to understand how AI actually works. LLMs, prompt engineering, RAG, AI agents, and production systems.
