
GPT-5.5 Hits Bedrock as AWS Ships First-Party Autonomous Agents

AWS has launched autonomous Frontier Agents for security and SRE tasks alongside a native Amazon Bedrock integration for OpenAI's GPT-5.5 and Codex models.

On May 11, 2026, AWS introduced two autonomous systems for persistent cloud operations and ended Microsoft’s exclusive enterprise hold on OpenAI models. The general availability of Frontier Agents allows developers to deploy independent systems for penetration testing and incident response. Simultaneously, OpenAI’s GPT-5.5 and Codex models are now available natively through the Amazon Bedrock API.

The Frontier Agents Suite

AWS is targeting specialized, multi-day operations rather than simple chat completions. The Frontier Agents operate independently to achieve long-term goals without constant human oversight.

The AWS Security Agent functions as an on-demand penetration tester. It ingests source code, architecture diagrams, and system documentation to build context before attempting multi-step attack chains. Preview customers reported compressing evaluation timelines from weeks to hours, representing up to a 90% reduction in testing duration. An average 24-hour evaluation costs approximately $1,200. This is significantly less than traditional manual engagements, which typically range from $10,000 to $50,000.
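The quoted figures are easy to sanity-check. A minimal sketch of the arithmetic, using only the numbers reported above ($50 per task-hour, a 24-hour evaluation, and a $10,000–$50,000 manual engagement range):

```python
# Back-of-the-envelope comparison using the article's quoted figures.
# The manual-engagement range is the article's estimate, not a benchmark.

AGENT_RATE_PER_HOUR = 50.0  # USD per task-hour (announced pricing)

def agent_eval_cost(hours: float, rate: float = AGENT_RATE_PER_HOUR) -> float:
    """Cost of a Security Agent run at the quoted per-hour rate."""
    return hours * rate

cost = agent_eval_cost(24)  # the article's "average 24-hour evaluation"
print(f"24h evaluation: ${cost:,.0f}")

for manual in (10_000, 50_000):  # quoted manual-engagement range
    saving = 1 - cost / manual
    print(f"vs ${manual:,} manual engagement: {saving:.0%} cheaper")
```

At $50 per task-hour, the 24-hour run comes out to the quoted $1,200, an 88–98% saving against the manual range.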

The AWS DevOps Agent serves as an autonomous Site Reliability Engineering (SRE) teammate. It correlates telemetry from services such as CloudWatch, Datadog, and Splunk with deployment data and code repositories to identify root causes during incidents. For teams running AI agents in production, the headline metric is the 75% reduction in Mean Time to Resolution (MTTR) reported by early adopters.

| Agent System | Pricing Model | Primary Function | Supported Environments |
| --- | --- | --- | --- |
| AWS Security Agent | $50 per task-hour | Vulnerability validation | AWS, Azure, GCP, On-premises |
| AWS DevOps Agent | $0.50 per active minute | Root cause analysis | AWS, Azure, On-premises |

Both systems extend beyond AWS infrastructure. The DevOps agent utilizes the Model Context Protocol (MCP) to access and correlate on-premises data streams.

OpenAI Models Native to Bedrock

The addition of OpenAI to Amazon Bedrock marks a structural shift in enterprise AI procurement. GPT-5.5 and GPT-5.4 are currently in limited preview on the platform. Developers can also access Codex via the AWS CLI, desktop applications, and VS Code extensions. If you build coding assistants or automation scripts, this brings OpenAI’s specialized model directly into the AWS perimeter.
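Calling a Bedrock-hosted model goes through the same `bedrock-runtime` Converse API that AWS uses for its other foundation models. A minimal sketch with boto3 follows; note that the model identifier `openai.gpt-5.5` is an assumption for illustration, so check the Bedrock model catalog for the actual ID once the preview is available to you.

```python
# Sketch of invoking a Bedrock-hosted OpenAI model via boto3's
# Converse API. The model ID below is hypothetical -- verify the real
# identifier in the Bedrock console or `aws bedrock list-foundation-models`.

MODEL_ID = "openai.gpt-5.5"  # assumed identifier, not confirmed

def build_request(prompt: str, model_id: str = MODEL_ID) -> dict:
    """Assemble a Converse-API request body for a single user turn."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask(prompt: str) -> str:
    """Send the request; requires boto3 and AWS credentials with Bedrock access."""
    import boto3  # imported here so the module loads without boto3 installed

    client = boto3.client("bedrock-runtime")
    resp = client.converse(**build_request(prompt))
    return resp["output"]["message"]["content"][0]["text"]
```

Because the request shape is the standard Converse format, swapping between GPT-5.5 and other Bedrock models is a one-line `model_id` change rather than a new SDK.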

AWS combined OpenAI’s reasoning capabilities with the Amazon Bedrock Managed Agents service. Developers can now build production agents backed by GPT-5.5 while enforcing AWS security controls like IAM policies, PrivateLink, and CloudTrail auditing.
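In practice, gating model access behind IAM looks the same as for any other Bedrock foundation model. A minimal policy sketch is below; the model segment of the ARN (`openai.gpt-5.5`) is an assumption, so substitute the real model ID from your Bedrock console.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGpt55InvokeOnly",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5"
    }
  ]
}
```

Scoping `Resource` to a single foundation-model ARN lets a platform team approve GPT-5.5 for one workload without opening the rest of the Bedrock catalog, and every invocation still lands in CloudTrail.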

Financial integration removes a major barrier for enterprise adoption. OpenAI usage through Bedrock applies directly toward existing AWS cloud commitments. Organizations can consolidate their AI spending under established enterprise agreements rather than maintaining separate vendor contracts.

If you manage cloud infrastructure, compare the DevOps Agent's $0.50 per active minute against your current incident downtime costs. For teams already invested heavily in AWS enterprise discount programs, routing OpenAI API traffic through Bedrock converts isolated AI tool spending into committed cloud spend.
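That comparison can be sketched as a break-even calculation. The downtime cost below is a placeholder (every organization's figure differs); only the $0.50 per-active-minute rate comes from the announced pricing.

```python
# Break-even sketch for the DevOps Agent: how many minutes of downtime
# must it shave off an incident to pay for itself? AGENT_RATE is the
# announced price; DOWNTIME_COST_PER_MIN is a placeholder assumption.

AGENT_RATE = 0.50              # USD per active minute (announced pricing)
DOWNTIME_COST_PER_MIN = 100.0  # placeholder -- substitute your own figure

def breakeven_minutes_saved(active_minutes: float) -> float:
    """Downtime minutes the agent must eliminate to cover its own cost."""
    return active_minutes * AGENT_RATE / DOWNTIME_COST_PER_MIN

# A 4-hour (240-minute) investigation costs $120 of agent time; at
# $100/min of downtime it breaks even by saving just 1.2 minutes.
print(breakeven_minutes_saved(240))
```

Under these assumptions the bar is low: even a marginal MTTR improvement covers the agent's metered cost, which is the economic argument the 75% MTTR claim is trying to make.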
