
Bugbot Now Learns From Human Feedback to Fix More Code

Cursor's Bugbot introduces Learned Rules, a self-improving system that analyzes human reviews to reach a 78% resolution rate in pull requests.

Anysphere has updated its AI code review agent, Bugbot, with a self-improvement system that learns directly from human feedback. The April 2026 release of Bugbot shifts the tool from relying on offline model improvements to a real-time learning loop. For teams managing high volumes of pull requests, this changes the baseline effectiveness of automated code reviews.

Rule Generation and Lifecycle

Bugbot now treats every code review as a natural experiment. It monitors developer reactions and replies on pull requests to determine whether a suggested fix was actually implemented. Based on these signals, the agent generates candidate rules.

When a candidate rule consistently accumulates positive signals, Bugbot promotes it to active status. The system automatically disables active rules that draw negative feedback or consistent false positives. Since the feature entered beta, more than 110,000 repositories have enabled the learning mechanism, generating over 44,000 active rules.
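The lifecycle described above amounts to a small state machine driven by review outcomes. The following is a minimal sketch of that idea; the class, thresholds, and counter names are illustrative assumptions, not Bugbot's actual implementation.

```python
from dataclasses import dataclass

# Assumed thresholds for illustration only -- Bugbot's real promotion and
# deactivation criteria are not public.
PROMOTE_AT = 3   # positive signals needed to promote a candidate
DISABLE_AT = 3   # negative signals before an active rule is disabled


@dataclass
class LearnedRule:
    pattern: str
    status: str = "candidate"
    positive: int = 0
    negative: int = 0

    def record_signal(self, fix_was_applied: bool) -> None:
        """Update counters from one pull-request outcome, then adjust status."""
        if fix_was_applied:
            self.positive += 1
        else:
            self.negative += 1
        if self.status == "candidate" and self.positive >= PROMOTE_AT:
            self.status = "active"
        elif self.status == "active" and self.negative >= DISABLE_AT:
            self.status = "disabled"  # automatic deprecation


rule = LearnedRule("avoid-unchecked-null-deref")
for applied in (True, True, True):
    rule.record_signal(applied)
print(rule.status)  # -> active
```

The key design point is that deactivation is automatic: a rule that starts generating noise removes itself without human intervention.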

Benchmark Results and Market Position

The shift to real-time feedback has sharply raised Bugbot's resolution rate. The agent now sees 78.13% of its flagged bugs fixed before merge, up from 52% at its July 2025 launch. As teams apply AI code review across larger codebases, resolution rates offer a clearer measure of utility than raw detection volume.

Cursor released comparative analysis across public repositories to position this update against competitors. Based on an analysis of 50,310 pull requests, Bugbot’s 78.13% resolution rate outpaces Greptile at 63.49% and CodeRabbit at 48.96%. GitHub Copilot achieved a 46.69% resolution rate across 24,336 analyzed pull requests.
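The metric behind these comparisons is simple arithmetic: findings fixed before merge divided by findings flagged. A small helper makes the assumed definition explicit; the raw counts below are illustrative, since the analysis reports rates rather than totals.

```python
def resolution_rate(fixed_before_merge: int, flagged: int) -> float:
    """Share of flagged bugs fixed before merge, as a percentage."""
    if flagged == 0:
        return 0.0
    return round(100 * fixed_before_merge / flagged, 2)


# Illustrative counts only -- the published analysis reports percentages.
print(resolution_rate(7813, 10000))  # -> 78.13
```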

Alongside these metrics, Bugbot's Autofix feature now triggers only on substantial findings, reducing prompt noise. It filters for relevant rules before applying fixes, keeping the context window focused.
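A pre-Autofix filter of this kind can be sketched as a single pass over findings: keep only those above a severity floor whose rule is still active. Every name and field below is a hypothetical stand-in for whatever Bugbot does internally.

```python
# Hypothetical pre-Autofix filter: keep only substantial findings whose
# rule is still active. Field names and the severity scale are assumptions.
def select_for_autofix(findings, active_rules, min_severity=2):
    return [
        f for f in findings
        if f["severity"] >= min_severity and f["rule"] in active_rules
    ]


findings = [
    {"rule": "null-deref", "severity": 3},
    {"rule": "style-nit", "severity": 1},   # too minor to autofix
    {"rule": "stale-rule", "severity": 3},  # rule was deactivated
]
print(select_for_autofix(findings, active_rules={"null-deref"}))
# -> [{'rule': 'null-deref', 'severity': 3}]
```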

System Integrations and Cursor 3 Context

The update expands how Bugbot connects to external developer environments. Bugbot now supports the Model Context Protocol, allowing enterprise users to connect the agent to MCP servers for access to private tools and additional repository context.
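For reference, Cursor's editor registers MCP servers through a `mcpServers` JSON map. The snippet below follows that documented shape; the server name, package, and environment variable are placeholders, and whether Bugbot reads the identical file is an assumption.

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "@acme/internal-docs-mcp"],
      "env": { "DOCS_API_KEY": "…" }
    }
  }
}
```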

This follows the early April rollout of Cursor 3, which introduced a unified Agents Window and support for self-hosted cloud agents. That platform update enabled support for frontier models including GPT-5.4, Opus 4.6, and Gemini 3 Pro.

Inside the newly redesigned Bugbot dashboard, developers can separate team and personal settings while managing their learned rules. A new batch operation also lets developers apply all suggested Bugbot fixes across a pull request in a single action.

If you configure automated code review pipelines, you should track your false positive rates against Bugbot’s new baseline. The addition of automated rule deprecation means you can allow the agent to generate aggressive candidate rules without permanently degrading the signal-to-noise ratio in your team’s pull requests.
