
Mercor Hit by Cyberattack via LiteLLM Supply Chain Breach

AI startup Mercor confirmed a data breach after hackers compromised the open-source LiteLLM project to steal internal data and credentials.

AI recruiting startup Mercor confirmed a major data breach tied to a supply chain attack on the open-source LiteLLM project. The extortion group Lapsus$ leveraged a misconfigured GitHub Actions workflow to push malicious code to the public PyPI registry, and the resulting malware exposed credentials and internal data across thousands of companies. If you rely on open-source API gateways to route requests to AI models, this incident highlights severe weaknesses in standard Python packaging pipelines.

Payload Execution and Discovery

The attackers, tracked in technical reports as TeamPCP, gained initial access by compromising a GitHub Actions workflow for the Trivy vulnerability scanner. This access yielded the PyPI publishing token for LiteLLM, an API gateway with over 3.4 million daily downloads. On March 24, 2026, the attackers published two backdoored versions, 1.82.7 and 1.82.8.

The malicious payload relied on Python’s site-specific configuration hooks. It included a .pth file named litellm_init.pth. Python’s site module processes .pth files found in site-packages at interpreter startup, executing any line that begins with an import statement. As a result, the malware ran silently on every interpreter launch, even if the application never explicitly imported the library.
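The mechanism is easy to reproduce in isolation. The sketch below is a harmless stand-in, not the actual payload: it writes a .pth file into a fresh directory and registers that directory with `site.addsitedir()`, which applies the same processing the interpreter performs on site-packages at startup, executing any line that begins with `import`.

```python
import os
import site
import tempfile

# Hypothetical demo of the .pth execution mechanism (not the real malware).
# Any line in a .pth file that starts with "import" is exec()'d by the
# site module -- normally at interpreter startup, here via addsitedir().
demo_dir = tempfile.mkdtemp()
pth_path = os.path.join(demo_dir, "litellm_init.pth")

with open(pth_path, "w") as f:
    # One line: the "payload" just sets a marker environment variable.
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

site.addsitedir(demo_dir)  # processes the .pth file, running the import line

print(os.environ.get("PTH_DEMO"))  # -> executed
```

Nothing here depends on the library being imported; merely having the .pth file in a site directory is enough, which is why the backdoor fired in any process that started a Python interpreter with the compromised package installed.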

Discovery occurred entirely by accident. FutureSearch engineer Callum McMahon investigated a system crash caused by a “fork bomb”-style memory leak, which turned out to be a bug in the attackers’ own malicious code.

Target Scope and Data Exfiltration

Mercor relies heavily on LiteLLM to route traffic for AI models managed for labs like OpenAI and Anthropic. Lapsus$ published stolen Mercor assets on their leak site, including Slack communications, internal ticketing data, and video recordings of domain-expert AI contractors.

The primary goal of the malware was credential theft. The code targeted SSH keys, .env files, and cloud provider credentials across AWS, GCP, Azure, and Kubernetes. It specifically hunted for AI API keys, exfiltrating the data to remote servers at models.litellm.cloud and checkmarx.zone.
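For defenders, the reported target list suggests a straightforward audit. The scanner below is my own illustrative sketch, not the malware’s code: it walks a directory tree looking for the same material the payload reportedly hunted, namely .env files, SSH private keys, and strings resembling AI API keys (the `sk-` prefix pattern is an assumption about OpenAI-style keys, not taken from the advisory).

```python
import os
import re

# Assumed OpenAI-style key prefix; adjust patterns for other providers.
KEY_PATTERN = re.compile(r"\b(sk-[A-Za-z0-9_-]{20,})\b")

def find_exposed_credentials(root):
    """Return (category, path) pairs for credential material under root."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            # Sensitive files by name: .env files and SSH keys (id_rsa, id_ed25519, ...).
            if name == ".env" or name.startswith("id_"):
                hits.append(("sensitive file", path))
                continue
            # Otherwise scan file contents for key-shaped strings.
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if KEY_PATTERN.search(f.read()):
                        hits.append(("api key", path))
            except OSError:
                pass  # unreadable file; skip
    return hits
```

Running this against home directories and repository checkouts gives a rough inventory of what an attacker with the same file access could have exfiltrated.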

The blast radius extended well beyond Mercor. During the three-hour infection window, popular frameworks including DSPy, MLflow, CrewAI, and OpenHands automatically pulled the compromised package. Developers building multi-agent systems or setting up gateways to monitor AI applications frequently depend on these upstream libraries, compounding the exposure risk across the ecosystem.
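If your environment pulled LiteLLM during that window, the first check is whether one of the two backdoored versions is installed. A minimal sketch using only the standard library:

```python
from importlib import metadata

# The two backdoored releases named in the advisory.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status():
    """Report whether the installed litellm version is a known-bad release."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "not installed"
    return "COMPROMISED" if installed in COMPROMISED else f"clean ({installed})"

print(litellm_status())
```

Treat a match as incident response rather than prevention: since the payload ran at interpreter startup, a compromised install has likely already executed, and any credentials on that host should be rotated.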

Remediation Timeline

PyPI quarantined the affected versions approximately three hours after publication. The LiteLLM maintainers subsequently rebuilt their deployment pipeline.

| Date | Event |
| --- | --- |
| March 24, 2026 (10:39 UTC) | Malicious versions 1.82.7 and 1.82.8 published to PyPI. |
| March 24, 2026 (+3 hours) | PyPI quarantines the compromised versions. |
| March 30, 2026 | LiteLLM releases clean v1.83.0 with hardened CI/CD v2 pipeline. |
| April 1, 2026 | Mercor publicly confirms its data breach involvement. |

The new LiteLLM release relies on isolated environments for its CI/CD processes to prevent similar token theft.

Review your dependency management and CI/CD security posture immediately. Pin exact versions of your critical open-source dependencies and audit your continuous integration workflows for excessive permissions or misconfigured scanners that could expose publishing credentials.
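A quick way to start that audit is to flag any requirement that is not pinned to an exact version, since loose specifiers are what allow a backdoored release to be pulled in automatically. The helper below is a simple sketch: it handles plain one-line specifiers only, not continuation lines or hash options.

```python
import re

# A "pinned" requirement is exactly name==version on one line.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==\S+$")

def unpinned(requirements_text):
    """Return requirement lines that are not pinned to an exact version."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if line and not PINNED.match(line):
            bad.append(line)
    return bad

print(unpinned("litellm==1.83.0\nrequests>=2.0\n# comment\n"))
# -> ['requests>=2.0']
```

For stronger guarantees, combine exact pins with pip’s hash-checking mode (`pip install --require-hashes`), which also rejects a republished artifact whose contents have changed.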

Get Insanely Good at AI


The book for developers who want to understand how AI actually works. LLMs, prompt engineering, RAG, AI agents, and production systems.
