
LiteLLM PyPI Attack Risks Credential Theft on Install

Compromised LiteLLM PyPI versions 1.82.7 and 1.82.8 could auto-run malware and steal credentials from Python environments.

LiteLLM shipped malicious PyPI releases 1.82.7 and 1.82.8 on March 24, 2026, after an attacker appears to have hijacked a maintainer PyPI account and published directly to PyPI outside the project’s normal GitHub CI/CD flow. For teams using LiteLLM as an AI gateway or as a transitive dependency in agent tooling, the incident matters because 1.82.8 could execute on any Python startup, not just when your code imported LiteLLM.

Public reporting puts the scale in perspective. Sonatype estimated roughly 3 million daily downloads, and Wiz estimated that LiteLLM appears in about 36% of cloud environments. This was a supply chain event in a package that sits deep inside production AI stacks, not a niche library with limited blast radius.

Affected releases

The malicious releases were 1.82.7 and 1.82.8. Both were removed, and LiteLLM maintainers rotated accounts and paused further releases while reviewing the blast radius.

The execution path differed between the two versions.

| Version | Malicious location | Trigger |
| --- | --- | --- |
| 1.82.7 | litellm/proxy/proxy_server.py | import litellm.proxy |
| 1.82.8 | same payload plus litellm_init.pth | any Python interpreter startup |

The .pth addition is the critical detail. Python executes .pth files in site-packages automatically during interpreter startup. In practice, 1.82.8 turned a package install into startup code execution across the environment where it landed.
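The mechanism is easy to demonstrate with the standard library's site module, which is the same machinery that processes .pth files in site-packages at startup: any line beginning with "import" is executed. The filename and payload below are harmless stand-ins, not the actual malicious content.

```python
import os
import site
import sys
import tempfile

# Write a .pth file into a temporary directory. A line that starts with
# "import" is exec'd by the site module, not merely added to sys.path.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    f.write("import sys; sys.modules.setdefault('pth_ran', sys)\n")

# site.addsitedir processes .pth files the same way interpreter startup
# processes site-packages. This is the hook litellm_init.pth abused.
site.addsitedir(demo_dir)

print("pth_ran" in sys.modules)
```

The takeaway: nothing in your application has to import the package. Installation alone is enough to place code on every interpreter startup path.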

Malware behavior

The payload harvested a broad set of local secrets and infrastructure credentials. The confirmed collection scope included environment variables, SSH keys, Git credentials, cloud credentials for AWS, GCP, and Azure, Kubernetes configs and service-account tokens, Docker config, shell history, database passwords, CI/CD files, SSL private keys, and cryptocurrency wallet material.

The exfiltration path was built to move that data off host in a structured way. Stolen data was encrypted with a random 32-byte AES-256 session key, the session key was encrypted with a hardcoded RSA-4096 public key, and the resulting archive was posted to https://models.litellm.cloud/.

For developers building agent systems, this is the uncomfortable supply chain version of the same problem seen in prompt-injection defenses and MCP security work. Your runtime often has access to exactly the credentials an attacker wants, and AI tooling increasingly centralizes them in one process.

Why 1.82.8 was more severe

1.82.7 required a specific import path. 1.82.8 removed that constraint.

The litellm_init.pth file, recorded at 34,628 bytes in the compromised wheel, launched a Python subprocess with an encoded payload. Once installed into site-packages, it could trigger whenever Python started. If your environment ran Python for unrelated tasks, the malicious code still had a path to execute.

This is where the operational risk changes. A bad package import can be isolated to one service. A bad startup hook can spill into build jobs, notebooks, local developer machines, CI runners, background tooling, and any agent framework sharing the same interpreter environment. If your team builds with agent frameworks or runs AI coding assistants inside developer environments, transitive dependency exposure becomes harder to reason about quickly.

Scope inside AI infrastructure

LiteLLM is widely used because it normalizes access to many model APIs behind one interface. That convenience also concentrates trust. One package can sit in proxies, internal gateways, evaluation harnesses, coding agents, and orchestration layers at the same time.

The maintainers said the compromise came from direct PyPI publication rather than the project’s GitHub release pipeline, and GitHub releases only went up to v1.82.6.dev1, making 1.82.7 and 1.82.8 immediately anomalous. Proxy Docker image users were not impacted because dependencies were pinned in requirements.txt.

That pinned image detail is practical. If you operate model gateways, exact version pinning and reproducible environments are no longer optional hygiene. They are part of your incident boundary.

Immediate actions for teams

If 1.82.7 or 1.82.8 touched any machine, treat the host as potentially credential-compromised. Check for litellm_init.pth, remove the malicious package, and rotate every credential present on the affected system, including cloud, Git, CI, Kubernetes, SSH, and database secrets.
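A first-pass check can be scripted with the standard library. This is a minimal sketch, not a complete forensic tool: it looks for the litellm_init.pth marker in the current interpreter's site directories and checks whether a known-bad LiteLLM release is installed.

```python
import site
from pathlib import Path

BAD_VERSIONS = {"1.82.7", "1.82.8"}   # compromised releases
MARKER = "litellm_init.pth"           # startup hook dropped by 1.82.8

def scan() -> list[str]:
    """Return findings for the current interpreter's environment."""
    findings = []
    site_dirs = site.getsitepackages() + [site.getusersitepackages()]
    for d in map(Path, site_dirs):
        if (d / MARKER).exists():
            findings.append(f"startup hook present: {d / MARKER}")
    try:
        from importlib.metadata import version
        if version("litellm") in BAD_VERSIONS:
            findings.append("compromised litellm release installed")
    except Exception:
        pass  # litellm not installed in this environment
    return findings

if __name__ == "__main__":
    for finding in scan():
        print(finding)
```

Run it under every interpreter you care about (system Python, each virtualenv, CI images), since a clean result in one environment says nothing about the others.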

Then review logs for outbound traffic to models.litellm.cloud, unauthorized cloud API usage, and suspicious Kubernetes activity. If LiteLLM entered through a transitive dependency, audit lockfiles and build artifacts, not just direct requirements. Teams with LLM observability already in place should extend the same discipline to Python package provenance, startup hooks, and dependency anomaly detection before reinstalling LiteLLM anywhere in production.
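For the transitive-dependency audit, importlib.metadata can answer the question "which installed packages declare litellm as a dependency" without leaving the standard library. A rough sketch, assuming requirement strings follow the usual PEP 508 name-first format:

```python
import re
from importlib.metadata import distributions

def who_requires(target: str = "litellm") -> list[str]:
    """List installed distributions that declare `target` as a dependency."""
    dependents = set()
    for dist in distributions():
        for req in dist.requires or []:
            # Requirement strings lead with the package name,
            # e.g. "litellm>=1.0; extra == 'proxy'".
            match = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", req)
            if match and match.group(0).lower() == target:
                dependents.add(dist.metadata["Name"])
    return sorted(dependents)

if __name__ == "__main__":
    print(who_requires("litellm"))
```

This only covers the live environment; lockfiles and baked container images still need to be checked at their source.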
