Malicious elementary-data Release Steals Cloud API Credentials
A supply-chain attack on the popular elementary-data Python package exposed cloud provider keys and warehouse credentials for roughly 12 hours.
On Friday, April 25, 2026, attackers published a malicious version of the Python package elementary-data that harvested credentials from local machines and CI/CD environments. The supply-chain attack exploited a script injection vulnerability in a developer-configured GitHub Actions workflow. The compromised package, which facilitates anomaly detection and LLM observability in machine learning systems, averages over 1 million monthly downloads.
Credential Theft and Scope
The malicious release, version 0.23.3, remained live for approximately 12 hours before discovery and removal on April 26. Attackers pushed the tainted release simultaneously to the Python Package Index (PyPI) and the project’s Docker image account. The elementary-data CLI is widely integrated into automated pipelines, giving the rogue code broad access to production secrets.
When executed, the payload systematically scoured the local environment or CI/CD runner for sensitive configuration data:
| Target Category | Specific Assets Extracted |
|---|---|
| Cloud Providers | AWS, Azure, and Google Cloud credentials |
| Development Keys | API tokens and .ssh directory contents |
| Configuration | .env file contents and dbt profiles |
| Infrastructure | Data warehouse access tokens |
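Teams assessing exposure can audit the same categories defensively. A minimal sketch that flags credential-like environment variables an exposed runner would have leaked; the variable-name patterns are illustrative assumptions, since the payload's exact target list has not been published:

```python
import os
import re

# Illustrative patterns for credential-like variable names. This is an
# assumption for auditing purposes, not the attacker's actual target list.
SENSITIVE_PATTERNS = [
    r"AWS_(SECRET_)?ACCESS_KEY",
    r"AZURE_.*(KEY|SECRET|TOKEN)",
    r"GOOGLE_APPLICATION_CREDENTIALS",
    r"(API|AUTH)_?(TOKEN|KEY)",
    r"SNOWFLAKE_|BIGQUERY_|REDSHIFT_",  # warehouse credentials
]

def find_exposed_vars(env=None):
    """Return environment variable names matching credential-like patterns."""
    env = os.environ if env is None else env
    combined = re.compile("|".join(SENSITIVE_PATTERNS), re.IGNORECASE)
    return sorted(name for name in env if combined.search(name))
```

Any name this returns in an affected environment should be treated as compromised and its value rotated.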
Attack Execution Path
The breach originated from a malicious pull request that bypassed standard validation. This request exploited a script injection flaw in the project’s GitHub Actions configuration. The vulnerability allowed the attackers to run arbitrary bash scripts within the context of a privileged developer account.
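This class of flaw typically arises when an untrusted expression is interpolated directly into a `run:` step. A hypothetical sketch of the vulnerable pattern and a safer alternative; the project's actual workflow configuration has not been published:

```yaml
# VULNERABLE (hypothetical): the PR title is expanded into the shell script
# before execution, so a crafted title can run arbitrary commands with the
# workflow's privileges.
- name: Log PR title
  run: echo "Validating ${{ github.event.pull_request.title }}"

# SAFER: pass untrusted input through an environment variable, which the
# shell treats as data rather than code.
- name: Log PR title
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "Validating $PR_TITLE"
```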
Once inside the execution environment, the attackers extracted PyPI signing keys and account tokens. They used these credentials to publish version 0.23.3. The rogue package was functionally indistinguishable from a legitimate release, mirroring methods seen when the LiteLLM PyPI package was compromised in previous supply-chain incidents.
Required Remediation
The project maintainers removed the package within three hours of discovery, rotated all internal credentials, and patched the underlying GitHub Actions flaw. They subsequently released a safe version, 0.23.4. Other products, including Elementary Cloud and the Elementary dbt package, were unaffected by this specific breach.
If your systems pulled version 0.23.3, you must assume total compromise of any environment-accessible secrets. You should upgrade your installation to pin elementary-data==0.23.4 immediately. Delete local package caches to prevent accidental re-installation of the compromised artifact.
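Before rotating, it helps to confirm which environments actually pulled the tainted release. A minimal sketch using the standard library's `importlib.metadata`; the version numbers come from the advisory above:

```python
from importlib import metadata

COMPROMISED = "0.23.3"  # tainted release
SAFE = "0.23.4"         # patched release

def is_compromised(version):
    """True if the given elementary-data version is the tainted release."""
    return version == COMPROMISED

def check_installed():
    """Report the locally installed elementary-data version, if any."""
    try:
        version = metadata.version("elementary-data")
    except metadata.PackageNotFoundError:
        return "elementary-data is not installed"
    if is_compromised(version):
        return f"COMPROMISED: {version} installed; pin {SAFE} and rotate secrets"
    return f"ok: {version} installed"
```

Run this in each build image and developer environment; a `COMPROMISED` result means the rotation steps below are mandatory for that host.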
Finally, rotate all API keys, cloud tokens, and warehouse credentials that were present in the affected environments. Search your CI/CD runner logs for unexpected artifacts or marker files to confirm whether the malicious payload executed during the 12-hour exposure window.
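Specific indicators for this attack have not been disclosed, so one generic forensic check (an assumption, not a published IOC) is to list files in a runner workspace whose modification times fall inside the exposure window:

```python
import os
from datetime import datetime, timezone

def files_modified_between(root, start, end):
    """Yield paths under root whose mtime falls inside [start, end] (UTC)."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = datetime.fromtimestamp(
                    os.path.getmtime(path), tz=timezone.utc
                )
            except OSError:
                continue  # file vanished or is unreadable
            if start <= mtime <= end:
                yield path
```

Calling this on the runner workspace with the April 25-26 window will surface any files written during the exposure; unexpected artifacts warrant deeper forensic review.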