OpenAI Secures ChatGPT macOS App After Axios Library Attack
OpenAI rotated its macOS code-signing certificates and hardened GitHub workflows following a dependency confusion attack on the ChatGPT desktop client.
OpenAI has rotated its macOS code-signing certificates following a supply chain attack that compromised its GitHub Actions build environment. The incident came to light on April 13, 2026, after a malicious package masquerading as an Axios update infiltrated the automated CI/CD pipeline for the ChatGPT macOS desktop client. For development teams that rely on automated package managers, the breach highlights the fragility of dynamic dependency resolution during sensitive signing steps.
The Axios Supply Chain Compromise
The attack leveraged a typosquatting and dependency confusion technique against the npm registry. On April 10, 2026, a malicious package mimicking a legitimate update to the popular Axios library was uploaded to the registry. OpenAI’s workflow was configured to pull the latest versions of its dependencies automatically during the code-signing phase, so the runner fetched and executed the compromised package without manual verification.
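The vulnerable pattern is nothing exotic: an ordinary manifest with a floating version range. A hypothetical excerpt (the package and range are illustrative, not OpenAI's actual manifest):

```json
{
  "dependencies": {
    "axios": "^1.6.0"
  }
}
```

With a caret range like this, a fresh `npm install` in CI resolves the newest matching release at build time, so a malicious `1.6.x` published to the registry minutes earlier gets pulled and its install scripts executed automatically.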
This vector specifically targeted high-value continuous integration environments rather than end-user machines. Researchers from Socket and Phylum flagged the anomalous package shortly after deployment. The technique shares characteristics with the recent LiteLLM package compromise, where build infrastructure was targeted via poisoned upstream modules.
Payload and Exfiltration
Once executed inside the runner, the malicious package triggered a post-install script designed to harvest environment variables. The payload exfiltrated the DEVELOPER_ID_APPLICATION certificate and its corresponding private key from the runner's environment. This key pair is used to cryptographically sign the ChatGPT macOS application for Apple’s Gatekeeper validation.
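An npm lifecycle script runs with nothing more than `process.env` in scope, which is all a harvester needs. As a defensive counterpart, a minimal sketch of an audit helper that flags environment variable names suggesting signing material or credentials (the name pattern and example values are hypothetical):

```javascript
// Hypothetical audit helper: flag environment variables whose names suggest
// signing material or credentials -- the kind of values a malicious
// post-install script can read directly from `process.env`.
const SENSITIVE = /(KEY|CERT|TOKEN|SECRET|PASSWORD|CREDENTIAL|DEVELOPER_ID)/i;

function findSensitiveVars(env) {
  // Return the matching variable names, sorted for stable output.
  return Object.keys(env)
    .filter((name) => SENSITIVE.test(name))
    .sort();
}

// Example: a runner environment that exposes a signing identity.
const example = {
  DEVELOPER_ID_APPLICATION: "...",
  SIGNING_KEY_PASSPHRASE: "...",
  PATH: "/usr/bin",
  HOME: "/Users/runner",
};
console.log(findSensitiveVars(example));
// → ["DEVELOPER_ID_APPLICATION", "SIGNING_KEY_PASSPHRASE"]
```

Running a check like this before the signing step is a cheap way to see exactly what a compromised dependency would be able to steal.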
The blast radius was restricted to the build environment. The production ChatGPT backend and user data were not accessed. The primary threat model shifted to the potential for attackers to distribute unauthorized software signed with OpenAI’s official Apple Developer identity.
Remediation and Hash Pinning
OpenAI initiated an emergency response on April 12. The compromised certificates were revoked, and new cryptographic identities were generated for the macOS client. Apple was brought in to invalidate the stolen certificate at the operating system level. macOS Gatekeeper will now block any software attempting to run with the compromised signature.
Users of the ChatGPT macOS app are receiving mandatory update prompts to install version 1.2026.104. This release is signed with the secured certificates.
To prevent recurrence, OpenAI modified its build configuration to enforce dependency pinning. The workflow now references specific cryptographic hashes rather than dynamic version numbers. This ensures that any modification to an upstream package will break the build before execution. It represents a necessary operational shift for teams building desktop AI software where client-side binaries require secure cryptographic signatures.
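In the npm ecosystem, hash pinning of this kind is typically expressed through the lockfile's `integrity` field, which `npm ci` verifies against every fetched tarball. A sketch of the relevant lockfile fragment (version, URL, and hash are illustrative placeholders):

```json
{
  "node_modules/axios": {
    "version": "1.6.8",
    "resolved": "https://registry.npmjs.org/axios/-/axios-1.6.8.tgz",
    "integrity": "sha512-<placeholder>"
  }
}
```

If a fetched package does not match its recorded `integrity` hash, `npm ci` aborts the install, so a tampered upstream release breaks the build before any of its code runs; pairing it with `--ignore-scripts` additionally blocks lifecycle hooks from executing at all.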
If you maintain automated build pipelines for compiled applications, audit your dependency resolution configurations immediately. Replace dynamic version tags in your package manifests with hardcoded cryptographic hashes. Limit the environment variables exposed to your runners during the build phase and isolate the code-signing step into a separate workflow that only executes after all dependencies are verified.
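That separation can be sketched as a two-job GitHub Actions workflow, where dependency installation is hash-verified and script-free, and only the downstream job ever sees signing secrets (job, script, and secret names here are hypothetical):

```yaml
jobs:
  build:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      # Hash-verified install from the lockfile, with lifecycle scripts disabled.
      - run: npm ci --ignore-scripts
      - run: npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: app-bundle
          path: dist/

  sign:
    needs: build   # runs only after dependencies were fetched and verified
    runs-on: macos-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app-bundle
      # Hypothetical signing script; only this job receives the certificate.
      - run: ./scripts/codesign.sh
        env:
          SIGNING_CERT_P12: ${{ secrets.SIGNING_CERT_P12 }}
```

Because the signing job never runs `npm install`, a compromised dependency in the build job has no path to the certificate material.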