
Developer Claims to Crack Google SynthID AI Watermarking

A new open-source tool dubbed 'reverse-SynthID' claims to bypass Google DeepMind’s invisible watermarks using signal processing and spectral analysis.

On April 14, 2026, a developer operating under the pseudonym Aloshdenny released an open-source tool claiming to reverse-engineer Google’s SynthID image watermarking system. The GitHub repository, reverse-SynthID, uses signal processing and spectral analysis to strip invisible signatures from images generated by models accessible through the Google Gemini API. If you build systems that rely on provenance tracking, this release challenges the assumption that embedded AI watermarks are tamper-proof.

Spectral Analysis and Carrier Frequencies

The developer isolated the watermark signal by generating roughly 200 all-white or all-black images with Google’s Nano Banana Pro model. Because these “pure” outputs contain almost no image content, boosting their exposure and contrast exposes the underlying carrier-frequency structure.

This structure is heavily dependent on the output resolution. The spectral codebook used for a 1024x1024 image fails entirely when applied to a 1536x2816 image. The extraction method relies entirely on identifying these spatial patterns, requiring no access to Google’s proprietary encoder or decoder models.
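The averaging idea described above can be sketched in a few lines: when many near-uniform outputs are stacked, the random content cancels while any shared periodic signal survives, and a 2-D FFT then reveals it as spectral peaks. This is an illustrative reconstruction with a synthetic sinusoidal “watermark”, not code from the reverse-SynthID repository; the function name and parameters are my own.

```python
import numpy as np

def estimate_carrier_spectrum(images):
    """Average many near-uniform generated images so content cancels,
    leaving the shared periodic signal, then inspect its 2-D spectrum.
    All images must share ONE resolution: as the article notes, the
    spectral structure does not transfer between output sizes."""
    stack = np.stack([img - img.mean() for img in images])
    residual = stack.mean(axis=0)  # content-free residual
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(residual)))
    return residual, spectrum

# Toy demonstration: a faint 2-D sinusoid (stand-in for a watermark
# carrier) buried in noise, recovered by averaging 200 samples.
rng = np.random.default_rng(0)
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
carrier = 0.05 * np.sin(2 * np.pi * (8 * xx / w + 5 * yy / h))
samples = [carrier + rng.normal(0, 1, (h, w)) for _ in range(200)]
_, spec = estimate_carrier_spectrum(samples)
peak = np.unravel_index(np.argmax(spec), spec.shape)
print(peak)  # the spectral peak lands at the carrier's frequency offset
```

Averaging 200 samples shrinks the noise floor by a factor of about 14 (√200), which is why even a 0.05-amplitude carrier dominates the residual spectrum.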

Bypass Metrics and Detection Accuracy

Version 3 of the extraction tool relies on multi-resolution spectral bypass to degrade the watermark. The repository claims substantial reductions in the signal footprint while preserving visual fidelity. The developer has also launched parallel projects examining theoretical vulnerabilities in text-based watermarks.

| Metric | Claimed Performance |
| --- | --- |
| Carrier Energy Reduction | 75% |
| Phase Coherence Drop | 91% |
| Retained Visual Quality | 43+ dB PSNR |
| Custom Detector Accuracy | 90% |
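For context on the "43+ dB PSNR" claim, peak signal-to-noise ratio is a standard fidelity metric computed directly from mean squared error. A minimal implementation (my own sketch, not the repository's evaluation code):

```python
import numpy as np

def psnr(original, modified, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8-range images."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

# At 43 dB, MSE = 255^2 / 10^4.3 ~= 3.26, i.e. an RMS error under two
# grey levels out of 255 -- visually near-indistinguishable.
a = np.full((64, 64), 128, dtype=np.uint8)
b = a.copy()
b[::2, ::2] += 2  # perturb a quarter of the pixels by two levels
print(round(psnr(a, b), 1))
```

Perturbing a quarter of the pixels by two grey levels gives an MSE of 1.0, which already corresponds to roughly 48 dB, so the claimed 43+ dB leaves room for a noticeably stronger perturbation while staying imperceptible.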

Aloshdenny asserts their custom detector identifies modified Gemini images with 90% accuracy. Google DeepMind disputes this capability. Spokesperson Myriam Khan stated that the tool cannot systematically remove the signatures and that the technology remains a robust mechanism for tracking AI-generated content. Google maintains that the watermark is inherently baked into the generation process, meaning complete removal without quality degradation is highly improbable.

Architectural Vulnerabilities

Technical communities note the repository lacks a standardized test suite to verify the 90% detection claim across broad datasets. Many security practitioners suspect Google employs a dual-watermark architecture. This approach would pair a fragile public watermark with a highly robust internal signature reserved for platform moderation and law enforcement requests.

Unlike the C2PA standard, which relies on cryptographic metadata, SynthID embeds data directly into the pixel space. This pixel-level integration makes it susceptible to targeted spectral interference. Advanced signal processing and “re-noising” techniques can jam the public detector’s ability to read the embedded signature.
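One simple form of targeted spectral interference is a notch filter: zero out narrow frequency bands where the carrier is suspected to live, leaving the rest of the spectrum, and hence the visible content, largely intact. The sketch below is a hypothetical illustration of the attack class, not reverse-SynthID's actual method:

```python
import numpy as np

def notch_filter(image, carriers, radius=2):
    """Suppress narrow frequency bands (suspected watermark carriers).
    `carriers` lists (ky, kx) offsets from the spectrum center; both
    conjugate peaks are zeroed so the output stays real-valued."""
    f = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    for ky, kx in carriers:
        for sy, sx in ((ky, kx), (-ky, -kx)):
            mask = (yy - (cy + sy)) ** 2 + (xx - (cx + sx)) ** 2 <= radius ** 2
            f[mask] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

Because the notch removes only a handful of Fourier coefficients, the pixel-domain change is tiny, which is exactly why pixel-embedded watermarks trade robustness against this kind of surgical removal.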

The Threat of Artificial Provenance

Extracting a watermark is only one side of the vulnerability. The same spectral mapping techniques theoretically allow developers to manually insert an AI watermark into a human-created photograph. This opens an attack vector where authentic media is intentionally flagged as synthetic to damage credibility. Platforms that automatically evaluate AI output for authenticity face significant operational risk when relying solely on single-signal detection.
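The spoofing direction is, if anything, simpler than removal: once the carrier frequencies are mapped, stamping a faint matching sinusoid onto a genuine photograph could fool a naive single-signal detector. The snippet below is a hypothetical illustration of that risk; the carrier parameters are arbitrary and bear no relation to SynthID's real signal.

```python
import numpy as np

def embed_fake_carrier(image, ky, kx, amplitude=0.5):
    """Add a faint sinusoidal carrier to a genuine photograph so that a
    detector keyed to that frequency would flag it as AI-generated.
    Purely illustrative: frequencies and amplitude are invented."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    carrier = amplitude * np.sin(2 * np.pi * (kx * xx / w + ky * yy / h))
    return np.clip(image.astype(np.float64) + carrier, 0, 255)
```

A half-grey-level carrier is invisible to the eye yet produces a sharp, easily detectable spectral peak, which is why single-signal pipelines are exposed to this attack.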

Treat pixel-embedded watermarks as a secondary verification signal rather than a cryptographic guarantee. If you operate platforms hosting user-generated media, combine spectral watermark detection with standard cryptographic provenance metadata to secure your verification pipeline against targeted spoofing.
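A defensive pipeline following this advice might weight the two signals asymmetrically: a valid cryptographic manifest is decisive, while a watermark hit alone is only advisory. This is a hypothetical policy sketch, not any platform's documented logic; the function and labels are invented for illustration.

```python
def classify_media(watermark_score, has_valid_c2pa_manifest,
                   watermark_threshold=0.9):
    """Combine a spectral watermark score (0..1) with a cryptographic
    metadata check. The watermark alone never yields a firm verdict,
    since it can be jammed or spoofed at the pixel level."""
    if has_valid_c2pa_manifest:
        return "ai-generated (signed manifest)"
    if watermark_score >= watermark_threshold:
        return "likely ai-generated (watermark only; spoofable)"
    return "unverified"
```

Keeping the watermark-only outcome explicitly labeled as spoofable preserves an audit trail for the moderation decisions that matter most.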
