
NVIDIA Ising Models Slash Quantum Calibration Times by Days

NVIDIA launches the open-source Ising AI model family to automate quantum processor calibration and accelerate real-time error correction with 3x higher accuracy.

NVIDIA released the Ising AI model family to act as a control plane for quantum computing systems. The release introduces open-source models engineered specifically for processor calibration and real-time error correction. If you build quantum-GPU infrastructure, this replaces manual tuning with automated workflows to significantly reduce system downtime.

Calibration Workflows and Agentic Control

The Ising Calibration model is a 35-billion-parameter Vision Language Model trained on diverse qubit telemetry. It interprets raw scientific outputs from superconducting circuits, quantum dots, trapped ions, and neutral atoms. You can use it to build systems where AI agents autonomously adjust Quantum Processing Unit (QPU) parameters until the hardware meets specific operational baselines. This automation compresses standard calibration routines from several days down to a few hours.
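To make the closed-loop idea concrete, here is a minimal sketch of an agent-driven calibration loop. The telemetry source, the parameter name, and the target fidelity are all assumptions for illustration; a real deployment would swap the stand-in functions for your control-stack API and let the model propose each adjustment.

```python
# Hypothetical sketch of a closed-loop calibration agent. The simulated
# telemetry, the "drive_amplitude" parameter, and BASELINE_FIDELITY are
# assumptions, not part of any published NVIDIA API.

BASELINE_FIDELITY = 0.99

def read_telemetry(drive_amplitude):
    """Simulated qubit readout: fidelity peaks near an ideal amplitude."""
    ideal = 0.5
    return max(0.0, 1.0 - abs(drive_amplitude - ideal))

def calibrate(initial_amplitude=0.1, step=0.05, max_iters=100):
    """Adjust one QPU parameter until telemetry meets the baseline."""
    amp = initial_amplitude
    for _ in range(max_iters):
        fidelity = read_telemetry(amp)
        if fidelity >= BASELINE_FIDELITY:
            return amp, fidelity
        # A model-driven agent would propose the next adjustment here;
        # this sketch uses a fixed hill-climbing step instead.
        if read_telemetry(amp + step) > fidelity:
            amp += step
        else:
            amp -= step
    return amp, read_telemetry(amp)

amp, fid = calibrate()
```

The loop structure is the point: an agent reads telemetry, decides on a parameter change, and repeats until the hardware meets its operational baseline, which is what replaces the manual days-long tuning cycle.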

Benchmark Results

NVIDIA used QCalEval, a newly introduced framework for evaluating AI agents on real-world quantum hardware data, to measure calibration performance. The 35B model outperforms general-purpose frontier language models on these specialized control tasks.

Model                      QCalEval Score vs. Baseline
NVIDIA Ising Calibration   Baseline
Gemini 3.1 Pro             -3.27%
Claude Opus 4.6            -9.68%
GPT 5.4                    -14.5%
Real-Time Decoding Architecture

The Ising Decoding tier shifts away from language architectures to 3D Convolutional Neural Networks designed for surface-code error correction. NVIDIA split the decoders into two variants optimized for different production constraints. Ising Decoder SurfaceCode 1 Fast targets low-latency real-time decoding requirements. Ising Decoder SurfaceCode 1 Accurate focuses entirely on maximizing logical error rate reduction.

These models operate 2.5x faster and are 3x more accurate than PyMatching, the previous open-source standard for surface-code decoding. They achieve these metrics while requiring 10x less training data than earlier non-AI approaches. If you run these decoders in production, they are optimized for FP8 quantization on NVIDIA Blackwell and Hopper architectures via the NVQLink interconnect.

Open Licensing and Deployment

Both model lines are available under the permissive Apache-2.0 license, which allows commercial use. You can deploy the weights directly from GitHub or Hugging Face, or run them as managed microservices through NVIDIA NIM. Major quantum hardware providers, including IonQ, IQM, and Rigetti, are already integrating the architecture into their control systems. This broad support ensures the models will interface smoothly with existing CUDA-Q platform pipelines.

Transitioning to an AI-driven control plane requires updating your existing QPU telemetry ingestion pipelines. You should benchmark the SurfaceCode 1 Fast variant against your current classical decoders to determine if the inference latency meets your real-time error correction thresholds.
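A benchmark of that kind can be sketched as a simple latency harness. The decoder stand-in and the syndrome format below are assumptions; in practice you would point `timed_decode` at your real classical decoder (e.g. a PyMatching call) and at the SurfaceCode 1 Fast inference endpoint, then compare tail latency against your error-correction cycle budget.

```python
import time
import statistics

# Hypothetical latency harness. The decoder and syndrome format are
# placeholders for illustration -- substitute your real decoder calls.

def classical_decoder(syndrome):
    """Stand-in for an MWPM-style decoder such as PyMatching."""
    return [bit ^ 1 for bit in syndrome]  # placeholder correction

def timed_decode(decoder, syndromes):
    """Return per-shot decode latencies in microseconds."""
    latencies = []
    for s in syndromes:
        start = time.perf_counter()
        decoder(s)
        latencies.append((time.perf_counter() - start) * 1e6)
    return latencies

# 1,000 synthetic syndrome shots of 100 stabilizer bits each.
syndromes = [[i % 2 for i in range(100)] for _ in range(1000)]
lat = timed_decode(classical_decoder, syndromes)
p99 = statistics.quantiles(lat, n=100)[98]
# Compare p99 against your real-time budget, e.g. the ~1 us
# syndrome-extraction cycle of superconducting qubits.
```

Tail latency (p99), not the mean, is the figure that matters here: a decoder that occasionally misses the cycle deadline stalls the whole error-correction loop.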
