NV-Raw2Insights-US Processes Raw Ultrasound Sensor Data
NVIDIA and Siemens Healthineers have released a physics-informed AI model that generates personalized speed of sound maps from raw baseband IQ channel data.
On April 28, 2026, NVIDIA and Siemens Healthineers released NV-Raw2Insights-US, a physics-informed AI model for adaptive ultrasound imaging. The release introduces the Raw2Insights model class, intended to move ultrasound reconstruction from hand-engineered processing pipelines to end-to-end, AI-native software.
Direct Processing of Baseband Data
Traditional ultrasound hardware relies on fixed beamforming, which assumes a single, constant speed of sound throughout the human body. That static assumption causes phase aberration and misfocusing when acoustic waves travel through heterogeneous tissue structures. NV-Raw2Insights-US bypasses the reconstructed image layer entirely and operates directly on the raw baseband IQ channel data from the ultrasound sensor array.
By analyzing these raw signals, the model generates a personalized 2D speed-of-sound map that captures how each patient's tissue shapes the acoustic waves. This adaptive map lets the software correct focusing errors dynamically during the scan, producing a clearer view of the underlying structures.
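To see why a wrong constant matters, here is a back-of-the-envelope sketch of the aberration a fixed-speed beamformer accumulates. The 5 MHz center frequency, 40 mm focal depth, and 1480 m/s tissue speed are illustrative values chosen for this example, not figures from the model or dataset:

```python
import numpy as np

# Illustrative numbers only: a 5 MHz transducer focusing at 40 mm depth.
f0 = 5e6            # center frequency (Hz)
depth = 0.04        # focal depth (m)
c_assumed = 1540.0  # constant sound speed assumed by fixed beamforming (m/s)
c_true = 1480.0     # e.g. a fat-rich path, where sound travels more slowly

# One-way time of flight under each assumption.
t_assumed = depth / c_assumed
t_true = depth / c_true

# Timing error and the resulting phase aberration at the center frequency.
delay_error = t_true - t_assumed        # seconds
phase_error_cycles = delay_error * f0   # error expressed in full wavelengths

print(f"delay error: {delay_error * 1e9:.0f} ns")
print(f"phase error: {phase_error_cycles:.2f} wavelengths")
```

Even this modest 60 m/s mismatch produces a focusing error of several wavelengths, which is why a per-patient speed map, rather than a global constant, is needed to keep the beam coherent.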
Physics-Informed Neural Networks
The architecture moves past standard image-to-image translation techniques. The neural network explicitly incorporates the physical principles of acoustic wave propagation into its hidden layers. Siemens Healthineers researchers Ismayil Guracar and Rickard Loftman from the AI & Advanced Platforms group collaborated on the design to bridge deep learning with classical acoustic physics.
Embedding physical laws into the network prevents the model from producing physically impossible outputs. If your team works on specialized hardware sensors, this constraint mechanism provides a template for evaluating AI output in environments where physical accuracy is a strict requirement.
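As a rough illustration of the constraint idea (not the model's actual architecture), the sketch below scores two candidate sound speeds against the 1D acoustic wave equation u_tt = c² u_xx using finite differences. A field consistent with the true speed yields a far smaller residual, which is exactly the signal a physics-informed loss term exploits; grid sizes and the plane-wave test field are assumptions for this example:

```python
import numpy as np

c0 = 1540.0                  # true sound speed (m/s)
k = 2 * np.pi / 3e-4         # wavenumber for a 0.3 mm wavelength
w = c0 * k                   # matching angular frequency

x = np.linspace(0, 3e-3, 256)
t = np.linspace(0, 2e-6, 256)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.sin(k * X - w * T)    # exact plane-wave solution for speed c0

def wave_residual(u, c, dx, dt):
    """Finite-difference residual of u_tt - c^2 * u_xx at interior points."""
    u_tt = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dt**2
    u_xx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
    return u_tt - c**2 * u_xx

dx, dt = x[1] - x[0], t[1] - t[0]
good = np.mean(wave_residual(u, c0, dx, dt) ** 2)
bad = np.mean(wave_residual(u, 1400.0, dx, dt) ** 2)  # wrong speed estimate

# A physics-informed loss penalizes the residual, so the physically
# consistent speed scores orders of magnitude lower than the wrong one.
print(good < bad)
```

In a physics-informed network, a term like this residual is added to the training loss (or built into the layers), so outputs that violate wave propagation are penalized regardless of how well they fit the data.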
Simulation Datasets and Edge Integration
NVIDIA published the NV-Raw2Insights-US Simulations dataset alongside the model weights. The dataset contains Full Synthetic Aperture (FSA) ultrasound data simulated with a 180-element linear array over heterogeneous tissue phantoms. The package includes the raw baseband IQ channel data, ground-truth sound speed maps, binary cyst segmentation masks, and phase aberration values. The entire dataset is licensed under CC BY 4.0.
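For teams unfamiliar with FSA acquisitions, the data volume is easy to misjudge. The sketch below shows plausible array shapes for the described contents; the sample count, grid resolution, and axis ordering are assumptions for illustration, since the post does not document the dataset's actual file layout:

```python
import numpy as np

n_elements = 180     # transmit/receive channels in the 180-element linear array
n_samples = 2048     # depth samples per channel (assumed for illustration)

# Full Synthetic Aperture: each element transmits in turn while all elements
# receive, so raw IQ data is naturally (transmit, receive, sample) and complex.
iq = np.zeros((n_elements, n_elements, n_samples), dtype=np.complex64)

# Ground truth delivered alongside the raw data (grid resolution assumed).
sos_map = np.full((256, 256), 1540.0, dtype=np.float32)  # speed of sound, m/s
cyst_mask = np.zeros((256, 256), dtype=bool)             # binary segmentation

print(iq.shape, iq.dtype)  # (180, 180, 2048) complex64
```

The quadratic growth in transmit-receive pairs (180 × 180 channels per frame) is what makes raw-data pipelines so much heavier than image-level ones, and is worth budgeting for before downloading.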
The model is optimized to run within the NVIDIA Holoscan SDK, the primary processing platform for streaming sensor data in healthcare environments. Running physics-informed models at the edge imposes strict latency limits, so engineers need to understand exactly what AI inference demands at the bedside. Deploying large models onto embedded medical consoles involves targeted optimizations, similar in principle to running models locally on Jetson hardware.
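A quick way to reason about those latency limits is a per-frame budget. The frame rate and stage costs below are hypothetical numbers for illustration, not published Holoscan or NV-Raw2Insights-US figures:

```python
# Back-of-the-envelope latency budget for bedside streaming inference.
frame_rate_hz = 30                   # a typical live display rate (assumed)
budget_ms = 1000.0 / frame_rate_hz   # ~33 ms end-to-end per frame

# Hypothetical per-frame pipeline stages, in milliseconds.
stages = {
    "IQ capture + transfer": 5.0,
    "speed-of-sound inference": 15.0,
    "aberration-corrected beamforming": 8.0,
    "display": 3.0,
}
total_ms = sum(stages.values())

print(f"budget {budget_ms:.1f} ms, pipeline {total_ms:.1f} ms, "
      f"headroom {budget_ms - total_ms:.1f} ms")
```

If the summed stages exceed the frame budget, the scanner drops frames or lags the probe, which is why edge deployment typically forces quantization, fused kernels, or smaller models rather than simply renting a bigger GPU.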
NVIDIA notes that the technology remains under investigational development and lacks regulatory clearance for clinical sale. If you build diagnostic imaging software, you should begin testing your inference infrastructure against raw sensor data rather than pre-processed images. Validating end-to-end AI pipelines on baseband signals will become a mandatory capability as medical hardware shifts toward software-defined reconstruction.