
NV-Raw2Insights-US Processes Raw Ultrasound Sensor Data

NVIDIA and Siemens Healthineers have released a physics-informed AI model that generates personalized speed of sound maps from raw baseband IQ channel data.

On April 28, 2026, NVIDIA and Siemens Healthineers released NV-Raw2Insights-US, a physics-informed AI model for adaptive ultrasound imaging. The release introduces the Raw2Insights model class, which aims to transition ultrasound reconstruction from hand-engineered processing pipelines to end-to-end, AI-native software.

Direct Processing of Baseband Data

Traditional ultrasound hardware relies on fixed beamforming. These systems assume a constant speed of sound throughout the human body. This static assumption causes phase aberrations and misfocusing when acoustic waves travel through heterogeneous tissue structures. NV-Raw2Insights-US bypasses the reconstructed image layer entirely. The model listens directly to the raw baseband IQ channel data from the ultrasound sensor array.
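The constant-speed assumption behind fixed beamforming can be made concrete with the standard delay-and-sum delay calculation. The sketch below uses illustrative array geometry (the element pitch is an assumption, not the actual hardware spec) and shows how every focal delay hinges on a single assumed speed of sound:

```python
import numpy as np

def delay_and_sum_delays(element_x, focus_x, focus_z, c=1540.0):
    """Receive delays (seconds) for focusing at (focus_x, focus_z),
    assuming a single constant speed of sound c in m/s -- the fixed
    assumption that causes aberration in heterogeneous tissue."""
    # Distance from each array element to the focal point.
    dist = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
    # Time of flight at the assumed uniform sound speed.
    return dist / c

# 180-element linear array with a hypothetical 0.2 mm pitch,
# focusing 30 mm deep below the array center.
elements = np.arange(180) * 0.2e-3
delays = delay_and_sum_delays(elements, focus_x=elements.mean(), focus_z=30e-3)
```

Any region where the true tissue speed deviates from the assumed 1540 m/s produces arrival-time errors that this calculation cannot see, which is exactly the phase aberration the model targets.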

By analyzing these raw signals, the model generates a personalized 2D speed of sound map for the individual patient. The system calculates how an individual’s specific tissue uniquely shapes acoustic waves. This adaptive mapping allows the software to correct focusing errors dynamically during the scan, resulting in a clearer view of underlying structures.
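To see why a per-patient speed-of-sound map matters at the baseband level, consider how an arrival-time error shows up in IQ samples: for baseband data, a small time shift appears mainly as a phase rotation at the demodulation frequency. The sketch below is a generic phase-rotation correction of the kind a sound-speed map enables; the function name, shapes, and frequency are illustrative assumptions, not the model's published algorithm:

```python
import numpy as np

def correct_phase_aberration(iq, delay_error_s, f_demod_hz):
    """Rotate baseband IQ samples to compensate a per-channel
    arrival-time error (e.g. derived from a predicted speed-of-sound
    map). A time shift dt in baseband data appears mainly as a
    phase rotation exp(-1j * 2*pi * f_demod * dt)."""
    iq = np.asarray(iq, dtype=complex)
    correction = np.exp(-1j * 2 * np.pi * f_demod_hz * np.asarray(delay_error_s))
    # Broadcast the per-channel correction over fast-time samples.
    return iq * correction[:, None]

# Example: 4 channels x 8 samples, 5 MHz demodulation frequency.
rng = np.random.default_rng(0)
iq = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
fixed = correct_phase_aberration(iq, delay_error_s=np.zeros(4), f_demod_hz=5e6)
```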

Physics-Informed Neural Networks

The architecture moves past standard image-to-image translation techniques. The neural network explicitly incorporates the physical principles of acoustic wave propagation into its hidden layers. Siemens Healthineers researchers Ismayil Guracar and Rickard Loftman from the AI & Advanced Platforms group collaborated on the design to bridge deep learning with classical acoustic physics.

Embedding physical laws into the network restricts the model from generating physically impossible outputs. If your team works on specialized hardware sensors, this constraint mechanism provides a template for evaluating AI output in environments where physical accuracy is a strict requirement.
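One common way to enforce such a constraint is to parameterize the network output so it can only land inside a physiological range. The bounds and the sigmoid parameterization below are illustrative assumptions, not details from the release:

```python
import numpy as np

# Approximate physiological sound-speed bounds for soft tissue (m/s).
C_MIN, C_MAX = 1400.0, 1650.0

def bounded_sos(raw_output):
    """Map unconstrained network outputs into [C_MIN, C_MAX] so the
    model cannot emit a physically impossible speed of sound. This is
    one simple constraint mechanism; the released model's exact
    parameterization is not described in the announcement."""
    sigmoid = 1.0 / (1.0 + np.exp(-np.asarray(raw_output, dtype=float)))
    return C_MIN + (C_MAX - C_MIN) * sigmoid
```

However extreme the raw logits get, the mapped speed stays inside the physical range, which is the spirit of restricting the model to physically possible outputs.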

Simulation Datasets and Edge Integration

NVIDIA published the NV-Raw2Insights-US Simulations dataset alongside the model weights. The dataset contains simulated Full Synthetic Aperture (FSA) ultrasound data from a 180-element linear array over heterogeneous tissue phantoms. The package includes the raw baseband IQ channel data, ground-truth sound-speed maps, binary cyst segmentation masks, and phase aberration values. The entire dataset is licensed under CC BY 4.0.

The model is optimized to run within the NVIDIA Holoscan SDK, which serves as the primary processing platform for streaming sensor data in healthcare environments. Running physics-informed models at the edge imposes strict latency limits, so engineers need to understand exactly what AI inference demands at the bedside. Deploying large models onto embedded medical consoles involves targeted optimizations, similar in principle to running models locally on Jetson hardware.
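Before committing to an edge deployment, it helps to check raw inference latency against the per-frame budget. The generic timing harness below is an illustration (the function and parameters are assumptions), not part of the Holoscan SDK:

```python
import time

def within_frame_budget(infer_fn, frame, target_fps=30, warmup=3, runs=10):
    """Measure average inference latency for one frame and check it
    fits a real-time frame budget of 1/target_fps seconds. A rough
    bedside sanity check, not a Holoscan-specific profiler."""
    for _ in range(warmup):        # warm-up runs absorb one-time costs
        infer_fn(frame)
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(frame)
    avg = (time.perf_counter() - start) / runs
    return avg, avg <= 1.0 / target_fps
```

At 30 frames per second the whole pipeline has roughly 33 ms per frame, so any model whose average inference time alone approaches that budget leaves no room for data transfer and post-processing.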

NVIDIA notes that the technology remains under investigational development and lacks regulatory clearance for clinical sale. If you build diagnostic imaging software, you should begin testing your inference infrastructure against raw sensor data rather than pre-processed images. Validating end-to-end AI pipelines on baseband signals will become a mandatory capability as medical hardware shifts toward software-defined reconstruction.
