5GW of NVIDIA DSX Infrastructure Planned in $3.4B IREN Deal
NVIDIA and IREN will deploy up to 5 gigawatts of DSX AI infrastructure, anchored by a $3.4 billion managed cloud services contract for Blackwell systems.
NVIDIA and IREN Limited announced a strategic partnership to deploy up to 5 gigawatts (GW) of next-generation AI infrastructure across a global pipeline. The agreement establishes a massive physical footprint for hardware rollouts while tying the silicon provider directly to grid-scale power assets.
The collaboration centers on NVIDIA’s DSX architecture, a reference design requiring deep integration of accelerated compute, high-speed networking, power systems, and software. IREN provides the physical infrastructure layer, including land acquisition and data center operations, while NVIDIA supplies the silicon and software stack.
Financial Agreements and R&D Compute
The partnership includes two distinct financial mechanisms that bind the companies. IREN signed a five-year, $3.4 billion contract to provide NVIDIA with managed GPU cloud services. This dedicated compute will support NVIDIA’s internal AI research and development.
Simultaneously, NVIDIA secured a five-year warrant to purchase up to 30 million ordinary shares of IREN at an exercise price of $70 per share. If fully exercised, this represents a potential $2.1 billion investment by the chipmaker. IREN’s stock reached approximately $71 in extended trading immediately following the announcement.
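The $2.1 billion figure follows directly from the warrant terms. A quick sketch of that arithmetic, using only the numbers stated in the announcement:

```python
# Warrant math from the announcement: 30M shares at a $70 exercise price.
shares = 30_000_000          # ordinary shares covered by the warrant
exercise_price = 70.00       # USD per share
potential_investment = shares * exercise_price

print(f"${potential_investment / 1e9:.1f}B")  # → $2.1B
```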
Sweetwater Campus and DSX Integration
The deployment pipeline designates IREN’s 2,000-acre Sweetwater campus in Texas as the flagship site for the DSX architecture. The site spans Nolan and Jones Counties and is divided into two operational phases. The 1.4 GW Sweetwater 1 phase was energized the same week as the announcement, with Sweetwater 2 set to add 600 MW.
| Deployment Site | Capacity | Assigned Workload | System Specifications |
|---|---|---|---|
| Sweetwater (Phase 1 & 2) | 2.0 GW | Flagship DSX Architecture | NVIDIA DSX-aligned |
| Childress | 60 MW | NVIDIA Internal R&D | Air-cooled Blackwell |
The internal R&D capacity reserved for NVIDIA will be housed at IREN’s Childress facility. This 60 MW allocation utilizes air-cooled Blackwell systems. Container orchestration for these workloads will be handled in partnership with Mirantis.
The Infrastructure Bottleneck
The scale of this deployment highlights a structural shift in AI engineering. Power procurement and grid interconnects have replaced hardware manufacturing as the primary constraints for AI inference. IREN previously secured a $9.7 billion agreement with Microsoft in November 2025 for GPU cloud infrastructure utilizing NVIDIA GB300 units, cementing its transition from cryptocurrency mining to a dedicated “neocloud” provider.
For enterprise teams managing production AI inference across GPU clusters, these gigawatt-scale facilities change the operational math. When you deploy large multi-region workloads, physical proximity to dedicated power capacity sets both your latency floor and your scaling ceiling. Cluster provisioning strategies now have to account for the power contracts behind the underlying data centers, not just GPU availability.
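As a toy illustration of folding power headroom into placement decisions: the sketch below filters candidate sites by remaining contracted power and a latency cap before placing a cluster. The site names, capacities, and thresholds are hypothetical assumptions for illustration, not figures from the announcement.

```python
# Hypothetical sketch: choose GPU cluster placements by power headroom
# and latency. All site data below is illustrative, not real.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    contracted_mw: float   # power under contract at the site
    committed_mw: float    # power already allocated to workloads
    rtt_ms: float          # round-trip latency to the serving region


def placeable(sites: list[Site], needed_mw: float, max_rtt_ms: float) -> list[Site]:
    """Return sites with enough power headroom under the latency cap."""
    return [
        s for s in sites
        if (s.contracted_mw - s.committed_mw) >= needed_mw
        and s.rtt_ms <= max_rtt_ms
    ]


sites = [
    Site("site-a", contracted_mw=1400, committed_mw=1350, rtt_ms=12),
    Site("site-b", contracted_mw=600, committed_mw=200, rtt_ms=35),
]

# site-a has only 50 MW of headroom, so it is filtered out.
print([s.name for s in placeable(sites, needed_mw=100, max_rtt_ms=40)])
# → ['site-b']
```

The point of the sketch is the ordering of checks: power headroom is evaluated before any GPU-level scheduling, mirroring how grid capacity now gates deployment ahead of hardware availability.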