SpaceX Terafab Will Manufacture 1TW of AI Compute Capacity
SpaceX has filed plans to build a $55 billion semiconductor manufacturing facility in Texas designed to produce 1 terawatt of AI compute capacity annually.
SpaceX has initiated plans for a vertically integrated semiconductor manufacturing facility in Grimes County, Texas. According to a public hearing notice filed in Texas, the project, named Terafab, begins with a $55 billion initial investment. Capital expenditures could reach $119 billion if all multi-phase buildouts are completed.
The facility will operate as a captive foundry, producing proprietary AI accelerators exclusively for Musk’s portfolio of companies. This includes processors for Tesla’s Autopilot and Optimus robots, xAI systems, and SpaceX’s space-based data centers, expanding on recent orbit-based deployments.
Scale and Production Targets
Terafab is designed to address extreme internal demand for hardware. The production target is 1 terawatt of AI compute capacity per year. At peak utilization, the facility itself is projected to draw over 10 gigawatts of power.
The primary site replaces a former coal-fired power station at the Gibbons Creek Reservoir. This location provides the physical footprint necessary for what government filings describe as the largest chip manufacturing project to combine logic, memory production, and advanced packaging in a single facility.
The product lines will split into two categories. One line will manufacture lower-power, high-efficiency processors optimized for terrestrial robotics and vehicle autonomy. The second line focuses on high-performance accelerators built to withstand the radiation and thermal constraints of orbital computing.
The Intel 14A Partnership
Intel serves as the primary technical partner for the initial production phases. High-volume manufacturing at Terafab will utilize the Intel 14A process node. This represents the first major customer adoption of Intel’s most advanced foundry service.
While high-volume production occurs in Grimes County, preliminary engineering relies on a localized feedback loop. Tesla recently filed permits for a 2-million-square-foot research and development center adjacent to Gigafactory Austin. This secondary site will function as a pilot line, allowing engineers to validate chip architectures before moving them to the primary Terafab lines for mass production.
These proprietary chips will eventually replace third-party hardware across Musk’s infrastructure, shifting capital away from merchant silicon toward internal capabilities. S-1 registration documents confirm SpaceX’s intent to produce its own GPUs, signaling a permanent architectural shift for its vast data center network and massive training workloads.
If you are planning long-term hardware deployments, Terafab demonstrates the massive capital requirements necessary for total vertical integration. The project underscores a broader industry shift where mega-scale operators are internalizing chip design and fabrication to control their physical compute supply chains.