Kepler Space Cloud Fires Up 40 NVIDIA GPUs in Orbit
Kepler Communications opens the first scalable orbital compute cluster, using NVIDIA Jetson Orin modules to bring AI edge processing to space.
Kepler Communications launched its Tranche 1 orbital compute cluster for commercial operations, bringing 40 NVIDIA GPUs online in Earth orbit. The Toronto-based company transitioned its Aether constellation from an experimental testbed into a functioning decentralized cloud. Sophia Space is the first commercial customer to test distributed software configuration across the orbital fabric.
Orbital Hardware and Network Architecture
The cluster operates across 10 satellites placed in orbit by a SpaceX Falcon 9 in January 2026. Each node in the Aether constellation carries four NVIDIA Jetson Orin modules backed by terabytes of SSD storage. The satellites communicate through a real-time optical mesh network using SDA-compatible optical inter-satellite links (OISLs).
| Component | Specification |
|---|---|
| Total Compute | 40 NVIDIA Jetson Orin GPUs |
| Fleet Size | 10 operational satellites |
| Networking | SDA-compatible optical inter-satellite links |
| Storage | Terabytes of onboard SSD |
This creates an IP-based edge fabric capable of executing distributed workloads directly in space. If you build systems requiring high-bandwidth data processing, this infrastructure shifts the bottleneck from radio downlinks to onboard compute capacity. Developers can run models locally on Jetson hardware before deploying them to the orbital cluster, allowing advanced AI inference tasks to run precisely where the data originates.
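The pattern of running inference where data originates can be sketched as follows. This is a minimal, illustrative example: the function names, the stubbed model, and the confidence threshold are assumptions for demonstration, not part of Kepler's platform or NVIDIA's actual APIs. A real Jetson deployment would invoke a runtime such as TensorRT in place of the stub.

```python
import json

def run_inference(frame):
    """Stand-in for a model running on a Jetson module.

    Returns a list of (label, confidence) detections. A real deployment
    would call an on-device inference runtime here.
    """
    return [("ship", 0.91), ("cloud", 0.45)]

def process_at_edge(frame, threshold=0.8):
    """Run the model on the node that captured the data and keep only
    high-confidence detections for downlink, instead of the raw frame."""
    detections = run_inference(frame)
    kept = [
        {"label": label, "confidence": conf}
        for label, conf in detections
        if conf >= threshold
    ]
    return json.dumps({"detections": kept})

# A 1 KB stand-in for raw sensor data; only the structured result is sent.
payload = process_at_edge(frame=b"\x00" * 1024)
```

The key design choice is that the raw frame never leaves the node; only the small JSON payload is queued for transmission.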
Distributed Software Deployment in Space
Sophia Space is using the Kepler network to validate its Sophia Orbital Operating System (SOOS). The trial involves deploying software across six distributed GPUs located on two separate spacecraft. This replicates the management patterns of a terrestrial data center within an orbital environment.
The deployment serves as a de-risking phase for the TILE (Thermal-Integrated LEO Edge) architecture. Sophia Space plans to launch its own passively cooled space computers in late 2027. Configuring software across separate moving nodes requires novel approaches to distributed state, similar to how developers implement multi-agent coordination patterns in highly constrained terrestrial networks.
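One common approach to distributed state over intermittent links is versioned configuration with a monotonic acceptance rule. The sketch below is a hedged illustration of that general pattern, not a description of SOOS internals; the node names, fields, and last-writer-wins rule are assumptions for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A single compute node in a distributed fleet."""
    name: str
    config_version: int = 0
    config: dict = field(default_factory=dict)

    def apply(self, version, config):
        # Accept only strictly newer configuration, so stale or
        # re-delivered updates over an intermittent link are no-ops.
        if version > self.config_version:
            self.config_version = version
            self.config = dict(config)
            return True
        return False

def push_config(nodes, version, config):
    """Push a config to every reachable node; return the names that applied it."""
    return [n.name for n in nodes if n.apply(version, config)]

fleet = [Node("sat-1"), Node("sat-2")]
applied = push_config(fleet, 1, {"model": "detector-v2"})
# Re-delivering the same version changes nothing:
redelivered = push_config(fleet, 1, {"model": "detector-v2"})
```

Monotonic versioning matters in this setting because contact windows between moving nodes are intermittent, so the same update can arrive late or more than once.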
Edge Processing and Downlink Economics
Kepler currently serves 18 commercial and government customers by acting as a pure infrastructure layer. The primary advantage of orbital compute is addressing the downlink bottleneck for Earth observation and signals intelligence (SIGINT) operations. Operators typically wait hours to transmit raw data to ground stations for processing.
Executing AI workloads in orbit allows operators to perform real-time detection of maritime anomalies, wildfires, or military movements. The cluster transmits only actionable pixels rather than raw sensor feeds. For developers used to terrestrial limitations, this mirrors the shift seen when edge compute for AI pushed processing to local nodes to save bandwidth and latency.
If you develop software for Earth observation or remote sensing, evaluate your data pipelines for edge compatibility. Moving your processing logic into an orbital compute cluster requires optimizing your models for Jetson architectures and designing systems that prioritize transmitting structured outputs over raw sensor data.
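A back-of-envelope comparison shows why structured outputs matter for downlink economics. The frame and payload sizes below are illustrative assumptions, not published figures from Kepler or its customers.

```python
# Illustrative sizes: a raw multispectral frame vs. a structured
# detection payload produced in orbit (both assumed, not published).
RAW_FRAME_BYTES = 50 * 1024 * 1024   # ~50 MB per raw frame
DETECTION_BYTES = 2 * 1024           # ~2 KB of structured detections

def downlink_bytes(frames, process_in_orbit):
    """Total bytes sent to ground for a batch of frames."""
    per_frame = DETECTION_BYTES if process_in_orbit else RAW_FRAME_BYTES
    return frames * per_frame

raw_total = downlink_bytes(100, process_in_orbit=False)
edge_total = downlink_bytes(100, process_in_orbit=True)
savings = 1 - edge_total / raw_total  # fraction of downlink avoided
```

Under these assumptions, processing in orbit cuts downlink volume by more than 99 percent, which is the economic argument for moving pipelines to the edge.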