AI Engineering · 3 min read

Kepler Space Cloud Fires Up 40 NVIDIA GPUs in Orbit

Kepler Communications opens the first scalable orbital compute cluster, using NVIDIA Jetson Orin modules to bring AI edge processing to space.

Kepler Communications launched its Tranche 1 orbital compute cluster for commercial operations, bringing 40 NVIDIA GPUs online in Earth orbit. The Toronto-based company transitioned its Aether constellation from an experimental testbed into a functioning decentralized cloud. Sophia Space is the first commercial customer to test distributed software configuration across the orbital fabric.

Orbital Hardware and Network Architecture

The cluster operates across 10 satellites placed in orbit by a SpaceX Falcon 9 in January 2026. Each node in the Aether constellation carries four NVIDIA Jetson Orin modules backed by terabytes of SSD storage. The satellites communicate through a real-time optical mesh network using SDA-compatible optical inter-satellite links (OISLs).

| Component | Specification |
| --- | --- |
| Total compute | 40 NVIDIA Jetson Orin GPUs |
| Fleet size | 10 operational satellites |
| Networking | SDA-compatible optical inter-satellite links |
| Storage | Terabytes of onboard SSD |

This creates an IP-based edge fabric capable of executing distributed workloads directly in space. If you build systems requiring high-bandwidth data processing, this infrastructure shifts the bottleneck from radio downlinks to onboard compute capacity. Developers can run models locally on Jetson hardware before deploying them to the orbital cluster, letting AI inference run precisely where the data originates.
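The onboard inference pattern can be sketched in a few lines. This is a toy illustration, not Kepler's actual software: `run_onboard_inference` stands in for a real Jetson-side model (which would typically run via TensorRT or ONNX Runtime), and the brightness-based "detector" is a placeholder scorer. The point is the shape of the pipeline: frames are processed where they are captured, and only structured detections enter the downlink queue.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Structured output downlinked instead of the raw frame."""
    frame_id: int
    label: str
    confidence: float

def run_onboard_inference(frame_id: int, frame: list,
                          threshold: float = 0.8) -> list:
    """Stand-in for an onboard model: flags bright regions as anomalies.

    A real deployment would invoke an optimized model here; this toy
    scorer just normalizes the brightest pixel to a confidence value.
    """
    peak = max(max(row) for row in frame)
    confidence = peak / 255.0
    if confidence >= threshold:
        return [Detection(frame_id, "anomaly", round(confidence, 3))]
    return []  # nothing actionable: nothing to downlink

# Only frames with actionable content produce downlink traffic.
frames = {
    0: [[10, 20], [30, 40]],   # dim scene, no detection
    1: [[10, 250], [30, 40]],  # bright spot, flagged
}
downlink_queue = [d for fid, f in frames.items()
                  for d in run_onboard_inference(fid, f)]
```

With two frames ingested, only the one containing an anomaly generates a downlink message; the dim frame costs no bandwidth at all.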

Distributed Software Deployment in Space

Sophia Space is using the Kepler network to validate its Sophia Orbital Operating System (SOOS). The trial involves deploying software across six distributed GPUs located on two separate spacecraft. This replicates the management patterns of a terrestrial data center within an orbital environment.

The deployment serves as a de-risking phase for the TILE (Thermal-Integrated LEO Edge) architecture. Sophia Space plans to launch its own passively cooled space computers in late 2027. Configuring software across separate moving nodes requires novel approaches to distributed state, similar to how developers implement multi-agent coordination patterns in highly constrained terrestrial networks.
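The details of SOOS are not public, but one generic pattern for keeping configuration consistent across intermittently connected nodes is versioned, last-writer-wins replication: each node accepts only strictly newer config versions, so replayed or out-of-order updates over the mesh are harmless no-ops. A minimal sketch, with hypothetical node names:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One GPU host on a spacecraft, holding a versioned config copy."""
    name: str
    version: int = 0
    config: dict = field(default_factory=dict)

    def apply(self, version: int, config: dict) -> bool:
        # Last-writer-wins: accept only strictly newer versions, so a
        # stale update arriving late over the mesh is simply ignored.
        if version > self.version:
            self.version, self.config = version, dict(config)
            return True
        return False

def broadcast(nodes, version, config):
    """Push an update to every reachable node; stragglers converge later."""
    return [n.apply(version, config) for n in nodes]

nodes = [Node(f"sat2-gpu{i}") for i in range(3)]
broadcast(nodes, 1, {"model": "detector-v1"})
broadcast(nodes, 2, {"model": "detector-v2"})
broadcast(nodes[:1], 1, {"model": "detector-v1"})  # stale replay, ignored
```

Real systems layer acknowledgements, anti-entropy sync, and signed updates on top of this, but the monotonic-version rule is what makes convergence safe when nodes drift in and out of contact.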

Kepler currently serves 18 commercial and government customers by acting as a pure infrastructure layer. The primary advantage of orbital compute is addressing the downlink bottleneck for Earth observation and signals intelligence operations. Operators typically wait hours to transmit raw data to ground stations for processing.

Executing AI workloads in orbit allows operators to perform real-time detection of maritime anomalies, wildfires, or military movements. The cluster transmits only actionable pixels rather than raw sensor feeds. For developers used to terrestrial limitations, this mirrors the shift seen when edge compute for AI pushed processing to local nodes to save bandwidth and latency.

If you develop software for Earth observation or remote sensing, evaluate your data pipelines for edge compatibility. Moving your processing logic into an orbital compute cluster requires optimizing your models for Jetson architectures and designing systems that prioritize transmitting structured outputs over raw sensor data.
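The bandwidth argument for structured outputs is easy to quantify. The sketch below (with made-up frame dimensions and a hypothetical detection schema) compares the cost of downlinking a full sensor frame against downlinking a JSON detection message:

```python
import json

def raw_downlink_bytes(width: int, height: int, bytes_per_px: int = 2) -> int:
    """Cost of shipping the full sensor frame to the ground."""
    return width * height * bytes_per_px

def structured_downlink_bytes(detections: list) -> int:
    """Cost of shipping only structured detections as JSON."""
    return len(json.dumps(detections).encode("utf-8"))

raw = raw_downlink_bytes(4096, 4096)  # a 4K x 4K, 16-bit frame
msg = structured_downlink_bytes([
    {"type": "vessel", "lat": 43.65, "lon": -79.38, "conf": 0.94},
])
savings = raw / msg  # orders of magnitude less downlink traffic
```

Even before compression, a single detection message is several orders of magnitude smaller than the raw frame it summarizes, which is the entire case for moving inference upstream of the radio link.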
