Kepler Communications announced the successful commissioning of distributed on-orbit computing across its Tranche 1 optical data relay constellation on Monday, March 16, 2026. This milestone transitions Kepler’s network from a high-speed data transport layer into a scalable, cloud-native processing environment, allowing customers to execute AI-driven workloads directly in orbit rather than relying on ground-based data centers.
The Hardware of Orbital AI
The “Kepler Compute” fabric is powered by 40 NVIDIA Jetson Orin modules, deployed as distributed edge GPUs across the ten satellites that make up the Tranche 1 “Aether” series. By integrating these modules with SDA-compatible optical inter-satellite links (OISLs), Kepler has created a decentralized compute cluster where workloads can scale dynamically across the constellation.
| Component | Specification |
| --- | --- |
| Compute Units | 40x NVIDIA Jetson Orin modules (4 per satellite) |
| Storage | Terabytes of SSD-based onboard storage |
| Connectivity | Real-time optical mesh (SDA & ESTOL compatible) |
| Architecture | IP-based decentralized edge fabric |
| Deployment | 10 satellites (Tranche 1) |
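In principle, an IP-based edge fabric like the one specified above can schedule workloads much as a terrestrial cluster does, placing each job on the least-loaded GPU module. The sketch below is purely illustrative; the node and job abstractions are hypothetical and do not reflect Kepler's actual scheduling software.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class GpuModule:
    load: float                       # queued work, arbitrary units
    sat_id: int = field(compare=False)
    module_id: int = field(compare=False)

def build_fabric(satellites: int = 10, modules_per_sat: int = 4) -> list[GpuModule]:
    """Model the Tranche 1 fabric: 10 satellites x 4 Jetson modules = 40 units."""
    return [GpuModule(0.0, s, m)
            for s in range(satellites) for m in range(modules_per_sat)]

def schedule(jobs: list[float], fabric: list[GpuModule]) -> list[tuple[int, int]]:
    """Greedy least-loaded placement: each job lands on the idlest module."""
    heap = list(fabric)
    heapq.heapify(heap)
    placements = []
    for cost in jobs:
        unit = heapq.heappop(heap)      # idlest module anywhere in the mesh
        placements.append((unit.sat_id, unit.module_id))
        unit.load += cost
        heapq.heappush(heap, unit)
    return placements

fabric = build_fabric()
placements = schedule([1.0] * 40, fabric)
# With 40 equal-cost jobs, every one of the 40 modules receives exactly one.
```

The point of the sketch is the decoupling the article describes: the scheduler addresses modules across the whole constellation as one pool, not per satellite.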
Overcoming the Downlink Bottleneck
Traditionally, Earth Observation (EO) and Signal Intelligence (SIGINT) operators have been limited by “downlink latency”—the hours-long wait for a satellite to pass over a ground station before it can downlink its raw data. By running NVIDIA CUDA-accelerated AI models directly on Kepler’s satellites, mission operators can perform:
- Real-time Detection: Automated identification of wildfires, maritime anomalies, or military movements.
- Data Optimization: Thinning massive imagery archives to transmit only “actionable pixels” to the ground.
- Autonomous Tasking: Using on-orbit insights to automatically retask sensors without human intervention.
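The “data optimization” step above can be pictured as tiling a scene, running an onboard detector over each tile, and downlinking only the tiles that contain something actionable. The following is a minimal sketch of that idea with a thresholded intensity check standing in for a real CUDA inference model; the function names and tile size are illustrative assumptions, not Kepler's API.

```python
import numpy as np

TILE = 256  # tile edge length in pixels (illustrative choice)

def hot_tiles(image: np.ndarray, threshold: float) -> list[tuple[int, int]]:
    """Return (row, col) indices of tiles whose peak intensity exceeds the
    detection threshold -- a stand-in for an onboard AI detector."""
    rows, cols = image.shape[0] // TILE, image.shape[1] // TILE
    keep = []
    for r in range(rows):
        for c in range(cols):
            tile = image[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]
            if tile.max() > threshold:
                keep.append((r, c))
    return keep

# A synthetic 1024x1024 scene with one bright anomaly ("actionable pixels").
scene = np.zeros((1024, 1024))
scene[300, 700] = 1.0
selected = hot_tiles(scene, threshold=0.5)
# Only 1 of 16 tiles would be downlinked: a ~94% cut in transmitted data.
```

Even this crude filter shows why on-orbit inference attacks the downlink bottleneck: bandwidth is spent on detections, not on empty ocean or cloud cover.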
Strategic Context and Constellation Growth
The commissioning follows the successful January 2026 launch of the Tranche 1 satellites aboard a SpaceX Falcon 9. This deployment marks Kepler’s shift from technology pathfinders to a fully operational commercial network. The architecture is designed for high resiliency; if an individual satellite node becomes unavailable, the network’s software-defined routing can shift compute tasks to other nodes in the ring to maintain service continuity.
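The failover behavior described above, shifting work from an unavailable satellite to its neighbors in the ring, can be sketched as a simple walk around the ring to the next healthy node. This is a toy model under assumed abstractions, not Kepler's software-defined routing implementation.

```python
def reroute(task_node: int, healthy: set[int], ring_size: int = 10) -> int:
    """Walk the ring from the task's home node to the next healthy satellite."""
    for step in range(ring_size):
        candidate = (task_node + step) % ring_size
        if candidate in healthy:
            return candidate
    raise RuntimeError("no healthy nodes remain in the ring")

healthy = set(range(10)) - {3}   # satellite 3 has dropped out
assert reroute(3, healthy) == 4  # its tasks shift to the next node in the ring
assert reroute(5, healthy) == 5  # unaffected nodes keep their own tasks
```

Because every node is IP-addressable over the optical mesh, a real implementation could weigh link load and latency rather than simple ring order, but the continuity property is the same.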
“By leveraging NVIDIA AI infrastructure in our optical network, data can be processed, routed, and acted on in orbit rather than waiting to return to Earth,” said Mina Mitry, CEO and co-founder of Kepler. “As we extend the scale of our infrastructure, this becomes a natural extension of terrestrial computing, enabling faster decision-making and new mission architectures.”
Tranche 2 and 100-Gigabit Links
Kepler plans to launch additional tranches every two years, with Tranche 2 scheduled for early 2028. Future tranches will introduce 100-gigabit optical technology and increased GPU density to support the growing demand for “Orbital Data Centers” (ODCs). This roadmap aligns with Kepler’s ongoing partnership with Axiom Space, which seeks to operationalize large-scale data processing for the first commercial modules of the Axiom Station.
