Everyone's Racing to Build Data Centers in Space. Nobody's Solving the File Transfer Problem.
Billions of dollars are flowing into orbital compute. The hardware problem is getting solved. The data movement problem at 550km altitude is not.
The Orbital Compute Gold Rush
Over the past two years, a serious amount of capital has moved toward putting compute infrastructure in low Earth orbit (LEO). This is not science fiction anymore. It is engineering with funding attached.
SpaceX has filed FCC applications describing Starlink satellites with onboard compute capabilities beyond basic routing. Starcloud (formerly Lumen Orbit) plans to deploy GPU clusters in orbit for AI training workloads, leveraging uninterrupted solar power and the vacuum of space for radiative cooling. Google's Project Suncatcher is exploring orbital solar energy collection, which maps directly to powering orbital compute. Multiple startups have raised seed rounds on the same thesis: space has free cooling, 24/7 solar, and no neighbors complaining about noise.
The economics are real. Cooling accounts for 30-40% of terrestrial data center operating costs. In orbit, radiative cooling to the ~2.7K deep-space background is essentially free once the radiators are deployed. Solar panels in LEO receive about 1,361 W/m² of irradiance with no atmospheric absorption, weather, or nighttime (in a dawn-dusk sun-synchronous orbit). These are meaningful advantages for power-hungry AI training workloads.
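To make the solar advantage concrete, here is a back-of-envelope sketch of the array area an orbital cluster would need. The 1,361 W/m² figure is the solar constant above the atmosphere; the cell efficiency and packing factor are illustrative assumptions, not vendor specs.

```python
# Back-of-envelope solar array sizing for an orbital compute cluster.
# Efficiency and packing numbers are assumptions for illustration only.

SOLAR_CONSTANT_W_M2 = 1361.0   # irradiance in LEO, no atmospheric losses
CELL_EFFICIENCY = 0.30          # assumed high-end triple-junction cells
PACKING_FACTOR = 0.85           # assumed fraction of array that is active cell

def array_area_m2(cluster_power_w: float) -> float:
    """Panel area needed to supply a given electrical load."""
    usable_w_per_m2 = SOLAR_CONSTANT_W_M2 * CELL_EFFICIENCY * PACKING_FACTOR
    return cluster_power_w / usable_w_per_m2

if __name__ == "__main__":
    for megawatts in (0.1, 1.0, 10.0):
        area = array_area_m2(megawatts * 1e6)
        print(f"{megawatts:5.1f} MW cluster -> ~{area:,.0f} m^2 of array")
```

Under these assumptions a 1 MW cluster needs roughly 2,900 m² of array -- large, but well within what deployable structures can reach.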
What Everyone Is Ignoring
Most of the orbital data center pitch decks stop at the compute layer. They describe the GPU racks, the thermal management, the power systems. Then they wave their hands at data transfer.
This is a problem because an orbital data center that cannot efficiently move data to and from Earth is a very expensive space heater.
The numbers are sobering. At 550km altitude (standard Starlink orbit), the physics impose hard constraints:
- Round-trip latency: 20-40ms depending on elevation angle. This varies continuously as the satellite moves at ~7.5 km/s relative to the ground.
- Packet loss: 3-8% under typical conditions due to atmospheric effects, handoffs between ground stations, and Doppler-induced frequency shifts.
- Contact windows: A single ground station sees a LEO satellite for roughly 10 minutes per pass. You get maybe 4-6 passes per day per station.
- Throughput ceiling: Current Starlink user terminals deliver 50-200 Mbps down, 10-25 Mbps up. Dedicated ground stations do better, but not by the order of magnitude you need for petabyte-scale AI datasets.
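The constraints above combine into a daily data budget per ground station. The sketch below uses the ballpark figures from this list (10-minute passes, 4-6 passes/day); they are illustrative, not measurements of any specific constellation.

```python
# Daily downlink budget from the pass geometry described above.
# Pass counts and link rates are the article's ballpark figures.

PASS_MINUTES = 10          # visibility per pass for one ground station
PASSES_PER_DAY = 5         # mid-point of the 4-6 passes/day range

def daily_budget_gb(link_rate_mbps: float, efficiency: float = 1.0) -> float:
    """Gigabytes movable per day through one ground station.

    `efficiency` is the fraction of raw link rate the transfer protocol
    actually sustains (1.0 = perfect; TCP on a lossy link is far lower).
    """
    seconds = PASS_MINUTES * 60 * PASSES_PER_DAY
    bits = link_rate_mbps * 1e6 * efficiency * seconds
    return bits / 8 / 1e9

if __name__ == "__main__":
    # A hypothetical 1 Gbps dedicated ground station link:
    print(f"ideal protocol:  {daily_budget_gb(1000):.0f} GB/day")
    print(f"TCP at ~3% loss: {daily_budget_gb(1000, efficiency=0.12):.0f} GB/day")
```

Even a dedicated 1 Gbps link moves at most ~375 GB/day through one station, and a protocol running at 12% efficiency cuts that to ~45 GB/day.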
None of these constraints are bugs. They are physics. And they break the protocols that every current file transfer tool is built on.
Why TCP Collapses in Orbit
TCP was designed for terrestrial networks where packet loss signals congestion. When TCP detects loss, it halves its sending rate. This is the correct response on a fiber link where loss means a router buffer overflowed.
On an Earth-to-orbit link, loss means a cloud drifted across the beam path, or the satellite rotated slightly during an antenna handoff, or ionospheric scintillation caused a brief fade. The link is not congested. But TCP does not know that.
The result: TCP on a 550km LEO link with 3% packet loss achieves roughly 10-15% of the theoretical channel capacity. At 5% loss, it drops below 8%. The bandwidth-delay product (BDP) for these links requires TCP windows far larger than most implementations support, and even with window scaling, the congestion control loop still interprets every lost packet as a signal to back off.
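A first-order way to see the collapse is the classic Mathis model, which bounds steady-state throughput for Reno-style congestion control at (MSS/RTT) · (C/√p) with C ≈ 1.22. It is an approximation, not a prediction for any particular TCP stack, but it captures how loss and RTT together cap the achievable rate:

```python
import math

# Mathis model: upper bound on steady-state TCP throughput for
# Reno-style congestion control. A first-order approximation only.

def mathis_throughput_mbps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Throughput ceiling in Mbit/s: (MSS/RTT) * (1.22 / sqrt(p))."""
    rate_bytes_per_s = (mss_bytes / rtt_s) * (1.22 / math.sqrt(loss_rate))
    return rate_bytes_per_s * 8 / 1e6

if __name__ == "__main__":
    # A 550km LEO path: ~30ms RTT, standard 1460-byte MSS
    for loss in (0.001, 0.01, 0.03, 0.05):
        mbps = mathis_throughput_mbps(1460, 0.030, loss)
        print(f"loss {loss:>5.1%}: <= {mbps:6.1f} Mbit/s")
```

At 3% loss and 30ms RTT the bound works out to under 3 Mbit/s -- a tiny fraction of even a modest dedicated link, regardless of how much spectrum is available.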
This is well-documented in satellite communication literature and it is not a new discovery. What is new is the assumption that we can run data center workloads over these links using the same transfer protocols that struggle to saturate a transatlantic fiber connection. For deeper analysis, see our breakdown of why TCP fails for AI-scale data transfer.
The Scale of the Problem
Consider what an orbital AI training cluster actually needs to move. A single training run for a large language model consumes terabytes to petabytes of input data. Model checkpoints during training can be 10-100GB each, saved every few hours. Results, logs, and intermediate artifacts add more.
Back-of-Envelope: Orbital Training Data Requirements
- Training dataset upload: 50-500TB (one-time, but the pipeline is continuous)
- Checkpoint downloads: 50-100GB every 2-4 hours
- Available transfer window: ~60 minutes/day per ground station
- Required sustained rate: ~3.7-37 Gbps just for the initial dataset (at 60 min/day over a 30-day window)
- TCP effective rate at 3% loss: 10-15% of link capacity
The gap between what TCP delivers and what orbital AI workloads require is not 2x or 3x. It is closer to 10x. You can throw more ground stations at the problem, and many orbital data center proposals do exactly that. But ground stations cost $1-5M each to build, and you still need a protocol that can actually use the available bandwidth efficiently.
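The required-rate figure above falls straight out of the arithmetic, using the assumptions already listed (one hour of usable contact per day, a 30-day upload window):

```python
# Sustained rate needed to upload the initial dataset through one
# ground station, under the article's assumptions: ~60 minutes of
# contact per day, over a 30-day window.

CONTACT_S_PER_DAY = 60 * 60   # one hour of usable link time per day
WINDOW_DAYS = 30

def required_gbps(dataset_tb: float) -> float:
    """Sustained link rate (Gbit/s) needed during contact windows."""
    bits = dataset_tb * 1e12 * 8
    seconds = CONTACT_S_PER_DAY * WINDOW_DAYS
    return bits / seconds / 1e9

if __name__ == "__main__":
    for tb in (50, 500):
        print(f"{tb:4d} TB in {WINDOW_DAYS} days -> {required_gbps(tb):5.1f} Gbit/s sustained")
```

50 TB works out to ~3.7 Gbit/s sustained during every contact window; 500 TB needs ~37 Gbit/s. Those are rates the protocol must actually hold, not nameplate link capacity.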
What a Solution Looks Like
The protocol layer for Earth-to-orbit data transfer needs specific properties that TCP does not have:
- Latency independence: Throughput should not degrade as round-trip time increases. A 40ms path and a 4ms path should achieve the same fraction of link capacity.
- Packet loss tolerance: The protocol must distinguish between congestion-induced loss and link-layer loss. It should not halve its rate because an atmospheric fade caused a 100ms dropout.
- Session resumption: When a satellite passes out of view and the link drops, the transfer should resume from exactly where it stopped on the next pass. No re-handshake, no re-negotiation, no re-transmission of already-confirmed data.
- Encryption that does not add round trips: TLS handshakes add 1-3 round trips before any data flows. At 40ms RTT that is tolerable. When your contact window is 10 minutes, every second of handshaking is bandwidth you will never get back.
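The session-resumption property is worth making concrete. The sketch below is purely illustrative (it is not Handrive's wire protocol): the receiver tracks which byte ranges have been durably confirmed, so when a satellite drops out of view mid-transfer, the next pass resumes at the first unconfirmed offset with no re-handshake and no re-sending of confirmed data.

```python
# Illustrative sketch of resumable transfer state (not Handrive's
# actual protocol): track confirmed byte ranges so a dropped link
# resumes at the first gap instead of restarting.

class TransferState:
    def __init__(self, total_bytes: int):
        self.total = total_bytes
        self.confirmed = []  # sorted, non-overlapping (start, end) ranges

    def confirm(self, start: int, end: int) -> None:
        """Record bytes [start, end) as durably received, merging ranges."""
        ranges = sorted(self.confirmed + [(start, end)])
        merged = [ranges[0]]
        for s, e in ranges[1:]:
            if s <= merged[-1][1]:   # overlaps or adjoins the previous range
                merged[-1] = (merged[-1][0], max(merged[-1][1], e))
            else:
                merged.append((s, e))
        self.confirmed = merged

    def resume_offset(self) -> int:
        """First byte still owed -- where the next contact window starts."""
        if not self.confirmed or self.confirmed[0][0] > 0:
            return 0
        return self.confirmed[0][1]

    def done(self) -> bool:
        return self.resume_offset() >= self.total

# Link drops after two chunks; the next pass picks up at byte 2,000,000
state = TransferState(total_bytes=10_000_000)
state.confirm(0, 1_000_000)
state.confirm(1_000_000, 2_000_000)
```

The key design point is that the resume offset is derived from persisted receiver state, so reconnecting costs nothing beyond re-establishing the link itself.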
Handrive's transfer protocol was built with these properties. It is latency-independent and packet-loss tolerant by design, not as an afterthought bolted onto TCP. The protocol maintains throughput at latencies and loss rates where TCP degrades to single-digit percentages of its theoretical capacity. It has not been tested on orbital links specifically, but the network conditions it was engineered for -- variable latency, non-congestion packet loss, intermittent connectivity -- are exactly the conditions that define Earth-to-orbit communication.
If you are evaluating the full landscape of data transfer challenges for orbital and AI data center infrastructure, the protocol layer is where most solutions fall short.
The Missing Layer in the Orbital Stack
The orbital data center stack has four layers, and three of them are getting serious engineering attention:
- Launch: SpaceX Falcon 9 has driven costs to ~$2,700/kg to LEO. Starship promises $100-200/kg.
- Power: Solar arrays and battery systems for continuous operation are well-understood.
- Compute: Radiation-hardened GPUs and custom silicon are in active development.
- Data transfer: Everyone is assuming existing protocols will work. They will not.
Layer 4 is the bottleneck. You can have the cheapest launch costs, unlimited solar power, and the fastest GPUs in orbit, and none of it matters if you cannot get data in and out efficiently. This is the data gravity problem applied to the most extreme possible scenario: your data and your compute are separated by 550km of vacuum and atmosphere.
What Comes Next
Orbital compute is coming. The economics are too compelling for it not to happen. But the companies that succeed will be the ones that solve the full stack, not just the parts that look good in a pitch deck.
The file transfer problem is solvable. It requires protocols designed for the actual physics of Earth-to-orbit links, not terrestrial protocols optimized for fiber and copper. For a deeper look at the technical constraints, read our technical primer on Earth-to-orbit data transfer. For broader context on how these challenges fit into AI infrastructure, see orbital data center transfer challenges.
Built for Extreme Networks
Handrive's protocol is latency-independent and packet-loss tolerant -- designed for the conditions that break TCP. Download it and test it on your own challenging links.
Download Handrive