How Data Moves Between Earth and Orbit: A Technical Primer

The physics of Earth-to-orbit communication impose hard limits that most software engineers never encounter. Here is what actually happens when you try to move data to 550km altitude.

Orbital Mechanics: The Constraints You Cannot Negotiate

Satellites at different altitudes have fundamentally different communication profiles. The three regimes that matter for data center applications:

Orbital Altitude Comparison

Parameter                   LEO (550km)                  MEO (8,000-20,000km)   GEO (35,786km)
One-way latency             3-12ms                       27-67ms                ~120ms
Round-trip time             6-24ms (path) + processing   54-134ms               ~240ms (path only)
Orbital period              ~95 minutes                  5-12 hours             24 hours (geostationary)
Ground station visibility   ~10 min per pass             2-4 hours per pass     Continuous (fixed position)
Orbital velocity            ~7.5 km/s                    ~3.5-4.5 km/s          ~3.07 km/s

LEO is where the orbital data center activity is concentrated, primarily because of launch cost. SpaceX Falcon 9 delivers payload to 550km for roughly $2,700/kg. GEO insertion costs 3-5x more per kilogram due to the higher delta-v requirement.

But LEO comes with a fundamental trade-off: lower latency in exchange for intermittent contact. A satellite at 550km orbits Earth every ~95 minutes. From any single ground station, that satellite is above the horizon for about 10 minutes per pass. You get 4-6 usable passes per day. That gives you roughly 40-60 minutes of transfer time per ground station per day.
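The pass arithmetic above follows directly from Kepler's third law. A quick sketch (the gravitational constant and Earth radius are standard values; the per-pass and passes-per-day figures are the estimates quoted above):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def orbital_period_minutes(altitude_m: float) -> float:
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_m  # semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

period = orbital_period_minutes(550_000)
print(f"Orbital period at 550km: {period:.1f} min")  # ~95.5 min

# Daily transfer budget for one ground station, using the figures above:
# ~10 minutes per pass, 4-6 usable passes per day.
minutes_per_pass = 10
daily_low, daily_high = 4 * minutes_per_pass, 6 * minutes_per_pass
print(f"Transfer window per ground station: {daily_low}-{daily_high} min/day")
```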

For orbital data centers handling AI workloads, this intermittent connectivity is the central engineering constraint. Everything else flows from it.

RF Spectrum and Link Budgets

Satellite communication uses allocated RF spectrum. The bands relevant to data transfer:

  • Ku-band (12-18 GHz): Used by Starlink for user terminals. Provides 200-500 MHz of usable bandwidth per beam. Susceptible to rain fade -- heavy precipitation can attenuate the signal by 5-10 dB.
  • Ka-band (26.5-40 GHz): Higher bandwidth potential (up to 1-2 GHz per beam) but more severe atmospheric attenuation. Rain fade can exceed 15 dB. Used by Starlink for gateway links.
  • V-band (40-75 GHz): Even higher bandwidth ceiling, even worse atmospheric effects. SpaceX has filed for V-band spectrum for next-generation Starlink.
  • Optical (laser): Starlink inter-satellite links use 1550nm laser communication. Essentially unlimited bandwidth in vacuum, but atmospheric turbulence makes ground-to-space optical links unreliable except at high-altitude sites with adaptive optics.

The link budget determines how much data you can actually push through these bands. A simplified Ka-band link budget for a 550km LEO satellite:


Transmit power (ground station):     +50 dBm (100W)
Antenna gain (3m dish):              +54 dBi
Free-space path loss (550km, 30GHz): -181 dB
Atmospheric loss (clear sky):        -1.5 dB
Rain margin (99.5% availability):    -6 dB
Satellite antenna gain:              +38 dBi
System noise temperature:            23 dB-K
──────────────────────────────────────────────
Received C/N0:                       ~129 dB-Hz
Achievable data rate:                ~2-5 Gbps (bandwidth-limited)

That 2-5 Gbps is the raw RF capacity under favorable conditions. After coding overhead, protocol framing, and the rain margin, usable throughput for a dedicated ground station is closer to 1-3 Gbps. A consumer terminal like Starlink achieves 50-200 Mbps because it shares capacity across many users and uses a smaller, cheaper antenna.
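The dB bookkeeping behind a budget like this is straightforward to reproduce. A sketch using the table's own gain and loss figures, with Boltzmann's constant in dBm/Hz form (C/N0 = received power minus noise spectral density):

```python
# Link-budget terms from the table above (dB units throughout)
tx_power_dbm   = 50.0    # 100 W ground transmitter
tx_gain_dbi    = 54.0    # 3m dish
path_loss_db   = 181.0   # free-space loss at 550km slant range, 30 GHz
atmos_loss_db  = 1.5     # clear-sky absorption
rain_margin_db = 6.0     # margin for 99.5% availability
rx_gain_dbi    = 38.0    # satellite antenna
noise_temp_dbk = 23.0    # ~200 K system noise temperature

BOLTZMANN_DBM_HZ = -198.6  # 10*log10(1.38e-23 J/K), expressed in dBm/Hz

received_dbm = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                - path_loss_db - atmos_loss_db - rain_margin_db)
cn0_dbhz = received_dbm - (BOLTZMANN_DBM_HZ + noise_temp_dbk)
print(f"Received power: {received_dbm:.1f} dBm")  # -46.5 dBm
print(f"C/N0: {cn0_dbhz:.1f} dB-Hz")
```

Note that a C/N0 this high means the link is limited by the allocated Ka-band spectrum (1-2 GHz per beam) rather than by noise, which is why the achievable rate lands in the single-digit Gbps range.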

Atmospheric Effects

The atmosphere between ground station and satellite is not a clean pipe. It introduces several impairments that directly affect file transfer:

  • Rain fade: Water droplets absorb and scatter RF energy. At Ka-band, a moderate rainstorm (25 mm/hr) can reduce link margin by 10+ dB, cutting throughput by 90% or dropping the link entirely.
  • Ionospheric scintillation: Electron density variations in the ionosphere cause rapid amplitude and phase fluctuations, particularly at lower frequencies and near the magnetic equator. This manifests as packet loss in bursts.
  • Tropospheric turbulence: Temperature and humidity gradients cause refractive index variations, adding jitter to signal arrival times.
  • Doppler shift: A LEO satellite at 7.5 km/s creates frequency shifts of up to +/- 200 kHz at Ku-band. The receiver must track and compensate for this continuously. During maximum Doppler rate of change (satellite near horizon), tracking errors can cause brief dropouts.

The critical point: these effects cause packet loss that is not related to network congestion. A transport protocol must distinguish between atmospheric-induced loss and congestion-induced loss. TCP cannot.

Bandwidth-Delay Product: Where TCP Breaks

The bandwidth-delay product (BDP) is the amount of data that must be "in flight" to fully utilize a link. It equals bandwidth multiplied by round-trip time.

BDP Calculations for Different Links

  • Terrestrial (1 Gbps, 10ms RTT): 1.25 MB in flight
  • LEO link (2 Gbps, 40ms RTT): 10 MB in flight
  • GEO link (500 Mbps, 480ms RTT): 30 MB in flight
  • Transatlantic fiber (10 Gbps, 80ms RTT): 100 MB in flight
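The list above is just bandwidth multiplied by RTT, converted to bytes:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bits in flight, divided by 8 for bytes."""
    return bandwidth_bps * rtt_s / 8

links = {
    "terrestrial 1 Gbps / 10ms":    bdp_bytes(1e9, 0.010),   # 1.25 MB
    "LEO 2 Gbps / 40ms":            bdp_bytes(2e9, 0.040),   # 10 MB
    "GEO 500 Mbps / 480ms":         bdp_bytes(500e6, 0.480), # 30 MB
    "transatlantic 10 Gbps / 80ms": bdp_bytes(10e9, 0.080),  # 100 MB
}
for name, bdp in links.items():
    print(f"{name}: {bdp / 1e6:.2f} MB in flight")
```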

TCP's congestion window must grow to at least the BDP before the link is fully utilized. On a 2 Gbps LEO link with 40ms RTT, TCP needs a 10 MB window. TCP can achieve this with window scaling (RFC 7323). The problem is not the window size. It is what happens when a packet is lost.

TCP Reno halves the congestion window on every loss. TCP CUBIC backs off less drastically (a multiplicative decrease of roughly 0.7 rather than 0.5) but still shrinks the window on each loss event. With 3% atmospheric packet loss, the window is slashed multiple times per second. It never reaches the BDP. The link sits 85-90% idle.
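The scale of the damage can be estimated with the classic Mathis et al. approximation for loss-limited Reno-style TCP, rate ≈ (MSS/RTT)·(C/√p). It is a steady-state model and pessimistic for CUBIC, but it shows the order of magnitude:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis approximation for loss-limited TCP: (MSS/RTT) * (C / sqrt(p))."""
    C = 1.22  # model constant for periodic loss
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# The LEO scenario above: 1460-byte MSS, 40ms RTT, 3% loss, 2 Gbps link
rate = mathis_throughput_bps(1460, 0.040, 0.03)
print(f"Loss-limited rate: {rate / 1e6:.1f} Mbps")
print(f"Utilization of a 2 Gbps link: {100 * rate / 2e9:.2f}%")
```

Even granting CUBIC's gentler backoff a couple of orders of magnitude of improvement, loss-driven congestion control cannot hold a window anywhere near the 10 MB BDP under sustained 3% loss.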

TCP BBR (Bottleneck Bandwidth and Round-trip propagation time, Google's congestion control) handles this better by modeling the link's delivery rate and RTT rather than reacting to loss. But BBR still struggles with the bursty, correlated loss patterns caused by atmospheric effects. A 500ms rain fade event at Ka-band causes BBR to drastically underestimate available bandwidth, and recovery takes multiple RTTs. See our detailed analysis in why TCP fails for AI-scale data transfer.

What an Orbital-Capable Protocol Needs

The requirements for Earth-to-orbit file transfer follow directly from the physics above:

1. Rate Control Independent of Loss

The protocol must base its sending rate on measured link capacity, not on packet loss signals. When a 200ms atmospheric fade causes a burst of lost packets, the response should be to retransmit the lost data, not to reduce the sending rate. The link capacity did not change. The atmosphere briefly interrupted it.

2. Aggressive Window Management

The protocol needs to maintain a sending window at or above the BDP at all times. On a 2 Gbps LEO link with 40ms RTT, that means keeping 10 MB of data in flight continuously. When loss occurs, the gap should be filled with retransmissions without reducing the overall sending rate.
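Requirements 1 and 2 combine into a simple sender shape. A minimal sketch with hypothetical names (this is an illustration of the idea, not Handrive's implementation):

```python
from collections import deque

class PacedSender:
    """Sketch: pace packets at a measured link rate, keep ~BDP bytes in
    flight, and feed losses into a retransmit queue without ever cutting
    the sending rate."""

    def __init__(self, rate_bps: float, rtt_s: float, mtu: int = 1200):
        self.rate_bps = rate_bps            # from capacity estimation, not loss
        self.bdp_bytes = rate_bps * rtt_s / 8
        self.mtu = mtu
        self.in_flight = 0                  # bytes sent but not yet acked
        self.retransmit_q = deque()         # sequence numbers reported lost
        self.next_seq = 0

    def on_ack(self, acked_bytes: int):
        self.in_flight -= acked_bytes

    def on_loss(self, seq: int):
        # Atmospheric loss: queue the data again, leave the rate untouched.
        self.retransmit_q.append(seq)

    def next_packet(self):
        """Called by a pacer every mtu*8/rate_bps seconds while the window
        has room. Retransmissions take priority over new data."""
        if self.in_flight >= self.bdp_bytes:
            return None                     # window full, wait for acks
        if self.retransmit_q:
            seq = self.retransmit_q.popleft()
        else:
            seq = self.next_seq
            self.next_seq += 1
        self.in_flight += self.mtu
        return seq

sender = PacedSender(rate_bps=2e9, rtt_s=0.040)  # the 2 Gbps / 40ms LEO link
print(f"Target in-flight data: {sender.bdp_bytes / 1e6:.0f} MB")
```

The key property: `on_loss` changes what gets sent next, never how fast it is sent.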

3. Session Persistence Across Link Interruptions

A LEO satellite at 550km passes out of view every 10 minutes. The protocol must support session state that persists across these outages. When the satellite is visible again (next pass, or via a different ground station), the transfer should resume from the last confirmed byte with zero ramp-up time. This is similar to what edge-to-orbit data flow architectures require for continuous data pipelines.
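Resumability reduces to persisting one number: the last confirmed byte offset. A sketch with hypothetical names (not a real Handrive API):

```python
import json
from pathlib import Path

class TransferSession:
    """Sketch: durably record the last confirmed byte so a transfer resumes
    instantly on the next pass, even via a different ground station that
    shares the session state."""

    def __init__(self, transfer_id: str, state_dir: Path):
        self.state_file = state_dir / f"{transfer_id}.json"
        self.confirmed = 0
        if self.state_file.exists():
            self.confirmed = json.loads(self.state_file.read_text())["confirmed"]

    def on_confirm(self, offset: int):
        # Receiver acknowledged all bytes up to `offset`; persist it.
        if offset > self.confirmed:
            self.confirmed = offset
            self.state_file.write_text(json.dumps({"confirmed": self.confirmed}))

    def resume_offset(self) -> int:
        # The next contact window starts sending from here -- no restart.
        return self.confirmed
```

Because the state survives the process (and can be replicated to other ground stations), the link outage costs contact time but never repeated work.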

4. Minimal Handshake Overhead

TLS 1.3 requires 1 RTT for a fresh connection (0-RTT for resumed sessions, but with security trade-offs). On a 40ms LEO link, that is 40ms of a 600-second contact window. Tolerable. But add TCP's 3-way handshake (another 40ms) and slow-start phase (multiple RTTs to reach full rate), and you are spending 5-10% of your contact window just getting to full speed.

A protocol designed for intermittent links should reach full sending rate within 1 RTT of contact establishment.
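As a rough, loss-free best case, the ramp-up cost can be counted in RTTs, assuming TCP's modern 10-segment initial window (RFC 6928) and slow-start doubling up to the BDP:

```python
import math

def tcp_ramp_seconds(bdp_bytes: float, rtt_s: float, mss: int = 1460,
                     initial_window_segments: int = 10) -> float:
    """Loss-free lower bound: TCP handshake (1 RTT) + TLS 1.3 (1 RTT) +
    slow-start doubling from the initial window to the BDP."""
    target_segments = bdp_bytes / mss
    doubling_rounds = math.ceil(math.log2(target_segments / initial_window_segments))
    return (1 + 1 + doubling_rounds) * rtt_s

ramp = tcp_ramp_seconds(10e6, 0.040)  # the 2 Gbps / 40ms LEO link
print(f"Best-case time to full rate: {ramp * 1000:.0f} ms")
```

Even this ideal figure (roughly half a second) assumes zero loss; with 3% atmospheric loss, slow start exits early and the ramp stretches into tens of seconds, consistent with the 5-10% estimate above.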

5. End-to-End Encryption Without Round-Trip Tax

Data moving between Earth and orbit needs encryption. Model weights, training data, and inference results all have commercial value. The encryption scheme must not add round trips to the connection setup. Pre-shared key material or identity-based cryptography can eliminate the key exchange round trips that TLS requires. For more on the security requirements of these pipelines, see securing Earth-to-orbit AI pipelines.

Handrive's Protocol Characteristics

Handrive's transfer protocol was engineered for conditions that match the Earth-to-orbit profile:

  • Latency-independent throughput: The sending rate is not a function of RTT. A 40ms path achieves the same link utilization as a 4ms path.
  • Packet-loss tolerant: Non-congestion loss does not trigger rate reduction. Lost packets are retransmitted without slowing down the data stream.
  • Resumable transfers: Interrupted transfers resume from the last confirmed position. No full restart, no redundant retransmission of confirmed data.
  • E2E encryption: Data is encrypted before leaving the source. No cleartext on the wire at any point.

These properties were designed for challenging terrestrial scenarios -- remote sites, high-latency international links, unreliable mobile networks. The protocol has not been tested on actual orbital links. But the network conditions it handles (20-40ms RTT, 3-8% non-congestion packet loss, intermittent connectivity) are within the same parameter space as LEO communication.

For organizations evaluating transfer infrastructure for AI data center deployments -- whether terrestrial or orbital -- the protocol layer determines whether you actually use the bandwidth you are paying for.

The Road Ahead

Earth-to-orbit data transfer is an engineering problem with known physics. The atmospheric effects are characterized. The orbital mechanics are deterministic. The link budget math is straightforward. What has been missing is a transport protocol designed for these conditions rather than adapted from terrestrial assumptions.

As orbital compute moves from concept to deployment, the data transfer layer will determine which architectures are viable and which are bottlenecked by 1980s-era protocol assumptions. For further reading on the infrastructure challenges, see our coverage of the space data center file transfer problem and the broader data transfer crisis in AI infrastructure.


Protocol Built for Extreme Conditions

Handrive's transfer protocol maintains throughput where TCP collapses. Test it on your own high-latency, lossy links.

Download Handrive