Bandwidth-Delay Product Explained
The bandwidth-delay product (BDP) is the maximum amount of data that can be "in flight" on a network link at any given moment. It is calculated by multiplying the link's bandwidth by its round-trip time (RTT): BDP = Bandwidth × RTT. This seemingly simple formula has profound implications for file transfer performance, especially over long-distance and satellite links.
Why BDP Matters
Think of a network link as a pipe. The bandwidth is the pipe's diameter, and the RTT is its length. BDP tells you the total volume of that pipe — how much data can be inside it at once.
TCP, the protocol that underlies most file transfers, uses an acknowledgment-based flow control system. The sender transmits data and waits for the receiver to acknowledge receipt before sending more. The TCP window size determines how much unacknowledged data can be in transit at once. If the window size is smaller than the BDP, the sender must pause and wait for acknowledgments, leaving the link underutilized.
BDP by the Numbers
The impact becomes clear when you calculate BDP for real-world links:
| Link Type | Bandwidth | RTT | BDP |
|---|---|---|---|
| Same data center | 10 Gbps | 0.5 ms | 625 KB |
| Cross-country (US) | 1 Gbps | 60 ms | 7.5 MB |
| Intercontinental | 1 Gbps | 150 ms | 18.75 MB |
| LEO satellite | 500 Mbps | 40 ms | 2.5 MB |
| GEO satellite | 100 Mbps | 600 ms | 7.5 MB |
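The rows above follow directly from BDP = Bandwidth × RTT. A short sketch (bandwidths in bits per second, RTTs in seconds, results in decimal megabytes) reproduces them:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes: bits in flight, divided by 8."""
    return bandwidth_bps * rtt_s / 8

# The links from the table above (1 MB = 1e6 bytes)
links = {
    "Same data center":   (10e9,  0.0005),
    "Cross-country (US)": (1e9,   0.060),
    "Intercontinental":   (1e9,   0.150),
    "LEO satellite":      (500e6, 0.040),
    "GEO satellite":      (100e6, 0.600),
}
for name, (bw, rtt) in links.items():
    print(f"{name}: {bdp_bytes(bw, rtt) / 1e6:g} MB")
```

Note that the same BDP can arise from very different links: the cross-country and GEO rows both work out to 7.5 MB, one from high bandwidth and one from high latency.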
The TCP Window Problem
Without window scaling, TCP's window is capped at 64 KB, because the window field in the TCP header is only 16 bits. On the cross-country link above (BDP of 7.5 MB), a single TCP connection limited to that window can utilize less than 1% of the available bandwidth. Even with TCP window scaling enabled (up to 1 GB theoretical maximum), reaching full bandwidth requires careful tuning that most systems do not have out of the box.
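The ceiling falls out of a one-line formula: a sender can have at most one window of data in flight per round trip, so a single stream's throughput is bounded by window / RTT. A quick sketch, using the cross-country numbers from the table:

```python
def window_limited_bps(window_bytes: float, rtt_s: float) -> float:
    """Max throughput of one TCP stream: one window of data per round trip."""
    return window_bytes * 8 / rtt_s

link_bps = 1e9                                 # 1 Gbps cross-country link
rate = window_limited_bps(64 * 1024, 0.060)    # 64 KB window, 60 ms RTT
print(f"{rate / 1e6:.1f} Mbps of {link_bps / 1e6:.0f} Mbps "
      f"({rate / link_bps:.1%} utilization)")
```

Note that the link's bandwidth never appears in the formula: a 64 KB window on a 60 ms path yields under 9 Mbps whether the pipe is 100 Mbps or 100 Gbps.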
This is why transferring a large file between New York and London over a fast link often feels slow. The bandwidth is there, but TCP cannot fill the pipe. Networks with high BDP are sometimes called "long fat networks" (LFNs) in networking literature, and they require special treatment.
Satellite Link Implications
Satellite links are the most extreme case of high BDP. A geostationary (GEO) satellite link has roughly 600 ms of round-trip latency. Even modern LEO constellations like Starlink have 20-40 ms RTT. For orbital data centers that need to transfer training data or model weights between ground and orbit, the BDP challenge is a first-order concern.
Standard TCP-based transfer tools perform poorly on these links. A single TCP stream on a GEO satellite link with 100 Mbps capacity might achieve only 1-2 Mbps of actual throughput. This is not a network problem — it is a protocol problem.
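To make the protocol bottleneck concrete, compare how long a hypothetical 1 GB transfer would take on the GEO link at the window-limited rate versus at line rate (a back-of-envelope sketch that ignores slow start and retransmissions):

```python
window = 64 * 1024      # default TCP window, bytes
rtt = 0.600             # GEO round-trip time, seconds
link_bps = 100e6        # 100 Mbps link

tcp_rate = window / rtt          # bytes/s, window-limited
full_rate = link_bps / 8         # bytes/s at line rate

file_bytes = 1e9                 # a 1 GB transfer
print(f"Window-limited: {file_bytes / tcp_rate / 3600:.1f} hours")
print(f"Line rate:      {file_bytes / full_rate:.0f} seconds")
```

The same file that the link could carry in about 80 seconds takes roughly two and a half hours through an untuned TCP stream.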
Solutions to the BDP Problem
- TCP window scaling: RFC 7323 allows window sizes up to 1 GB, but both endpoints must support the option and be tuned correctly.
- Parallel TCP streams: Running multiple TCP connections simultaneously divides the BDP across streams. Tools like GridFTP use this approach.
- UDP-based protocols: Protocols such as FASP bypass TCP entirely, using UDP with custom rate control and eliminating the window-based bottleneck.
- P2P protocols: Modern peer-to-peer protocols can implement their own flow control optimized for high-BDP links without TCP's legacy constraints.
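The parallel-stream approach can be sized with a back-of-envelope model: enough streams that their combined windows cover the link's BDP. This sketch (which ignores congestion-control dynamics and per-stream overhead) estimates the stream count for the GEO link from the table:

```python
import math

def streams_needed(bandwidth_bps: float, rtt_s: float,
                   window_bytes: int = 64 * 1024) -> int:
    """Streams required so that combined windows cover the link's BDP."""
    bdp = bandwidth_bps * rtt_s / 8   # bytes in flight to fill the pipe
    return math.ceil(bdp / window_bytes)

# GEO satellite: 100 Mbps, 600 ms RTT, default 64 KB windows per stream
print(streams_needed(100e6, 0.600))
```

The answer, over a hundred streams, shows why tools like GridFTP expose the stream count as a tunable parameter rather than a fixed default.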
How Handrive Handles High-BDP Links
Handrive's transfer protocol was designed with high-BDP links in mind. Rather than relying on TCP's congestion control, it uses an adaptive approach that measures link capacity and adjusts sending rate to fill available bandwidth regardless of latency. This makes it effective for cross-country, intercontinental, and satellite transfers where TCP-based tools underperform. Explore how this supports AI infrastructure on the AI Data Centers hub page.
Dive deeper into transfer protocol design:
Understanding File Transfer Protocols: TCP vs UDP vs P2P →