8-City Shoot, Different Crews — What's the Footage Transfer Workflow?
You've booked eight cities. Each location has its own crew. Every location is generating terabytes of footage daily. Your post house is waiting for it all. Here's how to make it work.
The Multi-Location Transfer Problem
Multi-location shoots create a logistics challenge that single-location productions never face. You're coordinating different teams, different camera systems, different internet connections, and different time zones—all feeding into one central post-production pipeline.
The first instinct is often to ship drives. A crew in Portland finishes their shoot, backs everything up to an external SSD, and ships it overnight. Scale that to eight cities and you're coordinating eight separate shipments. If one drive fails in transit, you're dealing with partial footage recovery. If labeling isn't precise, the post house wastes time figuring out what belongs where. The timeline stretches and your post-production window compresses.
Cloud upload services seem like a logical fix. Each location uploads its footage to a central cloud bucket. But if one location generates 10 TB daily and you have eight locations, that's 80 TB daily in aggregate. Per-GB cloud transfer fees add up fast, and upload speeds on residential connections make the whole thing impractical. Enterprise UDP tools exist but require special network configuration and significant investment.
This is where direct peer-to-peer transfer changes the equation. Each location transfers directly to the post house over the internet. No middleman server. No per-gigabyte costs. No waiting for cloud processing.
Building a Multi-Location Folder Structure
Before any transfer happens, establish your folder architecture. The cleanest structure organizes by location first, then shoot date, then camera or media type. For example: /ProjectName/Location_Portland/2026-03-04/Camera_A/ and /ProjectName/Location_NYC/2026-03-04/Camera_B/. This makes it instantly obvious which footage came from where and when it was captured.
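If it helps to make that concrete, here's a minimal Python sketch that stamps the skeleton out identically at every location. The project name, cities, and camera labels are placeholders for your own:

```python
# Sketch: create the agreed folder skeleton identically at every location.
# Project name, cities, and camera labels are illustrative placeholders.
from datetime import date
from pathlib import Path

PROJECT_ROOT = Path("ProjectName")   # e.g. the root on the DIT cart
LOCATIONS = ["Portland", "NYC"]      # extend to all eight cities
CAMERAS = ["Camera_A", "Camera_B"]

def make_shoot_folders(shoot_date: date) -> None:
    """Create Location -> Date -> Camera folders plus a Metadata folder."""
    for loc in LOCATIONS:
        loc_dir = PROJECT_ROOT / f"Location_{loc}"
        (loc_dir / "Metadata").mkdir(parents=True, exist_ok=True)
        for cam in CAMERAS:
            (loc_dir / shoot_date.isoformat() / cam).mkdir(parents=True, exist_ok=True)

make_shoot_folders(date(2026, 3, 4))
```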
Include a metadata folder at each location level. Store a text file with shoot notes, camera settings, codec information, resolution, frame rate, and any special handling instructions. One location might be shooting 4K/24fps on a cinema camera while another is at 1080p/60fps on a different system. The metadata file flags these differences immediately rather than forcing post to discover them during ingest.
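A small script can write that metadata file in a consistent format so post never has to parse free-form notes. A sketch, with illustrative field values rather than a required schema:

```python
# Sketch: write a per-location metadata file alongside the footage.
# All field values below are illustrative, not a required schema.
import json
from pathlib import Path

metadata = {
    "location": "Portland",
    "shoot_date": "2026-03-04",
    "camera": "ARRI Alexa Mini",
    "codec": "ProRes 422 HQ",
    "resolution": "3840x2160",
    "frame_rate": "24 fps",
    "notes": "Low-light interiors; check exposure on Camera_A takes 12-15.",
}

out = Path("ProjectName/Location_Portland/Metadata/2026-03-04_shoot_notes.json")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(metadata, indent=2))
```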
Add a checksum manifest. When footage is transferred, a simple text file listing each file name with its checksum (MD5 or SHA-256) verifies that nothing was corrupted in transit. This takes seconds to generate and gives the post house confidence the files arrived intact.
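Generating the manifest takes only a few lines. A minimal sketch, assuming SHA-256 and the folder layout above (paths are placeholders):

```python
# Sketch: generate a SHA-256 manifest for a day's footage folder.
# Lines are "hash  relative/path", the same layout `sha256sum -c`
# understands, so post can also verify with standard command-line tools.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        while block := fh.read(1 << 20):  # 1 MiB chunks; camera files are huge
            h.update(block)
    return h.hexdigest()

def write_manifest(footage_dir: Path, manifest: Path) -> None:
    with manifest.open("w") as out:
        for f in sorted(footage_dir.rglob("*")):
            if f.is_file() and f != manifest:  # don't hash the manifest itself
                out.write(f"{sha256_of(f)}  {f.relative_to(footage_dir)}\n")

write_manifest(
    Path("ProjectName/Location_Portland/2026-03-04"),
    Path("ProjectName/Location_Portland/2026-03-04/manifest.sha256"),
)
```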
Naming Conventions Across Crews
Standardize file naming before production begins. Use a consistent scheme like [Location]_[Date]_[Camera]_[Scene]_[Take].mov. Every crew follows the exact same pattern. No improvisation. No crew deciding to use their own system.
Include the location code in the file name itself. Even if folder structure gets flattened or reorganized in post, the file name identifies the origin. Avoid special characters and spaces—use underscores instead. Keep total file name length under 255 characters. This prevents transfer errors and compatibility issues across operating systems.
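You can even enforce the convention mechanically before anything transfers. A sketch, where the location codes and allowed wrappers are assumptions to replace with your own spec:

```python
# Sketch: validate names against the agreed
# [Location]_[Date]_[Camera]_[Scene]_[Take].mov pattern before transfer.
# Location codes and allowed extensions are assumptions; match your spec.
import re

PATTERN = re.compile(
    r"(PDX|NYC|CHI|ATX)"       # location code
    r"_\d{4}-\d{2}-\d{2}"      # date, YYYY-MM-DD
    r"_Cam[A-Z]"               # camera
    r"_S\d{3}"                 # scene
    r"_T\d{2}"                 # take
    r"\.(mov|mxf|r3d)"         # allowed wrappers
)

def check(name: str) -> bool:
    return len(name) <= 255 and PATTERN.fullmatch(name) is not None

assert check("PDX_2026-03-04_CamA_S012_T03.mov")
assert not check("portland shoot day 1 final(2).mov")  # spaces, no scheme
```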
Managing Different Camera Systems
Multi-location shoots often mean different camera models. Each generates footage in different codecs and wrappers.
| Camera System | Codec | Approx. Size per Minute | Transfer Note |
|---|---|---|---|
| ARRI Alexa Mini | ProRes 422 HQ | ~9.5 GB | Moderate size, high quality |
| RED Komodo | R3D RAW | ~14–18 GB | Large files, requires color setup |
| Sony Burano | XAVC / ProRes RAW | ~6–12 GB | Flexible codec options |
| Drone (various) | H.264 / H.265 | ~2–3 GB | Smaller, compressed files |
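Those per-minute figures translate directly into daily transfer volume. A rough sketch, assuming about three hours of recorded footage per location per day (swap in your own shooting ratio):

```python
# Sketch: estimate daily transfer volume per camera system from the
# per-minute rates in the table above. The 180 min/day figure is an
# assumption; substitute your own shooting ratio.
GB_PER_MINUTE = {
    "ARRI Alexa Mini (ProRes 422 HQ)": 9.5,
    "RED Komodo (R3D RAW)": 16.0,            # midpoint of 14-18 GB
    "Sony Burano (XAVC / ProRes RAW)": 9.0,  # midpoint of 6-12 GB
    "Drone (H.264/H.265)": 2.5,
}
MINUTES_PER_DAY = 180  # ~3 hours of recorded footage

for camera, rate in GB_PER_MINUTE.items():
    print(f"{camera}: {rate * MINUTES_PER_DAY / 1000:.1f} TB/day")
# A RED Komodo at ~16 GB/min lands near 2.9 TB per camera per day.
```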
Create a codec specification document before production. Specify what each location should shoot in and why. Don't ask locations to convert footage before transfer—it's inefficient and error-prone. Transfer native footage and handle color space conversions in post where you have consistent hardware and software.
The Direct Transfer Workflow
Once folder structure, naming conventions, and codec specifications are locked in, the actual workflow is straightforward. Each location organizes daily footage into the agreed structure, generates a metadata file and checksum manifest, then initiates a direct transfer to the post house.
Using a peer-to-peer solution like Handrive means you're not paying per gigabyte. Whether you transfer 10 GB or 10 TB, the cost remains the same. With eight locations, this cost elimination is significant. You also avoid the licensing complexity of enterprise tools that require annual contracts or per-location fees.
Set up a daily transfer schedule. Each location initiates its transfer at 6 PM local time, giving the post house time to ingest and verify overnight. By morning, everything's ready for editing. This creates a predictable rhythm rather than random transfers arriving at odd hours.
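If you want to see when each 6 PM hand-off lands at the post house, a few lines with Python's zoneinfo will do it. A sketch, assuming a Los Angeles post house and illustrative city/time zone pairs:

```python
# Sketch: when does each location's 6 PM local hand-off land at the
# post house? Post-house zone and city list are assumptions.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

POST_TZ = ZoneInfo("America/Los_Angeles")
LOCATIONS = {
    "Portland": "America/Los_Angeles",
    "NYC": "America/New_York",
    "London": "Europe/London",
}

for city, tz in LOCATIONS.items():
    local_start = datetime(2026, 3, 4, 18, 0, tzinfo=ZoneInfo(tz))
    print(f"{city}: arrives {local_start.astimezone(POST_TZ):%H:%M} post-house time")
```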
Verification and Backup
After transfer completes, verify file integrity using the checksum manifest. Post-production runs a quick script to confirm transferred files match their checksums. If any file was corrupted during transfer, you catch it immediately and request a retransfer. This takes minutes and saves days of rework.
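The verification script is the mirror image of the manifest generator sketched earlier. A minimal sketch, assuming the same "hash  relative/path" manifest format and placeholder paths:

```python
# Sketch: verify received files against the manifest generated on set.
# Expects "hash  relative/path" lines, as in the manifest sketch above.
import hashlib
from pathlib import Path

def verify(footage_dir: Path, manifest: Path) -> list[str]:
    """Return relative paths that are missing or fail their checksum."""
    failures = []
    for line in manifest.read_text().splitlines():
        expected, rel = line.split("  ", 1)
        target = footage_dir / rel
        if not target.exists():
            failures.append(f"{rel} (missing)")
            continue
        h = hashlib.sha256()
        with target.open("rb") as fh:
            while block := fh.read(1 << 20):  # 1 MiB chunks
                h.update(block)
        if h.hexdigest() != expected:
            failures.append(f"{rel} (checksum mismatch)")
    return failures

bad = verify(Path("ProjectName/Location_Portland/2026-03-04"),
             Path("ProjectName/Location_Portland/2026-03-04/manifest.sha256"))
print("all files verified" if not bad else f"retransfer needed: {bad}")
```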
Maintain duplicate storage at the post house. Don't rely on a single drive for multi-location footage. Back up to a second location as soon as transfer completes. Location teams should also maintain their own backups until post-production confirms receipt and integrity. Once confirmed, they can format their media cards.
Communication Protocol
Designate a DIT or assistant at each location as the transfer coordinator. Have them report transfer status daily—a simple message like "Portland: 8.5 TB transferred, all files verified, ready for post." Create a shared spreadsheet tracking which footage has arrived, which is pending, and which locations have completed their shoot. This prevents missed transfers and highlights bottlenecks immediately.
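The tracker doesn't need to be fancy; even an append-only CSV works. A sketch, where the column names and status values are assumptions rather than any standard:

```python
# Sketch: append each day's transfer report to a shared CSV tracker.
# Column names and status values are assumptions, not a standard.
import csv
from pathlib import Path

TRACKER = Path("transfer_tracker.csv")
FIELDS = ["date", "location", "tb_transferred", "verified", "status"]

def log_transfer(date: str, location: str, tb: float,
                 verified: bool, status: str) -> None:
    is_new = not TRACKER.exists()
    with TRACKER.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"date": date, "location": location,
                         "tb_transferred": tb, "verified": verified,
                         "status": status})

log_transfer("2026-03-04", "Portland", 8.5, True, "ready for post")
```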
Build in buffer time for unexpected delays. Internet outages happen. Drives fail. Plan your timeline assuming one or two of your eight locations will hit a transfer delay at some point. This buffer makes the difference between a smooth production and a crisis.
The Economics at Scale
If eight locations each generate 10 TB daily over a 30-day shoot, that's 2.4 petabytes transferred. Per-GB cloud transfer services would cost $50,000–$100,000 at typical rates. Peer-to-peer transfer eliminates this cost entirely, since data travels directly from source to destination. Beyond cost, cloud transfers are slow at scale. Each location waiting for uploads means delayed starts downstream and compressed timelines.
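Here's the back-of-envelope math, for anyone who wants to check it (the per-GB range is an assumption about typical cloud egress pricing):

```python
# Sketch: the arithmetic behind the figures above. The $0.02-$0.04/GB
# range is an assumed typical cloud egress rate, not a quoted price.
locations, tb_per_day, days = 8, 10, 30
total_tb = locations * tb_per_day * days   # 2,400 TB
total_gb = total_tb * 1000

print(f"total: {total_tb:,} TB ({total_tb / 1000} PB)")
for rate in (0.02, 0.04):
    print(f"at ${rate}/GB: ${total_gb * rate:,.0f}")
# total: 2,400 TB (2.4 PB)
# at $0.02/GB: $48,000
# at $0.04/GB: $96,000
```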
The direct transfer approach also scales linearly. Adding a ninth or tenth location doesn't multiply your costs or require new infrastructure. Each location simply initiates its transfer on the same schedule using the same folder structure.
Putting It All Together
Multi-location productions create a complex logistics puzzle, but the workflow itself is straightforward when organized properly. Lock in your folder structure and naming conventions before the first camera rolls. Collect metadata and checksums at every location. Use direct P2P transfer so each crew sends footage straight to your post house without cloud intermediaries. Do a test transfer with one location before the full production launches.
For more on production transfer workflows, see our guides on raw footage transfer methods, DIT on-set file transfer, and transferring dailies from set to post.
Streamline Multi-Location Footage Transfer
Instead of managing cloud uploads or drive shipments across multiple cities, use Handrive's P2P transfer to send footage directly from each location to your post house. No per-gigabyte charges, no bottlenecks, no waiting.
Get Early Access