[Image: Abstract visualization of chunked file upload data flow]

Chunked, Resumable File Uploads for the Modern Web

Resumable.js splits large files into manageable chunks, uploads them independently, and picks up exactly where it left off after any interruption. This documentation hub covers everything from initial setup to production-grade deployment — chunking strategies, retry logic, server receivers, CORS, caching, and operational best practices.

How It Works

From File Selection to Complete Upload

Resumable.js handles the complexity of chunked uploads so your application code stays clean. Here is what happens under the hood during a typical upload.

1. Slice

The file is divided into configurable chunks using File.slice()

Complete: 8/8 chunks (100%)

2. Upload

Chunks are uploaded in parallel with configurable concurrency

Active: 5/8 chunks (63%)

3. Retry

Failed chunks are retried automatically with backoff

Retrying: 6/8 chunks (75%)

4. Resume

After interruption, only missing chunks are re-sent

Complete: 8/8 chunks (100%)

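
The slicing step above is plain byte arithmetic. Here is a minimal sketch (the function name `chunkRanges` is illustrative, not part of the Resumable.js API) of the ranges that `File.slice()` produces for a given file and chunk size:

```javascript
// Illustrative sketch (not Resumable.js source): compute the byte ranges
// that File.slice(start, end) would produce for a file of a given size.
function chunkRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    // File.slice is end-exclusive; the final chunk may be shorter.
    ranges.push([start, Math.min(start + chunkSize, totalSize)]);
  }
  return ranges;
}

// An 8.5 MB file with 1 MB chunks yields 9 chunks, the last one 0.5 MB.
const MB = 1024 * 1024;
console.log(chunkRanges(8.5 * MB, MB).length); // 9
```

Each range maps to one independent HTTP request, which is what makes the retry and resume steps possible.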
Core Concepts

What Makes Chunked Uploads Different

Traditional file uploads send the entire file in a single HTTP request. This works fine for profile pictures and small documents, but breaks down quickly with larger files. A single dropped packet at 95% progress means starting over. A server timeout at the proxy layer means the user sees a cryptic error. Chunked uploading changes the failure model entirely.

With Resumable.js, a 2 GB video file becomes (at the default 1 MB chunk size) roughly 2,000 small, independent uploads. Each chunk carries metadata identifying which file it belongs to and which position it occupies. The server stores chunks independently and assembles the final file once all pieces have arrived. If the connection drops after 1,500 chunks, only the in-flight chunk is lost — and when the user reconnects, Resumable.js queries the server to find out which chunks already exist and skips them.
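The resume logic reduces to simple set arithmetic once the server reports which chunks it already holds. A sketch of that selection step, with an illustrative function name (`missingChunks` is not a Resumable.js API; chunk numbers are 1-based, matching the library's `resumableChunkNumber` parameter):

```javascript
// Illustrative sketch: given the chunk numbers the server reports as already
// stored, return the chunk numbers that still need to be uploaded.
function missingChunks(totalChunks, storedChunkNumbers) {
  const stored = new Set(storedChunkNumbers);
  const missing = [];
  for (let n = 1; n <= totalChunks; n++) {
    if (!stored.has(n)) missing.push(n);
  }
  return missing;
}

// 2,000 chunks, 1,500 already on the server: only 500 are re-sent.
const stored = Array.from({ length: 1500 }, (_, i) => i + 1);
console.log(missingChunks(2000, stored).length); // 500
```

In practice Resumable.js probes the server for existing chunks before uploading (its `testChunks` behavior), so the client never re-sends bytes the server already has.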

This approach has been battle-tested across medical imaging systems handling DICOM files over hospital Wi-Fi, video production pipelines with 50 GB rushes, and SaaS platforms where users upload datasets from inconsistent mobile connections. The pattern scales because the complexity lives in the library and the server receiver, not in your application code.

Production Context

Built for Real Networks

Every guide and example in this documentation draws on patterns observed in production systems. The chunking math accounts for mobile networks with 300ms round-trip times. The retry guidance factors in the reality that cellular connections drop mid-request. The server receiver examples handle the edge cases — duplicate chunks, out-of-order arrival, abandoned uploads consuming disk space.
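The receiver edge cases mentioned above, duplicate chunks and out-of-order arrival, reduce to idempotent bookkeeping on the server. A minimal sketch, with hypothetical names (a real receiver would persist chunks to disk or object storage, not memory):

```javascript
// Illustrative sketch of server-side chunk bookkeeping. UploadTracker is a
// hypothetical name; real receivers also need persistence and cleanup of
// abandoned uploads.
class UploadTracker {
  constructor(totalChunks) {
    this.totalChunks = totalChunks;
    this.received = new Set();
  }
  // Idempotent: recording the same chunk twice is a no-op, so duplicate
  // deliveries and out-of-order arrival are both harmless.
  addChunk(chunkNumber) {
    this.received.add(chunkNumber);
    return this.isComplete();
  }
  isComplete() {
    return this.received.size === this.totalChunks;
  }
}

const upload = new UploadTracker(3);
upload.addChunk(3);               // out of order: fine
upload.addChunk(1);
upload.addChunk(1);               // duplicate: no effect
console.log(upload.isComplete()); // false
console.log(upload.addChunk(2));  // true: all 3 chunks present
```

Because the tracker counts distinct chunk numbers rather than requests, retries and races cannot corrupt the completeness check; assembly only triggers once every position is filled.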

File uploads are one of those features that seem simple until they are not. The gap between a working demo and a production-ready implementation is wider than most teams expect. This documentation exists to close that gap — providing the operational detail, configuration guidance, and practical examples that turn a chunked upload prototype into something you can deploy with confidence.

Whether you are handling 10 MB documents or 50 GB media files, the principles are the same. The parameters change — chunk sizes, concurrency levels, timeout values — but the architecture holds. Start with the documentation overview to build your understanding, then dive into the specific guides relevant to your deployment context.