
Chunked, Resumable File Uploads for the Modern Web
Resumable.js splits large files into manageable chunks, uploads them independently, and picks up exactly where it left off after any interruption. This documentation hub covers everything from initial setup to production-grade deployment — chunking strategies, retry logic, server receivers, CORS, caching, and operational best practices.
From File Selection to Complete Upload
Resumable.js handles the complexity of chunked uploads so your application code stays clean. Here is what happens under the hood during a typical upload.
1. Slice
File is divided into configurable chunks using File.slice()
2. Upload
Chunks are uploaded in parallel with configurable concurrency
3. Retry
Failed chunks are retried automatically with backoff
4. Resume
After interruption, only missing chunks are re-sent
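The slicing step above is plain offset arithmetic over the file's byte length. A minimal sketch of that arithmetic, assuming the library's default 1 MB chunk size — chunkBoundaries is a hypothetical helper for illustration, not part of the Resumable.js API:

```javascript
// Sketch of the slicing arithmetic behind step 1. CHUNK_SIZE mirrors the
// library's 1 MB default; chunkBoundaries is a hypothetical helper, not
// part of the Resumable.js API.
const CHUNK_SIZE = 1 * 1024 * 1024; // 1 MB default

function chunkBoundaries(fileSize, chunkSize = CHUNK_SIZE) {
  const count = Math.max(1, Math.ceil(fileSize / chunkSize));
  const chunks = [];
  for (let i = 0; i < count; i++) {
    chunks.push({
      number: i + 1,                                // chunk numbers are 1-based
      start: i * chunkSize,                         // offset passed to File.slice()
      end: Math.min((i + 1) * chunkSize, fileSize), // exclusive end offset
    });
  }
  return chunks;
}

// A 2.5 MB file yields three chunks; the final chunk is smaller.
const parts = chunkBoundaries(2.5 * 1024 * 1024);
// parts.length === 3; parts[2].end - parts[2].start === 0.5 * 1024 * 1024
```

Each boundary pair maps directly onto a File.slice(start, end) call in the browser, which is why slicing itself is cheap: no bytes are copied until a chunk is actually sent.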
What Makes Chunked Uploads Different
Traditional file uploads send the entire file in a single HTTP request. This works fine for profile pictures and small documents, but breaks down quickly with larger files. A dropped connection at 95% progress means starting over. A server timeout at the proxy layer means the user sees a cryptic error. Chunked uploading changes the failure model entirely.
With Resumable.js, a 2 GB video file becomes (at the default 1 MB chunk size) roughly 2,000 small, independent uploads. Each chunk carries metadata identifying which file it belongs to and which position it occupies. The server stores chunks independently and assembles the final file once all pieces have arrived. If the connection drops after 1,500 chunks, only the chunks in flight are lost — and when the user reconnects, Resumable.js queries the server to find out which chunks already exist and skips them.
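That resume query reduces to set arithmetic: given the chunk numbers the server reports as already stored, only the remainder is re-sent. A minimal sketch — missingChunks is a hypothetical helper for illustration, not part of the Resumable.js API:

```javascript
// Given the total chunk count and the chunk numbers the server already
// holds, compute which chunks still need uploading. missingChunks is a
// hypothetical helper, not part of the Resumable.js API.
function missingChunks(totalChunks, receivedNumbers) {
  const received = new Set(receivedNumbers);
  const missing = [];
  for (let n = 1; n <= totalChunks; n++) {
    if (!received.has(n)) missing.push(n);
  }
  return missing;
}

// 2,000 chunks total, 1,500 already on the server: only 500 go back out.
```

In practice Resumable.js performs this check per chunk when its testChunks option is enabled: before uploading a chunk it issues a GET request with the chunk's metadata, and a 200 response from the server marks that chunk as already present and skips the upload.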
This approach has been battle-tested across medical imaging systems handling DICOM files over hospital Wi-Fi, video production pipelines with 50 GB rushes, and SaaS platforms where users upload datasets from inconsistent mobile connections. The pattern scales because the complexity lives in the library and the server receiver, not in your application code.
Chunking Strategy
Chunk size selection, throughput optimization, and the math behind efficient uploads.
Retries and Resume
Handling failures gracefully with automatic retry logic and server-side chunk tracking.
CORS Configuration
Setting up Cross-Origin Resource Sharing for upload endpoints.
Server Receivers
Implementing chunk receivers in Node.js, Python, PHP, and other platforms.
Security
Authentication, input sanitization, and malware scanning for upload endpoints.
S3 and Object Storage
Integrating chunked uploads with S3-compatible storage services.
Jump Into the Documentation
Documentation Overview
Start here — core concepts, architecture, and how Resumable.js works.
API Reference
Configuration options, instance methods, and event handlers.
Examples
Working code for basic uploads, React integration, drag-and-drop, and resume.
Operations
Caching, timeouts, rate limits, and logging for production deployments.
All Guides
In-depth guides covering chunking, retries, CORS, security, and more.
Changelog
Documentation updates, compatibility notes, and maintenance history.
Built for Real Networks
Every guide and example in this documentation draws on patterns observed in production systems. The chunking math accounts for mobile networks with 300ms round-trip times. The retry guidance factors in the reality that cellular connections drop mid-request. The server receiver examples handle the edge cases — duplicate chunks, out-of-order arrival, abandoned uploads consuming disk space.
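Those receiver edge cases largely reduce to idempotent chunk storage: accept duplicates silently, tolerate any arrival order, and assemble only once every chunk is present. A minimal in-memory sketch — a production receiver would persist chunks to disk or object storage, and ChunkStore and its methods are hypothetical, not a Resumable.js API:

```javascript
// Minimal in-memory chunk receiver sketch. ChunkStore is hypothetical,
// for illustration only; a real receiver would write chunks to disk or
// object storage rather than hold them in memory.
class ChunkStore {
  constructor(totalChunks) {
    this.totalChunks = totalChunks;
    this.chunks = new Map(); // chunkNumber -> Buffer
  }

  // Idempotent: storing the same chunk twice is a no-op, so a duplicate
  // delivery (e.g. a retried request that actually succeeded) is harmless.
  put(number, data) {
    if (!this.chunks.has(number)) this.chunks.set(number, data);
  }

  isComplete() {
    return this.chunks.size === this.totalChunks;
  }

  // Concatenate in chunk order, regardless of arrival order.
  assemble() {
    if (!this.isComplete()) throw new Error('upload incomplete');
    const parts = [];
    for (let n = 1; n <= this.totalChunks; n++) parts.push(this.chunks.get(n));
    return Buffer.concat(parts);
  }
}
```

The remaining edge case, abandoned uploads, is a cleanup concern rather than an assembly one: a periodic job that expires partial uploads past some age keeps orphaned chunks from consuming disk space indefinitely.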
File uploads are one of those features that seem simple until they are not. The gap between a working demo and a production-ready implementation is wider than most teams expect. This documentation exists to close that gap — providing the operational detail, configuration guidance, and practical examples that turn a chunked upload prototype into something you can deploy with confidence.
Whether you are handling 10 MB documents or 50 GB media files, the principles are the same. The parameters change — chunk sizes, concurrency levels, timeout values — but the architecture holds. Start with the documentation overview to build your understanding, then dive into the specific guides relevant to your deployment context.