What Resumable.js Does and Why It Matters
After spending years integrating file uploads into production applications — from hospital imaging platforms to video-editing suites — a pattern emerges quickly. Traditional single-request uploads break down the moment file sizes cross a threshold. That threshold varies by network quality, server configuration, and user patience, but it is surprisingly low. Resumable.js exists to solve exactly this category of problem: it splits files into chunks, uploads each chunk independently, tracks progress per-chunk, and — critically — allows uploads to resume after interruption without re-sending data that already arrived on the server.
This page covers the foundational concepts behind Resumable.js, walks through how the library fits into a typical upload architecture, and points toward the detailed guides, API reference, and examples you will find throughout this documentation hub. Whether you are evaluating Resumable.js for a new project or trying to optimize an existing integration, the material here should anchor your understanding of how the library operates under the hood.
How Chunked Uploads Work
At its core, Resumable.js leverages the HTML5 File API — specifically `File.slice()` — to divide a selected file into discrete byte ranges. Each range is uploaded as a separate HTTP request, typically a `multipart/form-data` POST. The server receives these chunks and assembles them, either by appending to a temporary file or writing to a chunk storage directory.
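The slicing itself takes only a few lines of plain JavaScript. This is a simplified sketch of the idea, not the library's internal code:

```javascript
// Split a File (or any Blob) into fixed-size byte ranges, as Resumable.js
// does internally via File.slice().
function sliceIntoChunks(file, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    // slice(start, end) is exclusive of `end` and clamps past end-of-file
    chunks.push(file.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// A 2.5 MB blob with 1 MB chunks yields 3 chunks: 1 MB, 1 MB, 0.5 MB
const blob = new Blob([new Uint8Array(2.5 * 1024 * 1024)]);
const parts = sliceIntoChunks(blob, 1024 * 1024);
```

Because `slice()` returns lightweight references into the original file rather than copies, chunking even multi-gigabyte files is cheap; no data is read until each chunk is actually sent.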
Why does this matter? Three practical reasons:
- Failure isolation. When a 2 GB upload fails at 1.8 GB in a single-request model, you lose everything. With chunked uploads, only the in-flight chunk is lost. The remaining 1.79 GB is already server-side.
- Parallelism. Multiple chunks can be in flight simultaneously, saturating available bandwidth more effectively than a single TCP stream.
- Progress granularity. Per-chunk tracking gives users accurate, non-jumpy progress feedback. Instead of a browser-estimated progress bar, you know exactly which chunks have been acknowledged.
Architecture at a Glance
A Resumable.js integration has two halves: the client library and a server-side receiver.
Client-side: The Resumable.js constructor accepts a configuration object specifying the upload target URL, chunk size, simultaneous upload count, retry behavior, and various callbacks. Once files are added (via file input or drag-and-drop), the library manages the chunking, upload queue, and retry logic internally.
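A minimal setup might look like the following; the endpoint URL and element IDs are illustrative, while `assignBrowse`, `assignDrop`, `on`, and `upload` are part of the Resumable.js public API. The browser-only calls are guarded so the snippet is inert outside a page:

```javascript
// Options passed to the Resumable constructor (values are illustrative).
const opts = {
  target: '/upload',          // chunk POSTs (and GET test requests) go here
  chunkSize: 1 * 1024 * 1024, // 1 MB per chunk
  simultaneousUploads: 3,
  testChunks: true            // enable resume via GET test requests
};

// In the browser, where resumable.js and the DOM exist:
if (typeof Resumable !== 'undefined') {
  const r = new Resumable(opts);

  // Wire a browse button and a drop zone; the library manages the file input
  r.assignBrowse(document.getElementById('browse-button'));
  r.assignDrop(document.getElementById('drop-area'));

  // Start uploading as soon as a file enters the queue
  r.on('fileAdded', function () { r.upload(); });
}
```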
Server-side: The server receives individual chunk requests. Each request carries metadata — the chunk number, total chunks, a unique file identifier, the original file name, and the total file size. The server must store each chunk and, once all chunks have arrived, reassemble the file. Test requests (GET) allow the client to ask "have you already received chunk N?" — which powers the resume behavior.
This architecture keeps concerns separated. The client handles file slicing, queue management, and user feedback. The server handles storage, assembly, and deduplication.
The Upload Lifecycle
Understanding the lifecycle helps when debugging or customizing behavior:
1. File selection. The user picks files via an `<input>` element or drops them onto a drop target.
2. Chunking. Resumable.js calculates how many chunks the file requires based on `chunkSize` and begins building the upload queue.
3. Test requests (optional). Before sending a chunk, the library may send a GET request to ask the server if that chunk already exists. If it does, the chunk is skipped — this is how resume works.
4. Upload. Each chunk is POSTed to the target endpoint. The library respects `simultaneousUploads` to control concurrency.
5. Retry on failure. If a chunk upload fails, it is retried up to `maxChunkRetries` times with a configurable delay between attempts.
6. Completion. When all chunks for a file have been successfully uploaded, the `fileSuccess` event fires. The server can then assemble the final file.
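Steps 3 through 5 can be sketched as a small async routine. `testChunk` and `uploadChunk` are hypothetical stand-ins for the GET test request and the chunk POST, not Resumable.js internals:

```javascript
// Simplified per-chunk flow: skip chunks the server already has,
// otherwise POST with bounded retries (lifecycle steps 3-5).
async function sendChunk(chunkNumber, { testChunk, uploadChunk, maxChunkRetries = 3 }) {
  if (await testChunk(chunkNumber)) {
    return 'skipped';   // server already has it: this is resume
  }
  for (let attempt = 0; attempt <= maxChunkRetries; attempt++) {
    if (await uploadChunk(chunkNumber)) {
      return 'uploaded';
    }
  }
  return 'failed';      // retries exhausted: a fileError situation
}
```

Injecting the transport functions keeps the control flow visible: resume is nothing more than skipping chunks the test request confirms are already stored.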
Configuration Essentials
The constructor options control virtually every aspect of upload behavior. A few that consistently matter in production:
- `target` — The upload endpoint URL. This is where chunk POST requests are sent.
- `chunkSize` — Bytes per chunk. Common values range from 1 MB to 10 MB depending on network conditions. Smaller chunks improve resume granularity; larger chunks reduce HTTP overhead.
- `simultaneousUploads` — How many chunks upload in parallel. Three is a reasonable default. Going higher can help on fast connections but may overwhelm servers or proxies.
- `testChunks` — Whether to send GET requests to check for existing chunks. Enable this for resume support.
- `maxChunkRetries` — How many times a failed chunk is retried before the library gives up. Production systems typically set this between 3 and 10.
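The `chunkSize` tradeoff is easy to quantify: the request count scales inversely with chunk size, while the worst-case data lost to an interruption scales with it.

```javascript
// HTTP requests needed for a given file size and chunk size.
// More requests = more HTTP overhead; smaller chunks = finer resume granularity.
const MB = 1024 * 1024;
const GB = 1024 * MB;
const requestsFor = (fileSize, chunkSize) => Math.ceil(fileSize / chunkSize);

requestsFor(2 * GB, 1 * MB);  // 2048 requests, at most 1 MB re-sent on failure
requestsFor(2 * GB, 10 * MB); // 205 requests, at most 10 MB re-sent
```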
Detailed documentation for every option is available in the API Configuration reference.
Events and Callbacks
Resumable.js communicates state changes through events. The most commonly used:
| Event | Fires When |
|---|---|
| `fileAdded` | A new file is added to the upload queue |
| `fileProgress` | A chunk completes, updating overall file progress |
| `fileSuccess` | All chunks for a file have been uploaded |
| `fileError` | A chunk has exhausted its retries |
| `uploadStart` | The upload process begins |
| `complete` | All files in the queue have finished |
| `error` | A non-recoverable error occurs |
These events drive UI updates — progress bars, status messages, error displays — and can trigger server-side actions via callbacks. See the full Events reference for parameters and usage.
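One way to keep those UI updates manageable is to funnel every event through a single state reducer. The shape of `state` and `payload` here is a hypothetical sketch; in a real page each case would be registered with `r.on('<event>', handler)`:

```javascript
// Map upload events onto a simple UI state object.
function reduceUploadState(state, event, payload) {
  switch (event) {
    case 'fileAdded':
      return { ...state, queued: state.queued + 1, status: 'queued' };
    case 'fileProgress':
      // payload.progress is a 0..1 fraction in this sketch
      return { ...state, progress: payload.progress, status: 'uploading' };
    case 'fileSuccess':
      return { ...state, progress: 1, status: 'done' };
    case 'fileError':
      return { ...state, status: 'error', lastError: payload.message };
    default:
      return state;
  }
}

let state = { queued: 0, progress: 0, status: 'idle' };
state = reduceUploadState(state, 'fileAdded', {});
state = reduceUploadState(state, 'fileProgress', { progress: 0.5 });
```

Keeping the event-to-state mapping in one pure function makes progress bars and status messages a straight render of `state`, and makes the upload UI testable without a network.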
Where to Go Next
This overview provides the conceptual foundation. From here, the documentation branches into several practical areas:
- Chunking Strategy — Deep dive into chunk size selection, throughput optimization, and HTTP behavior.
- Retries and Resume — Handling network failures gracefully.
- Server Receivers — Implementing the server-side half in Node.js, Python, PHP, and more.
- API Reference — Complete configuration, methods, and events documentation.
- Examples — Working code for common integration patterns.
- Operations — Caching, timeouts, rate limits, and logging for production deployments.
Each guide is written from practical experience and includes the kind of implementation detail that typically only surfaces after debugging production issues. The goal is not to repeat what you can find in a README, but to fill the gaps that appear once file uploads hit real networks with real users.
