
Chunking Strategy

How to configure chunk sizes, slice files efficiently, and optimize upload throughput with Resumable.js.

Guides · Updated 2025-10-22

Chunking is the foundational mechanism that makes Resumable.js work. Every file you upload gets sliced into smaller pieces, sent independently, and reassembled on the server. Getting the chunk size right—and understanding the trade-offs behind that number—separates a mediocre upload experience from a bulletproof one. This guide covers chunk size selection, how File.slice() operates under the hood, throughput optimization, parallel upload strategies, and a practical tuning methodology you can apply to any deployment. For more Resumable.js patterns beyond chunking, browse the full guides hub.

Diagram showing file being split into sequential byte-range chunks

How File.slice() Works

The browser's File.slice(start, end) method returns a Blob representing a byte range of the original file. No data is copied into memory eagerly—the blob is a lightweight reference until you actually read it (for instance, by appending it to a FormData object). Resumable.js calls slice() for each chunk right before upload, which means a 4 GB file doesn't balloon your heap. The browser streams bytes off disk on demand.

const r = new Resumable({
  chunkSize: 2 * 1024 * 1024, // 2 MB per chunk
  target: '/api/upload',
});

That chunkSize value, expressed in bytes, determines the end - start window for every slice. A 50 MB file at 2 MB per chunk yields 25 HTTP requests. Simple math—but the implications cascade through your entire upload pipeline.
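To make the slicing arithmetic concrete, here's a small sketch of the byte-range bookkeeping Resumable.js performs internally. `chunkRanges` is an illustrative helper, not part of the library's API:

```javascript
// Illustrative helper (not part of Resumable.js): compute the
// [start, end) byte ranges that File.slice() would be called with.
// No bytes are read here; each range is just a pair of offsets.
function chunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

// A 50 MB file at 2 MB per chunk yields 25 ranges, i.e. 25 HTTP requests.
// The final range is clamped to the file size, so the last chunk may be
// smaller than chunkSize.
```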

Chunk Size Selection: The Practical Math

Think of chunk size as a dial between two extremes. Tiny chunks (say, 256 KB) mean more HTTP round-trips, more server-side bookkeeping, and more overhead per byte transferred. Enormous chunks (50 MB+) reduce request count but make each individual request fragile—if a 50 MB chunk fails at 98%, you retransmit the whole thing.

Here's a rough formula to start with:

Target chunks per file = file size ÷ chunk size

For most web applications handling files between 10 MB and 2 GB, a chunk size of 1–5 MB hits the sweet spot. That keeps the chunk count manageable (a 500 MB file at 5 MB per chunk is 100 requests) while ensuring any single retry wastes at most a few megabytes of bandwidth.

Concrete starting points

File size range | Suggested chunk size | Chunks per file
----------------|----------------------|----------------
< 10 MB         | 1 MB                 | 1–10
10–100 MB       | 2 MB                 | 5–50
100 MB–1 GB     | 5 MB                 | 20–200
> 1 GB          | 10 MB                | 100+

These aren't rigid rules. They're a baseline. Your network conditions, server capacity, and user expectations should shift them.
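As a starting point, the baseline table could be encoded in a helper like this. `suggestChunkSize` and its thresholds are illustrative, not part of Resumable.js:

```javascript
// Hypothetical helper encoding the baseline table above; the
// thresholds are starting points to tune, not fixed rules.
function suggestChunkSize(fileSize) {
  const MB = 1024 * 1024;
  if (fileSize < 10 * MB) return 1 * MB;
  if (fileSize < 100 * MB) return 2 * MB;
  if (fileSize < 1024 * MB) return 5 * MB;
  return 10 * MB;
}
```

Because chunkSize is fixed when the Resumable instance is constructed, you'd feed this a typical or expected file size for your application rather than computing it per upload.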

Throughput vs. Overhead Trade-offs

Every chunk upload carries fixed overhead: TCP handshake (unless you're reusing connections with HTTP keep-alive), TLS negotiation, HTTP headers, and server-side processing to receive and acknowledge the chunk. The HTTP semantics governing these interactions are defined in RFC 9110, and understanding request/response lifecycle helps explain why overhead matters.

With a 256 KB chunk size, headers alone might represent 1–2% of the payload. At 5 MB, that overhead drops to a rounding error. But bigger chunks increase the cost of failure. If a user is on a train with spotty cell coverage, a 10 MB chunk that fails at 80% means 8 MB wasted. A 1 MB chunk failing at 80% wastes 800 KB. That's an order of magnitude difference in retransmission cost.

The throughput equation looks roughly like this:

Effective throughput = (chunk size / (chunk size + overhead)) × raw bandwidth

As chunk size grows, the fraction approaches 1.0 and you saturate your available bandwidth. But diminishing returns kick in quickly—going from 1 MB to 2 MB chunks is noticeable; going from 10 MB to 20 MB rarely is.
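You can play with the equation directly. The helper below is a back-of-the-envelope model, assuming a fixed per-request overhead (here 8 KB for handshakes and headers, a plausible but made-up figure):

```javascript
// Back-of-the-envelope model of the throughput equation above.
// `overhead` is an assumed fixed per-request cost (handshakes +
// headers); 8 KB is illustrative, not measured.
function effectiveThroughput(chunkSize, rawBandwidth, overhead = 8 * 1024) {
  return (chunkSize / (chunkSize + overhead)) * rawBandwidth;
}

const MB = 1024 * 1024;
effectiveThroughput(1 * MB, 100);  // ≈ 99.2% of raw bandwidth
effectiveThroughput(2 * MB, 100);  // ≈ 99.6%, a noticeable gain
effectiveThroughput(20 * MB, 100); // ≈ 99.96%, barely better than 10 MB
```

The numbers make the diminishing returns visible: doubling from 1 MB to 2 MB recovers about ten times more bandwidth than doubling from 10 MB to 20 MB.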

Network Condition Considerations

Not all users sit behind a symmetric gigabit connection. Mobile networks introduce latency spikes, packet loss, and variable throughput. Corporate proxies sometimes kill connections that stay open too long. Satellite links add hundreds of milliseconds of latency per round-trip.

For applications serving a global audience, consider adaptive chunk sizing. You can measure the time each chunk takes and adjust dynamically:

let currentChunkSize = 2 * 1024 * 1024; // start at 2 MB
let lastBytes = 0;
let lastTime = Date.now();

r.on('fileProgress', (file) => {
  // Approximate per-chunk timing from progress deltas; Resumable.js
  // doesn't expose individual chunk durations directly.
  const now = Date.now();
  const bytes = file.progress() * file.size;
  if (bytes - lastBytes >= currentChunkSize) {
    const chunkTime = now - lastTime;
    if (chunkTime > 10000) {
      // Over 10 s per chunk: halve, with a 512 KB floor
      currentChunkSize = Math.max(512 * 1024, currentChunkSize / 2);
    } else if (chunkTime < 2000) {
      // Under 2 s per chunk: grow by 50%, with a 10 MB ceiling
      currentChunkSize = Math.min(10 * 1024 * 1024, Math.round(currentChunkSize * 1.5));
    }
    lastBytes = bytes;
    lastTime = now;
  }
});

This kind of adaptive logic isn't built into Resumable.js out of the box, but the event system gives you everything you need to implement it. Note that a file's chunks are computed when it's added, so an adjusted chunk size only takes effect for files added afterward. Is it worth the complexity? For consumer-facing products where users upload on unpredictable networks—absolutely.

Parallel Chunk Strategies

Resumable.js supports simultaneous chunk uploads through the simultaneousUploads parameter:

const r = new Resumable({
  chunkSize: 2 * 1024 * 1024,
  simultaneousUploads: 3,
  target: '/api/upload',
});

Three parallel uploads can dramatically improve throughput on high-latency connections, because you're filling the pipe while waiting for acknowledgments. But there's a ceiling. Browsers typically cap concurrent connections to the same origin at six (HTTP/1.1) or multiplex over a single connection (HTTP/2). Pushing simultaneousUploads above 3–4 rarely helps and can actually backfire if your server throttles per-client connections.

Parallelism and ordering

Chunks don't need to arrive in order. Resumable.js tags each chunk with resumableChunkNumber, and your server can accept them in any sequence. This is a feature, not a bug—it means chunk 17 doesn't wait for chunk 16 to finish before it starts uploading.

However, your reassembly logic needs to handle gaps. Don't concatenate chunks as they arrive. Wait until all chunks for a given file are present, then assemble in order.
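A gap-safe reassembly check might look like the following Node-style sketch. The Map of buffers keyed by resumableChunkNumber is an assumption about how your server stores received chunks, not something Resumable.js prescribes:

```javascript
// Node-style sketch of gap-safe reassembly. `receivedChunks` is a Map
// from resumableChunkNumber (1-based) to that chunk's bytes; the
// storage shape is an assumption for illustration.
function tryAssemble(receivedChunks, totalChunks) {
  for (let n = 1; n <= totalChunks; n++) {
    if (!receivedChunks.has(n)) return null; // gap: keep waiting
  }
  // Every chunk is present: concatenate in chunk-number order,
  // regardless of the order the chunks arrived in.
  return Buffer.concat(
    Array.from({ length: totalChunks }, (_, i) => receivedChunks.get(i + 1))
  );
}
```

Call this after persisting each incoming chunk; it returns null until the set is complete, then the assembled file.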

Chunk Size Tuning Methodology

Here's a repeatable process for finding your optimal chunk size:

  1. Baseline: Set chunkSize to 2 MB and simultaneousUploads to 3. Upload a representative file (pick something close to your median file size).
  2. Measure: Record wall-clock time, successful chunk count, retry count, and total bytes transferred (including retransmissions).
  3. Adjust one variable: Double the chunk size to 4 MB. Repeat the measurement. Then try 1 MB. Compare.
  4. Stress test: Simulate poor network conditions using browser DevTools throttling (3G preset). Which chunk size completes reliably? Which causes timeout cascades?
  5. Monitor in production: Log chunk upload times and retry rates. Aggregate weekly. If retry rates creep above 5%, your chunks might be too large for a segment of your user base.
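Step 5 can be as simple as a retry-rate aggregation over logged chunk events. The `{ ok: boolean }` event shape below is an assumption about your logging format:

```javascript
// Weekly retry-rate aggregation over logged chunk-upload events.
// The { ok: boolean } event shape is an assumed logging format.
function retryRate(events) {
  if (events.length === 0) return 0;
  const failures = events.filter((e) => !e.ok).length;
  return failures / events.length;
}

// If this creeps above 0.05 for a user segment, their chunks are
// probably too large for their network conditions.
```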

Monitoring Throughput

Instrumentation doesn't need to be elaborate. Resumable.js fires progress events at the file and chunk level. Capture timestamps and compute bytes-per-second:

const uploadStart = new Map();

// Resumable.js doesn't record a start time for you, so capture one
// per file, keyed by its uniqueIdentifier.
r.on('fileAdded', (file) => uploadStart.set(file.uniqueIdentifier, Date.now()));

r.on('fileProgress', (file) => {
  const elapsed = (Date.now() - uploadStart.get(file.uniqueIdentifier)) / 1000;
  const bytesUploaded = file.progress() * file.size;
  const throughput = bytesUploaded / elapsed; // bytes per second
  console.log(`${(throughput / 1024 / 1024).toFixed(2)} MB/s`);
});

Track this metric per user session. If you see bimodal distributions—some users cruising at 10 MB/s while others struggle at 200 KB/s—adaptive chunk sizing or a lower default chunk size might serve the slower cohort without meaningfully impacting the faster group.

Chunk size is one of those settings that seems trivial until it isn't. Get it right and uploads just work. Get it wrong and you're debugging timeout errors at 2 AM. Start conservative, measure relentlessly, and let the data guide your tuning.