Object storage is where most uploaded files end up, but getting Resumable.js chunks from the browser into an S3 bucket isn't as straightforward as pointing the target at an S3 endpoint. S3 has its own multipart upload protocol, its own chunk size minimums, and its own authentication model—none of which map one-to-one onto Resumable.js defaults. This guide covers the two main integration patterns (proxy and presigned), chunk-to-part alignment, assembly via the S3 API, and compatibility notes for MinIO, Google Cloud Storage, and Azure Blob Storage. For more Resumable.js integration patterns, check the guides hub.
Why Direct-to-S3 Chunked Uploads Differ
Resumable.js sends chunks as standard multipart/form-data POST requests to your target URL. S3's multipart upload API expects something entirely different: an InitiateMultipartUpload call that returns an upload ID, then individual UploadPart calls with specific headers (Content-Length, Content-MD5, a part number), and finally a CompleteMultipartUpload call that lists all parts and their ETags.
You can't send a Resumable.js chunk directly to S3 without an intermediary that translates between the two protocols. The question is where that translation happens.
Pattern 1: The Proxy Approach
The most common pattern routes all uploads through your application server:
Browser (Resumable.js) → Your Server → S3
Your server receives chunks exactly as described in the server receivers pattern, but instead of writing them to local disk, it forwards them to S3 as multipart upload parts.
Implementation flow
1. First chunk arrives: Your server calls CreateMultipartUpload to get an S3 upload ID. Store the mapping between the Resumable.js identifier and the S3 upload ID (in Redis, a database, or memory for single-server setups).
2. Each chunk: Convert the Resumable.js chunk number to an S3 part number (they're both 1-based, so this is usually a direct mapping). Call UploadPart with the chunk data. Store the returned ETag.
3. Last chunk: After receiving all chunks, call CompleteMultipartUpload with the list of part numbers and ETags.
const fs = require('fs');
const express = require('express');
const multer = require('multer'); // writes each incoming chunk to a temp file
const { S3Client, CreateMultipartUploadCommand, UploadPartCommand,
        CompleteMultipartUploadCommand } = require('@aws-sdk/client-s3');

const app = express();
const upload = multer({ dest: '/tmp/chunks' });
const s3 = new S3Client({ region: 'us-east-1' });
const uploads = new Map(); // identifier → { uploadId, parts: [], key }
app.post('/api/upload', upload.single('file'), async (req, res) => {
const { resumableIdentifier, resumableChunkNumber, resumableTotalChunks,
resumableFilename } = req.body;
const partNumber = parseInt(resumableChunkNumber, 10);
const totalChunks = parseInt(resumableTotalChunks, 10);
// Initialize multipart upload on first chunk
if (!uploads.has(resumableIdentifier)) {
const initCmd = new CreateMultipartUploadCommand({
Bucket: 'my-uploads',
Key: `uploads/${resumableFilename}`,
ContentType: req.body.resumableType || 'application/octet-stream',
});
const { UploadId } = await s3.send(initCmd);
uploads.set(resumableIdentifier, { uploadId: UploadId, parts: [], key: `uploads/${resumableFilename}` });
}
const uploadState = uploads.get(resumableIdentifier);
// Upload part to S3
const partCmd = new UploadPartCommand({
Bucket: 'my-uploads',
Key: uploadState.key,
UploadId: uploadState.uploadId,
PartNumber: partNumber,
Body: fs.readFileSync(req.file.path),
});
const { ETag } = await s3.send(partCmd);
uploadState.parts.push({ PartNumber: partNumber, ETag });
// Complete when all parts received. (If a chunk is retried, the same
// PartNumber can be pushed twice — dedupe by PartNumber in production.)
if (uploadState.parts.length === totalChunks) {
uploadState.parts.sort((a, b) => a.PartNumber - b.PartNumber);
const completeCmd = new CompleteMultipartUploadCommand({
Bucket: 'my-uploads',
Key: uploadState.key,
UploadId: uploadState.uploadId,
MultipartUpload: { Parts: uploadState.parts },
});
await s3.send(completeCmd);
uploads.delete(resumableIdentifier);
}
fs.unlinkSync(req.file.path);
res.sendStatus(200);
});
Trade-offs
The proxy pattern is simple to implement and keeps S3 credentials on your server. But all upload traffic passes through your application, doubling bandwidth usage (client→server, then server→S3) and adding latency. For small deployments this is perfectly fine. For high-volume systems pushing terabytes per day, the bandwidth cost adds up.
Pattern 2: Presigned URLs
The presigned URL pattern sends data directly from the browser to S3, bypassing your server for the heavy lifting:
Browser → Your Server (get presigned URL) → Browser → S3 (direct upload)
Implementation flow
1. Client requests upload session: Before starting the upload, your frontend asks your server to initiate an S3 multipart upload and return presigned URLs for each part.
2. Server creates presigned URLs: Call CreateMultipartUpload, then generate a presigned URL for each expected part using UploadPart presigned requests.
3. Client uploads directly to S3: Override Resumable.js's target dynamically to use the presigned URL for each chunk.
4. Client reports completion: After all parts upload, the client tells your server, which calls CompleteMultipartUpload.
This pattern requires more client-side orchestration. Resumable.js's target can be a function that returns a different URL per chunk:
const r = new Resumable({
  target: (params) => {
    // params is an array of 'key=value' strings, e.g. 'resumableChunkNumber=3'
    const chunkParam = params.find((p) => p.startsWith('resumableChunkNumber='));
    const chunkNumber = parseInt(chunkParam.split('=')[1], 10);
    return presignedUrls[chunkNumber];
  },
  uploadMethod: 'PUT', // S3 presigned UploadPart URLs expect PUT
  method: 'octet',     // send raw chunk bytes, not multipart/form-data
});
The complexity is higher, but your server only handles lightweight coordination requests instead of streaming gigabytes of file data.
Chunk-to-Part Alignment
Here's a critical detail: S3 requires each part (except the last) to be at least 5 MB. Resumable.js defaults to 1 MB chunks. If you send 1 MB parts to S3, the CompleteMultipartUpload call will fail.
You have two options:
- Set Resumable.js chunkSize to 5 MB or larger. This is the simplest fix: each chunk maps directly to one S3 part.
- Buffer chunks server-side. Collect multiple Resumable.js chunks until you have ≥5 MB, then upload them as a single S3 part. This preserves smaller client-side chunks (better for retry on poor networks) at the cost of server-side buffering logic.
const r = new Resumable({
chunkSize: 5 * 1024 * 1024, // Minimum for S3 parts
target: '/api/upload',
});
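The second option, buffering server-side until an S3-sized part accumulates, has no single canonical shape, but a minimal sketch might look like this. PartBuffer and its injected uploadPart callback are hypothetical names; in practice uploadPart would wrap an UploadPartCommand call, and chunks must arrive in order (set simultaneousUploads: 1 on the client, or reassemble per part first):

```javascript
// Sketch: accumulate sub-5 MB Resumable.js chunks into S3-sized parts.
// uploadPart(partNumber, buffer) is injected by the caller.
const S3_MIN_PART = 5 * 1024 * 1024;

class PartBuffer {
  constructor(uploadPart, minPartSize = S3_MIN_PART) {
    this.uploadPart = uploadPart;
    this.minPartSize = minPartSize;
    this.pending = [];
    this.pendingBytes = 0;
    this.nextPartNumber = 1;
  }

  // Add one in-order chunk; flush a part once the 5 MB floor is reached.
  async addChunk(chunk) {
    this.pending.push(chunk);
    this.pendingBytes += chunk.length;
    if (this.pendingBytes >= this.minPartSize) {
      await this.flush();
    }
  }

  // Send whatever is buffered as one S3 part. Also called after the last
  // chunk: the final part is allowed to be smaller than 5 MB.
  async flush() {
    if (this.pendingBytes === 0) return;
    const body = Buffer.concat(this.pending);
    this.pending = [];
    this.pendingBytes = 0;
    await this.uploadPart(this.nextPartNumber++, body);
  }
}
```

The caller feeds chunks in arrival order and calls flush() once after the final chunk to push the remainder.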
S3 also caps parts at 5 GB each and limits multipart uploads to 10,000 parts. With 5 MB parts, your maximum file size is about 48.8 GB. For larger files, increase the chunk size proportionally.
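The limit arithmetic is easy to sanity-check in code; chunkSizeFor below is a hypothetical helper that picks the smallest whole-MiB chunk size keeping a file under the 10,000-part cap:

```javascript
// S3 multipart limits: 10,000 parts max, 5 MB minimum part size (except
// the last part), 5 GB maximum part size.
const MAX_PARTS = 10000;
const MIN_PART = 5 * 1024 * 1024;

// Smallest chunk size (rounded up to a whole MiB) that fits fileSize
// within 10,000 parts while respecting the 5 MB floor.
function chunkSizeFor(fileSize) {
  const mib = 1024 * 1024;
  const needed = Math.ceil(fileSize / MAX_PARTS);
  return Math.max(MIN_PART, Math.ceil(needed / mib) * mib);
}

// With 5 MB parts the ceiling is 10,000 × 5 MiB ≈ 48.8 GiB.
const maxWith5MB = MAX_PARTS * MIN_PART; // 52,428,800,000 bytes
```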
MinIO Compatibility
MinIO implements the S3 API, so the patterns above work without modification. Point your S3 client at your MinIO endpoint instead of AWS:
const s3 = new S3Client({
region: 'us-east-1',
endpoint: 'https://minio.example.com',
forcePathStyle: true, // MinIO uses path-style URLs
credentials: {
accessKeyId: 'minioadmin',
secretAccessKey: 'minioadmin',
},
});
The forcePathStyle: true option is important—MinIO uses path-style bucket addressing (minio.example.com/bucket/key) rather than AWS's virtual-hosted style (bucket.s3.amazonaws.com/key).
Google Cloud Storage and Azure Blob Notes
Google Cloud Storage offers an S3-compatible API via its XML API and interoperability mode. You can use the same S3 client pointed at storage.googleapis.com with HMAC credentials. Alternatively, GCS has its own resumable upload protocol that differs from S3's multipart upload—it uses a single upload URI with byte range headers. Adapting Resumable.js to GCS's native protocol requires more custom server logic.
Azure Blob Storage uses a Block Blob approach: upload individual blocks (equivalent to parts), then commit them with a block list. The concept is similar to S3 multipart, but the API details differ. Azure's @azure/storage-blob SDK provides stageBlock and commitBlockList methods that parallel S3's UploadPart and CompleteMultipartUpload. Block IDs must be base64-encoded and all the same length—use zero-padded chunk numbers encoded to base64.
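A sketch of that block-ID convention, with the BlockBlobClient passed in so the helper needs no SDK import here; makeBlockId, uploadAsBlockBlob, and the six-character padding width are illustrative choices (any fixed width works as long as every ID in one blob matches):

```javascript
// Azure Block Blob sketch: stage one block per Resumable.js chunk, then
// commit the ordered block list. Block IDs must be base64-encoded and all
// the same length, so we zero-pad the chunk number before encoding.
function makeBlockId(chunkNumber) {
  return Buffer.from(String(chunkNumber).padStart(6, '0')).toString('base64');
}

// blockBlobClient is a BlockBlobClient from @azure/storage-blob, supplied
// by the caller; chunks is an ordered array of Buffers.
async function uploadAsBlockBlob(blockBlobClient, chunks) {
  const blockIds = [];
  for (let i = 0; i < chunks.length; i++) {
    const blockId = makeBlockId(i + 1);
    // stageBlock ≈ S3 UploadPart
    await blockBlobClient.stageBlock(blockId, chunks[i], chunks[i].length);
    blockIds.push(blockId);
  }
  // commitBlockList ≈ S3 CompleteMultipartUpload
  await blockBlobClient.commitBlockList(blockIds);
}
```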
Cleaning Up Incomplete Multipart Uploads
Abandoned multipart uploads consume storage and incur costs on AWS. If a user starts uploading a 10 GB file and abandons it after 2 GB, those 2 GB of parts sit in S3 indefinitely unless you clean them up.
S3 offers lifecycle rules for this:
{
"Rules": [
{
"ID": "abort-incomplete-uploads",
"Status": "Enabled",
"Filter": { "Prefix": "uploads/" },
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 1
}
}
]
}
This automatically aborts any multipart upload that hasn't completed within 24 hours. Set the interval based on your largest expected upload time—if users legitimately upload files that take days (think remote locations with satellite internet), extend the window accordingly.
You can also programmatically list and abort stale uploads:
const { ListMultipartUploadsCommand, AbortMultipartUploadCommand } = require('@aws-sdk/client-s3');
const listCmd = new ListMultipartUploadsCommand({ Bucket: 'my-uploads' });
const { Uploads } = await s3.send(listCmd);
for (const upload of Uploads || []) {
const age = Date.now() - upload.Initiated.getTime();
if (age > 24 * 60 * 60 * 1000) {
await s3.send(new AbortMultipartUploadCommand({
Bucket: 'my-uploads',
Key: upload.Key,
UploadId: upload.UploadId,
}));
}
}
Getting Resumable.js and S3 working together takes some orchestration, but once the plumbing is in place, you get the best of both worlds: resilient chunked uploads on the client side and scalable, durable object storage on the backend.
