
Server Receivers

Implementing server-side chunk receivers for Resumable.js in Node.js, Python, PHP, and other platforms.

Guides·Updated 2025-10-10

The client side of Resumable.js gets most of the attention, but the server is where chunks become files. Every Resumable.js deployment needs a receiver that handles two responsibilities: responding to GET requests that test whether a chunk already exists, and processing POST requests that contain the actual chunk data. Get the server wrong and everything else falls apart—retry logic can't help if the receiver mishandles bytes, and resume can't work if the server doesn't track state. This guide covers the server contract, chunk storage patterns, reassembly logic, and implementation notes for Node.js, Python, and PHP. For more Resumable.js patterns, see the guides hub.


The Server Contract: GET and POST

Resumable.js sends two types of requests to your target URL:

GET — Test if a chunk exists

When testChunks is enabled, the client sends a GET request with query parameters before uploading each chunk:

GET /api/upload?resumableChunkNumber=3
    &resumableChunkSize=2097152
    &resumableCurrentChunkSize=2097152
    &resumableTotalSize=10485760
    &resumableType=image/png
    &resumableIdentifier=10485760-myfile-png
    &resumableFilename=myfile.png
    &resumableRelativePath=myfile.png
    &resumableTotalChunks=5

Your server checks whether chunk 3 of identifier 10485760-myfile-png has already been received. If it has, respond with 200 and the client skips that chunk. If it has not, respond with 204 (or any other non-success status) and the client uploads it.

POST — Upload a chunk

The actual chunk data arrives as a multipart POST:

POST /api/upload

Content-Type: multipart/form-data

Fields:
  resumableChunkNumber: 3
  resumableChunkSize: 2097152
  resumableCurrentChunkSize: 2097152
  resumableTotalSize: 10485760
  resumableType: image/png
  resumableIdentifier: 10485760-myfile-png
  resumableFilename: myfile.png
  resumableRelativePath: myfile.png
  resumableTotalChunks: 5
  file: [binary data]

Your server saves the chunk, responds with 200 on success, and the client moves to the next chunk. On error, respond with the appropriate status code—anything not in permanentErrors triggers a retry.

Query Parameters Reference

Every request includes these parameters:

resumableChunkNumber: The 1-based index of the current chunk
resumableChunkSize: The configured chunk size (not the actual size of the last chunk)
resumableCurrentChunkSize: The actual byte size of this specific chunk
resumableTotalSize: Total file size in bytes
resumableType: MIME type of the file (may be empty)
resumableIdentifier: Unique identifier for the file (typically size-filename)
resumableFilename: Original filename
resumableRelativePath: Relative path (useful for folder uploads)
resumableTotalChunks: Total number of chunks for this file

The resumableIdentifier is your primary key for grouping chunks. It's what connects chunk 1 to chunk 47 of the same file.
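Because resumableIdentifier is client-supplied and typically ends up in a filesystem path, it is worth validating before use. A minimal sketch in Python; the validate_identifier helper and its character whitelist are illustrative, not part of Resumable.js:

```python
import re

# Whitelist of characters to allow in an identifier before it is used
# in a filesystem path. Anything outside this set is rejected outright,
# which blocks path traversal via "../" and similar tricks.
SAFE_IDENTIFIER = re.compile(r'^[0-9A-Za-z_-]+$')

def validate_identifier(identifier):
    """Return the identifier unchanged if safe, else raise ValueError."""
    if not identifier or not SAFE_IDENTIFIER.match(identifier):
        raise ValueError(f'unsafe identifier: {identifier!r}')
    return identifier
```

Call this on the identifier in both the GET and POST handlers before building any path from it.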

Chunk Storage Patterns

Flat directory per upload

The simplest approach: create a directory named after the resumableIdentifier, and write each chunk as a numbered file within it.

/uploads/
  10485760-myfile-png/
    chunk_001
    chunk_002
    chunk_003
    chunk_004
    chunk_005

Advantages: easy to inspect, easy to count, trivial to implement. Disadvantages: creates many small files, which some file systems handle poorly at scale. If you're running on a system with inode limits or slow metadata operations (NFS), this pattern can become a bottleneck.

Temporary file with sparse writes

A more sophisticated approach: create a single file of the final expected size and write each chunk directly to its correct byte offset.

offset = (chunkNumber - 1) * chunkSize
file.seek(offset)
file.write(chunkData)

This eliminates the reassembly step entirely—when all chunks are received, the file is already complete. The trade-off is that you need to track which chunks have been written separately (a bitmap or database record), since you can't infer it from the file system.
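The pseudocode above can be fleshed out as a Python sketch. The JSON sidecar file here is a stand-in for the chunk-tracking bitmap or database record; its name and format are assumptions for illustration:

```python
import json
import os

def write_chunk_sparse(path, chunk_number, chunk_size, total_size, data):
    """Write one chunk at its byte offset in a preallocated file and
    record receipt in a JSON sidecar. Returns the set of chunk numbers
    received so far."""
    # On first write, create the file at its final size.
    if not os.path.exists(path):
        with open(path, 'wb') as f:
            f.truncate(total_size)
    # Seek to the chunk's offset and write in place.
    with open(path, 'r+b') as f:
        f.seek((chunk_number - 1) * chunk_size)
        f.write(data)
    # Track which chunks have landed, since the file itself can't tell us.
    state_path = path + '.state.json'
    state = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = set(json.load(f))
    state.add(chunk_number)
    with open(state_path, 'w') as f:
        json.dump(sorted(state), f)
    return state
```

When the returned set reaches resumableTotalChunks entries, the file is complete and the sidecar can be deleted.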

Node.js Express Implementation

const express = require('express');
const multer = require('multer');
const fs = require('fs');
const path = require('path');

const app = express();
const upload = multer({ dest: 'tmp/' });
const UPLOAD_DIR = './uploads';

// GET: Test if chunk exists
app.get('/api/upload', (req, res) => {
  const { resumableIdentifier, resumableChunkNumber, resumableCurrentChunkSize } = req.query;
  const chunkPath = path.join(
    UPLOAD_DIR, resumableIdentifier, `chunk_${resumableChunkNumber.padStart(4, '0')}`
  );

  if (fs.existsSync(chunkPath)) {
    const stat = fs.statSync(chunkPath);
    if (stat.size === parseInt(resumableCurrentChunkSize, 10)) {
      return res.sendStatus(200);
    }
  }
  return res.sendStatus(204);
});

// POST: Receive chunk
app.post('/api/upload', upload.single('file'), (req, res) => {
  const { resumableIdentifier, resumableChunkNumber, resumableTotalChunks } = req.body;
  const chunkDir = path.join(UPLOAD_DIR, resumableIdentifier);

  fs.mkdirSync(chunkDir, { recursive: true });

  const chunkPath = path.join(chunkDir, `chunk_${resumableChunkNumber.padStart(4, '0')}`);
  fs.renameSync(req.file.path, chunkPath);

  // Check if all chunks received
  const totalChunks = parseInt(resumableTotalChunks, 10);
  const receivedChunks = fs.readdirSync(chunkDir).length;

  if (receivedChunks === totalChunks) {
    assembleFile(chunkDir, req.body.resumableFilename, totalChunks);
  }

  res.sendStatus(200);
});

function assembleFile(chunkDir, filename, totalChunks) {
  const outputPath = path.join(UPLOAD_DIR, filename);
  const writeStream = fs.createWriteStream(outputPath);

  for (let i = 1; i <= totalChunks; i++) {
    const chunkPath = path.join(chunkDir, `chunk_${String(i).padStart(4, '0')}`);
    const data = fs.readFileSync(chunkPath);
    writeStream.write(data);
  }

  writeStream.end();
  // Cleanup: remove chunk directory
  fs.rmSync(chunkDir, { recursive: true });
}

A few notes: padStart(4, '0') ensures chunk filenames sort lexicographically. The multer middleware parses the multipart body and writes the uploaded chunk to a temporary location, which the handler then moves into the chunk directory (fs.renameSync fails with EXDEV if tmp/ and uploads/ sit on different filesystems, so use a copy-and-delete fallback there). The assembly step reads each chunk fully into memory before writing it out; piping fs.createReadStream per chunk would be more memory-efficient for very large files. Finally, validate resumableIdentifier against a strict whitelist before using it in a path: it is client-supplied, and passing it to path.join unchecked invites path traversal.

Python Flask Notes

import os
from flask import Flask, request

app = Flask(__name__)
UPLOAD_DIR = './uploads'

@app.route('/api/upload', methods=['GET'])
def test_chunk():
    identifier = request.args.get('resumableIdentifier')
    chunk_number = request.args.get('resumableChunkNumber', '').zfill(4)
    chunk_path = os.path.join(UPLOAD_DIR, identifier, f'chunk_{chunk_number}')

    if os.path.exists(chunk_path):
        return '', 200
    return '', 204

@app.route('/api/upload', methods=['POST'])
def upload_chunk():
    identifier = request.form.get('resumableIdentifier')
    chunk_number = request.form.get('resumableChunkNumber', '').zfill(4)
    total_chunks = int(request.form.get('resumableTotalChunks'))

    chunk_dir = os.path.join(UPLOAD_DIR, identifier)
    os.makedirs(chunk_dir, exist_ok=True)

    file = request.files.get('file')
    file.save(os.path.join(chunk_dir, f'chunk_{chunk_number}'))

    # Check completion and assemble if done
    received = len(os.listdir(chunk_dir))
    if received >= total_chunks:
        assemble(chunk_dir, request.form.get('resumableFilename'), total_chunks)

    return '', 200

def assemble(chunk_dir, filename, total_chunks):
    # Concatenate chunks in chunk-number order, then remove the chunk dir
    with open(os.path.join(UPLOAD_DIR, filename), 'wb') as output:
        for i in range(1, total_chunks + 1):
            chunk_path = os.path.join(chunk_dir, f'chunk_{str(i).zfill(4)}')
            with open(chunk_path, 'rb') as chunk:
                output.write(chunk.read())
            os.remove(chunk_path)
    os.rmdir(chunk_dir)

Flask's request.files handles the multipart parsing. The logic mirrors the Node.js version—same directory structure, same assembly check. In production, replace os.listdir with a proper counter or database check, as file system operations under concurrent writes can race.
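One way to tighten that completion check, sketched in Python: look for each expected chunk file by name rather than counting directory entries, and use an exclusive-create marker file so only one concurrent request claims the assembly step. The helper names and the .assembling marker are illustrative assumptions:

```python
import os

def all_chunks_present(chunk_dir, total_chunks):
    """Check for every expected chunk file by name. Unlike counting
    directory entries, this is not confused by stray files."""
    return all(
        os.path.exists(os.path.join(chunk_dir, f'chunk_{str(i).zfill(4)}'))
        for i in range(1, total_chunks + 1)
    )

def claim_assembly(chunk_dir):
    """Atomically claim the right to assemble: O_CREAT | O_EXCL succeeds
    for exactly one process, so two handlers that both see the final
    chunk arrive don't both start assembling."""
    try:
        fd = os.open(os.path.join(chunk_dir, '.assembling'),
                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False
```

The marker lives inside the chunk directory and is removed along with it after assembly; because all_chunks_present checks by name, the marker does not skew the completion test.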

PHP Handling

<?php
$uploadDir = './uploads/';
$identifier = $_REQUEST['resumableIdentifier'];
$chunkNumber = str_pad($_REQUEST['resumableChunkNumber'], 4, '0', STR_PAD_LEFT);
$chunkDir = $uploadDir . $identifier . '/';

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    $chunkPath = $chunkDir . 'chunk_' . $chunkNumber;
    http_response_code(file_exists($chunkPath) ? 200 : 204);
    exit;
}

// POST: save chunk
if (!is_dir($chunkDir)) {
    mkdir($chunkDir, 0755, true);
}

move_uploaded_file($_FILES['file']['tmp_name'], $chunkDir . 'chunk_' . $chunkNumber);

// Check completion
$totalChunks = (int) $_POST['resumableTotalChunks'];
$received = count(glob($chunkDir . 'chunk_*'));

if ($received >= $totalChunks) {
    // Assemble file in chunk-number order
    $output = fopen($uploadDir . $_POST['resumableFilename'], 'wb');
    for ($i = 1; $i <= $totalChunks; $i++) {
        $chunk = fopen($chunkDir . 'chunk_' . str_pad($i, 4, '0', STR_PAD_LEFT), 'rb');
        stream_copy_to_stream($chunk, $output);
        fclose($chunk);
    }
    fclose($output);
    // Cleanup: remove chunk files and directory
    array_map('unlink', glob($chunkDir . 'chunk_*'));
    rmdir($chunkDir);
}

http_response_code(200);

PHP's move_uploaded_file provides some built-in safety checks that rename doesn't. The stream_copy_to_stream function is memory-efficient for large chunks.

Reassembly and Cleanup

Reassembly must happen in chunk-number order. Never rely on file system ordering or timestamp ordering—chunks arrive out of sequence when simultaneousUploads is greater than 1.

After successful assembly, clean up chunk directories. Incomplete uploads accumulate over time, so implement a cleanup job that removes chunk directories older than a threshold (24 hours is reasonable). Without this, your disk fills up with orphaned chunks from abandoned uploads.

A cron job or scheduled task that runs hourly works well:

find /uploads -mindepth 1 -maxdepth 1 -type d -mtime +1 -exec rm -rf {} +

For database-tracked uploads, mark incomplete uploads as expired and purge their chunks in a background worker. The approach depends on your infrastructure, but the principle is the same: chunks without a completed file are temporary data with a shelf life.
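The same sweep can run inside the application instead of cron. A sketch in Python; purge_stale_chunk_dirs is a hypothetical helper that assumes the flat directory-per-upload layout shown earlier:

```python
import os
import shutil
import time

def purge_stale_chunk_dirs(upload_dir, max_age_seconds=24 * 3600):
    """Remove chunk directories last modified before the threshold.
    A directory's mtime updates whenever a chunk file is added, so an
    upload that is still making progress is never purged."""
    cutoff = time.time() - max_age_seconds
    removed = []
    for name in os.listdir(upload_dir):
        path = os.path.join(upload_dir, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

Schedule it with whatever task runner you already have; returning the removed names makes the sweep easy to log.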