Node.js File Upload: Multipart Form Data Handling

Key Insights

  • Multipart form data is the only encoding type that supports binary file uploads in HTTP, using boundary delimiters to separate form fields and file data in a single request body
  • Multer middleware handles the complexity of parsing multipart data in Node.js, but requires careful configuration of file filters, size limits, and storage options to prevent security vulnerabilities
  • Production file upload systems need robust validation, sanitized filenames, appropriate storage strategies (disk vs memory vs cloud), and comprehensive error handling to avoid common pitfalls like directory traversal attacks and storage exhaustion

Understanding Multipart Form Data

When you upload a file through a web form, the browser can’t use standard URL encoding (application/x-www-form-urlencoded), because that encoding is designed for plain text. Binary files need a different approach: multipart/form-data.

This encoding type breaks the request body into parts separated by boundary strings. Each part contains its own headers describing the content type and field name, followed by the actual data. Here’s what a raw multipart request looks like:

POST /upload HTTP/1.1
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="username"

john_doe
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="avatar"; filename="profile.jpg"
Content-Type: image/jpeg

[binary data here]
------WebKitFormBoundary7MA4YWxkTrZu0gW--

The boundary delimiter (randomly generated by the browser) ensures that file contents don’t accidentally terminate the parsing. You’ll use multipart encoding whenever uploading files, while sticking with URL encoding for simple form submissions without files.
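You don’t normally construct this framing by hand. In Node 18+, the built-in FormData and Request classes generate it for you — a small sketch (the URL and field names mirror the example above and are purely illustrative):

```javascript
// Sketch (Node 18+): the built-in FormData and Request classes produce the same
// wire format shown above. The runtime generates the boundary and per-part
// headers automatically; the URL here is just a placeholder.
const form = new FormData();
form.append('username', 'john_doe');
form.append('avatar', new Blob(['[binary data here]'], { type: 'image/jpeg' }), 'profile.jpg');

const request = new Request('http://localhost:3000/upload', { method: 'POST', body: form });

// The runtime picks the boundary string and advertises it in the header:
console.log(request.headers.get('content-type'));
// e.g. multipart/form-data; boundary=----formdata-...

// Reading the body back shows the delimited parts, including the closing "--":
request.text().then(body => console.log(body));
```

The same classes power fetch(), so posting this request object uploads the file with no extra libraries.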

Building Your First File Upload Server

Let’s create a practical file upload server with Express and Multer. Multer is the de facto standard for handling multipart data in Node.js—it’s maintained, well-documented, and handles edge cases you don’t want to deal with manually.

First, install dependencies:

npm install express multer

Here’s a basic server setup:

const express = require('express');
const multer = require('multer');
const path = require('path');

const app = express();
const upload = multer({ dest: 'uploads/' });

// Serve a simple upload form
app.get('/', (req, res) => {
  res.send(`
    <form action="/upload" method="POST" enctype="multipart/form-data">
      <input type="file" name="document" />
      <button type="submit">Upload</button>
    </form>
  `);
});

// Basic upload endpoint
app.post('/upload', upload.single('document'), (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: 'No file uploaded' });
  }
  
  res.json({
    message: 'File uploaded successfully',
    filename: req.file.filename,
    size: req.file.size
  });
});

app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});

The dest: 'uploads/' option tells Multer where to save files. The upload.single('document') middleware processes a single file from the form field named “document” and attaches it to req.file.

Handling Multiple Files

Real applications often need to handle multiple files. Multer provides several methods for this:

// Multiple files from the same field
app.post('/upload/multiple', upload.array('photos', 10), (req, res) => {
  if (!req.files || req.files.length === 0) {
    return res.status(400).json({ error: 'No files uploaded' });
  }
  
  const fileData = req.files.map(file => ({
    originalName: file.originalname,
    filename: file.filename,
    mimetype: file.mimetype,
    size: file.size,
    path: file.path
  }));
  
  res.json({
    message: `${req.files.length} files uploaded`,
    files: fileData
  });
});

// Multiple files from different fields
const multipleFields = upload.fields([
  { name: 'avatar', maxCount: 1 },
  { name: 'documents', maxCount: 5 }
]);

app.post('/upload/mixed', multipleFields, (req, res) => {
  const response = {
    avatar: req.files['avatar'] ? req.files['avatar'][0] : null,
    documents: req.files['documents'] || []
  };
  
  res.json(response);
});

The array() method accepts files from a single field with a maximum count, while fields() handles multiple named fields with individual limits. Files are available in req.files as an array or object depending on the method used.

Implementing File Validation and Security

Never trust user uploads. Implement strict validation to prevent malicious files and attacks:

const fileFilter = (req, file, cb) => {
  // Allowed extensions (exact match against a whitelist, not a substring test,
  // so a name like "report.pdfx" can't sneak through)
  const allowedExtensions = ['.jpeg', '.jpg', '.png', '.pdf'];
  const hasValidExtension = allowedExtensions.includes(
    path.extname(file.originalname).toLowerCase()
  );
  
  // Allowed MIME types (exact match, so "image/jpeg2000" won't pass either)
  const allowedMimetypes = ['image/jpeg', 'image/png', 'application/pdf'];
  const hasValidMimetype = allowedMimetypes.includes(file.mimetype);
  
  if (hasValidExtension && hasValidMimetype) {
    return cb(null, true);
  }
  cb(new Error('Invalid file type. Only JPEG, PNG, and PDF files are allowed.'));
};

const secureUpload = multer({
  dest: 'uploads/',
  limits: {
    fileSize: 5 * 1024 * 1024, // 5MB limit
    files: 5 // Maximum 5 files per request
  },
  fileFilter: fileFilter
});

// Error-handling middleware (register it after the upload routes so Multer errors reach it)
app.use((err, req, res, next) => {
  if (err instanceof multer.MulterError) {
    if (err.code === 'LIMIT_FILE_SIZE') {
      return res.status(400).json({ error: 'File too large. Maximum size is 5MB.' });
    }
    if (err.code === 'LIMIT_FILE_COUNT') {
      return res.status(400).json({ error: 'Too many files. Maximum is 5 files.' });
    }
    return res.status(400).json({ error: err.message });
  } else if (err) {
    return res.status(400).json({ error: err.message });
  }
  next();
});

This configuration validates both file extensions and MIME types. Checking only extensions is insufficient because users can rename files. Checking only MIME types is also weak because they’re client-provided. Use both for defense in depth.

Custom Storage Configuration

The default dest option saves files with random names. For production, you’ll want control over filenames and directory structure:

const fs = require('fs');

const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    // Organize by upload date
    const date = new Date();
    const dir = `uploads/${date.getFullYear()}/${date.getMonth() + 1}`;
    
    // Ensure the directory exists; { recursive: true } is a no-op if it already does
    // (prefer the async fs.promises.mkdir in production)
    fs.mkdirSync(dir, { recursive: true });
    
    cb(null, dir);
  },
  filename: (req, file, cb) => {
    // Sanitize original filename
    const sanitized = file.originalname.replace(/[^a-zA-Z0-9.-]/g, '_');
    
    // Add timestamp to prevent collisions
    const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1E9);
    const ext = path.extname(sanitized);
    const basename = path.basename(sanitized, ext);
    
    cb(null, `${basename}-${uniqueSuffix}${ext}`);
  }
});

const customUpload = multer({
  storage: storage,
  limits: { fileSize: 5 * 1024 * 1024 },
  fileFilter: fileFilter
});

app.post('/upload/custom', customUpload.single('file'), (req, res) => {
  res.json({
    message: 'File uploaded with custom storage',
    path: req.file.path,
    filename: req.file.filename
  });
});

This approach organizes files by date and creates collision-resistant filenames. The sanitization step prevents directory traversal attacks where malicious filenames like ../../etc/passwd could write files outside your uploads directory.
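To see the sanitization rule in isolation, here is the same replace applied to a hostile name and an awkward one (a standalone snippet):

```javascript
// Standalone check of the sanitization rule used above: every character outside
// [a-zA-Z0-9.-] becomes "_", so path separators can no longer escape uploads/.
const sanitize = (name) => name.replace(/[^a-zA-Z0-9.-]/g, '_');

console.log(sanitize('../../etc/passwd'));   // ".._.._etc_passwd" — no slashes left
console.log(sanitize('photo (1) copy.jpg')); // "photo__1__copy.jpg"
```

The dots survive, but without slashes (or backslashes) they can no longer be interpreted as directory components.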

Streaming to Cloud Storage

For scalable applications, stream files directly to cloud storage instead of saving to disk:

const multerS3 = require('multer-s3');
const { S3Client } = require('@aws-sdk/client-s3');

const s3Client = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
  }
});

const s3Upload = multer({
  storage: multerS3({
    s3: s3Client,
    bucket: 'my-upload-bucket',
    metadata: (req, file, cb) => {
      cb(null, { fieldName: file.fieldname });
    },
    key: (req, file, cb) => {
      // Sanitize here too — the original name is client-controlled
      const safeName = file.originalname.replace(/[^a-zA-Z0-9.-]/g, '_');
      const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1E9);
      cb(null, `uploads/${uniqueSuffix}-${safeName}`);
    }
  }),
  limits: { fileSize: 10 * 1024 * 1024 },
  fileFilter: fileFilter
});

app.post('/upload/s3', s3Upload.single('file'), (req, res) => {
  res.json({
    message: 'File uploaded to S3',
    location: req.file.location,
    key: req.file.key
  });
});

This eliminates local disk usage and provides immediate CDN integration. The file streams directly from the HTTP request to S3 without buffering the entire file in memory.

Testing Your Upload Endpoints

Test uploads from the command line with curl:

# Single file upload
curl -X POST http://localhost:3000/upload \
  -F "document=@/path/to/file.pdf"

# Multiple files
curl -X POST http://localhost:3000/upload/multiple \
  -F "photos=@photo1.jpg" \
  -F "photos=@photo2.jpg"

# Mixed fields
curl -X POST http://localhost:3000/upload/mixed \
  -F "avatar=@avatar.jpg" \
  -F "documents=@doc1.pdf" \
  -F "documents=@doc2.pdf"

For automated testing, use supertest:

const request = require('supertest');
const app = require('./app'); // Your Express app

describe('File Upload', () => {
  it('should upload a single file', async () => {
    const response = await request(app)
      .post('/upload')
      .attach('document', 'test/fixtures/sample.pdf')
      .expect(200);
    
    expect(response.body.filename).toBeDefined();
    expect(response.body.size).toBeGreaterThan(0);
  });
  
  it('should reject files over size limit', async () => {
    await request(app)
      .post('/upload')
      .attach('document', 'test/fixtures/large-file.pdf')
      .expect(400);
  });
});

Production Best Practices

Before deploying file uploads to production, implement these safeguards:

  1. Implement file cleanup: Delete old uploads periodically to prevent storage exhaustion
  2. Use virus scanning: Integrate ClamAV or a cloud service to scan uploaded files
  3. Rate limiting: Prevent abuse with express-rate-limit on upload endpoints
  4. Authentication: Require user authentication before accepting uploads
  5. Quota management: Track per-user upload limits in your database
  6. Content Security: Serve uploaded files from a separate domain to prevent XSS attacks
  7. Monitoring: Track upload failures, file sizes, and storage usage

Memory storage (multer.memoryStorage()) keeps files in RAM and works for small files or when immediately processing/forwarding to cloud storage. Use disk storage for large files to avoid memory exhaustion.

Always validate on the server even if you have client-side validation. Never trust the client. Sanitize filenames, check MIME types and extensions, enforce size limits, and handle errors gracefully. These fundamentals prevent the most common file upload vulnerabilities.
