DDoS Mitigation: Protection Strategies

Key Insights

  • DDoS protection requires defense in depth—no single solution handles all attack vectors, so layer rate limiting, WAF rules, CDN absorption, and application-level challenges together.
  • Rate limiting at the edge is your first line of defense; implement it in nginx or your load balancer before traffic ever reaches your application servers.
  • Automated detection and response is non-negotiable; by the time you manually notice an attack, your infrastructure may already be overwhelmed.

Understanding DDoS Attack Vectors

DDoS attacks fall into three categories, and your mitigation strategy must address all of them.

Volumetric attacks flood your bandwidth with garbage traffic. UDP amplification attacks exploit services like DNS or NTP to generate massive response packets from small requests. A 100-byte request to an open DNS resolver can generate a 3,000-byte response directed at your server.
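The arithmetic is what makes reflection so attractive to attackers. A quick sketch using the figures above (the 10 Mbit/s botnet throughput is an illustrative number, not from the original):

```python
# DNS amplification math using the request/response sizes from the text above
request_bytes = 100
response_bytes = 3000
amplification = response_bytes / request_bytes  # 30x

# A modest botnet sending 10 Mbit/s of spoofed requests (illustrative)
attacker_mbps = 10
victim_mbps = attacker_mbps * amplification

print(f"{amplification:.0f}x amplification: {attacker_mbps} Mbit/s in, "
      f"{victim_mbps:.0f} Mbit/s at the victim")
```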

Protocol attacks exhaust server resources by exploiting weaknesses in network protocols. SYN floods send connection requests without completing the TCP handshake, filling up your connection state tables. Your server wastes memory tracking half-open connections that will never complete.
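On Linux, kernel-level hardening blunts SYN floods before any application code runs. A minimal sysctl sketch (these values are common starting points, not tuned recommendations):

```shell
# /etc/sysctl.d/99-syn-flood.conf — apply with `sysctl --system`
net.ipv4.tcp_syncookies = 1          # answer floods without storing per-connection state
net.ipv4.tcp_max_syn_backlog = 4096  # larger queue for half-open connections
net.ipv4.tcp_synack_retries = 2      # give up on unanswered SYN-ACKs sooner
```

SYN cookies encode connection state into the sequence number itself, so a flood of half-open connections cannot exhaust the state table.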

Application-layer attacks are the hardest to detect because they look like legitimate traffic. HTTP floods send valid requests to expensive endpoints—search queries, login attempts, or API calls that trigger database operations. These attacks require fewer resources from attackers but can cripple your application.

Here’s a basic traffic analysis script to identify anomalous patterns:

import collections
from datetime import datetime, timedelta

class TrafficAnalyzer:
    """Track per-(IP, endpoint) request rates over a sliding window and flag
    clients whose rate exceeds a multiple of the endpoint's baseline."""

    def __init__(self, window_seconds=60, threshold_multiplier=3):
        self.requests = collections.defaultdict(list)  # (ip, endpoint) -> timestamps
        self.window = timedelta(seconds=window_seconds)
        self.threshold_multiplier = threshold_multiplier
        self.baseline_rps = {}  # endpoint -> expected req/s, learned from normal traffic
    
    def record_request(self, ip_address, endpoint, timestamp=None):
        timestamp = timestamp or datetime.now()
        key = (ip_address, endpoint)
        self.requests[key].append(timestamp)
        self._cleanup_old_requests(key, timestamp)
    
    def _cleanup_old_requests(self, key, current_time):
        # Drop timestamps that have aged out of the sliding window
        cutoff = current_time - self.window
        self.requests[key] = [t for t in self.requests[key] if t > cutoff]
    
    def detect_anomaly(self, ip_address, endpoint):
        key = (ip_address, endpoint)
        current_rps = len(self.requests[key]) / self.window.total_seconds()
        # Endpoints without a learned baseline default to a conservative 1 req/s
        baseline = self.baseline_rps.get(endpoint, 1.0)
        
        if current_rps > baseline * self.threshold_multiplier:
            return {
                "anomaly": True,
                "ip": ip_address,
                "endpoint": endpoint,
                "current_rps": current_rps,
                "baseline_rps": baseline,
                "severity": current_rps / baseline
            }
        return {"anomaly": False}
    
    def get_top_requesters(self, limit=10):
        # Aggregate request counts per IP across all endpoints
        totals = collections.Counter()
        for (ip, endpoint), timestamps in self.requests.items():
            totals[ip] += len(timestamps)
        return totals.most_common(limit)

Rate Limiting and Throttling

Rate limiting is your most effective first-line defense. Implement it at the infrastructure level before requests reach your application.

The token bucket algorithm allows burst traffic while enforcing average rates. Each client gets a bucket that fills with tokens at a steady rate. Requests consume tokens; when the bucket is empty, requests are rejected. This handles legitimate traffic spikes while blocking sustained floods.
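The refill-and-consume logic is only a few lines. A minimal sketch in Python (the caller supplies timestamps, e.g. from time.monotonic(); the rate and capacity values are illustrative):

```python
class TokenBucket:
    """Allow bursts up to `capacity` while enforcing `rate` tokens/sec on average."""

    def __init__(self, rate, capacity, now=0.0):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full so clients can burst immediately
        self.last_refill = now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=20)  # 10 req/s average, bursts of 20
granted = sum(bucket.allow(now=0.0) for _ in range(25))
print(granted)  # 20 — the full burst is honored, then requests are denied
```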

The sliding window algorithm provides stricter control by counting requests in a rolling time window. It’s simpler to implement but less forgiving of burst traffic.

Here’s an Express.js rate limiter using Redis for distributed tracking:

const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL);

const rateLimiter = (options = {}) => {
  const {
    windowMs = 60000,
    maxRequests = 100,
    keyGenerator = (req) => req.ip,
    skipFailedRequests = false
  } = options;

  return async (req, res, next) => {
    const key = `ratelimit:${keyGenerator(req)}`;
    const now = Date.now();
    const windowStart = now - windowMs;

    try {
      const pipeline = redis.pipeline();
      pipeline.zremrangebyscore(key, 0, windowStart);
      pipeline.zadd(key, now, `${now}-${Math.random()}`);
      pipeline.zcard(key);
      pipeline.expire(key, Math.ceil(windowMs / 1000));
      
      const results = await pipeline.exec();
      const requestCount = results[2][1];

      res.set('X-RateLimit-Limit', maxRequests);
      res.set('X-RateLimit-Remaining', Math.max(0, maxRequests - requestCount));

      if (requestCount > maxRequests) {
        res.set('Retry-After', Math.ceil(windowMs / 1000));
        return res.status(429).json({ error: 'Rate limit exceeded' });
      }

      next();
    } catch (error) {
      console.error('Rate limiter error:', error);
      next(); // Fail open to avoid blocking legitimate traffic
    }
  };
};

module.exports = rateLimiter;

For nginx, configure rate limiting at the edge:

http {
    # Define rate limit zones
    limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr$uri zone=endpoint_limit:10m rate=5r/s;
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        listen 80;
        
        # Apply rate limits with burst allowance
        limit_req zone=ip_limit burst=20 nodelay;
        limit_req zone=endpoint_limit burst=10;
        limit_conn conn_limit 50;
        
        # Custom error page for rate-limited requests
        limit_req_status 429;
        error_page 429 /rate_limited.html;
        
        location /api/ {
            # Stricter limits for API endpoints
            limit_req zone=endpoint_limit burst=5 nodelay;
            proxy_pass http://backend;
        }
        
        location /login {
            # Very strict limits for authentication
            limit_req zone=ip_limit burst=5 nodelay;
            proxy_pass http://backend;
        }
    }
}

Web Application Firewall (WAF) Configuration

A WAF inspects HTTP traffic and blocks malicious requests based on rules. Configure it to filter known attack patterns while minimizing false positives.

Here’s an AWS WAF configuration using Terraform:

resource "aws_wafv2_web_acl" "main" {
  name        = "ddos-protection-acl"
  scope       = "REGIONAL"
  description = "WAF rules for DDoS mitigation"

  default_action {
    allow {}
  }

  # Rate-based rule for volumetric attacks
  rule {
    name     = "rate-limit-rule"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit              = 2000
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "RateLimitRule"
    }
  }

  # Block known bad IPs
  rule {
    name     = "ip-reputation-rule"
    priority = 2

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesAmazonIpReputationList"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "IPReputationRule"
    }
  }

  # Geo-blocking for countries you don't serve
  rule {
    name     = "geo-block-rule"
    priority = 3

    action {
      block {}
    }

    statement {
      geo_match_statement {
        country_codes = ["CN", "RU", "KP"]  # Adjust based on your business
      }
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "GeoBlockRule"
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "DDOSProtectionACL"
    sampled_requests_enabled   = true
  }
}

CDN and Edge Protection

CDNs absorb attack traffic across their global network before it reaches your origin. Anycast routing distributes traffic to the nearest edge node, making volumetric attacks less effective.

Here’s a Cloudflare Worker that implements a challenge page for suspicious traffic:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

// Assumed helpers, omitted here for brevity: getCookie parses the Cookie
// header; generateChallenge derives a challenge deterministically from the
// client IP so verifyChallenge can re-derive it (the cookie stores only the
// solved nonce). RATE_LIMIT_KV is a Workers KV namespace bound to this script.
async function handleRequest(request) {
  const clientIP = request.headers.get('CF-Connecting-IP');
  const url = new URL(request.url);
  
  // Check if client has passed challenge
  const challengeCookie = getCookie(request, 'ddos_challenge_passed');
  if (challengeCookie && await verifyChallenge(challengeCookie, clientIP)) {
    return fetch(request);
  }
  
  // Check request rate from KV store
  const requestCount = await RATE_LIMIT_KV.get(`count:${clientIP}`);
  const count = parseInt(requestCount || '0');
  
  if (count > 50) {
    // High request rate - serve challenge page
    return new Response(getChallengeHTML(clientIP), {
      status: 503,
      headers: { 'Content-Type': 'text/html' }
    });
  }
  
  // Increment counter
  await RATE_LIMIT_KV.put(`count:${clientIP}`, String(count + 1), { expirationTtl: 60 });
  
  return fetch(request);
}

function getChallengeHTML(clientIP) {
  const challenge = generateChallenge(clientIP);
  return `
    <!DOCTYPE html>
    <html>
    <head><title>Verifying your browser</title></head>
    <body>
      <h1>Please wait while we verify your browser...</h1>
      <script>
        // Simple proof-of-work challenge
        const challenge = "${challenge}";
        let nonce = 0;
        async function solve() {
          while (true) {
            const attempt = challenge + nonce;
            const hash = await crypto.subtle.digest('SHA-256', 
              new TextEncoder().encode(attempt));
            const hex = Array.from(new Uint8Array(hash))
              .map(b => b.toString(16).padStart(2, '0')).join('');
            if (hex.startsWith('0000')) {
              document.cookie = 'ddos_challenge_passed=' + nonce + '; path=/';
              location.reload();
              return;
            }
            nonce++;
          }
        }
        solve();
      </script>
    </body>
    </html>
  `;
}

Application-Level Defenses

When edge protection isn’t enough, implement challenges at the application level:

const crypto = require('crypto');

class ProofOfWork {
  constructor(difficulty = 4) {
    this.difficulty = difficulty;
    this.prefix = '0'.repeat(difficulty);
  }

  generateChallenge(clientId) {
    const timestamp = Date.now();
    const random = crypto.randomBytes(16).toString('hex');
    const challenge = `${clientId}:${timestamp}:${random}`;
    return {
      challenge,
      timestamp,
      difficulty: this.difficulty
    };
  }

  verifyProof(challenge, nonce, maxAge = 300000) {
    const [, timestamp] = challenge.split(':');
    if (Date.now() - parseInt(timestamp) > maxAge) {
      return { valid: false, reason: 'Challenge expired' };
    }

    const hash = crypto
      .createHash('sha256')
      .update(`${challenge}:${nonce}`)
      .digest('hex');

    if (!hash.startsWith(this.prefix)) {
      return { valid: false, reason: 'Invalid proof' };
    }

    return { valid: true, hash };
  }
}

module.exports = ProofOfWork;

Monitoring and Automated Response

Set up Prometheus alerting rules to detect attacks early:

groups:
  - name: ddos_alerts
    rules:
      - alert: HighRequestRate
        expr: sum(rate(http_requests_total[1m])) > 10000
        for: 30s
        labels:
          severity: critical
        annotations:
          summary: "Abnormally high request rate detected"
          
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[1m])) / sum(rate(http_requests_total[1m])) > 0.5
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "High error rate - possible DDoS attack"
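Alerts are only half of "monitoring and automated response"; pair them with an Alertmanager webhook receiver that acts without a human in the loop. A minimal sketch (the source_ip label and the block_ip action are assumptions — the rules above would need per-IP labels for this to work as written):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_offenders(payload):
    """Pull source IPs out of firing alerts in an Alertmanager webhook payload."""
    return [
        alert["labels"]["source_ip"]
        for alert in payload.get("alerts", [])
        if alert.get("status") == "firing" and "source_ip" in alert.get("labels", {})
    ]

def block_ip(ip):
    # Placeholder: push to a firewall, an nginx deny list, or a cloud API
    print(f"blocking {ip}")

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for ip in extract_offenders(payload):
            block_ip(ip)
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 9094), AlertHandler).serve_forever()
```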

Incident Response Playbook

When an attack hits, capture traffic immediately for analysis:

#!/bin/bash
# Emergency traffic capture script

INTERFACE="${1:-eth0}"
DURATION="${2:-300}"
OUTPUT_DIR="/var/log/ddos-captures"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$OUTPUT_DIR"

echo "Starting emergency traffic capture on $INTERFACE for ${DURATION}s"

# Capture packets
timeout "$DURATION" tcpdump -i "$INTERFACE" -w "$OUTPUT_DIR/capture_$TIMESTAMP.pcap" -c 1000000 &

# Log connection states
ss -s > "$OUTPUT_DIR/connections_$TIMESTAMP.txt"
netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c | sort -rn > "$OUTPUT_DIR/tcp_states_$TIMESTAMP.txt"

# Top talkers
timeout "$DURATION" tcpdump -i "$INTERFACE" -nn -q 2>/dev/null | \
  awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head -50 > "$OUTPUT_DIR/top_ips_$TIMESTAMP.txt"

wait  # ensure the backgrounded packet capture has finished writing to disk
echo "Capture complete. Files saved to $OUTPUT_DIR"

DDoS mitigation isn’t a set-and-forget configuration. Test your defenses regularly, review attack patterns in your logs, and update your rules as threats evolve. The best protection is layered defense combined with rapid automated response.
