Linux Signal Handling: SIGTERM, SIGKILL, SIGHUP

Key Insights

  • SIGTERM allows processes to clean up gracefully before terminating, while SIGKILL forces immediate termination without any cleanup opportunity—always try SIGTERM first
  • SIGHUP serves dual purposes: notifying processes of terminal disconnection and triggering configuration reloads in long-running daemons
  • Proper signal handling is critical for production applications, especially containerized workloads where PID 1 must forward signals correctly to child processes

Introduction to Linux Signals

Signals are the Unix way of tapping a process on the shoulder. They’re software interrupts that enable the kernel and other processes to communicate asynchronously with running programs. Unlike function calls or message queues, signals interrupt normal program execution, making them ideal for handling events like termination requests, configuration changes, or resource limit violations.

Every signal has a number and a symbolic name. You can view all available signals on your system:

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
...

While there are dozens of signals, three dominate process management: SIGTERM, SIGKILL, and SIGHUP. Understanding their behavior is essential for writing robust server applications.

Understanding SIGTERM (Signal 15)

SIGTERM is the polite way to ask a process to terminate. It’s the default signal sent by the kill command and what most process managers use for graceful shutdowns. The key characteristic: processes can catch SIGTERM and execute cleanup code before exiting.

When your application receives SIGTERM, it should:

  1. Stop accepting new work
  2. Complete in-flight operations (with a timeout)
  3. Release resources (close files, database connections, sockets)
  4. Exit with an appropriate status code

Here’s a Python example demonstrating proper SIGTERM handling:

import os
import signal
import sys
import time

shutdown_requested = False

def sigterm_handler(signum, frame):
    global shutdown_requested
    print("SIGTERM received, initiating graceful shutdown...")
    shutdown_requested = True

signal.signal(signal.SIGTERM, sigterm_handler)

print(f"Process running with PID: {os.getpid()}")

# Simulate a long-running service
while not shutdown_requested:
    print("Working...")
    time.sleep(2)

# Cleanup code
print("Closing database connections...")
print("Flushing buffers...")
print("Shutdown complete")
sys.exit(0)

In Bash, use trap to handle signals:

#!/bin/bash

cleanup() {
    echo "SIGTERM received, cleaning up..."
    # Kill child processes
    jobs -p | xargs -r kill
    # Remove temporary files
    rm -f /tmp/myapp.$$.*
    exit 0
}

trap cleanup SIGTERM

echo "Running with PID $$"
while true; do
    echo "Working..."
    # Sleep in the background and wait: bash runs traps immediately
    # while in `wait`, but defers them while a foreground child runs
    sleep 5 &
    wait $!
done

Both kill PID and kill -15 PID send SIGTERM—they’re equivalent. The numeric form is useful when you need to be explicit or work across different Unix variants.

Understanding SIGKILL (Signal 9)

SIGKILL is the nuclear option. It cannot be caught, blocked, or ignored. The kernel terminates the process immediately without giving it a chance to clean up. This means:

  • No destructors run
  • Files may be left in inconsistent states
  • Database transactions may be incomplete
  • Temporary files aren’t removed
  • Network connections drop without proper shutdown sequences

Use SIGKILL only when SIGTERM fails. Here’s the typical escalation pattern:

# Try graceful shutdown first
kill $PID
sleep 5

# Check if still running
if kill -0 $PID 2>/dev/null; then
    echo "Process didn't respond to SIGTERM, using SIGKILL"
    kill -9 $PID
fi
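The same escalation pattern translates directly to Python. This is a minimal sketch (the `terminate` and `is_alive` helper names are illustrative, not library functions); the extra `waitpid` call reaps zombie children, which would otherwise still pass the existence check:

```python
import os
import signal
import time

def is_alive(pid):
    """Check whether pid still exists, reaping it first if it is our zombie child."""
    try:
        # Non-blocking reap: (0, 0) means our child is still running;
        # anything else means it already exited and has now been reaped.
        if os.waitpid(pid, os.WNOHANG) != (0, 0):
            return False
    except ChildProcessError:
        pass  # Not our child; fall through to the plain existence check
    try:
        os.kill(pid, 0)  # Signal 0: existence check, like `kill -0`
        return True
    except ProcessLookupError:
        return False

def terminate(pid, timeout=5.0):
    """SIGTERM first; escalate to SIGKILL if the process outlives the timeout.

    Returns True if the process exited gracefully, False if SIGKILL was needed.
    """
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not is_alive(pid):
            return True
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)
    return False
```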

You cannot catch SIGKILL, but here’s code demonstrating the attempt:

import signal
import time
import os

def sigkill_handler(signum, frame):
    # This will NEVER execute
    print("SIGKILL caught!")  # Impossible

# The kernel rejects this registration; Python raises OSError (EINVAL)
try:
    signal.signal(signal.SIGKILL, sigkill_handler)
except OSError as e:
    print(f"Cannot register SIGKILL handler: {e}")

print(f"PID: {os.getpid()}")
print(f"Try: kill -9 {os.getpid()}")

while True:
    time.sleep(1)

Note that SIGKILL cannot clean up zombie processes: a zombie is already dead and disappears only when its parent reaps it (or the parent itself exits). SIGKILL is for truly stuck processes, and overusing it is a code smell; if your processes regularly need SIGKILL, fix the underlying shutdown logic.

Understanding SIGHUP (Signal 1)

SIGHUP originally meant “hangup”—the signal sent when a terminal disconnects. Modern usage has evolved into two distinct patterns:

Pattern 1: Configuration Reload

Long-running daemons like Nginx, Apache, and PostgreSQL use SIGHUP to reload configuration without restarting:

import signal
import json
import os
import time

config = {}

def load_config():
    global config
    with open('/etc/myapp/config.json') as f:
        config = json.load(f)
    print(f"Configuration loaded: {config}")

def sighup_handler(signum, frame):
    print("SIGHUP received, reloading configuration...")
    load_config()

signal.signal(signal.SIGHUP, sighup_handler)

# Initial load
load_config()

print(f"Daemon running with PID: {os.getpid()}")
print(f"Send SIGHUP with: kill -HUP {os.getpid()}")

while True:
    # Use config in your application logic
    time.sleep(5)

Reload the configuration without downtime:

kill -HUP $(cat /var/run/myapp.pid)
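For that command to work, the daemon has to write its PID file at startup. A minimal sketch (the default path and the `atexit` cleanup are illustrative choices):

```python
import atexit
import os

def write_pid_file(path="/var/run/myapp.pid"):
    # Record our PID so operators can `kill -HUP $(cat <path>)`
    with open(path, "w") as f:
        f.write(str(os.getpid()))
    # Remove the file on clean exit so a stale PID is never signaled
    atexit.register(os.remove, path)
```

The default path requires root; unprivileged daemons typically write under a runtime directory they own.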

Pattern 2: Ignoring Terminal Hangup

Commands started over SSH normally die when the session disconnects, because they receive SIGHUP when the controlling terminal goes away. Use nohup to prevent this:

# Process survives SSH disconnection
nohup python long_running_script.py > output.log 2>&1 &

Or handle it explicitly in your daemon:

#!/bin/bash

# Ignore SIGHUP
trap '' SIGHUP

# Now runs independently of terminal
while true; do
    process_data
    sleep 60
done

Practical Signal Handling Patterns

Production applications should handle multiple signals. Here’s a complete Go example:

package main

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    
    // Signal handling
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT, syscall.SIGHUP)
    
    go func() {
        for sig := range sigChan {
            switch sig {
            case syscall.SIGHUP:
                fmt.Println("SIGHUP: Reloading configuration")
                reloadConfig()
            case syscall.SIGTERM, syscall.SIGINT:
                fmt.Println("Shutdown signal received")
                cancel()
                return
            }
        }
    }()
    
    // Main application loop
    fmt.Printf("Running with PID: %d\n", os.Getpid())
    
    ticker := time.NewTicker(2 * time.Second)
    defer ticker.Stop()
    
    for {
        select {
        case <-ctx.Done():
            fmt.Println("Shutting down gracefully...")
            cleanup()
            os.Exit(0)
        case <-ticker.C:
            fmt.Println("Working...")
        }
    }
}

func reloadConfig() {
    // Reload logic here
}

func cleanup() {
    // Cleanup logic here
    time.Sleep(1 * time.Second)
    fmt.Println("Cleanup complete")
}

Docker and PID 1 Problem

Containers have a critical gotcha: the process running as PID 1 has special signal handling responsibilities. The kernel never applies default signal actions to PID 1, so a SIGTERM from docker stop is silently dropped unless the process installs its own handler. Use an init system or handle signals explicitly:

# Bad: shell form doesn't forward signals
CMD python app.py

# Good: exec form, process receives signals
CMD ["python", "app.py"]

# Better: use tini or dumb-init
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["python", "app.py"]
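If adding an init binary isn't an option, the forwarding behavior tini provides can be approximated in a few lines of Python. A hedged sketch (`run_as_init` is an illustrative name, and it only waits on its direct child rather than reaping orphaned grandchildren the way a real init does):

```python
import signal
import subprocess

def run_as_init(cmd):
    """Run cmd as a child and forward termination/reload signals to it."""
    child = subprocess.Popen(cmd)

    def forward(signum, frame):
        child.send_signal(signum)  # Pass the signal through to the workload

    for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP):
        signal.signal(sig, forward)

    return child.wait()  # Exit with the child's status
```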

Systemd Integration

Configure how systemd sends signals to your service:

[Unit]
Description=My Application

[Service]
Type=simple
ExecStart=/usr/bin/myapp
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=30
Restart=on-failure

[Install]
WantedBy=multi-user.target

This configuration sends SIGTERM, waits 30 seconds, then escalates to SIGKILL if needed.

Common Pitfalls and Debugging

Signal Safety

Signal handlers interrupt normal execution. Only call async-signal-safe functions inside handlers. This is wrong:

// DANGEROUS: malloc is not async-signal-safe
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

void handler(int sig) {
    char *msg = malloc(100);  // Can deadlock if the signal interrupts malloc!
    sprintf(msg, "Signal %d", sig);
    printf("%s\n", msg);  // printf is not async-signal-safe either
    free(msg);
}

Set flags instead:

#include <signal.h>

volatile sig_atomic_t shutdown_flag = 0;

void handler(int sig) {
    shutdown_flag = 1;  // Safe: writes to sig_atomic_t are atomic
}

Debugging with strace

Observe signals in real-time:

strace -e trace=signal -p $PID

Output shows signal delivery:

--- SIGTERM {si_signo=SIGTERM, si_code=SI_USER, si_pid=1234} ---
rt_sigreturn({mask=[]})                 = 0

Race Conditions

Signals can arrive during critical sections. Block them when necessary:

#include <signal.h>

sigset_t set, oldset;
sigemptyset(&set);
sigaddset(&set, SIGTERM);

// Block SIGTERM during the critical section
sigprocmask(SIG_BLOCK, &set, &oldset);
critical_database_operation();
sigprocmask(SIG_SETMASK, &oldset, NULL);
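Python exposes the same mechanism as signal.pthread_sigmask, so the pattern translates directly. A sketch that defers SIGTERM for the duration of a critical section (the `critical_section` wrapper is an illustrative name):

```python
import signal

def critical_section(work):
    """Run work() with SIGTERM blocked; a SIGTERM that arrives meanwhile
    is held pending and delivered once the old mask is restored."""
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTERM})
    try:
        return work()
    finally:
        # Restoring the old mask delivers any SIGTERM that arrived meanwhile
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
```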

Conclusion

Signal handling separates amateur code from production-ready systems. Remember these principles:

  • Always try SIGTERM before SIGKILL. Give processes a chance to clean up.
  • Use SIGHUP for configuration reloads in long-running services to avoid downtime.
  • Handle signals properly in containers. PID 1 has special responsibilities.
  • Keep signal handlers simple. Set flags, don’t execute complex logic.
  • Test your shutdown paths. They’re executed less frequently than startup and often harbor bugs.

Production applications should handle at minimum SIGTERM (graceful shutdown) and SIGHUP (reload). Add SIGUSR1/SIGUSR2 for custom behaviors like toggling debug logging or dumping statistics. With proper signal handling, your applications will integrate smoothly with process managers, container orchestrators, and system administrators’ expectations.
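As a sketch of that last suggestion, toggling debug logging with SIGUSR1 takes only a handler and a level flip (the logger name is illustrative):

```python
import logging
import signal

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

def toggle_debug(signum, frame):
    # Flip between INFO and DEBUG on each SIGUSR1; no restart required
    if logger.level == logging.DEBUG:
        logger.setLevel(logging.INFO)
    else:
        logger.setLevel(logging.DEBUG)

signal.signal(signal.SIGUSR1, toggle_debug)
```

Operators can then run kill -USR1 PID to switch verbosity on a live process.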
