Linux Pipes and Redirection: stdin, stdout, stderr

Key Insights

  • Linux provides three standard streams (stdin, stdout, stderr) that every process inherits, identified by file descriptors 0, 1, and 2 respectively
  • Redirection operators (>, >>, <, 2>) and pipes (|) let you route data between commands, files, and processes to build powerful command chains
  • Redirection order matters: command > file 2>&1 sends both streams to the file, while command 2>&1 > file leaves stderr on the terminal

Understanding Standard Streams

Every process in Linux starts with three open file descriptors that form the foundation of command-line data flow. Standard input (stdin, fd 0) receives data into a program. Standard output (stdout, fd 1) sends normal program output. Standard error (stderr, fd 2) sends error messages and diagnostics.

These streams exist separately, which means programs can distinguish between regular output and error messages. Your terminal displays both by default, but you can route them independently—a powerful capability once you understand the mechanics.

Let’s see each stream in action:

# stdout: echo writes to standard output
echo "Hello, World!"
# Output appears in terminal: Hello, World!

# stderr: ls writes errors to standard error
ls /nonexistent_directory
# Error appears in terminal: ls: cannot access '/nonexistent_directory': No such file or directory

# stdin: cat reads from standard input when given no arguments
cat
# Type something and press Enter
# cat echoes it back
# Press Ctrl+D to exit

When you run cat without arguments, it waits for keyboard input. Each line you type goes to stdin, and cat writes it to stdout. This demonstrates the stream concept clearly—data flows in through one descriptor and out through another.
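
You can inspect these descriptors directly. On Linux, every process's open descriptors appear as symlinks under /proc (a quick sketch, assuming a Linux system with procfs mounted):

```shell
# List the file descriptors of the ls process itself
# /proc/self/fd holds one entry per open descriptor
ls -l /proc/self/fd
# Typically shows 0, 1, and 2 pointing at your terminal device, e.g. /dev/pts/0
```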

Redirecting Output

The > operator redirects stdout to a file, creating or overwriting it. The >> operator appends instead of overwriting. Both are fundamental to saving command output.

# Redirect stdout to a file (overwrites)
ls -la > directory_listing.txt

# Append stdout to a file
echo "New entry" >> log.txt

# This creates an empty file (or truncates existing)
> empty_file.txt

Redirecting stderr requires specifying its file descriptor explicitly:

# Redirect only stderr to a file
ls /nonexistent /tmp 2> errors.txt
# You'll see /tmp contents on screen, errors go to errors.txt

# Redirect stderr to a different file than stdout
command > output.txt 2> errors.txt

To redirect both streams to the same destination, you must understand that redirection happens left-to-right. First redirect stdout, then redirect stderr to wherever stdout is going:

# Correct: redirect stdout to file, then stderr to stdout's location
command > output.txt 2>&1

# Bash 4+ shorthand for the same thing
command &> output.txt


The 2>&1 syntax means “redirect file descriptor 2 (stderr) to where file descriptor 1 (stdout) currently points.” Order matters enormously here, as we’ll see in the pitfalls section.
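
To see this in action, here is a small sketch using sh -c to produce output on both streams (the filename both.txt is illustrative):

```shell
# Emit one line on stdout and one on stderr, capture both in a file
sh -c 'echo normal; echo oops >&2' > both.txt 2>&1

# Both lines ended up in the file
cat both.txt
```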

Input Redirection

While output redirection sends data to files, input redirection feeds file contents into commands as stdin:

# Feed a file's contents to sort
sort < unsorted_names.txt

# Count lines in a file
wc -l < data.txt

# This is different from: wc -l data.txt
# The redirect version only shows the count, not the filename

The here-document (<<) provides multi-line input inline, useful in scripts:

# Create a file with multiple lines
cat << EOF > config.txt
database=localhost
port=5432
user=admin
EOF

# Send multi-line input to a command
mysql -u root -p << SQL
CREATE DATABASE myapp;
USE myapp;
CREATE TABLE users (id INT PRIMARY KEY);
SQL
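
By default, the shell expands variables and command substitutions inside a here-document. Quoting the delimiter suppresses expansion, which matters when the body contains literal $ signs:

```shell
name="world"

# Unquoted delimiter: $name is expanded by the shell
cat << EOF
Expanded: $name
EOF

# Quoted delimiter: the body is passed through literally
cat << 'EOF'
Literal: $name
EOF
```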

The here-string (<<<) variant passes a single string as stdin:

# Pass a string directly as stdin
bc <<< "scale=2; 22/7"
# Output: 3.14

grep "error" <<< "This is an error message"

Pipes: Connecting Commands

Pipes connect the stdout of one command to the stdin of another, enabling command composition. This is where Linux’s “do one thing well” philosophy shines.

# List files, filter for .txt files
ls -l | grep "\.txt$"

# Process data through multiple filters
cat access.log | grep "ERROR" | sort | uniq -c | sort -rn

# Count running Python processes
ps aux | grep python | grep -v grep | wc -l

Pipes create a data flow pipeline. Each command processes its input and passes results to the next. The shell runs all piped commands simultaneously, with data streaming between them.
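
One consequence of this concurrency: a pipeline's exit status is, by default, that of the last command alone. Bash exposes the earlier statuses too (a sketch assuming Bash; PIPESTATUS and pipefail are Bash features):

```shell
# The pipeline's status is the last command's status
false | true
echo $?                    # prints: 0

# PIPESTATUS records every command's status (Bash only)
false | true
echo "${PIPESTATUS[@]}"    # prints: 1 0

# pipefail makes the pipeline fail if any command fails
set -o pipefail
false | true
echo $?                    # prints: 1
```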

You can combine pipes with redirection:

# Pipe through filters, save result
cat data.csv | cut -d',' -f1,3 | sort > processed.csv

# Pipe and redirect stderr separately
find / -name "*.conf" 2> /dev/null | grep nginx

Practical Use Cases

Real-world tasks often combine multiple redirection techniques. Here are patterns you’ll use regularly:

# Find and process files, suppress permission errors
find /var/log -name "*.log" -type f 2>/dev/null | xargs grep "ERROR" > errors_found.txt

# Display output AND save to file using tee
./deployment_script.sh 2>&1 | tee deployment.log

# Process multiple commands' output together
(echo "=== Disk Usage ==="; df -h; echo "=== Memory ==="; free -h) > system_report.txt

# Create a filtered, sorted list from multiple sources
cat file1.txt file2.txt | sort | uniq > merged_unique.txt

# Monitor a log file and filter in real-time
tail -f application.log | grep --line-buffered "ERROR" | tee errors.log

The tee command deserves special mention—it reads stdin and writes to both stdout and files simultaneously. This lets you see output while saving it.

# Save and display
command | tee output.txt

# Append instead of overwrite
command | tee -a output.txt

# Multiple output files
command | tee file1.txt file2.txt file3.txt

For complex pipelines, you can redirect within the pipeline:

# Each command can have its own redirections
command1 2> cmd1_errors.txt | command2 | command3 > cmd3_output.txt
# No subshells needed—redirections attach to individual pipeline stages

Common Pitfalls and Best Practices

The most frequent mistake involves redirect order. Redirections are processed left-to-right, and 2>&1 means “redirect stderr to wherever stdout currently points.”

# WRONG: stderr goes to terminal, not file
command 2>&1 > file.txt
# Here's what happens:
# 1. 2>&1 redirects stderr to current stdout (the terminal)
# 2. > file.txt redirects stdout to file
# Result: stdout in file, stderr still on terminal

# RIGHT: both go to file
command > file.txt 2>&1
# Here's what happens:
# 1. > file.txt redirects stdout to file
# 2. 2>&1 redirects stderr to where stdout now points (the file)
# Result: both stdout and stderr in file

Suppressing output completely uses /dev/null, a special file that discards everything:

# Suppress all output
command > /dev/null 2>&1

# Suppress only errors
command 2> /dev/null

# Suppress only normal output (errors still visible)
command > /dev/null

Understand that pipes only connect stdout by default. To pipe stderr, redirect it to stdout first:

# Only stdout goes through pipe
command 2> errors.txt | grep "pattern"

# Both streams through pipe
command 2>&1 | grep "pattern"
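
Bash 4+ also offers |& as shorthand for 2>&1 |:

```shell
# |& pipes both stdout and stderr to the next command (Bash 4+ shorthand)
# Equivalent to: somecmd 2>&1 | cat
sh -c 'echo oops >&2' |& cat
# prints: oops — the stderr line traveled through the pipe
```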

Be aware of buffering in pipelines. Programs may buffer output when writing to pipes rather than terminals, causing delays in seeing output:

# Force line-buffered output with grep
tail -f log.txt | grep --line-buffered "ERROR" | process.sh

# Python scripts should flush output
python -u script.py | other_command
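
When a program offers no flushing flag of its own, GNU coreutils' stdbuf can adjust its buffering from the outside (a sketch assuming GNU coreutils; stdbuf works on dynamically linked programs that use stdio):

```shell
# -oL forces line-buffered stdout; -o0 disables output buffering entirely
printf 'INFO ok\nERROR bad\n' | stdbuf -oL grep "ERROR"
# prints: ERROR bad
```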

When working with file descriptors directly, you can create custom redirections:

# Save stdout to fd 3, redirect stdout to file, restore stdout
exec 3>&1 > output.txt
echo "This goes to file"
exec 1>&3 3>&-
echo "This goes to terminal"

Finally, remember that redirection happens before command execution. This means > file.txt creates/truncates the file even if the command fails:

# BAD: truncates input.txt before cat can read it
cat input.txt > input.txt

# GOOD: use a temporary file
cat input.txt > temp.txt && mv temp.txt input.txt

Mastering these streams and redirection operators transforms how you work in Linux. You’ll build powerful one-liners, create robust scripts, and manipulate data flows with precision. Practice these patterns until they become second nature—they’re the foundation of effective command-line work.
