Docker Compose: Multi-Container Applications
Key Insights
• Docker Compose eliminates the complexity of managing multiple docker run commands by defining your entire application stack in a single YAML file, making local development environments reproducible across teams.
• Service discovery is automatic—containers communicate using service names as hostnames, and Docker Compose creates isolated networks by default, simplifying inter-service communication without hardcoded IPs.
• Use multiple compose files (docker-compose.yml for base configuration, docker-compose.override.yml for development, docker-compose.prod.yml for production) to maintain environment-specific settings without duplicating configuration.
Introduction to Multi-Container Applications
Modern applications are rarely monolithic. A typical web application requires a web server, database, cache layer, message queue, and perhaps a reverse proxy. During development, you need all these services running simultaneously. Managing each container with individual docker run commands quickly becomes unwieldy—you’re juggling port mappings, network configurations, environment variables, and startup sequences across multiple terminal windows.
Docker Compose solves this orchestration problem. It lets you define your entire application stack in a single docker-compose.yml file and manage everything with simple commands. One command starts your entire environment; another tears it down. This consistency is invaluable for local development, testing, and even small production deployments.
Docker Compose Basics
At its core, Docker Compose uses a YAML file to define services, networks, and volumes. Each service represents a container, and Compose handles the creation, networking, and lifecycle management.
Here’s a minimal example—a Flask application with a PostgreSQL database:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:password@db:5432/myapp
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
This simple configuration defines two services. The web service builds from the current directory’s Dockerfile, while db uses the official PostgreSQL image. Notice the depends_on directive—Compose starts the database before the web service.
The essential commands you’ll use daily:
- docker-compose up - Start all services (add -d for detached mode)
- docker-compose down - Stop and remove containers
- docker-compose ps - List running services
- docker-compose logs -f [service] - Stream logs from services
- docker-compose exec [service] [command] - Run commands inside containers
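Assuming the configuration above is saved as docker-compose.yml next to your Dockerfile, a typical development session might look like this (illustrative; requires Docker and the Compose CLI):

```shell
# Build images and start both services in the background
docker-compose up -d --build

# Check that web and db are running
docker-compose ps

# Follow the web service's logs
docker-compose logs -f web

# Open a psql shell inside the database container
docker-compose exec db psql -U postgres -d myapp

# Stop and remove containers and the default network (named volumes survive)
docker-compose down
```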
Service Configuration and Dependencies
Real applications need more sophisticated configuration. Let’s expand our example to include a Redis cache and demonstrate environment variables, volume mounts, and dependency management:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
      - static_files:/app/static
    environment:
      - DATABASE_URL=postgresql://postgres:${DB_PASSWORD}@db:5432/${DB_NAME}
      - REDIS_URL=redis://cache:6379/0
      - DEBUG=true
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    command: python manage.py runserver 0.0.0.0:8000
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
  static_files:
Key improvements here:
- Environment variables: the ${VAR} syntax references variables from a .env file
- Bind mounts: ./app:/app mounts local code for live reloading during development
- Named volumes: postgres_data persists database content across container restarts
- Health checks: the web service waits until the database is actually ready, not just started
- Custom commands: command overrides the default container command
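The ${DB_PASSWORD} and ${DB_NAME} references above would typically be supplied by a .env file in the same directory as docker-compose.yml, which Compose reads automatically. A sketch (values are illustrative):

```shell
# .env (read automatically by Docker Compose from the project directory)
DB_PASSWORD=change-me-in-production
DB_NAME=myapp
```

Keep .env out of version control; commit a .env.example with placeholder values instead.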
Networking and Communication
Docker Compose automatically creates a bridge network for your application. Services communicate using service names as DNS hostnames—no IP addresses needed. In our example, the web service connects to PostgreSQL at db:5432 and Redis at cache:6379.
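You can watch this DNS-based discovery in action by resolving a service name from inside another container (a sketch; assumes the web image includes Python, as it does here since the service runs a Python app):

```shell
# Resolve the "db" service name from inside the web container;
# prints the db container's address on the shared Compose network
docker-compose exec web python -c "import socket; print(socket.gethostbyname('db'))"
```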
For more complex applications, you might need custom networks to isolate tiers:
version: '3.8'
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    networks:
      - frontend_network
    environment:
      - API_URL=http://backend:4000
  backend:
    build: ./backend
    networks:
      - frontend_network
      - backend_network
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/app
  db:
    image: postgres:15-alpine
    networks:
      - backend_network
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app
    volumes:
      - db_data:/var/lib/postgresql/data

networks:
  frontend_network:
  backend_network:

volumes:
  db_data:
This configuration creates two networks. The frontend can reach the backend, the backend can reach the database, but the frontend cannot directly access the database—a basic security principle.
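You can verify the isolation with a quick name-lookup check (illustrative; assumes getent is available in the images):

```shell
# From the backend, the database resolves: both share backend_network
docker-compose exec backend getent hosts db

# From the frontend, the same lookup fails: db is not on frontend_network
docker-compose exec frontend getent hosts db
```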
Volumes and Data Persistence
Volumes are critical for stateful services. Docker offers two main types: named volumes (managed by Docker) and bind mounts (direct host filesystem mappings).
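The two forms are distinguished purely by syntax in a service's volumes list: a path on the left side means a bind mount, a bare name means a named volume, which must also be declared under the top-level volumes key. A minimal sketch (service and volume names are illustrative):

```yaml
services:
  app:
    volumes:
      - ./src:/app/src         # bind mount: host path into the container
      - app_data:/var/lib/app  # named volume: storage managed by Docker

volumes:
  app_data:
```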
Here’s a practical example with a database and backup service sharing a volume:
version: '3.8'
services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: secure_password
      POSTGRES_DB: production
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - postgres_backups:/backups
  backup:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data:ro
      - postgres_backups:/backups
      - ./backup-script.sh:/backup-script.sh
    entrypoint: /backup-script.sh
    depends_on:
      - db

volumes:
  postgres_data:
    driver: local
  postgres_backups:
    driver: local
The backup service mounts the database volume as read-only (:ro) and shares the backups volume for storing dumps. This pattern ensures data persists even if containers are destroyed.
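The backup-script.sh itself is not shown above; a minimal sketch of what it might contain, dumping the database over the network into the shared backups volume (scheduling and retention are left out):

```shell
#!/bin/sh
# Hypothetical backup-script.sh: dump the "production" database from the
# db service into the shared /backups volume, then exit.
set -e
export PGPASSWORD=secure_password  # matches POSTGRES_PASSWORD above

# Wait until the database accepts connections
until pg_isready -h db -U postgres; do sleep 2; done

# Write a timestamped SQL dump into the shared backups volume
pg_dump -h db -U postgres production > "/backups/production-$(date +%Y%m%d-%H%M%S).sql"
```

Note that pg_dump connects over the network, so the read-only mount of postgres_data is a safety measure rather than a requirement for the dump itself.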
Development vs. Production Configurations
Maintaining separate configurations for different environments is essential. Docker Compose supports multiple files that merge together:
docker-compose.yml (base configuration):
version: '3.8'
services:
  web:
    build: .
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/app
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app
docker-compose.override.yml (development overrides, loaded automatically):
version: '3.8'
services:
  web:
    volumes:
      - ./app:/app
    environment:
      - DEBUG=true
    command: python manage.py runserver 0.0.0.0:8000
  db:
    ports:
      - "5432:5432"
docker-compose.prod.yml (production overrides):
version: '3.8'
services:
  web:
    restart: always
    environment:
      - DEBUG=false
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
  db:
    restart: always
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
Use docker-compose -f docker-compose.yml -f docker-compose.prod.yml up for production. The override file automatically loads during development.
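Before deploying, it is worth previewing the merged result; docker-compose config prints the fully resolved configuration after all files and environment variables are combined:

```shell
# Show the effective production configuration (base + prod overrides)
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config

# Start in production mode, detached
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```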
Common Patterns and Best Practices
Here’s a production-ready full-stack application incorporating health checks, resource limits, and proper service organization:
version: '3.8'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - static_files:/usr/share/nginx/html/static
    depends_on:
      - frontend
      - backend
    networks:
      - frontend_network
    restart: unless-stopped
  frontend:
    build: ./frontend
    volumes:
      - static_files:/app/build
    networks:
      - frontend_network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
  backend:
    build: ./backend
    environment:
      - DATABASE_URL=postgresql://postgres:${DB_PASSWORD}@db:5432/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - frontend_network
      - backend_network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: app
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - backend_network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2G
  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    networks:
      - backend_network
    restart: unless-stopped
    command: redis-server --appendonly yes

networks:
  frontend_network:
  backend_network:

volumes:
  postgres_data:
  redis_data:
  static_files:
Critical best practices demonstrated:
- Health checks: Ensure services are actually ready before routing traffic
- Resource limits: Prevent runaway containers from consuming all system resources
- Restart policies: Automatically recover from failures
- Logging configuration: Prevent logs from filling disk space
- Network segmentation: Isolate services appropriately
- Named volumes: Persist critical data
Docker Compose transforms multi-container chaos into manageable infrastructure-as-code. Master these patterns, and you’ll have reproducible environments that work identically for every developer on your team.