Docker Networking: Bridge, Host, and Overlay
Key Insights
- Bridge networks provide container isolation and are ideal for single-host deployments, while custom bridges enable DNS-based service discovery between containers without exposing ports to the host
- Host networking eliminates network address translation overhead for maximum performance but sacrifices container isolation and port management—use it only when you need bare-metal network speeds
- Overlay networks enable seamless multi-host container communication in Docker Swarm with built-in service discovery and load balancing, making them essential for production distributed applications
Understanding Docker’s Network Architecture
Docker networking isn’t just about connecting containers to the internet. It’s the foundation that determines how your containers communicate with each other, with the host system, and with external services. The Container Network Model (CNM) provides the abstraction layer that makes this possible through pluggable network drivers.
When you install Docker, it creates three default networks automatically. Let’s examine them:
docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
g7h8i9j0k1l2   host      host      local
m3n4o5p6q7r8   none      null      local
Each driver serves a specific purpose. The bridge driver creates an isolated network on your host. The host driver removes network isolation entirely. The none driver disables networking. To understand what’s happening under the hood:
docker network inspect bridge
This reveals the subnet configuration, gateway, and connected containers. The default bridge network uses a 172.17.0.0/16 subnet, but you’ll rarely want to use it directly in production.
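When you only need one field from the inspect output, a Go-template filter saves you from scanning the full JSON; for example, to print just the subnet of the default bridge:

```shell
# Print only the subnet of the default bridge network,
# using docker inspect's built-in Go-template formatting
docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Typically prints: 172.17.0.0/16
```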
Bridge Networks: Isolation Without Complexity
Bridge networks are Docker’s default networking mode and the right choice for most single-host deployments. When you run a container without specifying a network, it connects to the default bridge. However, the default bridge has limitations—containers can only communicate using IP addresses, not container names.
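You can see the limitation for yourself with two throwaway containers (a sketch using alpine, where ping is built into busybox):

```shell
# Two containers on the default bridge
docker run -d --name c1 alpine sleep 300
docker run -d --name c2 alpine sleep 300

# Name resolution fails on the default bridge
docker exec c1 ping -c 1 c2   # fails with "bad address 'c2'"

# ...but the container's IP address still works
docker exec c1 ping -c 1 "$(docker inspect -f \
  '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c2)"
```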
Custom bridge networks solve this problem and provide better isolation:
# Create a custom bridge network
docker network create --driver bridge my-app-network
# Inspect the network configuration
docker network inspect my-app-network
Now launch containers on this network:
# Start a PostgreSQL database
docker run -d \
--name postgres-db \
--network my-app-network \
-e POSTGRES_PASSWORD=secret \
postgres:15
# Start an application container
docker run -d \
--name api-server \
--network my-app-network \
-p 8080:8080 \
your-api-image
The critical advantage here is DNS-based service discovery. Your API server can connect to PostgreSQL using the hostname postgres-db instead of tracking IP addresses:
# Inside your application code
DATABASE_URL = "postgresql://user:secret@postgres-db:5432/mydb"
Test connectivity between containers:
# Access the API container
docker exec -it api-server sh
# Ping the database by name
ping postgres-db
Custom bridge networks also provide network isolation. Containers on different bridge networks cannot communicate unless explicitly connected to the same network. This is crucial for multi-tenant environments or separating development and testing workloads on the same host.
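A container can also belong to more than one bridge at once: docker network connect attaches a running container to an additional network, which is how you selectively bridge two otherwise isolated networks. A sketch, reusing the api-server container from above:

```shell
# Create a second, isolated bridge network
docker network create --driver bridge my-other-network

# Attach the running api-server container to it as well;
# it can now resolve containers on both networks by name
docker network connect my-other-network api-server

# Detach again when the extra access is no longer needed
docker network disconnect my-other-network api-server
```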
Port mapping with bridge networks gives you precise control over exposure:
# Map container port 80 to host port 8080
docker run -d --name web --network my-app-network -p 8080:80 nginx
# Map to a specific host interface
docker run -d --name web --network my-app-network -p 127.0.0.1:8080:80 nginx
The second example only binds to localhost, preventing external access—useful for internal services.
Host Networks: When You Need Raw Performance
Host networking removes the network namespace isolation between container and host. The container shares the host’s network stack directly, which means no NAT, no bridge, and no port mapping.
# Run nginx with host networking
docker run -d --network host nginx
With host networking, nginx binds directly to port 80 on your host machine. There’s no -p flag because port mapping doesn’t exist in this mode—the container sees all host interfaces and ports.
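You can confirm the direct binding from the host itself (on Linux; Docker Desktop on macOS and Windows runs containers inside a VM, so host mode behaves differently there):

```shell
# nginx answers on the host's own port 80, no mapping involved
curl -sI http://localhost:80 | head -n 1

# The listener appears directly in the host's network namespace
sudo ss -ltnp | grep ':80 '
```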
This mode shines in specific scenarios:
High-throughput applications: Network-intensive workloads like databases, caches, or message queues benefit from eliminating NAT overhead. The performance difference can be substantial:
# Bridge network latency test
docker run --rm --network bridge nicolaka/netshoot \
ping -c 100 -i 0.01 8.8.8.8 | grep avg
# Host network latency test
docker run --rm --network host nicolaka/netshoot \
ping -c 100 -i 0.01 8.8.8.8 | grep avg
You’ll typically see 10-20% lower latency with host networking, and throughput improvements can exceed 30% for certain workloads—but these numbers vary widely with kernel, hardware, and traffic pattern, so benchmark your own stack before committing.
Network monitoring tools: Applications that need to see all network traffic (like Wireshark, tcpdump, or network performance monitors) require host network access.
Legacy applications: Some applications expect to bind to specific interfaces or have hard-coded network assumptions that don’t work well with Docker’s network abstraction.
The trade-offs are significant:
- No port isolation: Two host-network containers cannot bind the same port, because they share the host’s ports directly
- Security concerns: Container processes can see all host network traffic
- Portability issues: Host networking is Linux-native; on macOS and Windows, Docker Desktop runs containers inside a VM, so host mode does not expose the machine’s real interfaces
- No service discovery: You lose Docker’s DNS-based service resolution
Use host networking sparingly and only when performance requirements justify the loss of isolation.
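The port-conflict limitation is easy to demonstrate: two host-network containers that try to bind the same port cannot coexist. A minimal sketch:

```shell
# First nginx binds host port 80 directly
docker run -d --name web1 --network host nginx

# The second container is created, but its nginx process
# exits immediately because port 80 is already taken
docker run -d --name web2 --network host nginx
docker logs web2   # bind() to 0.0.0.0:80 failed: Address already in use
```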
Overlay Networks: Distributed Container Communication
Overlay networks enable containers running on different Docker hosts to communicate securely as if they were on the same machine. This is the foundation for Docker Swarm and the key to building distributed applications.
First, initialize a Swarm:
# On the manager node
docker swarm init --advertise-addr <MANAGER-IP>
# The output provides a join token for worker nodes
# On worker nodes, run:
# docker swarm join --token <TOKEN> <MANAGER-IP>:2377
Create an overlay network:
docker network create \
--driver overlay \
--attachable \
my-overlay-network
The --attachable flag allows standalone containers to connect, not just services.
Deploy a distributed application:
# Create a web service with 3 replicas
docker service create \
--name web \
--network my-overlay-network \
--replicas 3 \
-p 80:80 \
nginx
# Create a backend service
docker service create \
--name api \
--network my-overlay-network \
--replicas 2 \
your-api-image
Docker Swarm automatically load-balances traffic across replicas and provides service discovery. The web service can reach the API using the hostname api, and Swarm routes requests to healthy replicas.
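Because the network was created with --attachable, you can verify discovery and load balancing from a throwaway container on any Swarm node; a sketch using nicolaka/netshoot, the same diagnostics image as earlier:

```shell
# Join a standalone container to the overlay
docker run --rm -it --network my-overlay-network nicolaka/netshoot

# Inside the container:
#   nslookup api     # resolves to the service's virtual IP (VIP)
#   curl http://web  # Swarm balances requests across the replicas
```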
Overlay networks support encryption for secure multi-host communication:
docker network create \
--driver overlay \
--opt encrypted \
secure-overlay
This encrypts all traffic between containers on the network using IPsec tunnels. There is some CPU and throughput cost, though it is modest for most workloads.
Inspect overlay network details:
docker network inspect my-overlay-network
You’ll see the subnet configuration, connected services, and peer information across the Swarm cluster.
Choosing the Right Network Driver
Here’s a practical decision framework:
Use bridge networks when:
- Running containers on a single host
- You need container isolation and service discovery
- Port mapping provides sufficient access control
- You’re developing locally or running simple production workloads
Use host networks when:
- Performance is critical and you’ve measured the overhead
- Running network monitoring or diagnostic tools
- Dealing with legacy applications with specific network requirements
- You can accept the security and isolation trade-offs
Use overlay networks when:
- Deploying across multiple hosts
- Building microservices that need service discovery and load balancing
- Running Docker Swarm (Kubernetes has its own networking model built on CNI plugins rather than Docker overlay networks)
- You need encrypted container-to-container communication
Here’s a Docker Compose example combining multiple network types:
version: '3.8'

services:
  # Public-facing service on bridge network
  web:
    image: nginx
    networks:
      - frontend
    ports:
      - "80:80"

  # Application on custom bridge
  app:
    image: your-app
    networks:
      - frontend
      - backend

  # Database isolated on backend network
  db:
    image: postgres
    networks:
      - backend
    environment:
      POSTGRES_PASSWORD: secret

  # High-performance cache with host networking
  cache:
    image: redis
    network_mode: "host"

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external access
The internal: true flag creates a bridge network with no route to the external world—perfect for database networks.
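You can check that isolation directly. Compose prefixes network names with the project name, so assuming the project directory above is called myapp, a container on the backend network should fail to reach the internet while still resolving the database by service name:

```shell
# Adjust "myapp" to match your Compose project name
docker run --rm --network myapp_backend alpine \
  ping -c 1 -W 2 8.8.8.8
# Fails: the internal network has no route to the outside world

# The db service is still reachable by name on the same network
docker run --rm --network myapp_backend alpine \
  ping -c 1 db
```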
Common pitfalls to avoid:
- Don’t use the default bridge network in production; always create custom bridges for DNS resolution
- Don’t expose databases or internal services on host networking
- Don’t forget to clean up unused networks with docker network prune
- Don’t assume overlay networks work the same as bridge networks for standalone containers
Docker networking is powerful but requires understanding the trade-offs. Bridge networks handle most use cases efficiently. Host networking is a performance optimization with security implications. Overlay networks enable distributed systems but add complexity. Choose based on your specific requirements, not assumptions about performance or complexity.