Kubernetes Network Policies: Pod Communication Rules
Key Insights
- Kubernetes allows all pod-to-pod communication by default—Network Policies enforce zero-trust security by explicitly defining allowed traffic patterns
- Network Policies are additive and namespace-scoped, requiring a CNI plugin like Calico or Cilium to function
- Start with default-deny policies and whitelist specific traffic flows to minimize attack surface in production clusters
By default, Kubernetes operates as a flat network where every pod can communicate with every other pod across all namespaces. While this simplifies development, it creates a significant security risk in production environments. A compromised pod can potentially access any service in your cluster, exfiltrate data, or pivot to attack other workloads.
Network Policies provide a declarative way to control traffic flow between pods, implementing a zero-trust security model where communication must be explicitly permitted. Think of them as firewall rules for your Kubernetes cluster—except they’re dynamic, label-based, and integrate natively with your application architecture.
Network Policy Fundamentals
Network Policies are Kubernetes resources that define rules for pod communication. They use label selectors to identify which pods the policy applies to and which traffic sources are allowed. Policies are namespaced and additive—multiple policies affecting the same pod combine to allow traffic that matches any of them.
Here’s the basic structure:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: backend
The podSelector determines which pods the policy governs; an empty podSelector ({}) selects every pod in the namespace. The policyTypes array specifies whether the policy controls incoming traffic, outgoing traffic, or both. If policyTypes is omitted, it defaults to Ingress, with Egress added only when the spec contains an egress section.
Critical requirement: Network Policies require a CNI (Container Network Interface) plugin that supports them, such as Calico, Cilium, Weave Net, or Antrea. The default Kubernetes networking doesn’t enforce these policies—they’ll be accepted but silently ignored.
The foundational security practice is implementing a default-deny policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
This policy blocks all ingress and egress traffic for every pod in the namespace. You then create additional policies to whitelist specific communication paths.
Ingress Rules: Controlling Incoming Traffic
Ingress rules define what traffic can reach your pods. You can specify sources using pod selectors, namespace selectors, or IP blocks, and optionally restrict by port and protocol.
Allow traffic from pods with specific labels:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-web
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
This policy allows pods labeled app=web, tier=frontend to connect to pods labeled app=api, tier=backend on TCP port 8080.
For cross-namespace communication, use namespaceSelector:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: production
spec:
  podSelector:
    matchLabels:
      metrics: enabled
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - protocol: TCP
      port: 9090
This allows any pod in the monitoring namespace to scrape metrics from pods in the production namespace. Note that the monitoring namespace must actually carry the name=monitoring label for the selector to match; on recent Kubernetes versions, every namespace is automatically labeled with kubernetes.io/metadata.name, which you can match instead of maintaining custom labels.
You can combine selectors for fine-grained control:
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        environment: production
    podSelector:
      matchLabels:
        role: api-gateway
This allows traffic only from pods labeled role=api-gateway within namespaces labeled environment=production. Note that when both selectors appear in the same array element, they create an AND condition.
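By contrast, listing the selectors as separate array elements creates an OR condition: traffic matching either selector is allowed. As a sketch using the same hypothetical labels, this variant admits all pods from production-labeled namespaces plus any pod labeled role=api-gateway in the policy's own namespace:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        environment: production
  - podSelector:
      matchLabels:
        role: api-gateway

The only difference is the extra dash before podSelector, which makes it a second list element rather than part of the first. This single character dramatically changes the policy's meaning, so review selector nesting carefully.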
Egress Rules: Controlling Outbound Traffic
Egress rules are equally important but often overlooked. They prevent compromised pods from exfiltrating data or communicating with command-and-control servers.
Restrict egress to specific services:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-egress-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 8080
DNS is critical: Pods need to resolve service names. Always allow DNS traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
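DNS falls back to TCP for responses that exceed the UDP size limit, so a more robust version of this egress rule (assuming the same kube-system labeling) permits both protocols:

egress:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53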
For external API access, use CIDR blocks:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-processor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
    ports:
    - protocol: TCP
      port: 443
This allows the payment processor to communicate with an external API at a specific IP range over HTTPS.
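The ipBlock field also supports an except list for carving exclusions out of a range. As a sketch with a hypothetical excluded host, this would allow the whole /24 except one address:

egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24
      except:
      - 203.0.113.7/32
  ports:
  - protocol: TCP
    port: 443

Keep in mind that ipBlock matches the IPs visible to the CNI plugin; traffic to pods is often source-NATed or rewritten, so ipBlock is best reserved for cluster-external destinations.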
Real-World Use Cases
Three-tier application architecture:
# Frontend can receive traffic from ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
---
# Backend can only receive from frontend, connect to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
---
# Database accepts connections only from backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
Multi-tenant namespace isolation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
This ensures tenant-a pods can only communicate with each other and DNS, preventing cross-tenant data access.
Testing and Troubleshooting
Verify policies are applied:
kubectl get networkpolicies -n production
kubectl describe networkpolicy frontend-policy -n production
Deploy a test pod for connectivity testing:
apiVersion: v1
kind: Pod
metadata:
  name: network-test
  namespace: production
  labels:
    app: test
spec:
  containers:
  - name: network-test
    image: nicolaka/netshoot
    command: ["sleep", "3600"]
Test connectivity from this pod:
kubectl exec -n production network-test -- curl -m 5 http://backend-service:8080
kubectl exec -n production network-test -- nc -zv database-service 5432
Common pitfalls:
- CNI plugin not installed: Policies are silently ignored without a supporting CNI
- Missing DNS egress rules: Pods can’t resolve service names
- Incorrect label selectors: Typos in labels mean policies don’t match intended pods
- Policy order confusion: Remember policies are additive—any matching policy allows traffic
Check pod labels to ensure policies target correctly:
kubectl get pods -n production --show-labels
Best Practices
Start with default-deny: Implement a blanket deny policy first, then whitelist required traffic. This prevents accidental exposure.
Use meaningful labels: Create a labeling strategy that supports network policies. Common labels include app, tier, role, and team.
Document your policies: Add annotations explaining the business purpose:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: production
  annotations:
    description: "Allows frontend to access API backend on port 8080"
    owner: "platform-team@company.com"
    last-reviewed: "2024-01-15"
spec:
  # ... policy spec
Test in non-production first: Network Policies can break applications if misconfigured. Validate in staging environments.
Monitor network traffic: Use tools like Cilium Hubble or Calico Enterprise to visualize actual traffic patterns and identify missing policies.
Separate policies by concern: Create individual policies for ingress and egress rather than combining everything into one massive policy. This improves maintainability.
Network Policies are essential for production Kubernetes security. They transform your cluster from a flat network into a segmented, zero-trust environment where every communication path is intentional and documented. While they require upfront planning and a compatible CNI plugin, the security benefits far outweigh the implementation effort. Start with default-deny, whitelist necessary traffic, and continuously refine policies as your application architecture evolves.