Kubernetes Pods: Smallest Deployable Units

Key Insights

  • Pods are Kubernetes’ atomic deployment unit, wrapping one or more containers with shared networking, storage, and lifecycle—understanding pods is fundamental to everything else in Kubernetes
  • Containers within a pod share the same network namespace and can communicate via localhost, making pods ideal for tightly coupled application components that need to work as a cohesive unit
  • You should rarely create pods directly in production; instead use higher-level controllers like Deployments that provide scaling, self-healing, and rolling updates while managing pods for you

What is a Pod?

A pod is the smallest deployable unit in Kubernetes. While Docker and other container runtimes work with individual containers, Kubernetes adds a layer of abstraction by wrapping containers in pods. This isn’t arbitrary complexity—it’s a deliberate design decision that enables powerful deployment patterns.

Think of a pod as a logical host for your application. Just like a physical or virtual machine can run multiple processes that share resources, a pod can run multiple containers that share networking and storage. The key difference is that pods are ephemeral and designed to be replaced rather than repaired.

Most pods contain a single container, but multi-container pods are common when you have tightly coupled components that must run together, scale together, and share resources. The critical question is: do these containers need to be deployed as a single atomic unit?

Here’s the simplest possible pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.24
    ports:
    - containerPort: 80

This creates a single-container pod running nginx. In production, you’d rarely deploy a standalone pod like this—you’d use a Deployment—but this shows the fundamental structure.
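Assuming the manifest above is saved as nginx-pod.yaml (the filename is illustrative), you could apply and inspect it like this:

```shell
# Create the pod from the manifest
kubectl apply -f nginx-pod.yaml

# Watch it move from Pending to Running
kubectl get pod nginx-pod --watch
```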

Pod Anatomy and Lifecycle

A pod specification defines several key components:

Containers: One or more container definitions with their images, ports, environment variables, and volume mounts. Each container runs as an isolated process but shares the pod’s network and storage namespaces.

Volumes: Storage that can be shared between containers in the pod. These persist for the pod’s lifetime but are typically lost when the pod is deleted (unless using persistent volumes).

Networking: Each pod gets a unique IP address. All containers in the pod share this IP and the network namespace, meaning they can reach each other on localhost.

Pods move through distinct phases during their lifecycle:

  • Pending: The pod has been accepted but containers aren’t running yet (pulling images, scheduling)
  • Running: At least one container is executing
  • Succeeded: All containers terminated successfully (won’t restart)
  • Failed: All containers have terminated, and at least one terminated with an error
  • Unknown: Pod state cannot be determined (usually communication issues)
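You can read a pod’s current phase directly from its status; the pod name here assumes the earlier nginx example:

```shell
# Prints one of: Pending, Running, Succeeded, Failed, Unknown
kubectl get pod nginx-pod -o jsonpath='{.status.phase}'
```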

Here’s a multi-container pod demonstrating the sidecar pattern:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: application
    image: myapp:1.0
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: log-shipper
    image: fluent/fluent-bit:2.0
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: shared-logs
    emptyDir: {}

The application writes logs to a shared volume, and the log-shipper sidecar reads and forwards them to a centralized logging system. They’re deployed together, scaled together, and scheduled on the same node.

To inspect a pod’s status:

kubectl describe pod app-with-sidecar

This shows events, container states, resource usage, and why a pod might be failing. The Events section at the bottom is particularly valuable for troubleshooting.
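For multi-container pods, kubectl logs needs to know which container you mean:

```shell
# Logs from the sidecar only
kubectl logs app-with-sidecar -c log-shipper

# Follow the application container's logs
kubectl logs app-with-sidecar -c application -f
```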

Pod Networking and Communication

Pod networking is elegantly simple: all containers in a pod share the same network namespace. This has several implications:

  1. Single IP address: The entire pod gets one IP, not one per container
  2. Localhost communication: Containers can reach each other on 127.0.0.1
  3. Shared port space: Containers must listen on different ports to avoid conflicts
  4. Single network identity: From outside the pod, all containers share the same IP—traffic is distinguished only by port

This design makes multi-container pods feel like processes running on the same host. Here’s a practical example:

apiVersion: v1
kind: Pod
metadata:
  name: frontend-backend
spec:
  containers:
  - name: api
    image: api-server:1.0
    ports:
    - containerPort: 8080
  - name: nginx-proxy
    image: nginx:1.24
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-proxy-config

The nginx container can proxy to the API server using http://localhost:8080 because they share the network namespace. No service discovery needed, no DNS lookups—just localhost.
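You can verify the shared network namespace from inside the pod. This sketch assumes curl is available in the proxy container’s image; if it isn’t, substitute any HTTP client the image ships:

```shell
# Hit the API container from the nginx-proxy container via localhost
# (assumes curl exists in the nginx-proxy image)
kubectl exec frontend-backend -c nginx-proxy -- curl -s http://localhost:8080/
```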

The nginx ConfigMap might contain:

server {
    listen 80;
    location / {
        proxy_pass http://localhost:8080;
    }
}

This pattern is useful for adding SSL termination, authentication, or rate limiting without modifying your application container.

Common Pod Patterns

Multi-container pods enable several design patterns:

Sidecar Pattern: A helper container that extends the main container’s functionality. Common uses include log shipping, metrics collection, or configuration synchronization.

Ambassador Pattern: A proxy container that simplifies connectivity to external services. The main container connects to localhost while the ambassador handles the complexity of connecting to external databases or APIs.

Adapter Pattern: A container that transforms the main container’s output to match a standard format. Useful when integrating legacy applications with modern monitoring systems.

Here’s a fuller sidecar example that also uses an init container:

apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  initContainers:
  - name: setup
    image: busybox:1.36
    command: ['sh', '-c', 'echo "Initializing..." && sleep 5']
  containers:
  - name: web
    image: nginx:1.24
    ports:
    - containerPort: 80
    volumeMounts:
    - name: cache
      mountPath: /var/cache/nginx
  - name: metrics-exporter
    image: nginx/nginx-prometheus-exporter:0.11
    args:
    - '-nginx.scrape-uri=http://localhost:80/stub_status'
    ports:
    - containerPort: 9113
  volumes:
  - name: cache
    emptyDir: {}

The init container runs to completion before the main containers start. This is perfect for database migrations, configuration generation, or waiting for dependencies.
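A more realistic init container blocks until a dependency is available. This sketch assumes a Service named db exists in the same namespace:

```yaml
initContainers:
- name: wait-for-db
  image: busybox:1.36
  # Loop until the db Service's DNS name resolves
  command: ['sh', '-c', 'until nslookup db; do echo waiting for db; sleep 2; done']
```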

Pod Configuration Best Practices

Production pods need more than just container definitions. Here’s a comprehensive example:

apiVersion: v1
kind: Pod
metadata:
  name: production-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: app
    image: myapp:2.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: url

Resource requests and limits are critical. Requests ensure your pod gets scheduled on a node with sufficient resources. Limits prevent runaway containers from affecting other workloads. Set requests based on typical usage and limits at 1.5-2x requests.

Liveness probes tell Kubernetes when to restart a container that’s running but broken (deadlocked, stuck in an unrecoverable state). Readiness probes determine when a container is ready to accept traffic. A failing readiness probe removes the pod from service load balancers without restarting it.
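HTTP probes aren’t the only option: tcpSocket and exec probes cover services without an HTTP health endpoint. The port and command below are illustrative:

```yaml
# TCP probe: succeeds if the port accepts a connection
readinessProbe:
  tcpSocket:
    port: 5432

# Exec probe: succeeds if the command exits 0
livenessProbe:
  exec:
    command: ['sh', '-c', 'test -f /tmp/healthy']
```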

Security contexts enforce the principle of least privilege. Running as non-root, using read-only filesystems, and dropping capabilities reduce attack surface.
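The pod above runs as non-root; a container-level securityContext can go further. A sketch, not tuned for any particular application:

```yaml
containers:
- name: app
  image: myapp:2.0
  securityContext:
    readOnlyRootFilesystem: true     # no writes to the container filesystem
    allowPrivilegeEscalation: false  # block setuid-style escalation
    capabilities:
      drop: ["ALL"]                  # start with zero Linux capabilities
```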

Managing Pods in Practice

You should almost never create pods directly in production. Pods are mortal—when a node fails, the pods on it are gone. When you delete a pod, it’s gone. No self-healing, no scaling, no rolling updates.

Instead, use controllers that manage pods for you:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"

The Deployment creates and manages pods based on the template. It ensures three replicas are always running, handles rolling updates when you change the image, and replaces failed pods automatically.
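Rolling updates and rollbacks are then one-liners against the Deployment (the new image tag is illustrative):

```shell
# Roll out a new image version
kubectl set image deployment/nginx-deployment nginx=nginx:1.25

# Watch the rollout, and undo it if something breaks
kubectl rollout status deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
```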

For quick testing, use imperative commands:

kubectl run test-pod --image=nginx:1.24 --port=80

But for anything beyond experimentation, use declarative YAML manifests in version control. This gives you:

  • Reproducible deployments
  • Change history and rollback capability
  • Code review for infrastructure changes
  • Automated deployments via CI/CD
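The declarative loop looks like this in practice (the directory path is illustrative):

```shell
# Preview what would change against the live cluster
kubectl diff -f manifests/

# Apply the whole directory idempotently
kubectl apply -f manifests/
```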

Understanding pods is foundational because everything in Kubernetes builds on them. Deployments manage ReplicaSets, which manage pods. StatefulSets manage pods with stable identities. DaemonSets ensure pods run on every node. Jobs and CronJobs create pods for batch workloads.

Master pod concepts—networking, lifecycle, patterns, and configuration—and you’ll understand how all Kubernetes workload types function. Start simple with single-container pods, then graduate to multi-container patterns when you have truly coupled components that must be deployed as a unit.