Secrets Management: HashiCorp Vault

Key Insights

  • Vault provides centralized secrets management with encryption at rest, dynamic secret generation, and fine-grained access control through policies—eliminating the need for hardcoded credentials scattered across your infrastructure.
  • Dynamic secrets for databases and cloud providers automatically rotate with configurable TTLs, dramatically reducing the blast radius of compromised credentials compared to static, long-lived secrets.
  • Production Vault deployments require careful planning around high availability, unsealing strategies, and authentication methods like AppRole or Kubernetes service accounts for automated workloads.

The Secrets Management Problem

Every application needs secrets: database passwords, API keys, TLS certificates, encryption keys. The traditional approach of hardcoding credentials or storing them in environment variables creates significant security and operational challenges. Hardcoded secrets end up in version control. Environment variables live in plaintext on disk and in process memory. When credentials need rotation, you’re hunting through configuration files across dozens of services.

HashiCorp Vault solves this by providing a centralized secrets management system with encryption at rest, fine-grained access control, detailed audit logs, and the ability to generate dynamic, short-lived credentials. Instead of managing static secrets, Vault becomes your single source of truth for all sensitive data.

Vault Core Concepts

Vault’s architecture centers around several key concepts. Secrets engines are components that store, generate, or encrypt data. The key/value engine stores static secrets, while dynamic engines like the database engine generate credentials on-demand.

Authentication methods verify the identity of users and applications. Vault supports numerous methods including tokens, AppRole for machines, Kubernetes service accounts, and cloud provider IAM.

Policies define what authenticated entities can access. Written in HCL, policies grant or deny capabilities on specific paths within Vault.

The seal/unseal process is Vault’s security mechanism. When sealed, Vault encrypts its data and won’t respond to requests. Unsealing requires a threshold of key shares (using Shamir’s Secret Sharing), ensuring no single person can access Vault’s encryption keys.
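To make the threshold idea concrete, here is a toy Python sketch of Shamir's Secret Sharing, the scheme behind Vault's unseal key shares. The field, share count, and threshold are illustrative only; Vault's real implementation differs in detail.

```python
# Toy k-of-n Shamir's Secret Sharing over a prime field (illustrative only).
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for this demo

def split(secret: int, shares: int = 5, threshold: int = 3) -> list[tuple[int, int]]:
    """Split `secret` into `shares` points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        points.append((x, y))
    return points

def recover(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, shares=5, threshold=3)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[1:4]) == 123456789
```

This is why losing one unseal key share is survivable, but no single operator can reconstruct the root key alone.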

Here’s a basic Vault server configuration:

storage "raft" {
  path    = "/vault/data"
  node_id = "node1"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1  # acceptable for local experiments only; always enable TLS in production
}

api_addr = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui = true

Getting Started with Vault

For local development, Docker Compose provides the fastest path to a running Vault instance:

version: '3.8'
services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    ports:
      - "8200:8200"
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: 'dev-token'
      VAULT_DEV_LISTEN_ADDRESS: '0.0.0.0:8200'
    cap_add:
      - IPC_LOCK
    command: server -dev

This runs Vault in dev mode—convenient for testing, but never suitable for production. Dev mode stores everything in memory, automatically unseals, and uses a predictable root token.

For production initialization:

# Initialize Vault (returns unseal keys and root token)
vault operator init

# Unseal Vault (requires threshold number of key shares)
vault operator unseal <key1>
vault operator unseal <key2>
vault operator unseal <key3>

# Authenticate
vault login <root-token>

# Enable secrets engine
vault secrets enable -path=secret kv-v2

# Write a secret
vault kv put secret/myapp/config \
  db_password=super-secret \
  api_key=abc123

# Read a secret
vault kv get secret/myapp/config
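One quirk worth knowing when you move from the CLI to the HTTP API (or hvac): the KV v2 engine inserts a data/ segment between the mount and the secret path, and nests the payload under data.data in responses. A small helper makes the mapping explicit (the mount name secret matches the example above):

```python
# KV v2 path mapping: CLI 'secret/myapp/config' -> API 'secret/data/myapp/config'
def kv2_api_path(mount: str, path: str) -> str:
    """Build the raw HTTP API path for a KV v2 secret."""
    return f"{mount.strip('/')}/data/{path.strip('/')}"

# `vault kv get secret/myapp/config` issues roughly:
#   GET http://127.0.0.1:8200/v1/secret/data/myapp/config
assert kv2_api_path("secret", "myapp/config") == "secret/data/myapp/config"
```

Forgetting the data/ segment is one of the most common causes of 403/404 errors when writing policies or API clients against KV v2.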

Dynamic Secrets and Database Credentials

Static database credentials create security risks. If compromised, they remain valid until manually rotated. Vault’s database secrets engine generates dynamic credentials with configurable time-to-live (TTL), automatically revoking them when they expire.

Configure the database engine for PostgreSQL:

# Enable database secrets engine
vault secrets enable database

# Configure PostgreSQL connection
vault write database/config/postgresql \
  plugin_name=postgresql-database-plugin \
  allowed_roles="readonly" \
  connection_url="postgresql://{{username}}:{{password}}@postgres:5432/mydb?sslmode=disable" \
  username="vault" \
  password="vault-password"

# Create a role that generates readonly credentials
vault write database/roles/readonly \
  db_name=postgresql \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

Application code retrieves credentials dynamically. Here’s a Python example:

import hvac
import psycopg2
from contextlib import contextmanager

@contextmanager
def get_db_connection():
    # Authenticate to Vault
    client = hvac.Client(url='http://vault:8200')
    client.auth.approle.login(
        role_id='your-role-id',
        secret_id='your-secret-id'
    )
    
    # Get dynamic database credentials
    response = client.read('database/creds/readonly')
    username = response['data']['username']
    password = response['data']['password']
    
    # Connect to database with dynamic credentials
    conn = psycopg2.connect(
        host='postgres',
        database='mydb',
        user=username,
        password=password
    )
    
    try:
        yield conn
    finally:
        conn.close()
        # Credentials automatically expire based on TTL

# Usage
with get_db_connection() as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users LIMIT 10")
    results = cursor.fetchall()

Each time your application requests credentials, Vault generates a new database user with a limited lifetime. When the TTL expires, Vault automatically revokes the credentials by dropping the database user.
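In practice you don't want to hit Vault on every request; a common pattern is a small cache that re-fetches credentials once the lease approaches expiry. A sketch, where fetch_creds stands in for the client.read('database/creds/readonly') call shown above (names and the 20% refresh margin are illustrative):

```python
# TTL-aware credential cache: re-fetch from Vault before the lease expires.
import time

class CredentialCache:
    def __init__(self, fetch_creds, refresh_margin: float = 0.2):
        self._fetch = fetch_creds          # returns (creds_dict, ttl_seconds)
        self._margin = refresh_margin      # refresh after 80% of the TTL
        self._creds = None
        self._expires_at = 0.0

    def get(self) -> dict:
        if self._creds is None or time.monotonic() >= self._expires_at:
            creds, ttl = self._fetch()
            self._creds = creds
            # Leave a safety margin so we never hand out nearly-expired creds.
            self._expires_at = time.monotonic() + ttl * (1 - self._margin)
        return self._creds

# Demo with a stub fetcher standing in for the Vault call
calls = []
def stub_fetch():
    calls.append(1)
    return {"username": f"v-user-{len(calls)}", "password": "x"}, 3600

cache = CredentialCache(stub_fetch)
cache.get()
cache.get()
assert len(calls) == 1   # second call served from cache
```

The margin matters: a connection opened with credentials that expire mid-query will fail, so refresh well before the TTL runs out.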

Application Integration Patterns

For automated workloads, AppRole is the recommended authentication method. It uses a role ID (non-sensitive) and secret ID (sensitive) pair:

# Enable AppRole authentication
vault auth enable approle

# Create a policy
vault policy write myapp-policy - <<EOF
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
path "database/creds/readonly" {
  capabilities = ["read"]
}
EOF

# Create an AppRole
vault write auth/approle/role/myapp \
  token_policies="myapp-policy" \
  token_ttl=1h \
  token_max_ttl=4h

# Get role ID
vault read auth/approle/role/myapp/role-id

# Generate secret ID
vault write -f auth/approle/role/myapp/secret-id

For Kubernetes workloads, use the Kubernetes auth method with service accounts:

# Enable Kubernetes auth
vault auth enable kubernetes

# Configure Kubernetes auth
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc:443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token

# Create a role for a specific service account
vault write auth/kubernetes/role/myapp \
  bound_service_account_names=myapp \
  bound_service_account_namespaces=default \
  policies=myapp-policy \
  ttl=1h

Vault Agent simplifies integration by handling authentication and secret retrieval automatically:

pid_file = "./pidfile"

vault {
  address = "http://vault:8200"
}

auto_auth {
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "myapp"
    }
  }

  sink "file" {
    config = {
      path = "/vault/token"
    }
  }
}

template {
  source      = "/vault/configs/config.tmpl"
  destination = "/vault/configs/config.json"
}

The agent authenticates, retrieves secrets, renders templates, and automatically renews tokens—your application just reads rendered configuration files.
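A minimal config.tmpl for this setup might look like the following; the secret path and field names carry over from the earlier KV examples, and KV v2 responses nest under .Data.data:

```
{{ with secret "secret/data/myapp/config" -}}
{
  "db_password": "{{ .Data.data.db_password }}",
  "api_key": "{{ .Data.data.api_key }}"
}
{{- end }}
```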

Production Considerations

Production Vault requires high availability. Use the integrated Raft storage backend for simplicity:

storage "raft" {
  path    = "/vault/data"
  node_id = "node1"
  
  retry_join {
    leader_api_addr = "http://vault-0:8200"
  }
  retry_join {
    leader_api_addr = "http://vault-1:8200"
  }
  retry_join {
    leader_api_addr = "http://vault-2:8200"
  }
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_cert_file = "/vault/tls/tls.crt"
  tls_key_file  = "/vault/tls/tls.key"
}

api_addr = "https://vault-0:8200"
cluster_addr = "https://vault-0:8201"
ui = true

telemetry {
  prometheus_retention_time = "30s"
  disable_hostname = true
}

Enable audit logging to track all Vault operations:

vault audit enable file file_path=/vault/logs/audit.log

For unsealing in production, consider auto-unseal using cloud KMS services (AWS KMS, Azure Key Vault, GCP Cloud KMS) to eliminate manual unseal procedures during restarts.
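Auto-unseal is a single stanza in the server configuration. For AWS KMS, for example (region and key alias here are placeholders):

```hcl
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
}
```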

Implement least-privilege policies. Grant only necessary capabilities on specific paths. Regularly rotate root tokens and avoid using them for daily operations.
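Note that Vault policies support an explicit deny, which takes precedence over any broader grant on an overlapping path; the path below is illustrative:

```hcl
# Explicit deny wins over any other policy granting access to this path
path "secret/data/myapp/prod/*" {
  capabilities = ["deny"]
}
```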

Conclusion

HashiCorp Vault transforms secrets management from a security liability into a controlled, auditable process. Dynamic secrets reduce credential lifetime from months to hours. Centralized policies replace scattered configuration files. Audit logs provide visibility into every secret access.

Start with Vault for your most critical secrets—database credentials and API keys. Use dynamic secrets engines where possible. Integrate using AppRole or cloud-native authentication methods. As you gain confidence, expand to certificate management, encryption as a service, and cloud credential generation.

Vault isn’t the only option—cloud provider services like AWS Secrets Manager or Azure Key Vault work well for cloud-native applications. But for multi-cloud or hybrid environments, Vault’s platform-agnostic approach and extensive feature set make it the industry standard for secrets management.
