Terraform State: Remote Backend Configuration

Key Insights

  • Remote backends solve critical problems with local state files: they enable team collaboration through state locking, provide encryption and versioning for security, and eliminate the risk of losing state files stored on individual machines.
  • S3 with DynamoDB locking is the most cost-effective remote backend for AWS users, costing pennies per month while providing enterprise-grade state management, versioning, and automatic locking mechanisms.
  • Always encrypt state files at rest and in transit—they contain sensitive data including resource IDs, IP addresses, and sometimes secrets that attackers could exploit to map your infrastructure.

Introduction to Terraform State

Terraform’s state file is the source of truth for your infrastructure. It maps your configuration code to real-world resources, tracks metadata, and enables Terraform to determine what changes need to be applied. Without state, Terraform cannot function.

Local state files—stored as terraform.tfstate in your project directory—work fine for solo developers experimenting with Terraform. But they become a liability the moment you work with a team or manage production infrastructure. Local state creates several critical problems:

Collaboration bottlenecks: Multiple team members cannot safely work on the same infrastructure. If two engineers run terraform apply simultaneously with different local state files, they’ll create conflicting changes and potentially corrupt infrastructure.

Security risks: State files contain sensitive information in plain text—database passwords, API keys, private IP addresses, and resource identifiers. Storing these locally means they’re scattered across developer laptops, unencrypted and unaudited.

No disaster recovery: Lose your laptop or accidentally delete the state file, and you’ve lost the mapping between your code and infrastructure. You’ll need to manually import hundreds of resources or rebuild everything from scratch.

Remote backends solve these problems by storing state in a centralized location with built-in locking, encryption, and versioning.

Choosing a Remote Backend

Terraform supports multiple remote backend types. Your choice depends on your cloud provider, team size, and budget.

AWS S3 + DynamoDB: Best for AWS-centric teams. S3 provides versioned storage with encryption, while DynamoDB handles state locking. Costs under $1/month for most teams.

Azure Blob Storage: Native choice for Azure users. Includes built-in state locking without additional services. Integrated with Azure AD for authentication.

Google Cloud Storage: Optimal for GCP environments. Provides automatic locking without external dependencies. Leverages GCP IAM for access control.

Terraform Cloud: HashiCorp’s managed solution. Includes remote execution, policy enforcement, and a web UI, with a free tier for small teams. Best for teams wanting managed state and execution without maintaining backend infrastructure.

Here’s how backend configurations compare:

# AWS S3
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

# Azure Blob Storage
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

# Google Cloud Storage
terraform {
  backend "gcs" {
    bucket = "my-terraform-state"
    prefix = "prod"
  }
}

# Terraform Cloud
terraform {
  backend "remote" {
    organization = "my-org"
    workspaces {
      name = "production"
    }
  }
}

For most teams on AWS, S3 with DynamoDB provides the best balance of cost, features, and reliability. That’s what we’ll configure next.

Configuring S3 Backend with DynamoDB Locking

Setting up a remote backend requires creating the storage infrastructure before configuring Terraform to use it. Here’s the complete setup for S3.

First, create an S3 bucket with versioning and encryption enabled:

resource "aws_s3_bucket" "terraform_state" {
  bucket = "mycompany-terraform-state"

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name        = "Terraform State"
    Environment = "production"
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Next, create a DynamoDB table for state locking. The table must have a partition key (hash key) named LockID of type string:

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name        = "Terraform State Locks"
    Environment = "production"
  }
}

After creating these resources, configure your Terraform backend. Add this to your root module’s configuration:

terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "production/infrastructure.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}

Migrate your existing state to the remote backend:

terraform init -migrate-state

Terraform will detect the backend change and prompt you to copy your local state to S3. Type “yes” to confirm. Your state is now centralized and locked.
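After the migration, it's worth verifying that the state actually landed in S3 and that Terraform reads it from the new backend. A quick check, assuming the bucket and key from the configuration above and a configured AWS CLI:

```shell
# Confirm the state object exists at the expected key
aws s3 ls s3://mycompany-terraform-state/production/

# Confirm Terraform can read state through the new backend
terraform state list

# The local state should now be empty; a terraform.tfstate.backup
# copy may remain from the migration and can be archived or deleted
ls -la terraform.tfstate*
```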

State Locking and Consistency

State locking prevents concurrent modifications that could corrupt your infrastructure. When you run terraform apply, Terraform acquires a lock on the state file. By default, any other operation that needs the lock fails immediately with an error rather than waiting, and can only proceed once the lock is released.
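The lock-wait behavior is tunable per command. For example, when a long-running apply is expected to be holding the lock:

```shell
# Wait up to 5 minutes for the lock instead of failing immediately
terraform apply -lock-timeout=5m

# Disable locking entirely -- almost never a good idea in a team setting
terraform plan -lock=false
```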

With S3 and DynamoDB, locking happens automatically. When Terraform starts an operation, it writes a lock record to DynamoDB:

{
  "LockID": "mycompany-terraform-state/production/infrastructure.tfstate-md5",
  "Info": "{\"ID\":\"abc123\",\"Operation\":\"OperationTypeApply\",\"Who\":\"user@example.com\",\"Version\":\"1.6.0\",\"Created\":\"2024-01-15T10:30:00Z\"}",
  "Digest": "..."
}

If another user tries to run Terraform while the lock exists, they’ll see:

Error: Error acquiring the state lock

Error message: ConditionalCheckFailedException: The conditional request failed
Lock Info:
  ID:        abc123
  Path:      mycompany-terraform-state/production/infrastructure.tfstate
  Operation: OperationTypeApply
  Who:       user@example.com
  Version:   1.6.0
  Created:   2024-01-15 10:30:00

Sometimes locks persist after crashes or network failures. Force-unlock only when you’re certain no other process is running:

terraform force-unlock abc123

Use this carefully. Force-unlocking while another operation runs can corrupt state.
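Before force-unlocking, you can inspect the lock record directly in DynamoDB to see who holds the lock and since when. A sketch using the AWS CLI and the table name from the setup above (the table only ever holds a handful of items, so a scan is fine):

```shell
# Show all current lock and digest records, including the Info
# field with the holder's identity, operation, and timestamp
aws dynamodb scan --table-name terraform-state-locks
```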

Security Best Practices

State files are treasure troves for attackers. They reveal your infrastructure topology, resource identifiers, and sometimes credentials. Secure them properly.

Encrypt everything: Enable encryption at rest for your backend storage and use HTTPS for all API calls. With S3, enforce encryption using bucket policies:

resource "aws_s3_bucket_policy" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "DenyUnencryptedObjectUploads"
        Effect = "Deny"
        Principal = "*"
        Action = "s3:PutObject"
        Resource = "${aws_s3_bucket.terraform_state.arn}/*"
        Condition = {
          StringNotEquals = {
            "s3:x-amz-server-side-encryption" = "AES256"
          }
        }
      }
    ]
  })
}

Restrict access with IAM: Grant minimum necessary permissions. Create a dedicated IAM role for Terraform execution:

resource "aws_iam_role" "terraform" {
  name = "terraform-execution"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::123456789012:user/terraform-ci"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "terraform_state_access" {
  name = "terraform-state-access"
  role = aws_iam_role.terraform.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:ListBucket",
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = [
          aws_s3_bucket.terraform_state.arn,
          "${aws_s3_bucket.terraform_state.arn}/*"
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:DeleteItem"
        ]
        Resource = aws_dynamodb_table.terraform_locks.arn
      }
    ]
  })
}

Handle sensitive values carefully: Mark sensitive outputs to prevent them from appearing in logs:

output "database_password" {
  value     = aws_db_instance.main.password
  sensitive = true
}

Remember: sensitive = true only hides values from console output. They’re still stored in state files. Never commit state files to version control.
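You can confirm this yourself by pulling the raw state and searching it for sensitive-looking attribute names (avoid doing this in a shared terminal or CI log). A sketch, assuming jq is installed:

```shell
# Pull the raw state from the backend and list attribute keys per
# resource instance; values marked sensitive in outputs still
# appear in this document in plain text
terraform state pull \
  | jq '.resources[].instances[].attributes | keys' \
  | grep -i -E 'password|secret|token'
```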

State Migration and Team Workflows

When migrating to remote backends, coordinate with your team to avoid conflicts. Choose a maintenance window, ensure no one has pending changes, and migrate together.

For CI/CD integration, configure your pipeline to use the remote backend. Here’s a GitHub Actions example:

name: Terraform Apply

on:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::123456789012:role/terraform-execution
          aws-region: us-east-1
      
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        
      - name: Terraform Init
        run: terraform init
        
      - name: Terraform Plan
        run: terraform plan -out=tfplan
        
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve tfplan

The backend block in your Terraform code handles state storage automatically; no additional state configuration is needed in the pipeline.

Troubleshooting Common Issues

State lock timeout: If operations hang waiting for a lock, investigate whether another process is actually running. Check DynamoDB for lock details, then force-unlock if necessary.

Backend initialization failures: Verify your AWS credentials have sufficient permissions and the S3 bucket and DynamoDB table exist in the correct region.
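A few quick checks narrow down an initialization failure before you start digging through IAM policies. A sketch, assuming the resource names from the setup above:

```shell
# Which identity are the credentials actually resolving to?
aws sts get-caller-identity

# Does the bucket exist, and can this identity reach it?
aws s3api head-bucket --bucket mycompany-terraform-state

# Does the lock table exist in the region the backend expects?
aws dynamodb describe-table \
  --table-name terraform-state-locks \
  --region us-east-1
```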

State corruption: If state becomes corrupted, restore from S3 versioning:

# List state file versions
aws s3api list-object-versions \
  --bucket mycompany-terraform-state \
  --prefix production/infrastructure.tfstate

# Download a previous version
aws s3api get-object \
  --bucket mycompany-terraform-state \
  --key production/infrastructure.tfstate \
  --version-id <version-id> \
  terraform.tfstate.backup

# Restore it
terraform state push terraform.tfstate.backup

Always maintain backups. Enable S3 versioning and consider cross-region replication for critical state files. Note that terraform state push refuses an upload that would regress the state serial or change the lineage; override with -force only when you are certain the pushed file is the correct one.

Remote backends transform Terraform from a solo tool into a collaborative platform. The initial setup takes an hour, but you’ll gain security, reliability, and team collaboration that make it indispensable for serious infrastructure management.
