How to Use Redis in Go Applications

Key Insights

  • Redis excels at caching, session storage, and rate limiting in Go applications, with the go-redis library providing idiomatic interfaces for all Redis data structures
  • Always use connection pooling, context timeouts, and pipeline operations to maximize performance and reliability in production environments
  • Implement proper error handling and fallback strategies—Redis should enhance your application, not become a single point of failure

Introduction to Redis and Go Integration

Redis is an in-memory data structure store that serves as a database, cache, and message broker. Its sub-millisecond latency and rich data types make it an ideal companion for Go applications that demand high performance and low overhead.

Go’s concurrency model and Redis’s speed create a powerful combination for building scalable systems. Common use cases include caching expensive database queries, managing user sessions across distributed services, implementing rate limiters to protect APIs, and building real-time features with pub/sub messaging.

Unlike traditional databases, Redis keeps data in memory, making reads and writes exceptionally fast. This speed comes with tradeoffs—you need to manage memory carefully and understand persistence options—but for the right workloads, Redis can dramatically improve application performance.

Setting Up Redis with Go

Start by running Redis locally. The easiest approach uses Docker Compose:

version: '3.8'
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes

volumes:
  redis_data:

This configuration enables AOF (Append Only File) persistence, ensuring data survives restarts. Run it with docker-compose up -d.

Next, add the go-redis client to your project:

go get github.com/redis/go-redis/v9

Initialize the client with proper connection pooling:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func NewRedisClient() *redis.Client {
    client := redis.NewClient(&redis.Options{
        Addr:         "localhost:6379",
        Password:     "",
        DB:           0,
        PoolSize:     10,
        MinIdleConns: 5,
        MaxRetries:   3,
        DialTimeout:  5 * time.Second,
        ReadTimeout:  3 * time.Second,
        WriteTimeout: 3 * time.Second,
    })

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    if err := client.Ping(ctx).Err(); err != nil {
        panic(fmt.Sprintf("Failed to connect to Redis: %v", err))
    }

    return client
}

Connection pooling is critical for production applications. The PoolSize determines maximum concurrent connections, while MinIdleConns keeps connections warm for immediate use.

Core Redis Operations in Go

Redis supports multiple data structures. Here’s how to use the most common ones in Go.

String operations handle simple key-value pairs with optional expiration:

func CacheUserProfile(ctx context.Context, rdb *redis.Client, userID string, profile string) error {
    key := fmt.Sprintf("user:profile:%s", userID)
    return rdb.Set(ctx, key, profile, 1*time.Hour).Err()
}

func GetUserProfile(ctx context.Context, rdb *redis.Client, userID string) (string, error) {
    key := fmt.Sprintf("user:profile:%s", userID)
    return rdb.Get(ctx, key).Result()
}
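The string operations above store plain strings. For structured values, one common convention (an assumption here, not something Redis mandates) is to JSON-encode before SET and decode after GET. A minimal sketch with a hypothetical Profile type:

```go
package main

import "encoding/json"

// Profile is a hypothetical struct cached as JSON under a key such as
// user:profile:<id>; the Redis string value holds the encoded bytes.
type Profile struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// encodeProfile serializes a Profile for storage with SET.
func encodeProfile(p Profile) (string, error) {
	b, err := json.Marshal(p)
	return string(b), err
}

// decodeProfile reverses encodeProfile after a GET.
func decodeProfile(s string) (Profile, error) {
	var p Profile
	err := json.Unmarshal([]byte(s), &p)
	return p, err
}
```

JSON keeps values human-readable in redis-cli; a binary encoding would be more compact if that matters for your workload.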

Hashes store structured data efficiently, perfect for sessions:

func SaveSession(ctx context.Context, rdb *redis.Client, sessionID string, data map[string]interface{}) error {
    key := fmt.Sprintf("session:%s", sessionID)
    pipe := rdb.Pipeline()
    pipe.HSet(ctx, key, data)
    pipe.Expire(ctx, key, 24*time.Hour)
    _, err := pipe.Exec(ctx)
    return err
}

func GetSessionField(ctx context.Context, rdb *redis.Client, sessionID, field string) (string, error) {
    key := fmt.Sprintf("session:%s", sessionID)
    return rdb.HGet(ctx, key, field).Result()
}

Lists implement queues for background job processing:

func EnqueueJob(ctx context.Context, rdb *redis.Client, jobData string) error {
    return rdb.RPush(ctx, "jobs:queue", jobData).Err()
}

func DequeueJob(ctx context.Context, rdb *redis.Client) (string, error) {
    // BLPop blocks for up to 5 seconds; on timeout it returns redis.Nil,
    // which callers should treat as "queue empty" rather than a failure.
    result, err := rdb.BLPop(ctx, 5*time.Second, "jobs:queue").Result()
    if err != nil {
        return "", err
    }
    return result[1], nil // result[0] is the key name
}

Sorted sets power leaderboards and ranking systems:

func UpdateScore(ctx context.Context, rdb *redis.Client, playerID string, score float64) error {
    return rdb.ZAdd(ctx, "leaderboard", redis.Z{
        Score:  score,
        Member: playerID,
    }).Err()
}

func GetTopPlayers(ctx context.Context, rdb *redis.Client, count int64) ([]string, error) {
    return rdb.ZRevRange(ctx, "leaderboard", 0, count-1).Result()
}

Implementing Common Patterns

The cache-aside pattern reduces database load by checking Redis before querying your database:

type User struct {
    ID    string
    Name  string
    Email string
}

func GetUser(ctx context.Context, rdb *redis.Client, db *sql.DB, userID string) (*User, error) {
    cacheKey := fmt.Sprintf("user:%s", userID)
    
    // Try cache first
    cached, err := rdb.Get(ctx, cacheKey).Result()
    if err == nil {
        var user User
        if err := json.Unmarshal([]byte(cached), &user); err == nil {
            return &user, nil
        }
    }
    
    // Cache miss - query database
    var user User
    err = db.QueryRowContext(ctx, "SELECT id, name, email FROM users WHERE id = ?", userID).
        Scan(&user.ID, &user.Name, &user.Email)
    if err != nil {
        return nil, err
    }
    
    // Store in cache
    userData, _ := json.Marshal(user)
    rdb.Set(ctx, cacheKey, userData, 15*time.Minute)
    
    return &user, nil
}

Distributed locks prevent race conditions in distributed systems:

func AcquireLock(ctx context.Context, rdb *redis.Client, resource string, ttl time.Duration) (bool, error) {
    lockKey := fmt.Sprintf("lock:%s", resource)
    return rdb.SetNX(ctx, lockKey, "locked", ttl).Result()
}

func ReleaseLock(ctx context.Context, rdb *redis.Client, resource string) error {
    lockKey := fmt.Sprintf("lock:%s", resource)
    return rdb.Del(ctx, lockKey).Err()
}
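One caveat with the release above: Del removes the key unconditionally, so if your TTL expired and another process acquired the lock in the meantime, you would delete that process's lock. A common mitigation (a general Redis pattern, not from the original code) is to store a unique per-holder token as the SetNX value and release via a short Lua script that deletes the key only when the token still matches, e.g. rdb.Eval(ctx, releaseScript, []string{lockKey}, token). The token side is plain Go:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
)

// releaseScript deletes the lock only if it still holds our token, so we
// never remove a lock another process has since acquired. Run it with
// rdb.Eval(ctx, releaseScript, []string{lockKey}, token).
const releaseScript = `
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
end
return 0`

// newLockToken returns a random value identifying this holder of the lock;
// pass it as the value to SetNX instead of the constant "locked".
func newLockToken() (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}
```

The check and delete must happen in one script because doing GET then DEL as two commands reintroduces the same race.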

Rate limiting protects your APIs from abuse:

func CheckRateLimit(ctx context.Context, rdb *redis.Client, clientID string, limit int64) (bool, error) {
    key := fmt.Sprintf("ratelimit:%s:%d", clientID, time.Now().Unix()/60)
    
    pipe := rdb.Pipeline()
    incr := pipe.Incr(ctx, key)
    pipe.Expire(ctx, key, 1*time.Minute)
    _, err := pipe.Exec(ctx)
    
    if err != nil {
        return false, err
    }
    
    return incr.Val() <= limit, nil
}

Pub/Sub and Real-time Features

Redis pub/sub enables event-driven architectures without complex message brokers:

func PublishNotification(ctx context.Context, rdb *redis.Client, channel, message string) error {
    return rdb.Publish(ctx, channel, message).Err()
}

func SubscribeToNotifications(ctx context.Context, rdb *redis.Client, channel string) {
    pubsub := rdb.Subscribe(ctx, channel)
    defer pubsub.Close()
    
    ch := pubsub.Channel()
    for msg := range ch {
        fmt.Printf("Received message: %s from channel: %s\n", msg.Payload, msg.Channel)
        // Process notification
    }
}

// Usage in a chat application
func main() {
    ctx := context.Background()
    rdb := NewRedisClient()

    go SubscribeToNotifications(ctx, rdb, "chat:room:123")

    // Give the subscriber a moment to register; pub/sub messages are not
    // persisted, so anything published before the subscription exists is lost.
    time.Sleep(100 * time.Millisecond)

    PublishNotification(ctx, rdb, "chat:room:123", "New message from user")

    // Keep the process alive long enough for the handler to run.
    time.Sleep(100 * time.Millisecond)
}

Error Handling and Best Practices

Always use context with timeouts to prevent operations from hanging indefinitely:

func SafeGet(rdb *redis.Client, key string) (string, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    
    val, err := rdb.Get(ctx, key).Result()
    if err == redis.Nil {
        return "", fmt.Errorf("key does not exist")
    } else if err != nil {
        return "", fmt.Errorf("redis error: %w", err)
    }
    return val, nil
}

Use pipelines to batch operations and reduce network round trips:

func BatchUpdateScores(ctx context.Context, rdb *redis.Client, scores map[string]float64) error {
    pipe := rdb.Pipeline()
    
    for playerID, score := range scores {
        pipe.ZAdd(ctx, "leaderboard", redis.Z{
            Score:  score,
            Member: playerID,
        })
    }
    
    _, err := pipe.Exec(ctx)
    return err
}
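Pipelined commands are buffered client-side until Exec, so for very large batches it can help to cap each pipeline's size and send several smaller ones (a judgment call, not a go-redis requirement; the helper and its name are illustrative):

```go
package main

// chunkScores splits a score map into batches of at most n entries so a
// single pipeline never grows unboundedly large.
func chunkScores(scores map[string]float64, n int) []map[string]float64 {
	if n <= 0 {
		n = len(scores)
	}
	var chunks []map[string]float64
	current := make(map[string]float64, n)
	for id, score := range scores {
		current[id] = score
		if len(current) == n {
			chunks = append(chunks, current)
			current = make(map[string]float64, n)
		}
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}
```

BatchUpdateScores above would then loop over chunkScores(scores, 1000), running one pipeline per chunk.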

Monitor Redis health and implement circuit breakers for resilience:

func HealthCheck(ctx context.Context, rdb *redis.Client) error {
    ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
    defer cancel()
    
    return rdb.Ping(ctx).Err()
}

Conclusion

Redis transforms Go applications by providing blazing-fast data access for caching, sessions, rate limiting, and real-time features. The go-redis library offers an idiomatic interface that feels natural to Go developers while exposing Redis’s full power.

Start with simple caching to reduce database load, then expand to more sophisticated patterns like distributed locks and pub/sub messaging as your needs grow. Always use connection pooling, implement proper timeouts, and design fallback strategies—Redis should accelerate your application, not create dependencies that cause outages.

Focus on the patterns that solve your specific problems. Not every application needs distributed locks or pub/sub, but nearly every Go service benefits from intelligent caching. Measure the impact, iterate on your implementation, and let Redis handle what it does best: making your data access ridiculously fast.
