Go sync.Pool: Object Reuse Pattern

Key Insights

  • sync.Pool reduces garbage collection pressure by reusing objects instead of allocating new ones, but it’s not a general-purpose cache—objects can be evicted at any GC cycle
  • The pool’s Get() and Put() methods provide thread-safe access to reusable objects, with per-P (processor) local storage minimizing lock contention in concurrent scenarios
  • Object pooling makes sense for high-frequency allocations of short-lived objects in hot paths, but adds complexity and should only be used when profiling proves GC is a bottleneck

Introduction to sync.Pool

The sync.Pool type in Go’s standard library provides a mechanism for reusing objects across goroutines, reducing the burden on the garbage collector. Every time you allocate memory in Go, you’re creating work for the GC. In high-throughput systems where you’re allocating thousands or millions of temporary objects per second, this becomes a performance bottleneck.

Object pooling isn’t always the answer. It adds complexity and can actually hurt performance if misused. Use sync.Pool when you have measurable GC pressure from allocating the same type of object repeatedly in hot code paths. Don’t use it prematurely—profile first.

Here’s a simple comparison:

// Without pooling - allocates every time
func processDataNaive(data []byte) ([]byte, error) {
    buf := new(bytes.Buffer)
    buf.Write(data)
    // process buf
    return buf.Bytes(), nil
}

// With pooling - reuses buffers
var bufferPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func processDataPooled(data []byte) ([]byte, error) {
    buf := bufferPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()
        bufferPool.Put(buf)
    }()
    
    buf.Write(data)
    // process buf
    
    // Copy the result: once the buffer goes back to the pool it can be
    // reused (and overwritten) by another goroutine at any time.
    out := make([]byte, buf.Len())
    copy(out, buf.Bytes())
    return out, nil
}

The pooled version retrieves a buffer from the pool, uses it, copies out the result, then resets the buffer and returns it for future reuse. This eliminates the repeated buffer allocations. The copy is required because returning buf.Bytes() directly would hand the caller a slice that aliases a pooled buffer.

How sync.Pool Works

Understanding sync.Pool’s internals helps you use it effectively. The pool maintains a set of objects that can be individually saved and retrieved. It’s designed for temporary objects that can be safely reused across independent operations.

The two primary methods are:

  • Get(): Retrieves an object from the pool. If the pool is empty, it calls the New function you provided to create one.
  • Put(): Returns an object to the pool for future reuse.

Internally, Go maintains per-P (processor) local storage to minimize lock contention. Each P has a private slot plus a shared queue; a goroutine running on that P can use the private slot without any locking. When a P’s local storage is empty, Get steals from other Ps’ shared queues before falling back to the victim cache and, finally, your New function.

The critical behavior to understand: objects in the pool can be evicted at any GC cycle. The pool uses a “victim cache” mechanism where objects survive at most two GC cycles. This means sync.Pool is not a cache in the traditional sense—you can’t rely on objects persisting.

type Request struct {
    ID      string
    Payload []byte
    Headers map[string]string
}

var requestPool = sync.Pool{
    New: func() interface{} {
        return &Request{
            Headers: make(map[string]string, 10),
            Payload: make([]byte, 0, 4096),
        }
    },
}

func handleRequest(id string, data []byte) {
    // Get a request object from the pool
    req := requestPool.Get().(*Request)
    
    // Use it
    req.ID = id
    req.Payload = append(req.Payload[:0], data...)
    req.Headers["Content-Type"] = "application/json"
    
    // Process request...
    
    // Clean and return to pool
    req.ID = ""
    req.Payload = req.Payload[:0]
    for k := range req.Headers {
        delete(req.Headers, k)
    }
    requestPool.Put(req)
}

The New function pre-allocates the Headers map and Payload slice with reasonable capacities, avoiding repeated small allocations.

Common Use Cases and Patterns

The most common use case is buffer pooling for encoding/decoding operations. JSON marshaling, protocol buffer encoding, and HTTP response writing all benefit from pooled buffers.

var jsonBufferPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func encodeJSON(v interface{}) ([]byte, error) {
    buf := jsonBufferPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()
        jsonBufferPool.Put(buf)
    }()
    
    encoder := json.NewEncoder(buf)
    if err := encoder.Encode(v); err != nil {
        return nil, err
    }
    
    // Copy the result since we're returning the buffer to the pool
    result := make([]byte, buf.Len())
    copy(result, buf.Bytes())
    return result, nil
}

Notice we copy the result before returning the buffer. This is critical—never return references to pooled objects or their internal data.

Another pattern is pooling objects used in HTTP handlers:

type ResponseWriter struct {
    buf     *bytes.Buffer
    headers map[string]string
}

var respWriterPool = sync.Pool{
    New: func() interface{} {
        return &ResponseWriter{
            buf:     new(bytes.Buffer),
            headers: make(map[string]string),
        }
    },
}

func httpHandler(w http.ResponseWriter, r *http.Request) {
    rw := respWriterPool.Get().(*ResponseWriter)
    defer func() {
        rw.buf.Reset()
        for k := range rw.headers {
            delete(rw.headers, k)
        }
        respWriterPool.Put(rw)
    }()
    
    // Build response using rw
    rw.headers["Content-Type"] = "application/json"
    rw.buf.WriteString(`{"status":"ok"}`)
    
    // Write to actual response
    for k, v := range rw.headers {
        w.Header().Set(k, v)
    }
    w.Write(rw.buf.Bytes())
}

Best Practices and Gotchas

The most important rule: always reset an object’s state before returning it to the pool. Skipping this causes subtle bugs where pooled objects carry state from previous uses.

type Worker struct {
    ID       int
    Data     []byte
    Callback func()
}

var workerPool = sync.Pool{
    New: func() interface{} {
        return &Worker{
            Data: make([]byte, 0, 1024),
        }
    },
}

// WRONG - doesn't reset state
func badExample() {
    w := workerPool.Get().(*Worker)
    w.ID = 42
    w.Callback = func() { fmt.Println("done") }
    // ... use worker
    workerPool.Put(w) // BUG: Callback still set!
}

// CORRECT - resets all fields
func goodExample() {
    w := workerPool.Get().(*Worker)
    w.ID = 42
    w.Callback = func() { fmt.Println("done") }
    // ... use worker
    
    // Reset before returning
    w.ID = 0
    w.Data = w.Data[:0]
    w.Callback = nil
    workerPool.Put(w)
}

Never store references to pooled objects beyond the scope where you acquired them. The object might be reused by another goroutine immediately after you Put() it.

Don’t use sync.Pool for:

  • Long-lived objects (defeats the purpose)
  • Objects that are expensive to reset
  • Scenarios where you need guaranteed object persistence
  • Situations where allocation isn’t actually a bottleneck

The New function should return objects in a ready-to-use state. Pre-allocate internal slices and maps with reasonable capacities to avoid repeated small allocations.

Performance Benchmarks

Let’s measure the actual impact with realistic benchmarks:

package main

import (
    "bytes"
    "encoding/json"
    "sync"
    "testing"
)

type Data struct {
    ID      int
    Name    string
    Values  []float64
}

func BenchmarkWithoutPool(b *testing.B) {
    data := Data{
        ID:     1,
        Name:   "test",
        Values: []float64{1.1, 2.2, 3.3, 4.4, 5.5},
    }
    
    b.ResetTimer()
    b.ReportAllocs()
    
    for i := 0; i < b.N; i++ {
        buf := new(bytes.Buffer)
        json.NewEncoder(buf).Encode(data)
        _ = buf.Bytes()
    }
}

var bufPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func BenchmarkWithPool(b *testing.B) {
    data := Data{
        ID:     1,
        Name:   "test",
        Values: []float64{1.1, 2.2, 3.3, 4.4, 5.5},
    }
    
    b.ResetTimer()
    b.ReportAllocs()
    
    for i := 0; i < b.N; i++ {
        buf := bufPool.Get().(*bytes.Buffer)
        json.NewEncoder(buf).Encode(data)
        _ = buf.Bytes()
        buf.Reset()
        bufPool.Put(buf)
    }
}

Running these benchmarks typically shows:

BenchmarkWithoutPool-8    500000    2847 ns/op    576 B/op    5 allocs/op
BenchmarkWithPool-8      1000000    1923 ns/op    192 B/op    3 allocs/op

In this run, the pooled version performs 40% fewer allocations, uses a third of the memory per operation, and cuts per-operation time by roughly a third. In high-throughput systems processing millions of requests, this translates to significant CPU and memory savings.

The real benefit appears under load. At low request rates, the overhead of pool management might outweigh benefits. At high rates, reduced GC pressure keeps latency stable.

Conclusion and Recommendations

Use sync.Pool when you’ve profiled your application and identified GC pressure from repeated allocations of the same object type. It’s particularly effective for:

  • Buffer pooling in encoding/decoding operations
  • Temporary objects in HTTP handlers or RPC servers
  • High-frequency allocations in tight loops
  • Scenarios where allocation shows up in CPU profiles

Don’t use it as a premature optimization. The added complexity of managing object lifecycle, resetting state, and avoiding reference leaks isn’t worth it unless you have measurable performance problems.

When implementing pooling, always provide a New function, always reset state before Put(), never store references to pooled objects, and benchmark to verify the improvement. Remember that pooled objects can disappear at any GC cycle—this is a performance optimization, not a caching layer.

The sync.Pool is a sharp tool. Used correctly in the right situations, it significantly reduces GC overhead. Used incorrectly, it introduces bugs and complexity without performance gains. Profile first, optimize second.
