Object Pools: Reusing Expensive Objects

Key Insights

  • Object pools trade memory for speed by eliminating repeated allocation costs, but they introduce complexity around thread safety, lifecycle management, and potential resource leaks that you must handle carefully.
  • The pattern shines for genuinely expensive objects like database connections, threads, and large buffers, but modern garbage collectors have made pooling unnecessary for simple objects—always benchmark before implementing.
  • Automatic resource management through RAII patterns or try-with-resources is essential; manual return-to-pool calls are a guaranteed source of leaks in production systems.

The Problem: Expensive Object Creation

Some objects are expensive to create. Database connections require network round-trips, authentication handshakes, and protocol negotiation. Thread creation involves kernel calls and stack allocation. Large byte buffers trigger heap allocations that fragment memory and pressure the garbage collector.

When your application creates and destroys these objects repeatedly, you’ll see the symptoms: GC pause spikes during traffic bursts, connection timeouts under load, and latency percentiles that look like a heart rate monitor.

Consider the cost of creating a database connection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionBenchmark {
    private static final String URL = "jdbc:postgresql://localhost:5432/mydb";
    
    public static void main(String[] args) throws SQLException {
        // Measure single connection creation
        long start = System.nanoTime();
        Connection conn = DriverManager.getConnection(URL, "user", "password");
        long elapsed = System.nanoTime() - start;
        conn.close();
        
        System.out.printf("Single connection: %.2f ms%n", elapsed / 1_000_000.0);
        // Typical output: 15-50ms for local, 100-300ms for remote
        
        // Now imagine 1000 requests per second, each needing a connection
        // That's 15-50 seconds of just connection overhead per second
    }
}

At 50ms per connection and 1000 requests per second, you’d need 50 seconds of connection time every second. That’s obviously impossible without pooling.
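A back-of-the-envelope way to see why: by Little's Law, the number of objects in flight equals arrival rate times the time each request holds one. A hypothetical sketch (the 50 ms setup and 5 ms query times are illustrative assumptions, not measurements):

```java
public class PoolSizing {
    // Little's Law: concurrent objects needed = arrival rate * hold time.
    static double required(double requestsPerSecond, double holdSeconds) {
        return requestsPerSecond * holdSeconds;
    }

    public static void main(String[] args) {
        // Creating a fresh connection per request: each request "holds"
        // connection machinery for ~50ms of setup alone.
        System.out.printf("Creating per request: %.0f connections in flight%n",
                required(1000, 0.050));
        // With pooling, a request only holds a connection for its ~5ms query.
        System.out.printf("With pooling:         %.0f connections needed%n",
                required(1000, 0.005));
    }
}
```

Pooling doesn't just save time; it collapses the number of connections the database must support, because each one is held only for the query, not for setup.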

Object Pool Pattern Fundamentals

The object pool pattern replaces create-use-destroy with borrow-use-return. Instead of paying the creation cost on every request, you pay it once upfront and amortize it across thousands of uses.

A pool has three core components: a factory that knows how to create objects, a container that holds idle objects, and an interface for borrowing and returning.

import java.time.Duration;

public interface ObjectPool<T> {
    /**
     * Borrow an object from the pool. May block if pool is exhausted.
     */
    T borrow() throws PoolExhaustedException;
    
    /**
     * Borrow with timeout. Returns null if no object available within duration.
     */
    T borrow(Duration timeout);
    
    /**
     * Return an object to the pool for reuse.
     */
    void release(T object);
    
    /**
     * Invalidate an object (don't return to pool, destroy it).
     */
    void invalidate(T object);
}

public interface PooledObjectFactory<T> {
    T create();
    void destroy(T object);
    boolean validate(T object);
    void reset(T object);
}

/** Thrown when no object can be obtained from the pool. */
public class PoolExhaustedException extends RuntimeException {
    public PoolExhaustedException(String message) {
        super(message);
    }
}

The factory interface separates object lifecycle concerns from pool management. This lets you pool anything—connections, threads, buffers, game entities—by implementing the appropriate factory.
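To make that generality concrete, here's a hypothetical factory that pools fixed-size byte buffers instead of connections. The factory interface is repeated inside the class so the sketch compiles on its own:

```java
public class BufferFactoryDemo {
    // Minimal copy of the factory interface so this sketch stands alone.
    interface PooledObjectFactory<T> {
        T create();
        void destroy(T object);
        boolean validate(T object);
        void reset(T object);
    }

    // Pools 4KB buffers: creation is an allocation, reset is zeroing.
    static class BufferFactory implements PooledObjectFactory<byte[]> {
        private static final int SIZE = 4096;

        @Override public byte[] create() { return new byte[SIZE]; }
        @Override public void destroy(byte[] buffer) { /* nothing to release; GC reclaims it */ }
        @Override public boolean validate(byte[] buffer) { return buffer.length == SIZE; }
        @Override public void reset(byte[] buffer) { java.util.Arrays.fill(buffer, (byte) 0); }
    }

    public static void main(String[] args) {
        BufferFactory factory = new BufferFactory();
        byte[] buf = factory.create();
        buf[0] = 42;             // a borrower dirties the buffer
        factory.reset(buf);      // reset scrubs state before the next borrower
        System.out.println("clean after reset: " + (buf[0] == 0));  // prints "clean after reset: true"
    }
}
```

Same pool machinery, completely different resource: only the factory changes.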

Implementing a Thread-Safe Object Pool

Real applications have concurrent access. Multiple threads will borrow and return objects simultaneously. You need thread safety without creating a bottleneck.

Here’s a practical implementation using a blocking queue and semaphore:

import java.time.Duration;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class BoundedObjectPool<T> implements ObjectPool<T> {
    private final BlockingQueue<T> available;
    private final Semaphore permits;
    private final PooledObjectFactory<T> factory;
    private final Set<T> allObjects;
    
    public BoundedObjectPool(PooledObjectFactory<T> factory, int maxSize) {
        this.factory = factory;
        this.available = new LinkedBlockingQueue<>();
        this.permits = new Semaphore(maxSize);
        this.allObjects = ConcurrentHashMap.newKeySet();
    }
    
    @Override
    public T borrow() throws PoolExhaustedException {
        T object = borrow(Duration.ofSeconds(30));
        if (object == null) {
            throw new PoolExhaustedException("No object available within 30 seconds");
        }
        return object;
    }
    
    @Override
    public T borrow(Duration timeout) {
        try {
            // Acquire permit (limits total objects in existence).
            // Note: tryAcquire holds no permit when interrupted,
            // so there is nothing to release on that path.
            if (!permits.tryAcquire(timeout.toMillis(), TimeUnit.MILLISECONDS)) {
                return null;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
        
        try {
            // Try to reuse an existing idle object
            T object = available.poll();
            if (object != null) {
                if (factory.validate(object)) {
                    return object;
                }
                destroyObject(object); // stale - destroy and create a fresh one
            }
            
            // Create a new object under the permit we hold
            T newObject = factory.create();
            allObjects.add(newObject);
            return newObject;
        } catch (RuntimeException e) {
            permits.release(); // don't leak the permit if creation fails
            throw e;
        }
    }
    
    @Override
    public void release(T object) {
        if (object == null || !allObjects.contains(object)) {
            return; // Ignore objects we don't own (no permit was taken for them)
        }
        
        try {
            factory.reset(object);
            available.offer(object);
        } finally {
            permits.release();
        }
    }
    
    @Override
    public void invalidate(T object) {
        if (object == null || !allObjects.contains(object)) {
            return; // Ignore objects we don't own
        }
        destroyObject(object);
        permits.release();
    }
    
    private void destroyObject(T object) {
        allObjects.remove(object);
        factory.destroy(object);
    }
}

The semaphore bounds the total number of objects in existence (borrowed plus idle). The blocking queue handles thread-safe storage of idle objects. Together they let multiple threads borrow and return with minimal contention, blocking only when the pool is exhausted.

Object Lifecycle Management

Pooled objects need validation before reuse. A database connection might have timed out. A socket might have been closed by the remote end. Handing out a dead object is worse than the latency of creating a new one.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

public class ConnectionFactory implements PooledObjectFactory<Connection> {
    private final String url;
    private final Properties props;
    private final Duration maxAge;
    private final Map<Connection, Instant> creationTimes = new ConcurrentHashMap<>();
    
    public ConnectionFactory(String url, String user, String password, Duration maxAge) {
        this.url = url;
        this.props = new Properties();
        this.props.setProperty("user", user);
        this.props.setProperty("password", password);
        this.maxAge = maxAge;
    }
    
    @Override
    public Connection create() {
        try {
            Connection conn = DriverManager.getConnection(url, props);
            conn.setAutoCommit(true);
            creationTimes.put(conn, Instant.now());
            return conn;
        } catch (SQLException e) {
            throw new RuntimeException("Failed to create connection", e);
        }
    }
    
    @Override
    public boolean validate(Connection conn) {
        try {
            // Check age - connections go stale
            Instant created = creationTimes.get(conn);
            if (created != null && Duration.between(created, Instant.now()).compareTo(maxAge) > 0) {
                return false;
            }
            
            // Check liveness - driver-level ping with a 1-second timeout
            return conn.isValid(1);
        } catch (SQLException e) {
            return false;
        }
    }
    
    @Override
    public void reset(Connection conn) {
        try {
            // Clear any transaction state
            if (!conn.getAutoCommit()) {
                conn.rollback();
                conn.setAutoCommit(true);
            }
            // Clear any warnings
            conn.clearWarnings();
        } catch (SQLException e) {
            // Connection is broken, validation will catch it
        }
    }
    
    @Override
    public void destroy(Connection conn) {
        creationTimes.remove(conn);
        try {
            conn.close();
        } catch (SQLException e) {
            // Log and ignore - we're destroying it anyway
        }
    }
}

The reset method is critical. Without it, one request’s uncommitted transaction becomes the next request’s mystery bug. State pollution between borrowers is a subtle and nasty failure mode.
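A minimal illustration of state pollution, using a toy StringBuilder pool (the names are made up for the sketch):

```java
import java.util.ArrayDeque;

public class StatePollutionDemo {
    public static void main(String[] args) {
        // A toy pool of StringBuilders with no reset step.
        ArrayDeque<StringBuilder> pool = new ArrayDeque<>();
        pool.push(new StringBuilder());

        // Request 1 borrows, writes, and returns without resetting.
        StringBuilder sb = pool.pop();
        sb.append("request-1 data");
        pool.push(sb);

        // Request 2 sees request 1's leftovers.
        StringBuilder polluted = pool.pop();
        System.out.println("polluted: " + polluted);      // prints "polluted: request-1 data"

        // The fix: reset on return - the factory.reset() step.
        polluted.setLength(0);
        pool.push(polluted);
        System.out.println("after reset: '" + pool.pop() + "'");  // prints "after reset: ''"
    }
}
```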

Common Pitfalls and Edge Cases

The most common bug with object pools is forgetting to return objects. One missed return path—an early return, an exception, a conditional branch—and your pool slowly drains until the application deadlocks.

The solution is automatic resource management. In Java, use try-with-resources. In C++, use RAII. In Go, use defer.

import java.sql.Connection;

public class PooledResource<T> implements AutoCloseable {
    private final ObjectPool<T> pool;
    private final T resource;
    private boolean released = false;
    
    public PooledResource(ObjectPool<T> pool, T resource) {
        this.pool = pool;
        this.resource = resource;
    }
    
    public T get() {
        if (released) {
            throw new IllegalStateException("Resource already released");
        }
        return resource;
    }
    
    @Override
    public void close() {
        if (!released) {
            released = true;
            pool.release(resource);
        }
    }
    
    public void invalidate() {
        if (!released) {
            released = true;
            pool.invalidate(resource);
        }
    }
}

// Usage - impossible to forget return
public void doWork(ObjectPool<Connection> pool) {
    try (PooledResource<Connection> resource = new PooledResource<>(pool, pool.borrow())) {
        Connection conn = resource.get();
        // Use connection
        // Automatically returned even if exception thrown
    }
}

Another pitfall: nested pool access. If code holding one pooled object tries to borrow a second from the same pool while the pool is exhausted, you can deadlock: every thread holds one object and waits for another, and none will release what it has until it gets a second. Design your code so pooled objects don't need to borrow siblings.
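A sketch of that self-deadlock, using a Semaphore as a stand-in for a capacity-2 pool. The timeout is the only thing keeping this demo from hanging forever:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class NestedBorrowDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore pool = new Semaphore(2);           // capacity-2 pool stand-in
        CountDownLatch bothHoldOne = new CountDownLatch(2);

        Runnable worker = () -> {
            try {
                pool.acquire();                      // borrow first object
                bothHoldOne.countDown();
                bothHoldOne.await();                 // wait until both workers hold one
                // Nested borrow while holding the first: the pool is now empty.
                // A blocking acquire() here would deadlock both workers forever;
                // the timeout turns the deadlock into a visible failure instead.
                boolean gotSecond = pool.tryAcquire(200, TimeUnit.MILLISECONDS);
                System.out.println(Thread.currentThread().getName()
                        + " second borrow succeeded: " + gotSecond);
                if (gotSecond) pool.release();
                pool.release();                      // return first object
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread t1 = new Thread(worker, "worker-1");
        Thread t2 = new Thread(worker, "worker-2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both print "second borrow succeeded: false": each holds one of the
        // two objects, and neither can get a second until someone releases.
    }
}
```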

Real-World Implementations

Don’t write your own connection pool for production. HikariCP has years of optimization and battle-testing:

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
config.setUsername("user");
config.setPassword("password");

// Pool sizing - start here, tune based on metrics
config.setMinimumIdle(5);
config.setMaximumPoolSize(20);

// Lifecycle settings
config.setMaxLifetime(Duration.ofMinutes(30).toMillis());
config.setIdleTimeout(Duration.ofMinutes(10).toMillis());
config.setConnectionTimeout(Duration.ofSeconds(30).toMillis());

// Validation - modern JDBC4 drivers are checked via Connection.isValid(),
// so a test query is only needed for legacy drivers
config.setConnectionTestQuery("SELECT 1");
config.setValidationTimeout(Duration.ofSeconds(5).toMillis());

HikariDataSource ds = new HikariDataSource(config);

Game engines pool aggressively. Bullets, particles, and enemies get recycled rather than garbage collected. The Unity pattern of SetActive(false) instead of Destroy() is object pooling in disguise.
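A rough Java equivalent of that pattern (Bullet and BulletPool are hypothetical names, not a real engine API):

```java
import java.util.ArrayDeque;

public class BulletPoolDemo {
    // A toy game entity: cheap fields, but spawned thousands of times per second.
    static class Bullet {
        double x, y, vx, vy;
        boolean active;
    }

    // Free-list pool: "destroying" a bullet just deactivates and shelves it.
    static class BulletPool {
        private final ArrayDeque<Bullet> free = new ArrayDeque<>();

        Bullet spawn(double x, double y, double vx, double vy) {
            Bullet b = free.isEmpty() ? new Bullet() : free.pop();
            b.x = x; b.y = y; b.vx = vx; b.vy = vy;
            b.active = true;              // the SetActive(true) analogue
            return b;
        }

        void despawn(Bullet b) {
            b.active = false;             // the SetActive(false) analogue
            free.push(b);                 // recycled, never garbage collected
        }
    }

    public static void main(String[] args) {
        BulletPool pool = new BulletPool();
        Bullet first = pool.spawn(0, 0, 1, 0);
        pool.despawn(first);
        Bullet second = pool.spawn(5, 5, 0, 1);
        // The pool handed back the same instance with fresh state.
        System.out.println("reused: " + (first == second));   // prints "reused: true"
    }
}
```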

HTTP client libraries pool connections by default. When you create an HttpClient in .NET or use requests.Session() in Python, you’re using a connection pool. Creating a new client per request is a common performance mistake.
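Java's java.net.http.HttpClient works the same way: one client instance maintains an internal pool of keep-alive connections, so it should be built once and shared. A minimal sketch:

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class SharedHttpClient {
    // One client for the whole application: it keeps pooled keep-alive
    // connections internally, so requests to the same host reuse sockets.
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(10))
            .build();

    public static HttpClient client() {
        return CLIENT;
    }

    public static void main(String[] args) {
        // The anti-pattern is calling HttpClient.newHttpClient() per request,
        // which throws away the pooled connections every time.
        System.out.println("same instance: " + (client() == client()));  // prints "same instance: true"
    }
}
```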

When Not to Use Object Pools

Modern garbage collectors are remarkably good at short-lived objects. The generational hypothesis—most objects die young—means allocating and discarding small objects is nearly free.

Pooling adds complexity: thread safety concerns, lifecycle management, potential for leaks and deadlocks. That complexity needs to pay for itself.

Before implementing a pool, benchmark:

import java.util.concurrent.ArrayBlockingQueue;

public class PoolBenchmark {
    public static void main(String[] args) {
        int iterations = 100_000;
        long sink = 0; // consume results so the JIT can't eliminate the loops
        
        // Without pool
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            byte[] buffer = new byte[4096];
            buffer[0] = 1; // simulate work
            sink += buffer[0];
        }
        long withoutPool = System.nanoTime() - start;
        
        // With pool
        ArrayBlockingQueue<byte[]> pool = new ArrayBlockingQueue<>(100);
        for (int i = 0; i < 100; i++) {
            pool.offer(new byte[4096]);
        }
        
        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            byte[] buffer = pool.poll();
            if (buffer == null) buffer = new byte[4096];
            buffer[0] = 1;
            sink += buffer[0];
            pool.offer(buffer);
        }
        long withPool = System.nanoTime() - start;
        
        System.out.printf("Without pool: %.2f ms (sink=%d)%n", withoutPool / 1_000_000.0, sink);
        System.out.printf("With pool: %.2f ms%n", withPool / 1_000_000.0);
        // Caveat: single-pass timings like this are only a rough signal;
        // use JMH for a trustworthy microbenchmark.
        // For 4KB buffers, pooling might not win
        // For database connections, pooling wins by orders of magnitude
    }
}

Pool when creation cost dominates. Don’t pool when GC handles it efficiently. When in doubt, measure. The profiler doesn’t lie, but your intuition might.
