Circular Array: Ring Buffer Implementation


Key Insights

  • Ring buffers provide O(1) enqueue and dequeue operations with zero memory allocation overhead, making them ideal for real-time systems where predictable performance matters more than flexibility.
  • The “one slot empty” convention elegantly solves the full-vs-empty ambiguity problem without requiring an additional counter variable, though a separate count offers clearer semantics at minimal cost.
  • Power-of-two sizing transforms expensive modulo operations into fast bitwise AND operations, a micro-optimization that compounds significantly in high-throughput scenarios.

Introduction to Ring Buffers

A ring buffer—also called a circular buffer or circular queue—is a fixed-size data structure that wraps around to its beginning when it reaches the end. Imagine an array where position n-1 connects back to position 0, forming a logical ring.

This simple concept solves a fundamental problem: how do you efficiently manage a sliding window of data without constantly shifting elements or allocating memory? With a standard array-based queue, removing the front element requires shifting everything forward—an O(n) operation. Ring buffers eliminate this entirely.

You’ll find ring buffers everywhere in systems programming:

  • Audio processing: Buffering samples between capture and playback
  • Network stacks: Holding packets between receipt and processing
  • Logging systems: Keeping the last N events without unbounded growth
  • Producer-consumer patterns: Decoupling data generation from consumption
  • Keyboard drivers: Buffering keystrokes until the application reads them

The “circular” property means old data gets overwritten by new data when the buffer fills—exactly what you want for streaming scenarios where stale data has no value.
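Python's standard library already ships exactly this overwrite-on-full behavior: a `collections.deque` with a `maxlen` bound discards the oldest element when a new one arrives. A minimal sketch:

```python
from collections import deque

# A bounded deque behaves like an overwriting ring buffer:
# once maxlen is reached, appending discards the oldest element.
recent_events = deque(maxlen=3)
for event in ["boot", "load", "run", "crash"]:
    recent_events.append(event)

print(list(recent_events))  # ['load', 'run', 'crash'] -- "boot" was overwritten
```

For many logging-style use cases this is all you need; the hand-rolled implementations below are worth writing when you need explicit control over the head/tail indices or the overflow policy.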

How Ring Buffers Work

A ring buffer maintains two pointers: head (where we read from) and tail (where we write to). These pointers chase each other around the buffer, wrapping back to zero when they reach the end.

Initial state (empty):
┌───┬───┬───┬───┬───┬───┬───┬───┐
│   │   │   │   │   │   │   │   │
└───┴───┴───┴───┴───┴───┴───┴───┘
  ↑
head/tail (both at position 0)

After 3 writes:
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ A │ B │ C │   │   │   │   │   │
└───┴───┴───┴───┴───┴───┴───┴───┘
  ↑           ↑
head        tail

After 2 reads:
┌───┬───┬───┬───┬───┬───┬───┬───┐
│   │   │ C │   │   │   │   │   │
└───┴───┴───┴───┴───┴───┴───┴───┘
          ↑   ↑
        head tail

The wrap-around uses modulo arithmetic: `next_position = (current_position + 1) % capacity`. In the eight-slot buffer above, when the tail reaches position 7 and advances, it wraps to position 0.
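Tracing the advance-and-wrap step for the eight-slot buffer pictured above:

```python
capacity = 8
position = 6
positions = []
for _ in range(4):
    position = (position + 1) % capacity
    positions.append(position)

print(positions)  # [7, 0, 1, 2] -- the index wraps past the end back to the start
```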

Here’s the basic structure:

class RingBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = [None] * capacity
        self.head = 0  # Read position
        self.tail = 0  # Write position
        self.count = 0 # Current number of elements

The critical question: how do you distinguish between full and empty? Both states have head == tail. We’ll address this in the edge cases section.
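The ambiguity is easy to see with bare indices. Writing exactly `capacity` items without reading any brings the tail all the way around to meet the head again:

```python
capacity = 8
head = tail = 0            # empty: nothing written yet

# Write exactly `capacity` items without reading any:
for _ in range(capacity):
    tail = (tail + 1) % capacity

print(head == tail)  # True -- a full buffer looks identical to an empty one
```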

Core Operations Implementation

The two fundamental operations are enqueue (write) and dequeue (read). Each advances its respective pointer with wrap-around.

class RingBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = [None] * capacity
        self.head = 0
        self.tail = 0
        self.count = 0
    
    def enqueue(self, item, overwrite: bool = False) -> bool:
        """
        Add an item to the buffer.
        Returns True if successful, False if full and overwrite=False.
        """
        if self.count == self.capacity:
            if not overwrite:
                return False
            # Overwrite oldest: advance head to discard it
            self.head = (self.head + 1) % self.capacity
            self.count -= 1
        
        self.buffer[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity
        self.count += 1
        return True
    
    def dequeue(self):
        """
        Remove and return the oldest item.
        Raises IndexError if empty.
        """
        if self.count == 0:
            raise IndexError("Buffer is empty")
        
        item = self.buffer[self.head]
        self.buffer[self.head] = None  # Help garbage collection
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item

The overwrite parameter controls overflow behavior. For logging systems, you typically want to overwrite—losing old logs is acceptable, but blocking the producer isn’t. For command queues, you might prefer rejection to avoid losing recent commands.
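The two policies can be sketched side by side with the standard-library `deque`. Note that `make_logger_buffer` and `try_enqueue` here are illustrative helpers for this comparison, not part of the RingBuffer class above:

```python
from collections import deque

def make_logger_buffer(capacity):
    """Overwrite policy: keep only the last `capacity` log lines."""
    return deque(maxlen=capacity)  # append() silently evicts the oldest

def try_enqueue(queue, item, capacity):
    """Rejection policy: refuse new items once the bound is reached."""
    if len(queue) >= capacity:
        return False
    queue.append(item)
    return True

logs = make_logger_buffer(2)
for line in ["started", "warning", "fatal"]:
    logs.append(line)
print(list(logs))  # ['warning', 'fatal'] -- oldest log line dropped

commands = deque()
accepted = [try_enqueue(commands, c, capacity=2) for c in ["up", "down", "fire"]]
print(accepted)  # [True, True, False] -- third command rejected, none lost
```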

Handling Edge Cases

The empty-vs-full ambiguity is the classic ring buffer gotcha. When head == tail, is the buffer completely empty or completely full? Two solutions exist:

Solution 1: Separate count variable (shown above)

Track the number of elements explicitly. This costs one integer but provides clear semantics and makes size() trivial.

Solution 2: Waste one slot

Never let the buffer completely fill. “Full” means tail is one position behind head. This wastes one slot but eliminates the count variable.

class RingBufferOneSlotEmpty:
    def __init__(self, capacity: int):
        # Allocate one extra slot internally
        self.capacity = capacity + 1
        self.buffer = [None] * self.capacity
        self.head = 0
        self.tail = 0
    
    def is_empty(self) -> bool:
        return self.head == self.tail
    
    def is_full(self) -> bool:
        return (self.tail + 1) % self.capacity == self.head
    
    def size(self) -> int:
        if self.tail >= self.head:
            return self.tail - self.head
        return self.capacity - self.head + self.tail
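The branch in size() handles the case where the tail has wrapped past the end of the array while the head has not. Checking the formula with concrete numbers (internal capacity 9 means 8 usable slots plus the one deliberately wasted):

```python
capacity = 9              # 8 usable slots + 1 deliberately wasted
head, tail = 6, 2         # tail has wrapped past the end of the array
size = tail - head if tail >= head else capacity - head + tail
print(size)  # 5 -- elements live at indices 6, 7, 8, 0, 1
```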

I prefer the explicit count approach. The extra integer is negligible, and the code is more readable. The one-slot-empty trick feels clever but obscures intent.

For thread safety, you have options ranging from simple locks to lock-free implementations. In languages with atomic operations, a single-producer, single-consumer (SPSC) ring buffer can be made lock-free with careful memory ordering; in Python, where true lock-free code isn't practical, a plain lock is the right tool:

import threading

class ThreadSafeRingBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = [None] * capacity
        self.head = 0
        self.tail = 0
        self.count = 0
        self.lock = threading.Lock()
    
    def enqueue(self, item) -> bool:
        with self.lock:
            if self.count == self.capacity:
                return False
            self.buffer[self.tail] = item
            self.tail = (self.tail + 1) % self.capacity
            self.count += 1
            return True
    
    def dequeue(self):
        with self.lock:
            if self.count == 0:
                return None  # Signal empty with None (unlike dequeue above, which raises)
            item = self.buffer[self.head]
            self.head = (self.head + 1) % self.capacity
            self.count -= 1
            return item

Memory Efficiency and Performance

Ring buffers shine in performance-critical code:

O(1) everything: Enqueue, dequeue, peek, size—all constant time. No element shifting, no memory allocation after initialization.

Cache-friendly: Sequential memory access patterns play well with CPU caches. Elements are contiguous, unlike linked-list nodes scattered across the heap.

Zero allocation overhead: Pre-allocated fixed buffer means no GC pressure during operation. Critical for real-time systems.

Power-of-two optimization: When capacity is a power of two, modulo becomes a bitwise AND:

class FastRingBuffer:
    def __init__(self, capacity: int):
        # Round up to next power of two
        self.capacity = 1 << (capacity - 1).bit_length()
        self.mask = self.capacity - 1  # e.g., 8 -> 0b111
        self.buffer = [None] * self.capacity
        self.head = 0
        self.tail = 0
        self.count = 0
    
    def enqueue(self, item) -> bool:
        if self.count == self.capacity:
            return False
        self.buffer[self.tail] = item
        self.tail = (self.tail + 1) & self.mask  # Fast wrap-around
        self.count += 1
        return True
    
    def dequeue(self):
        if self.count == 0:
            raise IndexError("Buffer is empty")
        item = self.buffer[self.head]
        self.head = (self.head + 1) & self.mask  # Fast wrap-around
        self.count -= 1
        return item

The bitwise AND `(index + 1) & mask` avoids an integer division, and in compiled languages it is measurably faster than `(index + 1) % capacity`. In CPython the win is smaller because interpreter overhead dominates, but the technique is standard in high-throughput C, C++, and Rust ring buffers, where it compounds over millions of operations per second.
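The trick works because, for a power-of-two capacity, masking with `capacity - 1` keeps exactly the low bits that the modulo would keep. A quick sanity check:

```python
capacity = 1024          # power of two
mask = capacity - 1      # 0b1111111111
# For non-negative indices, (i % capacity) and (i & mask) are identical
for i in range(0, 5000, 7):
    assert (i + 1) % capacity == (i + 1) & mask
print("mask and modulo agree")
```

This equivalence does not hold for negative indices or non-power-of-two capacities, which is why the constructor rounds the capacity up.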

Real-World Implementation

Here’s a complete, production-ready implementation with iterator support and a clean API:

from typing import TypeVar, Generic, Iterator, Optional
from collections.abc import Iterable

T = TypeVar('T')

class RingBuffer(Generic[T]):
    """
    A fixed-size circular buffer with O(1) operations.
    
    Supports iteration, length checking, and configurable overflow behavior.
    Note: the requested capacity is rounded up to the next power of two.
    """
    
    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("Capacity must be positive")
        # Round capacity up to the next power of two so wrap-around is a bitwise AND
        self._capacity = 1 << (capacity - 1).bit_length()
        self._mask = self._capacity - 1
        self._buffer: list[Optional[T]] = [None] * self._capacity
        self._head = 0
        self._tail = 0
        self._count = 0
    
    @property
    def capacity(self) -> int:
        """Maximum number of elements the buffer can hold."""
        return self._capacity
    
    def __len__(self) -> int:
        return self._count
    
    def __bool__(self) -> bool:
        return self._count > 0
    
    def is_empty(self) -> bool:
        return self._count == 0
    
    def is_full(self) -> bool:
        return self._count == self._capacity
    
    def push(self, item: T, overwrite: bool = False) -> bool:
        """Add item to buffer. Returns False if full and overwrite=False."""
        if self._count == self._capacity:
            if not overwrite:
                return False
            self._head = (self._head + 1) & self._mask
            self._count -= 1
        
        self._buffer[self._tail] = item
        self._tail = (self._tail + 1) & self._mask
        self._count += 1
        return True
    
    def pop(self) -> T:
        """Remove and return oldest item. Raises IndexError if empty."""
        if self._count == 0:
            raise IndexError("pop from empty buffer")
        
        item = self._buffer[self._head]
        self._buffer[self._head] = None
        self._head = (self._head + 1) & self._mask
        self._count -= 1
        return item  # type: ignore
    
    def peek(self) -> T:
        """Return oldest item without removing. Raises IndexError if empty."""
        if self._count == 0:
            raise IndexError("peek at empty buffer")
        return self._buffer[self._head]  # type: ignore
    
    def clear(self) -> None:
        """Remove all items from buffer."""
        self._buffer = [None] * self._capacity
        self._head = 0
        self._tail = 0
        self._count = 0
    
    def __iter__(self) -> Iterator[T]:
        """Iterate over items from oldest to newest (non-destructive)."""
        idx = self._head
        for _ in range(self._count):
            yield self._buffer[idx]  # type: ignore
            idx = (idx + 1) & self._mask
    
    def extend(self, items: Iterable[T], overwrite: bool = False) -> int:
        """Add multiple items. Returns count of items added."""
        added = 0
        for item in items:
            if not self.push(item, overwrite):
                break  # push only fails when full and overwrite=False
            added += 1
        return added


# Usage demonstration
if __name__ == "__main__":
    buf = RingBuffer[int](5)  # Capacity rounds up to 8
    
    # Basic operations: fill to actual capacity
    for i in range(buf.capacity):
        buf.push(i)
    
    print(f"Buffer contents: {list(buf)}")  # [0, 1, 2, 3, 4, 5, 6, 7]
    print(f"Size: {len(buf)}, Full: {buf.is_full()}")  # Size: 8, Full: True
    
    # Overwrite mode: evicts the oldest element
    buf.push(99, overwrite=True)
    print(f"After overwrite: {list(buf)}")  # [1, 2, 3, 4, 5, 6, 7, 99]
    
    # Drain buffer
    while buf:
        print(f"Popped: {buf.pop()}")

When to Use (and When Not To)

Use ring buffers when:

  • You have a known maximum buffer size
  • You need predictable, allocation-free performance
  • Old data can be safely discarded (streaming, logging)
  • You’re implementing producer-consumer patterns
  • Memory footprint must be bounded

Avoid ring buffers when:

  • Buffer size is unpredictable or highly variable
  • You can’t afford to lose any data
  • You need random access by index frequently
  • Dynamic resizing is a requirement

For concurrent systems, consider established implementations like boost::lockfree::spsc_queue in C++ or crossbeam::queue::ArrayQueue in Rust. Rolling your own lock-free ring buffer is notoriously error-prone.

Ring buffers are one of those foundational data structures that every systems programmer should understand deeply. They’re simple in concept but elegant in application—a fixed-size array that behaves like an infinite stream.
