Queue Data Structure: Implementation and Operations
Key Insights
- Queues enforce FIFO ordering, making them essential for task scheduling, breadth-first search, and any scenario where processing order must match arrival order.
- Circular array queues solve the “phantom full” problem and provide O(1) operations with predictable memory usage—prefer them when you know the maximum capacity upfront.
- Linked list queues offer dynamic sizing at the cost of pointer overhead; choose them when queue size is unpredictable or memory fragmentation isn’t a concern.
Introduction to Queues
A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. The element that enters first leaves first—exactly like a checkout line at a grocery store. The person who arrives first gets served first, and newcomers join at the back.
This simple constraint makes queues surprisingly powerful. They appear everywhere in software systems: operating system task schedulers, print spoolers managing document jobs, message brokers buffering events between services, and breadth-first search algorithms exploring graphs level by level.
Understanding queues deeply means understanding not just the abstract concept, but the implementation trade-offs that affect real-world performance. Let’s build queues from scratch and explore when each approach makes sense.
Core Queue Operations
Every queue implementation must support these fundamental operations:
- enqueue(element): Add an element to the rear of the queue
- dequeue(): Remove and return the element at the front
- peek() / front(): View the front element without removing it
- isEmpty(): Check if the queue contains no elements
- size(): Return the current number of elements
Here’s an interface that defines this contract:
```java
public interface Queue<T> {
    void enqueue(T element);
    T dequeue();
    T peek();
    boolean isEmpty();
    int size();
}
```
The critical insight is that both enqueue and dequeue must be O(1) operations. If either requires shifting elements or traversing the structure, you’ve built something inefficient. This constraint drives our implementation choices.
Array-Based Implementation
The naive approach uses an array with a rear pointer. Enqueue appends to the rear; dequeue removes from index 0 and shifts everything left. This gives O(1) enqueue but O(n) dequeue—unacceptable.
The fix: maintain both front and rear pointers. Instead of shifting, just move the front pointer forward. But this creates a new problem. After several enqueue/dequeue cycles, your front pointer advances, leaving unused space at the beginning. Eventually, the rear hits the array’s end while plenty of space exists at the front. The queue reports “full” when it isn’t—the phantom full problem.
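The phantom full problem is easy to reproduce. Below is a deliberately naive sketch (class name and `int`-only payload are illustrative, not a recommended design) of the two-pointer approach without wrap-around: after one dequeue, the slot at index 0 is dead space, yet the queue still reports full when the rear hits the end.

```java
// Hypothetical sketch of a non-circular array queue that exhibits
// the "phantom full" problem. Not intended for real use.
public class NaiveArrayQueue {
    private final int[] elements;
    private int front = 0; // index of the next element to dequeue
    private int rear = -1; // index of the last enqueued element

    public NaiveArrayQueue(int capacity) {
        elements = new int[capacity];
    }

    public void enqueue(int value) {
        if (rear == elements.length - 1) {
            // Rear hit the end of the array -- reported "full"
            // even when front > 0 leaves free slots behind it.
            throw new IllegalStateException("Queue is full");
        }
        elements[++rear] = value;
    }

    public int dequeue() {
        if (front > rear) {
            throw new IllegalStateException("Queue is empty");
        }
        return elements[front++]; // front advances, leaving dead space
    }

    public static void main(String[] args) {
        NaiveArrayQueue q = new NaiveArrayQueue(3);
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        q.dequeue(); // index 0 is now unusable dead space
        try {
            q.enqueue(4); // throws even though one slot is logically free
        } catch (IllegalStateException e) {
            System.out.println("phantom full");
        }
    }
}
```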
Circular Queue Solution
A circular queue treats the array as a ring. When the rear pointer reaches the end, it wraps around to the beginning (if space exists). This eliminates wasted space entirely.
```java
public class CircularQueue<T> implements Queue<T> {
    private Object[] elements;
    private int front;
    private int rear;
    private int size;
    private final int capacity;

    public CircularQueue(int capacity) {
        this.capacity = capacity;
        this.elements = new Object[capacity];
        this.front = 0;
        this.rear = -1;
        this.size = 0;
    }

    @Override
    public void enqueue(T element) {
        if (size == capacity) {
            throw new IllegalStateException("Queue is full");
        }
        rear = (rear + 1) % capacity; // Wrap around
        elements[rear] = element;
        size++;
    }

    @Override
    @SuppressWarnings("unchecked")
    public T dequeue() {
        if (isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }
        T element = (T) elements[front];
        elements[front] = null; // Help garbage collection
        front = (front + 1) % capacity; // Wrap around
        size--;
        return element;
    }

    @Override
    @SuppressWarnings("unchecked")
    public T peek() {
        if (isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }
        return (T) elements[front];
    }

    @Override
    public boolean isEmpty() {
        return size == 0;
    }

    @Override
    public int size() {
        return size;
    }
}
```
The modulo operator (%) handles wrap-around elegantly. When rear is at index capacity - 1, adding 1 and taking modulo brings it back to 0. The same logic applies to the front pointer during dequeue.
Notice we track size explicitly rather than computing it from front and rear positions. This avoids ambiguity: when front equals rear, is the queue empty or full? With a size counter, the answer is immediate.
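To see the wrap-around in action, here is a minimal, self-contained walkthrough of the ring-index arithmetic (separate from the class above, with illustrative names): five items pass through a capacity-3 ring while it drains, and FIFO order survives the wrap.

```java
import java.util.ArrayList;
import java.util.List;

public class RingDemo {
    // Push 1..5 through a capacity-3 ring, dequeuing whenever it fills,
    // and return the dequeue order.
    public static List<Integer> run() {
        int capacity = 3;
        int[] ring = new int[capacity];
        int front = 0, rear = -1, size = 0;
        List<Integer> out = new ArrayList<>();
        for (int i = 1; i <= 5; i++) {
            rear = (rear + 1) % capacity; // wraps when rear == capacity - 1
            ring[rear] = i;
            size++;
            if (size == capacity) {       // drain one to make room
                out.add(ring[front]);
                front = (front + 1) % capacity;
                size--;
            }
        }
        while (size > 0) {                // drain the remainder
            out.add(ring[front]);
            front = (front + 1) % capacity;
            size--;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [1, 2, 3, 4, 5]
    }
}
```

Even though the rear index visits slots 0, 1, 2, 0, 1 physically, the logical dequeue order is still 1 through 5.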
Linked List Implementation
When you don’t know the maximum queue size upfront—or when that maximum is very large—a linked list implementation makes more sense. It grows and shrinks dynamically, using exactly the memory needed.
```java
public class LinkedQueue<T> implements Queue<T> {
    private static class Node<T> {
        T data;
        Node<T> next;

        Node(T data) {
            this.data = data;
            this.next = null;
        }
    }

    private Node<T> head; // Front of queue
    private Node<T> tail; // Rear of queue
    private int size;

    public LinkedQueue() {
        this.head = null;
        this.tail = null;
        this.size = 0;
    }

    @Override
    public void enqueue(T element) {
        Node<T> newNode = new Node<>(element);
        if (isEmpty()) {
            head = newNode;
            tail = newNode;
        } else {
            tail.next = newNode;
            tail = newNode;
        }
        size++;
    }

    @Override
    public T dequeue() {
        if (isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }
        T data = head.data;
        head = head.next;
        if (head == null) {
            tail = null; // Queue is now empty
        }
        size--;
        return data;
    }

    @Override
    public T peek() {
        if (isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }
        return head.data;
    }

    @Override
    public boolean isEmpty() {
        return head == null;
    }

    @Override
    public int size() {
        return size;
    }
}
```
Maintaining both head and tail pointers is essential. Without a tail pointer, enqueue would require traversing the entire list to find the end—O(n) instead of O(1). The tail pointer gives us direct access to where new elements belong.
The trade-off is memory overhead. Each node carries a next reference (4–8 bytes on a 64-bit JVM, depending on compressed object pointers) plus the per-object header, in addition to its data. For small payloads like boxed integers, this overhead can more than double memory usage compared to an array.
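For completeness, the standard library ships both flavors of this trade-off: java.util.ArrayDeque is array-backed with amortized growth, while java.util.LinkedList is node-backed, and both implement the java.util.Queue interface with the same FIFO contract (using offer/poll/peek rather than enqueue/dequeue). A quick sanity check:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FifoCheck {
    public static void main(String[] args) {
        // java.util.Queue names the operations offer/poll/peek,
        // but the FIFO behavior matches the interface in this article.
        Queue<String> q = new ArrayDeque<>();
        q.offer("first");
        q.offer("second");
        System.out.println(q.peek()); // first (not removed)
        System.out.println(q.poll()); // first
        System.out.println(q.poll()); // second
    }
}
```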
Time and Space Complexity Analysis
Both implementations achieve O(1) time complexity for all core operations:
| Operation | Array (Circular) | Linked List |
|---|---|---|
| enqueue | O(1) | O(1) |
| dequeue | O(1) | O(1) |
| peek | O(1) | O(1) |
| isEmpty | O(1) | O(1) |
| size | O(1) | O(1) |
Space complexity differs meaningfully:
- Circular array: O(capacity) regardless of actual usage. You pay for the maximum size upfront.
- Linked list: O(n) where n is current size, plus pointer overhead per element.
Choose circular arrays when: you know the maximum size, memory is tight, or you need cache-friendly sequential access.
Choose linked lists when: size is unpredictable, the maximum could be very large, or you’re frequently creating/destroying queues.
Queue Variants
Double-Ended Queue (Deque)
A deque allows insertion and removal at both ends. It’s more flexible than a standard queue and can simulate both stacks and queues.
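Java's java.util.ArrayDeque makes this flexibility concrete; the same structure acts as a queue when you add and remove at opposite ends, and as a stack when you use the same end:

```java
import java.util.ArrayDeque;

public class DequeDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> deque = new ArrayDeque<>();

        // Queue behavior: add at the rear, remove from the front (FIFO)
        deque.addLast(1);
        deque.addLast(2);
        System.out.println(deque.pollFirst()); // 1

        deque.clear();

        // Stack behavior: add and remove at the same end (LIFO)
        deque.addFirst(1);
        deque.addFirst(2);
        System.out.println(deque.pollFirst()); // 2
    }
}
```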
Priority Queue
Elements have priorities; the highest-priority element dequeues first regardless of insertion order. Typically implemented with a binary heap, giving O(log n) enqueue and dequeue. The sketch below is a min-heap: the element with the smallest compareTo value is treated as highest priority.
```java
import java.util.ArrayList;
import java.util.List;

public class SimplePriorityQueue<T extends Comparable<T>> {
    // Min-heap stored in an array: the smallest element sits at index 0.
    private final List<T> heap = new ArrayList<>();

    public void enqueue(T element) {
        heap.add(element);
        siftUp(heap.size() - 1);
    }

    public T dequeue() {
        if (heap.isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }
        T result = heap.get(0);
        T last = heap.remove(heap.size() - 1);
        if (!heap.isEmpty()) {
            heap.set(0, last);   // Move the last leaf to the root...
            siftDown(0);         // ...and restore the heap property
        }
        return result;
    }

    // Bubble a newly added element up until its parent is not larger.
    private void siftUp(int index) {
        while (index > 0) {
            int parent = (index - 1) / 2;
            if (heap.get(index).compareTo(heap.get(parent)) >= 0) break;
            swap(index, parent);
            index = parent;
        }
    }

    // Push the root down until both children are not smaller.
    private void siftDown(int index) {
        int size = heap.size();
        while (index < size) {
            int smallest = index;
            int left = 2 * index + 1;
            int right = 2 * index + 2;
            if (left < size && heap.get(left).compareTo(heap.get(smallest)) < 0) {
                smallest = left;
            }
            if (right < size && heap.get(right).compareTo(heap.get(smallest)) < 0) {
                smallest = right;
            }
            if (smallest == index) break;
            swap(index, smallest);
            index = smallest;
        }
    }

    private void swap(int i, int j) {
        T temp = heap.get(i);
        heap.set(i, heap.get(j));
        heap.set(j, temp);
    }
}
```
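In production Java code you would usually reach for java.util.PriorityQueue, which is likewise a binary min-heap. A quick check of its dequeue order, independent of insertion order:

```java
import java.util.PriorityQueue;

public class PriorityDemo {
    public static void main(String[] args) {
        // java.util.PriorityQueue dequeues the smallest element first,
        // matching the min-heap sketch above.
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.offer(5);
        pq.offer(1);
        pq.offer(3);
        System.out.println(pq.poll()); // 1
        System.out.println(pq.poll()); // 3
        System.out.println(pq.poll()); // 5
    }
}
```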
Practical Applications
BFS Graph Traversal
Breadth-first search is the canonical queue application. It explores nodes level by level, processing all neighbors of the current node before moving deeper.
```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GraphTraversal {
    // Uses the Queue interface and LinkedQueue class defined earlier
    // (not java.util.Queue).
    public static List<Integer> bfs(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> visited = new ArrayList<>();
        Set<Integer> seen = new HashSet<>();
        Queue<Integer> queue = new LinkedQueue<>();
        queue.enqueue(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            int current = queue.dequeue();
            visited.add(current);
            for (int neighbor : graph.getOrDefault(current, List.of())) {
                if (!seen.contains(neighbor)) {
                    seen.add(neighbor);
                    queue.enqueue(neighbor);
                }
            }
        }
        return visited;
    }
}
```
The queue ensures we process nodes in the order we discover them. This guarantees the shortest path in unweighted graphs—a property depth-first search cannot provide.
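To make the shortest-path property concrete, here is a self-contained variant (using java.util.ArrayDeque as the queue, so it compiles on its own) that records distances instead of visit order; the first time BFS reaches a node, it has necessarily arrived along a shortest path:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BfsDistance {
    // Returns the minimum number of edges from start to each reachable node.
    public static Map<Integer, Integer> distances(Map<Integer, List<Integer>> graph, int start) {
        Map<Integer, Integer> dist = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.addLast(start);
        while (!queue.isEmpty()) {
            int current = queue.pollFirst();
            for (int neighbor : graph.getOrDefault(current, List.of())) {
                if (!dist.containsKey(neighbor)) {
                    // First discovery = shortest path in an unweighted graph
                    dist.put(neighbor, dist.get(current) + 1);
                    queue.addLast(neighbor);
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> graph = Map.of(
            1, List.of(2, 3),
            2, List.of(4),
            3, List.of(4));
        System.out.println(distances(graph, 1).get(4)); // 2
    }
}
```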
Rate Limiting
Queues naturally implement sliding window rate limiters. Track request timestamps in a queue; on each new request, dequeue expired timestamps and check if the remaining count exceeds your limit.
The queue’s FIFO property means the oldest timestamps are always at the front, making expiration checks efficient.
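A minimal sketch of that idea, with hypothetical class and parameter names (here, at most two requests per 1000 ms window; timestamps are passed in explicitly to keep the example deterministic):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sliding-window rate limiter backed by a queue of timestamps.
public class SlidingWindowRateLimiter {
    private final Deque<Long> timestamps = new ArrayDeque<>();
    private final int maxRequests;
    private final long windowMillis;

    public SlidingWindowRateLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    public boolean allow(long nowMillis) {
        // Oldest timestamps sit at the front; evict any outside the window.
        while (!timestamps.isEmpty()
                && nowMillis - timestamps.peekFirst() >= windowMillis) {
            timestamps.pollFirst();
        }
        if (timestamps.size() >= maxRequests) {
            return false; // limit reached within the current window
        }
        timestamps.addLast(nowMillis);
        return true;
    }

    public static void main(String[] args) {
        SlidingWindowRateLimiter limiter = new SlidingWindowRateLimiter(2, 1000);
        System.out.println(limiter.allow(0));    // true
        System.out.println(limiter.allow(1));    // true
        System.out.println(limiter.allow(2));    // false: window is full
        System.out.println(limiter.allow(1000)); // true: request at t=0 expired
    }
}
```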
Queues are foundational. Master their implementation trade-offs, and you’ll make better decisions about data structure selection throughout your career.