Binomial Heap: Mergeable Priority Queue
Key Insights
- Binomial heaps achieve O(log n) merge operations compared to O(n) for binary heaps, making them ideal for algorithms requiring frequent priority queue merges
- The structure mirrors binary number representation—a heap with n elements contains binomial trees corresponding to the set bits in n’s binary form
- While theoretically elegant, binomial heaps trade cache locality for merge efficiency; choose them when merge operations dominate your workload
Introduction to Mergeable Priority Queues
Priority queues are fundamental data structures, but standard binary heaps have a critical weakness: merging two heaps requires O(n) time. You essentially rebuild from scratch. For many algorithms—particularly in distributed systems and parallel computing—this is unacceptable.
Consider a parallel Dijkstra implementation where worker threads maintain local priority queues. Periodically, you need to merge results. With binary heaps, merge cost grows linearly with queue size. With binomial heaps, it’s O(log n).
The binomial heap achieves this through a clever structural property: it represents a heap as a forest of specially-structured trees, where merging mimics binary addition. This article covers the complete implementation, from tree structure to all major operations.
Binomial Trees: The Building Blocks
A binomial tree is defined recursively. B₀ is a single node. Bₖ is formed by linking two Bₖ₋₁ trees, making one the leftmost child of the other’s root.
This gives us predictable properties for Bₖ:
- Exactly 2^k nodes
- Height exactly k
- Root has degree k (k children)
- Children of root are Bₖ₋₁, Bₖ₋₂, …, B₁, B₀ (in that order)
Here’s how the first few binomial trees look:
```
B₀:  ○

B₁:  ○
     |
     ○

B₂:   ○
     / \
    ○   ○
    |
    ○

B₃:    ○
     / | \
    ○  ○  ○
   /|  |
  ○ ○  ○
  |
  ○
```
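The recursive definition is easy to sanity-check. The sketch below uses a stripped-down stand-in node (not the full BinomialNode defined later) and builds Bₖ by linking two copies of Bₖ₋₁, then verifies the node count and root degree:

```python
class TinyNode:
    """Minimal stand-in node: children stored as a plain list."""
    def __init__(self):
        self.children = []

def build_tree(k):
    """Build B_k by linking two B_(k-1) trees; B_0 is a single node."""
    if k == 0:
        return TinyNode()
    a, b = build_tree(k - 1), build_tree(k - 1)
    a.children.insert(0, b)  # b's root becomes a's leftmost child
    return a

def count_nodes(t):
    return 1 + sum(count_nodes(c) for c in t.children)

for k in range(5):
    t = build_tree(k)
    assert count_nodes(t) == 2 ** k   # exactly 2^k nodes
    assert len(t.children) == k       # root has degree k
```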
The node structure uses a left-child, right-sibling representation, which makes tree linking O(1):
```python
class BinomialNode:
    def __init__(self, key, value=None):
        self.key = key
        self.value = value
        self.degree = 0        # Number of children
        self.parent = None
        self.child = None      # Leftmost child
        self.sibling = None    # Right sibling

    def __lt__(self, other):
        return self.key < other.key
```
The sibling pointer chains together children of a node, and also chains together root nodes in the heap’s forest. This dual purpose keeps the structure compact.
Binomial Heap Structure
A binomial heap is a collection of binomial trees satisfying two properties:
- Heap property: Each tree is min-heap ordered (parent ≤ children)
- Uniqueness: At most one tree of each order exists
The second property is the key insight. Since Bₖ has 2^k nodes, a heap with n nodes contains trees corresponding to the set bits in n’s binary representation. A heap with 13 nodes (binary 1101) contains B₃, B₂, and B₀.
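The correspondence with binary can be made concrete. This hypothetical helper (not part of the heap class) lists the tree orders for a heap of n elements, which are exactly the set bits of n:

```python
def tree_orders(n):
    """Orders of the binomial trees in a heap of n elements."""
    return [k for k in range(n.bit_length()) if (n >> k) & 1]

print(tree_orders(13))  # 13 = 0b1101 -> [0, 2, 3], i.e. B0, B2, B3
```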
```python
class BinomialHeap:
    def __init__(self):
        self.head = None       # First root in the root list
        self.min_node = None   # Cached minimum for O(1) find-min
        self._size = 0

    def is_empty(self):
        return self.head is None

    def find_min(self):
        """Return minimum key in O(1) time."""
        if self.min_node is None:
            raise IndexError("Heap is empty")
        return self.min_node.key, self.min_node.value

    def _update_min(self):
        """Scan root list to find and cache minimum."""
        if self.head is None:
            self.min_node = None
            return
        self.min_node = self.head
        current = self.head.sibling
        while current is not None:
            if current.key < self.min_node.key:
                self.min_node = current
            current = current.sibling
```
The root list is maintained in increasing order of degree. This invariant simplifies the merge operation significantly.
Core Operations: Insert and Find-Min
Insert creates a single-node heap (just B₀) and merges it with the existing heap. This seems expensive—O(log n) for a single insert—but amortized analysis shows it’s O(1) on average.
Think of it like incrementing a binary counter. Sometimes you flip one bit (insert into empty slot), sometimes you cascade through many carries. The amortized cost is constant.
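The counter analogy can be checked numerically: over many increments, the total number of bit flips stays under two per increment, which is the amortized O(1) argument in miniature.

```python
def flips_on_increment(x):
    """Bits that change when incrementing x: the trailing 1s
    (the carries) plus the final 0 that becomes 1."""
    flips = 1
    while x & 1:
        flips += 1
        x >>= 1
    return flips

n = 1024
total = sum(flips_on_increment(i) for i in range(n))
print(total, 2 * n)  # 2047 2048: under two flips per increment on average
```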
```python
    def insert(self, key, value=None):
        """Insert a new element. O(log n) worst case, O(1) amortized."""
        # Create a single-node heap
        node = BinomialNode(key, value)
        new_heap = BinomialHeap()
        new_heap.head = node
        new_heap.min_node = node
        new_heap._size = 1
        # Merge with existing heap
        self._merge_heap(new_heap)
        self._size += 1
        return node  # Return for decrease-key operations

    def _merge_heap(self, other):
        """Merge another binomial heap into this one.
        Size bookkeeping is left to the caller."""
        if other.is_empty():
            return
        if self.is_empty():
            self.head = other.head
            self._update_min()  # other.min_node may be unset (e.g. child heaps)
            return
        # Merge root lists and consolidate
        self.head = self._merge_root_lists(self.head, other.head)
        self._consolidate()
        self._update_min()
```
Find-min is O(1) thanks to the cached pointer. We update this pointer during merge and extract-min operations.
The Merge Operation
Merge is where binomial heaps shine. The algorithm has two phases:
- Merge root lists: Combine two sorted-by-degree lists into one
- Consolidate: Link trees of the same degree until all degrees are unique
Linking two trees of the same order is O(1)—just make one root the child of the other:
```python
    def _link(self, y, z):
        """Make y a child of z. Both must be roots of the same degree."""
        # y becomes leftmost child of z
        y.parent = z
        y.sibling = z.child
        z.child = y
        z.degree += 1

    def _merge_root_lists(self, h1, h2):
        """Merge two root lists sorted by degree. O(log n)."""
        if h1 is None:
            return h2
        if h2 is None:
            return h1
        # Standard sorted list merge
        if h1.degree <= h2.degree:
            head = h1
            h1 = h1.sibling
        else:
            head = h2
            h2 = h2.sibling
        tail = head
        while h1 is not None and h2 is not None:
            if h1.degree <= h2.degree:
                tail.sibling = h1
                h1 = h1.sibling
            else:
                tail.sibling = h2
                h2 = h2.sibling
            tail = tail.sibling
        tail.sibling = h1 if h1 is not None else h2
        return head
```
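The root-list merge is just the standard sorted-merge pattern. On plain degree lists, the standard library's heapq.merge performs the same combination:

```python
import heapq

h1 = [0, 1, 2]  # degrees in a 7-element heap
h2 = [0, 1]     # degrees in a 3-element heap
merged = list(heapq.merge(h1, h2))
print(merged)  # [0, 0, 1, 1, 2]: duplicates remain until consolidation
```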
Consolidation handles the “carries” when two trees of the same degree appear:
```python
    def _consolidate(self):
        """Link trees of the same degree until all degrees are unique."""
        if self.head is None:
            return
        prev = None
        curr = self.head
        next_node = curr.sibling
        while next_node is not None:
            # Case 1: Different degrees, or three trees of the same degree
            # (defer: the next iteration links the latter two)
            if (curr.degree != next_node.degree or
                    (next_node.sibling is not None and
                     next_node.sibling.degree == curr.degree)):
                prev = curr
                curr = next_node
            # Case 2: curr has the smaller key, link next under curr
            elif curr.key <= next_node.key:
                curr.sibling = next_node.sibling
                self._link(next_node, curr)
            # Case 3: next has the smaller key, link curr under next
            else:
                if prev is None:
                    self.head = next_node
                else:
                    prev.sibling = next_node
                self._link(curr, next_node)
                curr = next_node
            next_node = curr.sibling
```
Let’s trace merging heaps of size 7 (111 binary: B₂, B₁, B₀) and 3 (011 binary: B₁, B₀):
- Merge root lists: B₀, B₀, B₁, B₁, B₂
- Link two B₀s → B₁: B₁, B₁, B₁, B₂
- Link two B₁s → B₂: B₁, B₂, B₂
- Link two B₂s → B₃: B₁, B₃
Result: 10 nodes (1010 binary), containing B₃ and B₁. Exactly right.
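The whole consolidation phase can be simulated on degree lists alone, which makes the carry behavior explicit. This is a standalone sketch of the idea, not the heap's own pointer-based code:

```python
from collections import Counter

def consolidate_degrees(degrees):
    """Repeatedly combine two trees of degree d into one of degree d + 1,
    exactly like propagating carries in binary addition."""
    counts = Counter(degrees)
    result, d = [], 0
    while counts:
        c = counts.pop(d, 0)
        if c % 2:
            result.append(d)          # one tree of this degree survives
        if c // 2:
            counts[d + 1] += c // 2   # each pair links into the next degree
        d += 1
    return result

# Heap of 7 (B2, B1, B0) merged with heap of 3 (B1, B0):
print(consolidate_degrees([0, 0, 1, 1, 2]))  # [1, 3] -> B1 and B3
```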
Extract-Min and Decrease-Key
Extract-min removes the minimum root, but its children form a valid binomial heap (in reverse order). We reverse the child list and merge it back:
```python
    def extract_min(self):
        """Remove and return the minimum element. O(log n)."""
        if self.is_empty():
            raise IndexError("Heap is empty")
        # Find min and its predecessor in the root list
        min_node = self.min_node
        prev = None
        curr = self.head
        while curr is not min_node:
            prev = curr
            curr = curr.sibling
        # Remove min from the root list
        if prev is None:
            self.head = min_node.sibling
        else:
            prev.sibling = min_node.sibling
        # Reverse min's children to form a new heap
        child_heap = BinomialHeap()
        child = min_node.child
        while child is not None:
            next_child = child.sibling
            child.sibling = child_heap.head
            child.parent = None
            child_heap.head = child
            child = next_child
        # Merge children back
        self._merge_heap(child_heap)
        self._size -= 1
        self._update_min()
        return min_node.key, min_node.value
```
```python
    def decrease_key(self, node, new_key):
        """Decrease a node's key. O(log n)."""
        if new_key > node.key:
            raise ValueError("New key must be smaller")
        node.key = new_key
        # Bubble up as in a binary heap
        current = node
        parent = current.parent
        while parent is not None and current.key < parent.key:
            # Swap keys and values (not pointers)
            current.key, parent.key = parent.key, current.key
            current.value, parent.value = parent.value, current.value
            current = parent
            parent = current.parent
        # Update the min pointer if the key reached the root list
        if current.parent is None and current.key < self.min_node.key:
            self.min_node = current
```
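One caveat with this swap-based bubble-up: because keys move between nodes rather than nodes being relinked, the handle returned by insert may end up holding a different key afterwards. A minimal illustration on a hypothetical three-node parent chain:

```python
class Node:
    """Minimal stand-in with just key and parent."""
    def __init__(self, key):
        self.key = key
        self.parent = None

# Chain: root(2) <- mid(5) <- leaf(9)
root, mid, leaf = Node(2), Node(5), Node(9)
mid.parent, leaf.parent = root, mid

# Decrease leaf's key to 1, then bubble up by swapping keys
handle = leaf
handle.key = 1
current = handle
while current.parent is not None and current.key < current.parent.key:
    current.key, current.parent.key = current.parent.key, current.key
    current = current.parent

# The key travelled to the root, but the original handle now holds a
# different key: callers must track handles carefully.
print(root.key, handle.key)  # 1 5
```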
Performance Summary and When to Use
| Operation | Binary Heap | Binomial Heap | Fibonacci Heap |
|---|---|---|---|
| find-min | O(1) | O(1) | O(1) |
| insert | O(log n) | O(log n)* | O(1)* |
| extract-min | O(log n) | O(log n) | O(log n)* |
| decrease-key | O(log n) | O(log n) | O(1)* |
| merge | O(n) | O(log n) | O(1) |
*Amortized bounds
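The O(n) merge listed for binary heaps is visible with Python's list-based heapq: combining two heaps means concatenating and re-heapifying the whole array, which is exactly the linear cost binomial heaps avoid.

```python
import heapq

a, b = [4, 7, 1], [3, 9, 2]
heapq.heapify(a)
heapq.heapify(b)

# Merging binary heaps: concatenate and rebuild in O(n)
merged = a + b
heapq.heapify(merged)
smallest = heapq.heappop(merged)
print(smallest)  # 1
```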
Use binomial heaps when:
- Merge operations are frequent (parallel algorithms, external sorting)
- You need better-than-linear merge but Fibonacci heaps are overkill
- Memory overhead of Fibonacci heaps is unacceptable
Avoid binomial heaps when:
- Merges are rare (standard binary heap is simpler and cache-friendlier)
- Decrease-key dominates (Fibonacci heap’s O(1) amortized wins)
- You’re working with small heaps (constant factors matter more)
The binomial heap occupies a practical middle ground. It’s more complex than a binary heap but far simpler than a Fibonacci heap, while still providing logarithmic merges. For distributed priority queues or parallel graph algorithms, it’s often the right choice.