Fibonacci Search: Searching Without Division

Key Insights

  • Fibonacci search divides arrays using Fibonacci numbers instead of midpoints, using only addition and subtraction operations rather than division—a significant advantage on certain hardware architectures.
  • While sharing O(log n) time complexity with binary search, Fibonacci search can offer better cache performance on large datasets due to its non-uniform access patterns.
  • Modern CPUs have largely eliminated the division-cost advantage, making Fibonacci search primarily useful in memory-constrained environments or as an educational tool for understanding alternative search strategies.

Binary search is the go-to algorithm for searching sorted arrays, but it’s not the only game in town. Fibonacci search offers an alternative approach that replaces division with addition and subtraction—operations that were historically cheaper on many CPU architectures.

Both algorithms achieve O(log n) time complexity, but they partition the search space differently. Binary search always splits the array in half. Fibonacci search uses Fibonacci numbers to create asymmetric partitions that converge on the target through a series of comparisons.

The algorithm might seem like a historical curiosity, but understanding it reveals important insights about algorithm design trade-offs and how hardware constraints influence software decisions.

The Mathematics Behind It

Before diving into the algorithm, let’s establish the mathematical foundation. The Fibonacci sequence starts with 0 and 1, with each subsequent number being the sum of the two preceding ones:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...

The ratio between consecutive Fibonacci numbers approaches the golden ratio (φ ≈ 1.618) as the sequence progresses. This means each Fibonacci number is approximately 61.8% of the next one—creating a natural partition ratio.
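The convergence is easy to verify numerically. This quick illustrative check prints the ratio of each Fibonacci number to its successor:

```python
# Ratios of consecutive Fibonacci numbers converge on 1/phi ~= 0.618
a, b = 1, 2
for _ in range(10):
    print(f"{a}/{b} = {a / b:.4f}")
    a, b = b, a + b
```

After just a handful of terms the ratio settles within a fraction of a percent of 0.618, which is why Fibonacci partitions behave so consistently.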

Here’s how we generate the Fibonacci numbers we need for searching:

def get_fibonacci_numbers(n):
    """Generate Fibonacci numbers up to or exceeding n."""
    fibs = [0, 1]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs

# For an array of length 100
fibs = get_fibonacci_numbers(100)
print(fibs)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

The key insight is that any positive integer can be represented as a sum of non-consecutive Fibonacci numbers (Zeckendorf’s representation). This property allows us to systematically eliminate portions of the array using Fibonacci-sized steps.
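Zeckendorf's representation can be demonstrated with a greedy decomposition that repeatedly subtracts the largest Fibonacci number that fits (a small illustrative sketch, not part of the search itself):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition of a positive integer."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    # Greedily take the largest Fibonacci number <= the remainder;
    # this always yields non-consecutive Fibonacci numbers
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(zeckendorf(100))  # [89, 8, 3]
```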

When we partition an array of size F(k), we can split it into subarrays of size F(k-1) and F(k-2). Since F(k) = F(k-1) + F(k-2), these partitions always align perfectly with the array boundaries.

Algorithm Walkthrough

Fibonacci search works by maintaining three consecutive Fibonacci numbers and using them to calculate comparison indices. Here’s the step-by-step process:

  1. Find the smallest Fibonacci number greater than or equal to the array length
  2. Use the offset and Fibonacci indices to calculate comparison points
  3. Compare the element at the calculated index with the target
  4. Eliminate the appropriate portion and adjust Fibonacci numbers
  5. Repeat until the element is found or the search space is exhausted

Let’s implement this in Python:

def fibonacci_search(arr, target):
    """
    Search for target in sorted array using Fibonacci search.
    Returns index if found, -1 otherwise.
    """
    n = len(arr)
    
    if n == 0:
        return -1
    
    # Initialize Fibonacci numbers
    fib_m2 = 0  # F(k-2)
    fib_m1 = 1  # F(k-1)
    fib = fib_m1 + fib_m2  # F(k)
    
    # Find smallest Fibonacci number >= n
    while fib < n:
        fib_m2 = fib_m1
        fib_m1 = fib
        fib = fib_m1 + fib_m2
    
    # Offset marks the eliminated range from the front
    offset = -1
    
    # While there are elements to inspect
    while fib > 1:
        # Calculate index to compare
        # Ensure index is within bounds
        i = min(offset + fib_m2, n - 1)
        
        if arr[i] < target:
            # Target is in the upper portion
            # Move Fibonacci numbers one step down
            fib = fib_m1
            fib_m1 = fib_m2
            fib_m2 = fib - fib_m1
            offset = i
        elif arr[i] > target:
            # Target is in the lower portion
            # Move Fibonacci numbers two steps down
            fib = fib_m2
            fib_m1 = fib_m1 - fib_m2
            fib_m2 = fib - fib_m1
        else:
            # Element found
            return i
    
    # Check the last element
    if fib_m1 and offset + 1 < n and arr[offset + 1] == target:
        return offset + 1
    
    return -1
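To build confidence in the implementation, it can be cross-checked against Python's bisect module on random sorted arrays. The function body is repeated here (in condensed form) so the snippet runs on its own:

```python
import bisect
import random

def fibonacci_search(arr, target):
    """Same algorithm as above, condensed for a standalone check."""
    n = len(arr)
    if n == 0:
        return -1
    fib_m2, fib_m1 = 0, 1
    fib = fib_m1 + fib_m2
    while fib < n:
        fib_m2, fib_m1, fib = fib_m1, fib, fib + fib_m1
    offset = -1
    while fib > 1:
        i = min(offset + fib_m2, n - 1)
        if arr[i] < target:
            fib, fib_m1 = fib_m1, fib_m2
            fib_m2 = fib - fib_m1
            offset = i
        elif arr[i] > target:
            fib = fib_m2
            fib_m1 = fib_m1 - fib_m2
            fib_m2 = fib - fib_m1
        else:
            return i
    if fib_m1 and offset + 1 < n and arr[offset + 1] == target:
        return offset + 1
    return -1

# Compare against bisect_left on random arrays of distinct values
random.seed(42)
for _ in range(1000):
    arr = sorted(random.sample(range(1000), random.randint(1, 50)))
    target = random.randint(0, 1000)
    i = bisect.bisect_left(arr, target)
    expected = i if i < len(arr) and arr[i] == target else -1
    assert fibonacci_search(arr, target) == expected
print("all checks passed")
```

Randomized cross-checks like this are a cheap way to catch off-by-one errors in the index arithmetic, which is where implementations of this algorithm usually go wrong.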

Here’s the equivalent implementation in Java for those working in that ecosystem:

public class FibonacciSearch {
    
    public static int search(int[] arr, int target) {
        int n = arr.length;
        
        if (n == 0) return -1;
        
        // Initialize Fibonacci numbers
        int fibM2 = 0;  // F(k-2)
        int fibM1 = 1;  // F(k-1)
        int fib = fibM1 + fibM2;  // F(k)
        
        // Find smallest Fibonacci >= n
        while (fib < n) {
            fibM2 = fibM1;
            fibM1 = fib;
            fib = fibM1 + fibM2;
        }
        
        int offset = -1;
        
        while (fib > 1) {
            int i = Math.min(offset + fibM2, n - 1);
            
            if (arr[i] < target) {
                fib = fibM1;
                fibM1 = fibM2;
                fibM2 = fib - fibM1;
                offset = i;
            } else if (arr[i] > target) {
                fib = fibM2;
                fibM1 = fibM1 - fibM2;
                fibM2 = fib - fibM1;
            } else {
                return i;
            }
        }
        
        if (fibM1 == 1 && offset + 1 < n && arr[offset + 1] == target) {
            return offset + 1;
        }
        
        return -1;
    }
}

Time and Space Complexity Analysis

Fibonacci search has O(log n) time complexity, matching binary search. Here’s why: each comparison eliminates at least one-third of the remaining elements. Since F(k-2)/F(k) approaches 1/φ² ≈ 0.382 as k increases, we’re always eliminating a constant fraction.

The number of comparisons is bounded by log_φ(n) ≈ 1.44 × log₂(n). This means Fibonacci search makes about 44% more comparisons than binary search in the worst case. However, each comparison involves simpler arithmetic.
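The bound is easy to check numerically; the 1.44 factor is exactly 1/log₂(φ):

```python
import math

phi = (1 + math.sqrt(5)) / 2
n = 1_000_000

binary_worst = math.log2(n)   # comparison bound for binary search
fib_worst = math.log(n, phi)  # comparison bound for Fibonacci search
print(f"binary: {binary_worst:.1f}, fibonacci: {fib_worst:.1f}, "
      f"ratio: {fib_worst / binary_worst:.2f}")
```

For a million elements that works out to roughly 20 versus 29 comparisons in the worst case.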

Space complexity is O(1)—we only need a constant number of variables regardless of input size. This matches iterative binary search but beats recursive binary search, which uses O(log n) stack space.

The constant factors matter in practice. Fibonacci search performs more comparisons but uses cheaper operations. On modern CPUs with fast division units and branch prediction, these factors largely cancel out. The real performance difference often comes down to cache behavior.

Let’s examine the practical differences between these algorithms:

import time
import random

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    
    while left <= right:
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    
    return -1

def benchmark_searches(size, iterations=10000):
    """Compare performance of both search algorithms."""
    arr = list(range(size))
    targets = [random.randint(0, size - 1) for _ in range(iterations)]
    
    # Benchmark binary search
    start = time.perf_counter()
    for target in targets:
        binary_search(arr, target)
    binary_time = time.perf_counter() - start
    
    # Benchmark Fibonacci search
    start = time.perf_counter()
    for target in targets:
        fibonacci_search(arr, target)
    fib_time = time.perf_counter() - start
    
    return binary_time, fib_time

# Run benchmarks
for size in [1000, 10000, 100000, 1000000]:
    binary_t, fib_t = benchmark_searches(size)
    ratio = fib_t / binary_t
    print(f"Size {size:>8}: Binary={binary_t:.4f}s, "
          f"Fibonacci={fib_t:.4f}s, Ratio={ratio:.2f}x")

Running this benchmark on modern hardware typically shows binary search winning by a small margin. The division operation that Fibonacci search avoids is no longer the bottleneck it once was.

However, Fibonacci search has an interesting cache behavior advantage. Binary search always jumps to the middle, which can cause cache misses on very large arrays. Fibonacci search’s asymmetric access pattern can sometimes result in better locality, especially when searching for elements near the beginning of the array.

The historical context matters here. On older processors without hardware division units, division required multiple clock cycles or even software emulation. Fibonacci search’s reliance on addition and subtraction made it genuinely faster. Today, this advantage has largely disappeared.

Practical Applications and Limitations

Despite its diminished performance advantages, Fibonacci search still has legitimate use cases:

Embedded systems without hardware division: When working with microcontrollers that lack an efficient divide instruction, Fibonacci search can outperform binary search.

Tape storage and sequential access: The algorithm’s tendency to access nearby elements can reduce seek times on storage media with high seek costs.

Educational purposes: Understanding Fibonacci search deepens your knowledge of how mathematical properties can inform algorithm design.

Unevenly distributed data: When you have prior knowledge that targets are more likely near the beginning of the array, Fibonacci search’s initial smaller jumps can be advantageous.

The limitations are significant for most modern applications:

  1. Added complexity: The algorithm is harder to implement correctly and maintain
  2. Marginal benefits: Modern CPUs have fast division, eliminating the historical advantage
  3. More comparisons: The 44% increase in worst-case comparisons usually outweighs arithmetic savings
  4. Less intuitive: Binary search is easier to reason about and debug

Conclusion

Fibonacci search is a clever algorithm that demonstrates how hardware constraints can drive software design decisions. Its use of Fibonacci numbers to partition arrays creates an elegant mathematical structure, and its avoidance of division operations was genuinely valuable on older hardware.

For most modern applications, binary search remains the better choice. It’s simpler, well-understood, and performs comparably or better on current CPUs. However, Fibonacci search isn’t obsolete—it still has a place in embedded systems, educational contexts, and situations where division is genuinely expensive.

Understanding Fibonacci search makes you a better algorithm designer. It teaches you to question assumptions about operation costs, consider hardware characteristics, and appreciate that the “best” algorithm depends heavily on context. Keep it in your toolkit, even if you rarely reach for it.
