# Fibonacci Sequence: Iterative, Recursive, and DP
## Key Insights
- The naive recursive Fibonacci implementation has O(2^n) time complexity because it recomputes the same values exponentially many times—fib(35) makes over 29 million function calls
- Memoization and tabulation both achieve O(n) time complexity but differ in approach: memoization adds caching to recursion (top-down), while tabulation builds solutions iteratively (bottom-up)
- The space-optimized iterative solution is the practical choice for production code—O(n) time with O(1) space, no recursion overhead, and no risk of stack overflow
The Fibonacci sequence appears everywhere: spiral patterns in sunflowers, branching in trees, the golden ratio in art and architecture, and countless coding interviews. Its mathematical definition is deceptively simple—each number is the sum of the two preceding ones—but implementing it efficiently reveals fundamental concepts in algorithm design.
This sequence serves as the perfect vehicle for understanding dynamic programming because it exhibits the two key properties that make DP applicable: overlapping subproblems and optimal substructure. Let’s explore four progressively optimized implementations and understand exactly why each improvement matters.
## The Naive Recursive Approach
The mathematical definition of Fibonacci translates directly into code:
```python
def fib_recursive(n: int) -> int:
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)
```
This implementation is elegant and mirrors the mathematical definition perfectly. It’s also catastrophically inefficient.
The time complexity is O(2^n). To understand why, consider that each call to fib_recursive(n) spawns two more calls, which each spawn two more, forming a binary tree of calls. The tree's depth is n, so the number of nodes is bounded by 2^n. (The exact count grows as φ^n ≈ 1.618^n, where φ is the golden ratio—still exponential.)
Try computing fib_recursive(40) and watch your computer struggle. On my machine, it takes several seconds. fib_recursive(50) would take hours.
The space complexity is O(n) due to the call stack depth—we recurse n levels deep before hitting a base case and unwinding.
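To see the slowdown concretely, here is a quick timing sketch (absolute numbers will vary by machine, so no expected time is claimed):

```python
import time

def fib_recursive(n: int) -> int:
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

start = time.perf_counter()
result = fib_recursive(30)
elapsed = time.perf_counter() - start
print(f"fib(30) = {result} in {elapsed:.3f}s")
```

Bump n up a few steps at a time and you can watch the runtime roughly double with each increment.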
## Visualizing the Problem: Redundant Computation
The inefficiency becomes obvious when you trace the call tree for fib(5):
```
fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1) = 1
│   │   │   └── fib(0) = 0
│   │   └── fib(1) = 1
│   └── fib(2)
│       ├── fib(1) = 1
│       └── fib(0) = 0
└── fib(3)
    ├── fib(2)
    │   ├── fib(1) = 1
    │   └── fib(0) = 0
    └── fib(1) = 1
```
Notice that fib(3) is computed twice, fib(2) is computed three times, and fib(1) is computed five times. This redundancy explodes as n grows.
Let’s quantify this waste:
```python
from collections import defaultdict

call_counts = defaultdict(int)

def fib_counted(n: int) -> int:
    call_counts[n] += 1
    if n <= 1:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

# Reset and compute
call_counts.clear()
result = fib_counted(35)
print(f"fib(35) = {result}")
print(f"Total function calls: {sum(call_counts.values()):,}")
print("\nCalls per value (first 10):")
for i in range(10):
    print(f"  fib({i}): {call_counts[i]:,} times")
```
Output:
```
fib(35) = 9227465
Total function calls: 29,860,703

Calls per value (first 10):
  fib(0): 5,702,887 times
  fib(1): 9,227,465 times
  fib(2): 5,702,887 times
  fib(3): 3,524,578 times
  fib(4): 2,178,309 times
  fib(5): 1,346,269 times
  fib(6): 832,040 times
  fib(7): 514,229 times
  fib(8): 317,811 times
  fib(9): 196,418 times
```
Nearly 30 million function calls to compute a single number. We’re computing fib(1) over 9 million times when we only need to compute it once.
This is the hallmark of overlapping subproblems: the same subproblems appear repeatedly in our recursion tree. Dynamic programming eliminates this redundancy.
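Incidentally, the call count itself follows the Fibonacci recurrence: if T(n) counts the calls made by fib_recursive(n), then T(n) = T(n-1) + T(n-2) + 1 with T(0) = T(1) = 1, which solves to T(n) = 2·F(n+1) − 1. You can check this closed form against the measurement above:

```python
def fib(n: int) -> int:
    # Iterative helper, used only to evaluate the closed form
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# 2 * F(36) - 1 should equal the 29,860,703 calls measured for fib(35)
print(2 * fib(36) - 1)  # 29860703
```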
## Memoization (Top-Down DP)
Memoization is the simplest optimization: cache results as you compute them. If you’ve already computed fib(k), return the cached value instead of recomputing it.
```python
def fib_memoized(n: int, cache: dict | None = None) -> int:
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]
    if n <= 1:
        return n
    cache[n] = fib_memoized(n - 1, cache) + fib_memoized(n - 2, cache)
    return cache[n]
```
Or more elegantly with Python’s built-in decorator:
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_lru(n: int) -> int:
    if n <= 1:
        return n
    return fib_lru(n - 1) + fib_lru(n - 2)
```
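On Python 3.9+, `functools.cache` is an alias for `lru_cache(maxsize=None)` and reads slightly cleaner:

```python
from functools import cache  # Python 3.9+; same as lru_cache(maxsize=None)

@cache
def fib_cached(n: int) -> int:
    if n <= 1:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(35))  # 9227465
```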
The time complexity drops to O(n). Each unique value of fib(k) for k from 0 to n is computed exactly once, then retrieved from cache for all subsequent calls.
The space complexity is O(n) for both the cache (storing n values) and the call stack (recursion depth of n).
Let’s verify the improvement:
```python
call_counts_memo = defaultdict(int)
cache = {}

def fib_memo_counted(n: int) -> int:
    call_counts_memo[n] += 1
    if n in cache:
        return cache[n]
    if n <= 1:
        return n
    cache[n] = fib_memo_counted(n - 1) + fib_memo_counted(n - 2)
    return cache[n]

call_counts_memo.clear()
cache.clear()
result = fib_memo_counted(35)
print(f"fib(35) = {result}")
print(f"Total function calls: {sum(call_counts_memo.values())}")
```
Output:
```
fib(35) = 9227465
Total function calls: 69
```
From 29 million calls down to 69. That’s the power of eliminating redundant computation.
## Tabulation (Bottom-Up DP)
Memoization works top-down: start with the problem you want to solve and recurse into subproblems. Tabulation works bottom-up: start with the smallest subproblems and build up to your answer.
```python
def fib_tabulation(n: int) -> int:
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0] = 0
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```
The time complexity is O(n)—we iterate from 2 to n exactly once.
The space complexity is O(n) for the dp array.
Tabulation has several advantages over memoization:
- No recursion overhead: Function calls are expensive. Iteration is cheap.
- No stack overflow risk: Deep recursion can crash your program. Python’s default recursion limit is 1000.
- Predictable memory access: Sequential array access is cache-friendly.
The trade-off is that tabulation requires you to understand the order in which subproblems must be solved. For Fibonacci, this is trivial. For more complex DP problems, determining the correct iteration order can be challenging.
## Space-Optimized Iterative Solution
Look at the tabulation solution again. At each step, we only need dp[i-1] and dp[i-2]. We’re storing n values but only ever reading two of them. This is wasteful.
```python
def fib_optimized(n: int) -> int:
    if n <= 1:
        return n
    prev2 = 0  # fib(i-2)
    prev1 = 1  # fib(i-1)
    for _ in range(2, n + 1):
        current = prev1 + prev2
        prev2 = prev1
        prev1 = current
    return prev1
```
The time complexity remains O(n).
The space complexity drops to O(1)—we only store three integers regardless of n.
This is the solution you should use in production code. It’s fast, memory-efficient, and won’t crash on large inputs.
For even cleaner Python, you can use tuple unpacking:
```python
def fib_pythonic(n: int) -> int:
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
```
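A quick sanity check against the first terms of the sequence (definition repeated so the snippet runs standalone):

```python
def fib_pythonic(n: int) -> int:
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

print([fib_pythonic(i) for i in range(10)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```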
## Comparison and When to Use Each
| Approach | Time | Space | Stack Safe | Best For |
|---|---|---|---|---|
| Naive Recursive | O(2^n) | O(n) | No | Teaching recursion |
| Memoization | O(n) | O(n) | No | Quick optimization of existing recursive code |
| Tabulation | O(n) | O(n) | Yes | When you need all intermediate values |
| Space-Optimized | O(n) | O(1) | Yes | Production code |
Use naive recursion only for teaching purposes or when n is guaranteed to be very small (< 20).
Use memoization when you have existing recursive code and want a quick optimization, or when the subproblem dependencies are complex and top-down thinking is more natural.
Use tabulation when you need to access intermediate results later, or when you want to avoid recursion entirely.
Use space-optimized iteration for production code when you only need the final answer.
For completeness: there’s also an O(log n) solution using matrix exponentiation. The insight is that Fibonacci numbers can be computed using matrix multiplication:
```
[ F(n+1) ]   [ 1  1 ]^n   [ 1 ]
[ F(n)   ] = [ 1  0 ]   × [ 0 ]
```
Matrix exponentiation by squaring achieves O(log n) time. This matters for computing extremely large Fibonacci numbers (n in the millions), but for typical use cases, the O(n) iterative solution is fast enough and far simpler.
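A minimal sketch of that approach (fib_matrix and mul are illustrative names, not from the text above; squaring the 2×2 matrix halves the remaining exponent at each step):

```python
def fib_matrix(n: int) -> int:
    """O(log n) Fibonacci via 2x2 matrix exponentiation by squaring."""
    def mul(a, b):
        # Multiply two 2x2 matrices represented as ((a,b),(c,d))
        return (
            (a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]),
            (a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]),
        )

    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))    # the Fibonacci Q-matrix
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[0][1]  # Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]

print(fib_matrix(35))  # 9227465
```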
The Fibonacci sequence is a gateway to dynamic programming. The same pattern—identifying overlapping subproblems, caching results, and choosing between top-down and bottom-up approaches—applies to hundreds of more complex problems. Master this simple example, and you’ll have the foundation to tackle them all.