# Staircase Problem: Number of Ways to Climb
You're standing at the bottom of a staircase with `n` steps. You can climb either 1 or 2 steps at a time. How many distinct ways can you reach the top?
## Key Insights
- The staircase problem is a disguised Fibonacci sequence—recognizing this pattern instantly reveals the optimal solution approach and demonstrates how dynamic programming transforms exponential problems into linear ones.
- Memoization and tabulation reach the same asymptotic complexity but carry different trade-offs: top-down is more intuitive but risks stack overflow, while bottom-up is iterative and often faster in practice.
- Space optimization from O(n) to O(1) is the difference between a good solution and a great one—interviewers specifically look for this final refinement.
## Introduction to the Staircase Problem
This deceptively simple problem appears in technical interviews at Google, Amazon, and countless other companies. It’s not because climbing stairs is particularly interesting—it’s because this problem elegantly tests whether you understand dynamic programming fundamentals.
The staircase problem forces you to recognize overlapping subproblems, derive a recurrence relation, and optimize from brute force to an efficient solution. Master this problem, and you’ve internalized a pattern that applies to hundreds of other DP challenges.
Let’s work through the complete solution journey, from naive recursion to space-optimized elegance.
## Recognizing the Pattern: Fibonacci Connection
Before writing any code, think about the problem structure. To reach step n, you must have come from either step n-1 (taking 1 step) or step n-2 (taking 2 steps). There’s no other way.
This means the number of ways to reach step n equals the sum of ways to reach steps n-1 and n-2:
```
ways(n) = ways(n-1) + ways(n-2)
```
Look familiar? This is exactly the Fibonacci recurrence relation. The base cases are:
- `ways(1) = 1` — one way to climb one step (take 1 step)
- `ways(2) = 2` — two ways to climb two steps (1+1 or 2)
Here’s the simplest recursive implementation:
```python
def climb_stairs_recursive(n: int) -> int:
    if n <= 2:
        return n
    return climb_stairs_recursive(n - 1) + climb_stairs_recursive(n - 2)
```
This works correctly. For n = 5, it returns 8. For n = 10, it returns 89. The logic is sound, but there’s a serious problem lurking beneath the surface.
## The Naive Recursive Approach and Its Pitfalls
The recursive solution has exponential time complexity: O(2^n). To understand why, let’s visualize the recursion tree for n = 5:
```
                     climb(5)
                    /        \
              climb(4)      climb(3)
             /        \     /      \
        climb(3) climb(2) climb(2) climb(1)
        /      \
  climb(2)  climb(1)
```
Notice how `climb(3)` is computed twice and `climb(2)` three times. As `n` grows, this redundancy explodes. For n = 40, you're making hundreds of millions of redundant function calls.
Let’s prove this with a call counter:
```python
call_count = 0

def climb_stairs_counted(n: int) -> int:
    global call_count
    call_count += 1
    if n <= 2:
        return n
    return climb_stairs_counted(n - 1) + climb_stairs_counted(n - 2)

# Test with increasing values of n
for n in [10, 20, 30, 35]:
    call_count = 0
    result = climb_stairs_counted(n)
    print(f"n={n}: result={result}, calls={call_count}")
```
Running this produces alarming output:
```
n=10: result=89, calls=177
n=20: result=10946, calls=21891
n=30: result=1346269, calls=2692537
n=35: result=14930352, calls=29860703
```
At n = 35, we're making nearly 30 million function calls. At n = 50, the count climbs into the tens of billions, and your program will appear to hang. This is unacceptable for production code and will fail any reasonable interview.
## Optimization with Memoization (Top-Down DP)
The fix is straightforward: cache results as we compute them. If we’ve already calculated ways(3), store it and return the cached value on subsequent calls.
This technique is called memoization—a top-down dynamic programming approach where we start from the target and work backward, caching along the way.
```python
from typing import Optional

def climb_stairs_memo(n: int, memo: Optional[dict] = None) -> int:
    if memo is None:
        memo = {}
    if n <= 2:
        return n
    if n in memo:
        return memo[n]
    memo[n] = climb_stairs_memo(n - 1, memo) + climb_stairs_memo(n - 2, memo)
    return memo[n]
```
A cleaner approach uses Python's `functools.lru_cache` decorator:
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs_cached(n: int) -> int:
    if n <= 2:
        return n
    return climb_stairs_cached(n - 1) + climb_stairs_cached(n - 2)
```
Both implementations achieve O(n) time complexity and O(n) space complexity. Each subproblem is solved exactly once, and we store n results in our cache.
The memoized version handles n = 1000 almost instantly, whereas the naive recursive version would take longer than the age of the universe. One caveat: at that depth you are brushing against CPython's default recursion limit (typically 1000), so much larger inputs require `sys.setrecursionlimit` or the iterative approach below.
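The claim that each subproblem is solved exactly once can be checked directly: `functools.lru_cache` exposes a `cache_info()` method that reports cache hits and misses. Re-running the cached version (redefined here so the snippet is self-contained):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs_cached(n: int) -> int:
    if n <= 2:
        return n
    return climb_stairs_cached(n - 1) + climb_stairs_cached(n - 2)

climb_stairs_cached(35)
# Each distinct argument 1..35 is computed exactly once (35 misses);
# every repeated subproblem is answered from the cache instead.
print(climb_stairs_cached.cache_info())
```

Compare those 35 misses with the nearly 30 million calls the uncached version makes at the same n = 35.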
## Tabulation Approach (Bottom-Up DP)
Memoization works top-down: start at n, recurse to base cases, then build back up. Tabulation works bottom-up: start at base cases and iteratively build toward n.
```python
def climb_stairs_tabulation(n: int) -> int:
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    dp[2] = 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```
Both approaches have identical time and space complexity: O(n) each. However, tabulation has practical advantages:
- No recursion overhead — no function call stack to manage
- No stack overflow risk — deep recursion can crash with large `n`
- Better cache locality — sequential array access is CPU-friendly
For interviews, either approach is acceptable. For production code, I prefer tabulation for its predictable memory behavior.
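The stack-overflow point is easy to demonstrate. Using the memoized implementation from earlier (repeated here for a self-contained snippet), a sufficiently large `n` blows past CPython's default recursion limit, typically 1000, before the cache can help:

```python
def climb_stairs_memo(n: int, memo=None) -> int:
    if memo is None:
        memo = {}
    if n <= 2:
        return n
    if n in memo:
        return memo[n]
    memo[n] = climb_stairs_memo(n - 1, memo) + climb_stairs_memo(n - 2, memo)
    return memo[n]

try:
    climb_stairs_memo(100_000)  # recursion depth grows with n
except RecursionError:
    print("RecursionError: top-down DP overflowed the call stack")
```

The tabulated version loops instead of recursing, so it handles the same input without complaint (the answer is simply a very large exact integer, since Python ints are arbitrary precision).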
## Space-Optimized Solution
Here’s where good solutions become great ones. Look at the tabulation code: we maintain an array of size n, but we only ever access the previous two elements. Why store the entire history?
We can reduce space complexity from O(n) to O(1) by tracking just two variables:
```python
def climb_stairs_optimized(n: int) -> int:
    if n <= 2:
        return n
    prev2 = 1  # ways to reach step 1
    prev1 = 2  # ways to reach step 2
    for i in range(3, n + 1):
        current = prev1 + prev2
        prev2 = prev1
        prev1 = current
    return prev1
```
Here’s the same solution in Java for those who prefer it:
```java
public int climbStairs(int n) {
    if (n <= 2) {
        return n;
    }
    int prev2 = 1;
    int prev1 = 2;
    for (int i = 3; i <= n; i++) {
        int current = prev1 + prev2;
        prev2 = prev1;
        prev1 = current;
    }
    return prev1;
}
```
This is the optimal solution: O(n) time, O(1) space. One caveat for the Java version: the result overflows a 32-bit `int` around n = 46, so use `long` for larger inputs (Python's arbitrary-precision integers sidestep this). In interviews, reaching this optimization demonstrates mastery of the problem and attention to resource efficiency.
## Extending the Problem: K Steps Variant
Interviewers often extend the problem: instead of 1 or 2 steps, you can take 1 to k steps at a time. How does this change the solution?
The recurrence relation becomes:
```
ways(n) = ways(n-1) + ways(n-2) + ... + ways(n-k)
```
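The most direct translation of this recurrence fills a table and, for each step, sums the previous k entries. A minimal sketch (the function name is illustrative), costing O(k) work per table entry:

```python
def climb_stairs_k_naive(n: int, k: int) -> int:
    # dp[i] = number of ways to reach step i; dp[0] = 1 (standing at the bottom)
    dp = [0] * (n + 1)
    dp[0] = 1
    for i in range(1, n + 1):
        # Sum the ways from each of the last k reachable steps
        dp[i] = sum(dp[max(0, i - k):i])
    return dp[n]
```

With k = 2 this reproduces the Fibonacci-style values from earlier, such as 8 ways for n = 5.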
A naive extension of our tabulation approach would be O(n*k) time with O(n) space. But we can do better using a sliding window technique:
```python
def climb_stairs_k_steps(n: int, k: int) -> int:
    if n == 0:
        return 1
    if n < 0:
        return 0
    # dp[i] represents ways to reach step i
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to stay at ground level
    window_sum = dp[0]  # sliding window sum of the last k elements
    for i in range(1, n + 1):
        dp[i] = window_sum
        window_sum += dp[i]
        # Remove the element falling out of the window
        if i >= k:
            window_sum -= dp[i - k]
    return dp[n]
```
This maintains O(n) time complexity regardless of k. The sliding window sum avoids recalculating the sum of the previous k elements at each step.
For the space-optimized version with O(k) space:
```python
from collections import deque

def climb_stairs_k_optimized(n: int, k: int) -> int:
    if n == 0:
        return 1
    window = deque([1])  # Start with dp[0] = 1
    window_sum = 1
    for i in range(1, n + 1):
        current = window_sum
        window.append(current)
        window_sum += current
        if len(window) > k:
            window_sum -= window.popleft()
    return window[-1]
```
This generalized solution handles any step configuration while maintaining efficiency.
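When generalizing a solution like this, it pays to validate against an exhaustive enumeration for small inputs. A sketch of such a test oracle (the function name is illustrative; the recursion is exponential, so keep n small):

```python
def count_ways_brute(n: int, k: int) -> int:
    """Count step sequences (each step of size 1..k) summing to n, by brute force.

    Exponential time; only suitable as a correctness oracle for small n.
    """
    if n == 0:
        return 1  # one way: the empty sequence of steps
    return sum(count_ways_brute(n - step, k) for step in range(1, min(k, n) + 1))

# Cross-check against the 1-or-2-step values quoted earlier in the article
for n, expected in [(5, 8), (10, 89)]:
    assert count_ways_brute(n, 2) == expected
print("brute-force oracle agrees")
```

The same oracle can be pointed at the sliding-window and deque implementations for a handful of (n, k) pairs before trusting them on large inputs.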
## Wrapping Up
The staircase problem teaches a complete dynamic programming workflow:
- Identify the recurrence relation
- Implement naive recursion to verify correctness
- Add memoization to eliminate redundant work
- Convert to tabulation for iterative efficiency
- Optimize space by tracking only necessary state
This progression from O(2^n) time and O(n) stack space to O(n) time and O(1) space represents the kind of optimization thinking that separates competent engineers from exceptional ones.
When you encounter a new DP problem, follow this same pattern. The specific recurrence will change, but the optimization journey remains constant.