Two Pointer Technique: Efficient Array Searching
Key Insights
- Two pointers reduce O(n²) nested loops to O(n) linear scans by exploiting sorted data properties—each pointer movement eliminates multiple comparisons at once.
- The technique comes in two flavors: converging pointers (moving inward from both ends) and same-direction pointers (fast/slow for in-place modifications).
- Two pointers work when you can make greedy decisions about which pointer to move based on comparing current state to your target condition.
Why Nested Loops Don’t Scale
Every developer writes this code at some point: two nested loops iterating over an array to find pairs matching some condition. It works. It’s intuitive. And it falls apart the moment your input grows.
Consider finding two numbers in an array that sum to a target. The naive approach checks every possible pair:
```python
def two_sum_naive(nums, target):
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []
```
With 10,000 elements, you’re making roughly 50 million comparisons. With 100,000 elements, 5 billion. This O(n²) complexity kills performance in production systems processing real datasets.
The two pointer technique exploits a key insight: when data is sorted, you can eliminate entire ranges of possibilities with a single comparison. Instead of checking every pair, you strategically move two pointers based on whether your current sum is too high or too low. This drops complexity to O(n).
How Two Pointers Work
The technique uses two index variables that traverse the array according to specific rules. There are two primary patterns:
Converging pointers start at opposite ends and move toward each other. Use this when you need to find pairs or evaluate conditions across the full array span. The pointers meet in the middle, guaranteeing you’ve considered all relevant combinations.
Same-direction pointers (sometimes called fast/slow pointers) both start at the beginning but move at different rates. Use this for in-place array modifications where one pointer reads and another writes.
Here’s the mental model for converging pointers:
```
Array: [1, 3, 5, 7, 9, 11]
        ↑               ↑
      left            right

Target sum: 12

Step 1: 1 + 11 = 12 ✓ Found!

If sum < target: move left pointer right (need larger value)
If sum > target: move right pointer left (need smaller value)
```
The key insight is that moving a pointer eliminates all pairs involving the old position. When you move the left pointer right, you’re saying “no valid pair includes the old left value with any remaining right values.” This is only valid reasoning when the array is sorted.
Classic Problem: Two Sum on Sorted Array
Given a sorted array and a target sum, find two numbers that add up to the target. Return their indices.
```python
def two_sum_sorted(nums, target):
    left, right = 0, len(nums) - 1
    while left < right:
        current_sum = nums[left] + nums[right]
        if current_sum == target:
            return [left, right]
        elif current_sum < target:
            left += 1   # Need a larger sum, move left pointer right
        else:
            right -= 1  # Need a smaller sum, move right pointer left
    return []  # No valid pair found
```
Why does moving pointers work here? Because the array is sorted:
- If `current_sum < target`, moving `right` left would only make the sum smaller (we'd pick an even smaller number). So `left` must move right.
- If `current_sum > target`, moving `left` right would only make the sum larger. So `right` must move left.
Each iteration eliminates one position from consideration. We make at most n moves total, giving us O(n) time complexity with O(1) space.
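To see it in action, here's a quick usage sketch (the function is re-declared so the snippet runs on its own; the sample inputs are illustrative):

```python
def two_sum_sorted(nums, target):
    left, right = 0, len(nums) - 1
    while left < right:
        current_sum = nums[left] + nums[right]
        if current_sum == target:
            return [left, right]
        elif current_sum < target:
            left += 1   # Need a larger sum
        else:
            right -= 1  # Need a smaller sum
    return []

print(two_sum_sorted([1, 3, 5, 7, 9, 11], 12))  # [0, 5]: 1 + 11 = 12
print(two_sum_sorted([2, 4, 6], 100))           # []: no valid pair
```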
Pattern: Removing Duplicates In-Place
Same-direction pointers shine when modifying arrays in-place. The classic example is removing duplicates from a sorted array, returning the new length.
The trick: use a “write” pointer that tracks where to place the next unique element, and a “read” pointer that scans through the array.
```python
def remove_duplicates(nums):
    if not nums:
        return 0
    write = 1  # Position to write next unique element
    for read in range(1, len(nums)):
        if nums[read] != nums[read - 1]:
            nums[write] = nums[read]
            write += 1
    return write

# Example walkthrough:
# Input: [1, 1, 2, 2, 2, 3]
#
# read=1: nums[1]=1 == nums[0]=1, skip
# read=2: nums[2]=2 != nums[1]=1, write nums[2] to position 1, write=2
# read=3: nums[3]=2 == nums[2]=2, skip
# read=4: nums[4]=2 == nums[3]=2, skip
# read=5: nums[5]=3 != nums[4]=2, write nums[5] to position 2, write=3
#
# Result: [1, 2, 3, ...] with length 3
```
The write pointer always stays at or behind the read pointer, ensuring we never overwrite data we haven’t processed yet. This pattern applies to many in-place filtering problems: remove specific values, move zeros to end, or compress arrays.
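As one illustration of that extension, here's a sketch of the move-zeros variant using the same read/write pattern (`move_zeros` and the sample input are illustrative, not from the walkthrough above):

```python
def move_zeros(nums):
    """Move all zeros to the end in-place, preserving the order of non-zeros."""
    write = 0  # Next position for a non-zero element
    for read in range(len(nums)):
        if nums[read] != 0:
            nums[write] = nums[read]
            write += 1
    # Everything from `write` onward is stale; overwrite it with zeros
    for i in range(write, len(nums)):
        nums[i] = 0

nums = [0, 1, 0, 3, 12]
move_zeros(nums)
print(nums)  # [1, 3, 12, 0, 0]
```

The only change from `remove_duplicates` is the filter condition: "keep if non-zero" instead of "keep if different from the previous element."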
Pattern: Container With Most Water
This problem demonstrates greedy decision-making with converging pointers. Given an array of heights representing vertical lines, find two lines that form a container holding the most water.
The area between lines at positions i and j with heights h[i] and h[j] is:
area = min(h[i], h[j]) * (j - i)
```python
def max_area(heights):
    left, right = 0, len(heights) - 1
    max_water = 0
    while left < right:
        width = right - left
        height = min(heights[left], heights[right])
        max_water = max(max_water, width * height)
        # Move the pointer pointing to the shorter line
        if heights[left] < heights[right]:
            left += 1
        else:
            right -= 1
    return max_water
```
The greedy choice here is crucial: always move the pointer pointing to the shorter line. Why? The area is limited by the shorter line. Moving the taller line inward can only decrease width while the height stays capped by the shorter line—guaranteed worse or equal. Moving the shorter line gives a chance (not guarantee) of finding a taller line that increases area.
This greedy reasoning is what makes two pointers applicable. If you can’t justify why moving one pointer is definitively better (or at least not worse), the technique may not apply.
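One way to build confidence in a greedy argument like this is to cross-check the two-pointer answer against the O(n²) brute force on small random inputs. A verification sketch (the function is re-declared so the snippet runs on its own; `max_area_brute` is illustrative):

```python
import random

def max_area(heights):
    left, right = 0, len(heights) - 1
    max_water = 0
    while left < right:
        max_water = max(max_water, (right - left) * min(heights[left], heights[right]))
        if heights[left] < heights[right]:
            left += 1
        else:
            right -= 1
    return max_water

def max_area_brute(heights):
    # Check every pair of lines: O(n^2)
    n = len(heights)
    return max((j - i) * min(heights[i], heights[j])
               for i in range(n) for j in range(i + 1, n))

for _ in range(100):
    heights = [random.randint(0, 20) for _ in range(random.randint(2, 15))]
    assert max_area(heights) == max_area_brute(heights)
print("greedy matches brute force on 100 random inputs")
```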
Pattern: Three Sum Reduction
Many problems involving three or more elements reduce to two-pointer subproblems. The classic Three Sum problem asks: find all unique triplets in an array that sum to zero.
The strategy: fix one element, then use two pointers to find pairs that sum to its negation.
```python
def three_sum(nums):
    nums.sort()  # Must sort first
    result = []
    n = len(nums)
    for i in range(n - 2):
        # Skip duplicates for the first element
        if i > 0 and nums[i] == nums[i - 1]:
            continue
        target = -nums[i]
        left, right = i + 1, n - 1
        while left < right:
            current_sum = nums[left] + nums[right]
            if current_sum == target:
                result.append([nums[i], nums[left], nums[right]])
                # Skip duplicates for second and third elements
                while left < right and nums[left] == nums[left + 1]:
                    left += 1
                while left < right and nums[right] == nums[right - 1]:
                    right -= 1
                left += 1
                right -= 1
            elif current_sum < target:
                left += 1
            else:
                right -= 1
    return result
```
The outer loop is O(n), and the inner two-pointer scan is O(n), giving O(n²) total—a significant improvement over the O(n³) brute force approach. The duplicate-skipping logic prevents returning the same triplet multiple times.
This reduction pattern extends further: Four Sum becomes “fix one element, run Three Sum on the rest.” Each additional element adds another O(n) factor.
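A sketch of that reduction (this `four_sum` is an illustration following the same duplicate-skipping conventions as `three_sum` above, not a canonical implementation):

```python
def four_sum(nums, target):
    """Find unique quadruplets summing to target: O(n^3) via a two-pointer core."""
    nums = sorted(nums)
    n = len(nums)
    result = []
    for i in range(n - 3):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # Skip duplicate first elements
        for j in range(i + 1, n - 2):
            if j > i + 1 and nums[j] == nums[j - 1]:
                continue  # Skip duplicate second elements
            # Two-pointer scan for the remaining pair
            left, right = j + 1, n - 1
            while left < right:
                total = nums[i] + nums[j] + nums[left] + nums[right]
                if total == target:
                    result.append([nums[i], nums[j], nums[left], nums[right]])
                    while left < right and nums[left] == nums[left + 1]:
                        left += 1
                    while left < right and nums[right] == nums[right - 1]:
                        right -= 1
                    left += 1
                    right -= 1
                elif total < target:
                    left += 1
                else:
                    right -= 1
    return result

print(four_sum([1, 0, -1, 0, -2, 2], 0))
# [[-2, -1, 1, 2], [-2, 0, 0, 2], [-1, 0, 0, 1]]
```

The structure mirrors `three_sum` exactly: each fixed prefix adds one O(n) loop, and the innermost pair search stays a two-pointer scan.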
When to Use (and When Not To)
Use two pointers when:
- Data is sorted (or you can afford to sort it)
- You’re searching for pairs or subarrays meeting a condition
- You can make greedy decisions about which pointer to move
- You need O(1) space complexity
Consider alternatives when:
- Data isn’t sorted and sorting would exceed time budget
- You need to find elements by value, not by relationship (use hash maps)
- The condition depends on non-adjacent elements in complex ways
- You need to find a single element (use binary search)
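For the hash-map alternative on unsorted data, a minimal sketch (`two_sum_hash` is an illustrative name): it trades O(n) extra space for one pass with no sorting, and preserves original indices.

```python
def two_sum_hash(nums, target):
    """O(n) time, O(n) space; works on unsorted data, returns original indices."""
    seen = {}  # value -> index of its first occurrence
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return []

print(two_sum_hash([3, 1, 4, 2], 6))  # [2, 3]: 4 + 2 = 6
```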
Common mistakes:
- Forgetting to sort: Running two pointers on unsorted data produces wrong results. Verify the data is sorted before applying the technique.
- Wrong termination condition: Use `left < right` for pairs (the pointers shouldn't meet) and `left <= right` for binary search variants.
- Infinite loops: Ensure at least one pointer moves every iteration. A common bug is forgetting to move the pointers after finding a match.
- Off-by-one in same-direction: When using read/write pointers, carefully track whether you're comparing `nums[read]` with `nums[read - 1]` or `nums[write - 1]`.
- Ignoring duplicates: For problems requiring unique results, add explicit duplicate-skipping logic after finding matches.
Two pointers is a fundamental technique that appears constantly in interview problems and production code alike. Master the two patterns—converging and same-direction—and you’ll recognize opportunities to apply them across dozens of problem types. The key is always the same: can you eliminate possibilities faster than checking them one by one?