Ternary Search: Unimodal Function Search
Key Insights
- Ternary search finds extrema in unimodal functions by eliminating one-third of the search space per iteration, giving O(log n) convergence (base 3/2, since two-thirds of the interval remains each step)—ideal when you can’t compute derivatives
- The algorithm requires strict unimodality; applying it to multi-modal functions produces silently wrong results without any error indication
- For continuous optimization, golden-section search is often superior due to fewer function evaluations, but ternary search remains the go-to for discrete domains
Introduction to Ternary Search
Binary search finds elements in sorted arrays. Ternary search solves a different problem: finding the maximum or minimum of a unimodal function. While binary search asks “is my target to the left or right?”, ternary search asks “is the peak to the left, middle, or right?”
This distinction matters because many real-world optimization problems don’t give you a sorted array—they give you a function where you can evaluate any point but can’t see the whole landscape. You’re searching for the hilltop while standing in fog.
The algorithm divides the search space into thirds using two probe points, evaluates the function at both, and eliminates the third that definitely doesn’t contain the extremum. Repeat until convergence.
Understanding Unimodal Functions
A unimodal function has exactly one peak (for maximum) or one valley (for minimum) in the search interval. Formally, for a function f on interval [a, b] with maximum at point m:
- f is strictly increasing on [a, m]
- f is strictly decreasing on [m, b]
Classic examples include parabolas opening downward, Gaussian distributions, and many cost/revenue functions in economics.
```python
import numpy as np

def example_unimodal(x: float) -> float:
    """A simple unimodal function with maximum at x=5."""
    return -(x - 5) ** 2 + 25

def visualize_unimodal():
    """Generate points to visualize the function."""
    x_values = np.linspace(0, 10, 100)
    y_values = [example_unimodal(x) for x in x_values]
    # Find and display the maximum
    max_idx = np.argmax(y_values)
    print(f"Maximum at x={x_values[max_idx]:.2f}, y={y_values[max_idx]:.2f}")
    return x_values, y_values
```
Binary search fails here because there’s no ordering relationship. If you probe the midpoint and get f(5) = 25, you have no idea whether to go left or right—you’d need to know if you’re on the ascending or descending side. Ternary search solves this by probing two points and comparing them.
The Algorithm Explained
Here’s the core logic: given interval [left, right], compute two midpoints that divide it into thirds:
```python
mid1 = left + (right - left) / 3
mid2 = right - (right - left) / 3
```
Evaluate f(mid1) and f(mid2). For finding a maximum:
- If f(mid1) < f(mid2): the maximum is in [mid1, right] (eliminate left third)
- If f(mid1) > f(mid2): the maximum is in [left, mid2] (eliminate right third)
- If f(mid1) == f(mid2): the maximum is in [mid1, mid2] (eliminate both outer thirds)
Each iteration eliminates one-third of the search space, so the interval shrinks by a factor of 2/3 per iteration—O(log n) convergence with base 3/2.
```python
from typing import Callable

def ternary_search_max(
    f: Callable[[float], float],
    left: float,
    right: float,
    epsilon: float = 1e-9
) -> float:
    """
    Find x that maximizes f(x) in interval [left, right].
    Assumes f is unimodal with a single maximum.
    """
    while right - left > epsilon:
        mid1 = left + (right - left) / 3
        mid2 = right - (right - left) / 3
        if f(mid1) < f(mid2):
            left = mid1
        else:
            right = mid2
    return (left + right) / 2

# Usage
result = ternary_search_max(example_unimodal, 0, 10)
print(f"Maximum found at x = {result:.6f}")  # Output: ~5.000000
```
The number of iterations is approximately log base 3/2 of (range/epsilon), since each iteration keeps two-thirds of the interval. For a range of 10 and an epsilon of 1e-9, that’s about 57 iterations.
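The iteration-count formula is easy to sanity-check in code. This is a minimal sketch; `estimated_iterations` is a hypothetical helper name, and the base-3/2 logarithm comes from each iteration keeping two-thirds of the interval:

```python
import math

def estimated_iterations(search_range: float, epsilon: float) -> int:
    # Each iteration keeps 2/3 of the interval, so the count is a
    # logarithm with base 3/2 of the total reduction factor.
    return math.ceil(math.log(search_range / epsilon, 1.5))

print(estimated_iterations(10, 1e-9))  # 57
```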
Implementation Variations
Discrete Domain
When searching over integers (array indices, discrete parameter values), the continuous version needs modification:
```python
from typing import List

def ternary_search_discrete_max(arr: List[float]) -> int:
    """
    Find index of maximum element in a unimodal array.
    Returns the index, not the value.
    """
    left, right = 0, len(arr) - 1
    while right - left > 2:
        mid1 = left + (right - left) // 3
        mid2 = right - (right - left) // 3
        if arr[mid1] < arr[mid2]:
            left = mid1
        else:
            right = mid2
    # Linear search over remaining elements (at most 3)
    max_idx = left
    for i in range(left, right + 1):
        if arr[i] > arr[max_idx]:
            max_idx = i
    return max_idx

# Example: unimodal array
unimodal_array = [1, 4, 8, 15, 22, 25, 21, 14, 9, 3]
idx = ternary_search_discrete_max(unimodal_array)
print(f"Maximum at index {idx}, value = {unimodal_array[idx]}")
```
Finding Minimum
For minimum instead of maximum, flip the comparison:
```python
def ternary_search_min(
    f: Callable[[float], float],
    left: float,
    right: float,
    epsilon: float = 1e-9
) -> float:
    """Find x that minimizes f(x) in interval [left, right]."""
    while right - left > epsilon:
        mid1 = left + (right - left) / 3
        mid2 = right - (right - left) / 3
        if f(mid1) > f(mid2):  # Flipped comparison
            left = mid1
        else:
            right = mid2
    return (left + right) / 2
```
Iteration-Limited Version
For production code, add an iteration limit to prevent infinite loops from floating-point issues:
```python
def ternary_search_safe(
    f: Callable[[float], float],
    left: float,
    right: float,
    epsilon: float = 1e-9,
    max_iterations: int = 200,
    find_max: bool = True
) -> tuple[float, int]:
    """
    Safe ternary search with iteration limit.
    Returns (result, iterations_used).
    """
    iterations = 0
    while right - left > epsilon and iterations < max_iterations:
        mid1 = left + (right - left) / 3
        mid2 = right - (right - left) / 3
        if find_max:
            should_go_right = f(mid1) < f(mid2)
        else:
            should_go_right = f(mid1) > f(mid2)
        if should_go_right:
            left = mid1
        else:
            right = mid2
        iterations += 1
    return (left + right) / 2, iterations
```
Practical Applications
Revenue Optimization
A classic application: finding the price that maximizes revenue given a demand curve.
```python
def demand_curve(price: float) -> float:
    """
    Demand as function of price.
    Higher price = lower demand (realistic model).
    """
    base_demand = 10000
    price_sensitivity = 50
    return max(0, base_demand - price_sensitivity * price)

def revenue(price: float) -> float:
    """Revenue = price × demand."""
    return price * demand_curve(price)

def find_optimal_price(min_price: float, max_price: float) -> dict:
    """Find price that maximizes revenue."""
    optimal_price = ternary_search_max(revenue, min_price, max_price)
    optimal_revenue = revenue(optimal_price)
    optimal_demand = demand_curve(optimal_price)
    return {
        "price": optimal_price,
        "revenue": optimal_revenue,
        "units_sold": optimal_demand
    }

# Find optimal pricing
result = find_optimal_price(0, 200)
print(f"Optimal price: ${result['price']:.2f}")
print(f"Expected revenue: ${result['revenue']:.2f}")
print(f"Units sold: {result['units_sold']:.0f}")
```
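This demand model also has a closed-form optimum to check against: revenue p·(10000 − 50p) is a downward parabola peaking at p = 10000/(2·50) = 100. A standalone sketch, restating the search loop inline so it runs on its own (`revenue_model` is a local name introduced here):

```python
# Revenue p * (10000 - 50p): downward parabola, analytic peak at p = 100.
def revenue_model(price: float) -> float:
    return price * max(0.0, 10000 - 50 * price)

left, right = 0.0, 200.0
while right - left > 1e-9:
    mid1 = left + (right - left) / 3
    mid2 = right - (right - left) / 3
    if revenue_model(mid1) < revenue_model(mid2):
        left = mid1
    else:
        right = mid2
optimal = (left + right) / 2
print(f"{optimal:.2f}")  # 100.00
```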
Hyperparameter Tuning
When a hyperparameter has a unimodal effect on validation accuracy:
```python
def tune_learning_rate(
    train_fn: Callable[[float], float],  # Returns validation accuracy
    min_lr: float = 1e-5,
    max_lr: float = 1.0
) -> float:
    """
    Find optimal learning rate assuming unimodal accuracy curve.
    Works on log scale for learning rates.
    """
    import math

    def accuracy_at_log_lr(log_lr: float) -> float:
        lr = math.exp(log_lr)
        return train_fn(lr)

    optimal_log_lr = ternary_search_max(
        accuracy_at_log_lr,
        math.log(min_lr),
        math.log(max_lr),
        epsilon=0.01  # Coarser precision for expensive evaluations
    )
    return math.exp(optimal_log_lr)
```
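A runnable sketch of the log-scale trick end to end, using a made-up `mock_train` whose accuracy curve is unimodal in log(lr) and peaks at lr = 0.01 by construction:

```python
import math

def mock_train(lr: float) -> float:
    # Hypothetical validation accuracy: a downward parabola in log(lr),
    # peaking at lr = 0.01 by construction.
    return 1.0 - 0.05 * (math.log(lr) - math.log(1e-2)) ** 2

def acc_at_log_lr(log_lr: float) -> float:
    return mock_train(math.exp(log_lr))

# Ternary search over log(lr), restated inline so the snippet is standalone.
left, right = math.log(1e-5), math.log(1.0)
while right - left > 0.01:  # coarse epsilon: each probe is a "training run"
    mid1 = left + (right - left) / 3
    mid2 = right - (right - left) / 3
    if acc_at_log_lr(mid1) < acc_at_log_lr(mid2):
        left = mid1
    else:
        right = mid2
best_lr = math.exp((left + right) / 2)
print(f"{best_lr:.4f}")  # close to 0.01
```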
Comparison with Alternative Approaches
Golden-section search shrinks the interval by the golden ratio (~0.618) per iteration and reuses one function evaluation from the previous iteration, so it needs only one new call per step versus ternary search’s two. For expensive function evaluations, that roughly 50% reduction in calls makes it the better choice—and its per-iteration shrink factor (0.618 vs 2/3) is slightly faster as well.
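For comparison, here is a minimal golden-section sketch for the same unimodal-maximum setting (an illustrative version, not a library-grade implementation). The key trick is that golden-ratio spacing lets each iteration reuse one of the previous probe points:

```python
import math
from typing import Callable

def golden_section_max(
    f: Callable[[float], float],
    left: float,
    right: float,
    epsilon: float = 1e-9
) -> float:
    """Golden-section search: one new evaluation per iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618; satisfies r**2 == 1 - r
    x1 = right - inv_phi * (right - left)
    x2 = left + inv_phi * (right - left)
    f1, f2 = f(x1), f(x2)
    while right - left > epsilon:
        if f1 < f2:
            # Maximum is in [x1, right]; old x2 becomes the new x1.
            left, x1, f1 = x1, x2, f2
            x2 = left + inv_phi * (right - left)
            f2 = f(x2)
        else:
            # Maximum is in [left, x2]; old x1 becomes the new x2.
            right, x2, f2 = x2, x1, f1
            x1 = right - inv_phi * (right - left)
            f1 = f(x1)
    return (left + right) / 2

print(golden_section_max(lambda x: -(x - 5) ** 2 + 25, 0, 10))  # ~5.0
```

Because r² = 1 − r for the golden ratio, the surviving probe point lands exactly where the next iteration needs it, which is what saves the second evaluation.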
Binary search on the derivative works if you can compute f′(x): since f′ is positive left of the maximum and negative right of it, binary search on the sign of f′ locates the peak. This requires only one evaluation per iteration but needs derivative computation.
Gradient descent handles multivariate optimization and doesn’t require unimodality, but may find local optima and requires tuning step sizes.
Choose ternary search when:
- You can’t compute derivatives
- The function is known to be unimodal
- You’re working in a discrete domain
- Function evaluation is cheap
Common Pitfalls and Edge Cases
The biggest danger is applying ternary search to non-unimodal functions. It will return something, but that something is likely wrong.
```python
def validate_unimodality(
    f: Callable[[float], float],
    left: float,
    right: float,
    samples: int = 100
) -> tuple[bool, str]:
    """
    Heuristic check for unimodality.
    Not foolproof, but catches obvious violations.
    """
    step = (right - left) / samples
    values = [f(left + i * step) for i in range(samples + 1)]
    # Count up/down direction changes between consecutive samples
    direction_changes = 0
    prev_dir = None
    for i in range(1, len(values)):
        if values[i] > values[i - 1]:
            current_dir = "up"
        elif values[i] < values[i - 1]:
            current_dir = "down"
        else:
            continue  # Skip flat segments
        if prev_dir is not None and current_dir != prev_dir:
            direction_changes += 1
        prev_dir = current_dir
    if direction_changes == 0:
        return False, "Function is monotonic, not unimodal"
    elif direction_changes == 1:
        return True, "Function appears unimodal"
    else:
        return False, f"Function has {direction_changes} direction changes"

def safe_ternary_search(
    f: Callable[[float], float],
    left: float,
    right: float,
    **kwargs
) -> float:
    """Ternary search with unimodality validation."""
    is_unimodal, message = validate_unimodality(f, left, right)
    if not is_unimodal:
        raise ValueError(f"Function may not be unimodal: {message}")
    return ternary_search_max(f, left, right, **kwargs)
```
Other pitfalls include:
- Plateau regions: When f(mid1) == f(mid2), strict unimodality guarantees the extremum lies between the two probes. With non-strict unimodality (flat sections), both probes can sit on a plateau to one side of the extremum, and the tie-breaking branch may discard the side that contains it.
- Floating-point precision: Epsilon values near machine precision (< 1e-15) can leave `right - left` unable to shrink further, stalling the loop—another reason for an iteration limit. Stick to around 1e-9 for doubles.
- Boundary extrema: If the maximum is at the boundary, ternary search converges to it but slowly. Consider checking boundaries explicitly.
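The silent-failure danger is easy to reproduce. In this sketch (the search loop restated inline so it runs standalone; `bimodal` is a made-up test function with two peaks), the very first comparison discards the interval third containing the taller, narrower peak:

```python
import math

def bimodal(x: float) -> float:
    # Broad peak of height 1.5 at x=2; narrow, taller peak of height 2.0 at x=8.
    return 1.5 * math.exp(-(x - 2) ** 2) + 2.0 * math.exp(-20 * (x - 8) ** 2)

left, right = 0.0, 10.0
while right - left > 1e-9:
    mid1 = left + (right - left) / 3
    mid2 = right - (right - left) / 3
    if bimodal(mid1) < bimodal(mid2):
        left = mid1
    else:
        right = mid2
found = (left + right) / 2

# The narrow peak at x=8 contributes almost nothing at the first probes
# (x≈3.33 and x≈6.67), so the right third is eliminated immediately. The
# search settles near x=2 (f≈1.5) and silently misses the global max (f=2.0).
print(f"{found:.3f}, {bimodal(found):.3f}")
```

No exception, no warning: the return value looks perfectly plausible, which is exactly why the unimodality check above is worth running first.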
Ternary search is a focused tool for a specific problem class. Understand its assumptions, validate your inputs, and it will serve you well in optimization tasks where derivatives aren’t available.