Hamiltonian Path: Visiting All Vertices

Key Insights

  • Hamiltonian path problems are NP-complete, meaning no known polynomial-time algorithm exists—understanding this shapes how you approach solutions in production systems.
  • Dynamic programming with bitmasks reduces complexity from O(n!) to O(n² × 2ⁿ), making graphs up to ~20 vertices tractable on modern hardware.
  • Real-world applications like PCB drilling and genome sequencing often use heuristics and approximations rather than exact solutions for practical performance.

Introduction to Hamiltonian Paths

A Hamiltonian path visits every vertex in a graph exactly once. A Hamiltonian cycle does the same but returns to the starting vertex, forming a closed loop. The distinction matters: some graphs have Hamiltonian paths but no Hamiltonian cycles.

Sir William Rowan Hamilton introduced this concept in 1857 through his “Icosian Game,” a puzzle requiring players to find a cycle visiting all vertices of a dodecahedron. What seemed like a recreational puzzle turned out to encode one of the hardest problems in computer science.

The core constraint is deceptively simple: visit every vertex exactly once. No revisits, no skips. This single constraint transforms an otherwise straightforward graph traversal into a computationally brutal problem.

The Computational Complexity Challenge

Hamiltonian path is NP-complete. This means two things: we can verify a solution in polynomial time (just check the path visits all vertices once), but finding that solution has no known polynomial-time algorithm.
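To make the verification half concrete, here is a minimal polynomial-time checker; the function name and the adjacency-matrix convention are assumptions of this sketch, chosen to match the examples later in this article:

```python
def is_hamiltonian_path(graph, path):
    """Verify a candidate Hamiltonian path in polynomial time.

    graph: adjacency matrix where graph[i][j] = 1 if an edge exists
    path:  candidate ordering of vertices
    """
    n = len(graph)
    # Every vertex must appear exactly once
    if sorted(path) != list(range(n)):
        return False
    # Every consecutive pair must be joined by an edge
    return all(graph[path[i]][path[i + 1]] for i in range(n - 1))
```

Checking is cheap; it's the search for a path worth checking that's hard.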

Compare this to Eulerian paths, which visit every edge exactly once. Euler’s problem has a simple linear-time existence test—a path exists exactly when the graph is connected (ignoring isolated vertices) and at most two vertices have odd degree—and Hierholzer’s algorithm constructs one in O(E). The Hamiltonian problem offers no such shortcut.
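For contrast, the Eulerian existence test fits in a few lines. A sketch for an undirected graph given as an adjacency matrix (connectivity of the non-isolated vertices is also required, handled here by a DFS):

```python
def has_eulerian_path(graph):
    """Check whether an undirected graph has an Eulerian path:
    all edges in one connected component, and at most two
    vertices of odd degree."""
    n = len(graph)
    degrees = [sum(row) for row in graph]
    # Condition 1: at most two odd-degree vertices
    if sum(d % 2 for d in degrees) > 2:
        return False
    # Condition 2: all non-isolated vertices in one component (DFS)
    start = next((v for v in range(n) if degrees[v] > 0), None)
    if start is None:
        return True  # edgeless graph: trivially satisfied
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in range(n):
            if graph[v][u] and u not in seen:
                seen.add(u)
                stack.append(u)
    return all(degrees[v] == 0 or v in seen for v in range(n))
```

A few degree counts and one traversal—exactly the kind of local structure the Hamiltonian problem lacks.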

Why the dramatic difference? Eulerian paths have local structure we can exploit. At each vertex, we just need to balance incoming and outgoing edges. Hamiltonian paths require global knowledge—every decision affects all future possibilities.

For algorithm design, this means:

  • Exact solutions work only for small graphs (typically n < 25)
  • Large-scale problems require heuristics or approximations
  • Problem-specific structure can sometimes help (tournament graphs, for example, always have Hamiltonian paths)
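To illustrate that last point, here is a sketch of the classical insertion construction for tournament graphs (complete directed graphs, where every pair of vertices has exactly one directed edge between them); the function name and matrix convention are mine:

```python
def tournament_hamiltonian_path(beats):
    """Build a Hamiltonian path in a tournament graph by insertion.

    beats: adjacency matrix where beats[i][j] = 1 means edge i -> j;
           exactly one of beats[i][j], beats[j][i] is 1 for i != j.
    """
    path = []
    for v in range(len(beats)):
        for i, u in enumerate(path):
            if beats[v][u]:        # v -> u: insert v just before u
                path.insert(i, v)
                break
        else:
            path.append(v)         # v beats no vertex on the path yet
    return path
```

Inserting each vertex before the first path vertex it beats always preserves a valid path, so the whole construction runs in O(n²)—a reminder that NP-completeness of the general problem says nothing about well-structured special cases.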

Backtracking Solution

The brute-force approach uses recursive backtracking. We build a path incrementally, making choices at each step and abandoning paths that can’t lead to solutions.

The decision tree has n choices for the first vertex, up to n-1 for the second, and so on. Without pruning, this gives O(n!) complexity. Pruning helps by cutting branches early when we detect dead ends.

def hamiltonian_path_backtracking(graph):
    """
    Find a Hamiltonian path using backtracking.
    
    Args:
        graph: Adjacency matrix where graph[i][j] = 1 if edge exists
    
    Returns:
        List of vertices forming the path, or None if no path exists
    """
    n = len(graph)
    path = []
    visited = [False] * n
    
    def backtrack(current):
        path.append(current)
        visited[current] = True
        
        # Base case: all vertices visited
        if len(path) == n:
            return True
        
        # Try all adjacent unvisited vertices
        for next_vertex in range(n):
            if graph[current][next_vertex] and not visited[next_vertex]:
                if backtrack(next_vertex):
                    return True
        
        # Backtrack: remove current vertex and mark unvisited
        path.pop()
        visited[current] = False
        return False
    
    # Try starting from each vertex
    for start in range(n):
        if backtrack(start):
            return path
        path.clear()
        visited = [False] * n
    
    return None


# Example usage
graph = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0]
]

result = hamiltonian_path_backtracking(graph)
print(f"Hamiltonian path: {result}")  # Output: [0, 1, 2, 4, 3]

This implementation tries each vertex as a starting point and recursively explores neighbors. When stuck, it backtracks by unmarking the current vertex and trying alternatives.

Dynamic Programming with Bitmasks

The Held-Karp technique, originally devised for the traveling salesman problem, trades space for time using state compression. Instead of tracking the order of visited vertices (which gives n! states), we track which vertices have been visited using a bitmask (giving 2ⁿ states).

The key insight: for finding whether a Hamiltonian path exists, we only need to know the set of visited vertices and the current endpoint—not the order we visited them.

State definition: dp[mask][v] = True if we can visit exactly the vertices in mask and end at vertex v.

def hamiltonian_path_dp(graph):
    """
    Find Hamiltonian path using dynamic programming with bitmasks.
    
    Time complexity: O(n² × 2ⁿ)
    Space complexity: O(n × 2ⁿ)
    
    Args:
        graph: Adjacency matrix
    
    Returns:
        List of vertices forming the path, or None if no path exists
    """
    n = len(graph)
    
    # dp[mask][v] = True if we can reach state (mask, v)
    # mask: bitmask of visited vertices
    # v: current vertex (last in path)
    dp = [[False] * n for _ in range(1 << n)]
    parent = [[(-1, -1)] * n for _ in range(1 << n)]
    
    # Base case: start at each vertex
    for v in range(n):
        dp[1 << v][v] = True
    
    # Fill DP table
    for mask in range(1, 1 << n):
        for last in range(n):
            if not dp[mask][last]:
                continue
            if not (mask & (1 << last)):
                continue
                
            # Try extending to each unvisited neighbor
            for next_v in range(n):
                if mask & (1 << next_v):  # Already visited
                    continue
                if not graph[last][next_v]:  # No edge
                    continue
                    
                new_mask = mask | (1 << next_v)
                dp[new_mask][next_v] = True
                parent[new_mask][next_v] = (mask, last)
    
    # Check if full path exists and reconstruct
    full_mask = (1 << n) - 1
    for end in range(n):
        if dp[full_mask][end]:
            # Reconstruct path
            path = []
            mask, v = full_mask, end
            while v != -1:
                path.append(v)
                mask, v = parent[mask][v]
            return path[::-1]
    
    return None


# Example usage
graph = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0]
]

result = hamiltonian_path_dp(graph)
print(f"Hamiltonian path: {result}")

The complexity improvement is significant. For n=15, n! is ~1.3 trillion operations, while n² × 2ⁿ is ~7.4 million. That’s a speedup of roughly 177,000x.
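The arithmetic is easy to reproduce:

```python
import math

n = 15
brute_force = math.factorial(n)   # 1,307,674,368,000 operations
bitmask_dp = n * n * 2 ** n       # 7,372,800 operations

print(f"{brute_force:,} vs {bitmask_dp:,}")
print(f"speedup: ~{brute_force // bitmask_dp:,}x")
```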

Practical Optimizations and Heuristics

For real applications, several optimizations make backtracking viable for larger inputs:

Degree-based ordering: Start with low-degree vertices. They have fewer neighbors, so if they can’t be included in a path, we fail fast.

Pruning on connectivity: If removing the current path disconnects unvisited vertices, abandon immediately.

Warnsdorff’s rule: For Knight’s Tour and similar problems, always move to the vertex with the fewest onward moves. This greedy heuristic often finds solutions without backtracking.

def hamiltonian_path_optimized(graph):
    """
    Optimized backtracking with degree-based vertex ordering.
    
    Starts from lowest-degree vertex and prioritizes
    neighbors by their remaining degree (fewest first).
    """
    n = len(graph)
    
    # Calculate initial degrees
    degrees = [sum(row) for row in graph]
    
    # Sort vertices by degree (ascending)
    vertices_by_degree = sorted(range(n), key=lambda v: degrees[v])
    
    path = []
    visited = [False] * n
    
    def get_unvisited_neighbors(v):
        """Get unvisited neighbors sorted by remaining unvisited degree."""
        neighbors = []
        for u in range(n):
            if graph[v][u] and not visited[u]:
                # Count unvisited neighbors of u (excluding v)
                remaining_degree = sum(
                    1 for w in range(n) 
                    if graph[u][w] and not visited[w] and w != v
                )
                neighbors.append((remaining_degree, u))
        # Sort by remaining degree (Warnsdorff's heuristic)
        neighbors.sort()
        return [u for _, u in neighbors]
    
    def backtrack(current):
        path.append(current)
        visited[current] = True
        
        if len(path) == n:
            return True
        
        # Early pruning: check if any unvisited vertex is now isolated
        for v in range(n):
            if not visited[v]:
                has_unvisited_neighbor = any(
                    graph[v][u] and not visited[u] 
                    for u in range(n)
                )
                if not has_unvisited_neighbor and len(path) < n - 1:
                    path.pop()
                    visited[current] = False
                    return False
        
        for next_v in get_unvisited_neighbors(current):
            if backtrack(next_v):
                return True
        
        path.pop()
        visited[current] = False
        return False
    
    # Try starting from lowest-degree vertices first
    for start in vertices_by_degree:
        if backtrack(start):
            return path
        path.clear()
        visited = [False] * n
    
    return None

Real-World Applications

PCB Drilling: A drill must visit all hole positions. Minimizing travel time means finding a short Hamiltonian path. Manufacturing uses TSP solvers, which build on Hamiltonian path algorithms.

Genome Sequencing: DNA fragment assembly was classically modeled as finding Hamiltonian paths through overlap graphs. Because that formulation doesn’t scale to millions of fragments, modern assemblers typically switch to de Bruijn graphs, where assembly becomes a tractable Eulerian path problem.

Knight’s Tour: A classic puzzle where a knight must visit every square on a chessboard exactly once. This is a Hamiltonian path on the knight’s move graph.

def knights_tour(n=8):
    """
    Solve Knight's Tour on an n×n board using Warnsdorff's heuristic.
    
    Returns board with move numbers, or None if no solution.
    """
    # Knight move offsets
    moves = [
        (-2, -1), (-2, 1), (-1, -2), (-1, 2),
        (1, -2), (1, 2), (2, -1), (2, 1)
    ]
    
    board = [[-1] * n for _ in range(n)]
    
    def is_valid(x, y):
        return 0 <= x < n and 0 <= y < n and board[x][y] == -1
    
    def count_onward_moves(x, y):
        """Count valid moves from position (x, y)."""
        return sum(1 for dx, dy in moves if is_valid(x + dx, y + dy))
    
    def solve(x, y, move_num):
        board[x][y] = move_num
        
        if move_num == n * n - 1:
            return True
        
        # Get valid next moves, sorted by onward move count (Warnsdorff)
        next_moves = []
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if is_valid(nx, ny):
                next_moves.append((count_onward_moves(nx, ny), nx, ny))
        
        next_moves.sort()
        
        for _, nx, ny in next_moves:
            if solve(nx, ny, move_num + 1):
                return True
        
        board[x][y] = -1
        return False
    
    if solve(0, 0, 0):
        return board
    return None


# Solve and display
tour = knights_tour(8)
if tour:
    for row in tour:
        print(' '.join(f'{cell:2}' for cell in row))

Warnsdorff’s heuristic makes Knight’s Tour tractable even for large boards. Without it, plain backtracking can take impractically long even on the standard 8×8 board.

Conclusion and Further Reading

Hamiltonian path problems sit at the intersection of theoretical computer science and practical engineering. The NP-completeness means you won’t find a magic algorithm, but understanding the problem structure lets you choose appropriate tools:

  • Use backtracking with heuristics for small graphs or structured problems
  • Apply bitmask DP when you need guaranteed solutions for n ≤ 20
  • Consider approximations and problem-specific optimizations for larger instances

Related problems worth exploring: the Traveling Salesman Problem (weighted Hamiltonian cycle), graph coloring (another NP-complete classic), and longest path problems in DAGs (polynomial time, interestingly). Each builds on the foundations covered here.
