NumPy - Solve Linear Equations (np.linalg.solve)

Key Insights

  • np.linalg.solve() efficiently solves systems of linear equations Ax = b using LU decomposition, avoiding computationally expensive matrix inversion
  • The coefficient matrix A must be square and non-singular (invertible) for the solver to work; use np.linalg.lstsq() for overdetermined or underdetermined systems
  • Choosing between direct and iterative solvers matters at scale: a dense direct solve costs O(n³), while each iteration of an iterative method on a large sparse system touches only the nonzero entries

Understanding Linear Systems and NumPy’s Solver

Linear systems appear everywhere in scientific computing: circuit analysis, structural engineering, economics, machine learning optimization, and computer graphics. A system of linear equations takes the form Ax = b, where A is a coefficient matrix, x is the unknown vector, and b is the result vector.

NumPy’s linalg.solve() provides an efficient implementation that avoids the naive approach of computing A⁻¹b. Explicitly inverting A does roughly three times the floating-point work of a single solve and amplifies rounding error, so solve() instead uses LU decomposition with partial pivoting, which is both faster and more numerically stable.

import numpy as np

# Simple 2x2 system:
# 2x + 3y = 8
# 4x + 5y = 14

A = np.array([[2, 3],
              [4, 5]])

b = np.array([8, 14])

x = np.linalg.solve(A, b)
print(f"Solution: x={x[0]}, y={x[1]}")
# Output: Solution: x=1.0, y=2.0

# Verify the solution
print(f"Verification: {np.allclose(A @ x, b)}")
# Output: Verification: True
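Under the hood, solve() performs this LU factorization on every call. If you want to hold on to the factorization and reuse it across solves, SciPy exposes the two steps separately; a minimal sketch, assuming SciPy is installed:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
b = np.array([8.0, 14.0])

# Factor A once (LU with partial pivoting)...
lu, piv = lu_factor(A)

# ...then each subsequent solve reuses the factorization at O(n^2) cost
x = lu_solve((lu, piv), b)
print(x)  # [1. 2.]
```

This pays off when many right-hand sides for the same A arrive one at a time, where stacking them into a single call is not an option.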

Solving 3×3 Systems: Practical Engineering Example

Consider a structural engineering problem where you need to find forces in a truss system. Three members meet at a joint with applied loads, and you need to solve for unknown forces using equilibrium equations.

import numpy as np

# System of equations for force equilibrium:
# F1 + 0.707*F2 + 0 = 100 (horizontal)
# 0 + 0.707*F2 + F3 = 200 (vertical)
# F1 + F2 + F3 = 350 (constraint)

A = np.array([
    [1, 0.707, 0],
    [0, 0.707, 1],
    [1, 1, 1]
])

b = np.array([100, 200, 350])

forces = np.linalg.solve(A, b)

print("Forces in members:")
for i, force in enumerate(forces, 1):
    print(f"F{i}: {force:.2f} N")

# Output:
# Forces in members:
# F1: 185.39 N
# F2: -120.77 N
# F3: 285.39 N
# (a negative value means the member is in compression rather than tension)

Handling Multiple Right-Hand Sides

When solving multiple systems with the same coefficient matrix but different b vectors, solve() accepts 2D arrays for b, solving all systems simultaneously. This is more efficient than calling solve() multiple times.

import numpy as np

# Same coefficient matrix, three different scenarios
A = np.array([
    [3, 1, -1],
    [1, 4, 1],
    [2, 1, 2]
])

# Three different load cases
b_multiple = np.array([
    [4, 1, 2],    # Scenario 1
    [8, 3, 5],    # Scenario 2
    [2, 6, 4]     # Scenario 3
]).T  # Transpose so each column is a scenario

solutions = np.linalg.solve(A, b_multiple)

print("Solutions for each scenario:")
print(solutions)

# Verify all solutions at once
residuals = A @ solutions - b_multiple
print(f"\nAll solutions valid: {np.allclose(residuals, 0)}")
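Beyond multiple right-hand sides, solve() also broadcasts over stacked coefficient matrices: an A of shape (k, n, n) paired with a b of shape (k, n, 1) solves k independent systems in one vectorized call. A small sketch (the diagonal shift is only there to keep the random matrices invertible):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stack of 5 independent 3x3 systems; adding 3*I makes each matrix
# strictly diagonally dominant, hence invertible
A_stack = rng.random((5, 3, 3)) + 3 * np.eye(3)
b_stack = rng.random((5, 3, 1))  # shape (k, n, 1): one column vector per system

x_stack = np.linalg.solve(A_stack, b_stack)
print(x_stack.shape)  # (5, 3, 1)

# Each slice along the first axis solves its own system
print(np.allclose(A_stack @ x_stack, b_stack))  # True
```

Keeping b as an explicit column (shape (k, n, 1) rather than (k, n)) avoids an ambiguity in how older NumPy versions interpret the trailing dimensions.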

Error Handling and Matrix Validation

Not all systems have unique solutions. Singular matrices (determinant = 0) cause LinAlgError. Always validate your coefficient matrix before solving.

import numpy as np
from numpy.linalg import LinAlgError

def safe_solve(A, b):
    """Solve linear system with validation."""
    
    # Check if matrix is square
    if A.shape[0] != A.shape[1]:
        raise ValueError(f"Matrix must be square, got {A.shape}")
    
    # Check dimension compatibility
    if A.shape[0] != b.shape[0]:
        raise ValueError(f"Incompatible dimensions: A{A.shape}, b{b.shape}")
    
    # Check condition number (numerical stability)
    cond = np.linalg.cond(A)
    if cond > 1e10:
        print(f"Warning: Matrix is ill-conditioned (cond={cond:.2e})")
    
    # Check if matrix is singular
    det = np.linalg.det(A)
    if np.abs(det) < 1e-10:
        raise LinAlgError(f"Matrix is singular (det={det:.2e})")
    
    return np.linalg.solve(A, b)

# Test with singular matrix
A_singular = np.array([
    [1, 2, 3],
    [2, 4, 6],  # Row 2 = 2 * Row 1
    [1, 1, 1]
])
b = np.array([1, 2, 3])

try:
    result = safe_solve(A_singular, b)
except LinAlgError as e:
    print(f"Error: {e}")
# Output: Error: Matrix is singular (det=0.00e+00)
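One caveat about the determinant test above: det(cA) = cⁿ·det(A), so a perfectly well-behaved matrix can have a tiny determinant just from scaling. Matrix rank and condition number are scale-independent diagnostics; a short sketch on the same singular matrix:

```python
import numpy as np

A_singular = np.array([
    [1, 2, 3],
    [2, 4, 6],   # dependent row: 2 * row 1
    [1, 1, 1]
], dtype=float)

# Rank deficiency is detected regardless of how the matrix is scaled
print(np.linalg.matrix_rank(A_singular))  # 2 (full rank would be 3)

# The condition number of a numerically singular matrix is enormous
print(f"cond = {np.linalg.cond(A_singular):.2e}")
```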

Performance Comparison: solve() vs inv()

The common mistake of computing np.linalg.inv(A) @ b is both slower and less accurate than using solve().

import numpy as np
import time

# Create a large system
n = 1000
A = np.random.rand(n, n)
b = np.random.rand(n)

# Method 1: Using solve (correct approach)
start = time.perf_counter()
x1 = np.linalg.solve(A, b)
time_solve = time.perf_counter() - start

# Method 2: Using the explicit inverse (incorrect approach)
start = time.perf_counter()
x2 = np.linalg.inv(A) @ b
time_inv = time.perf_counter() - start

print(f"solve() time: {time_solve:.4f} seconds")
print(f"inv() time: {time_inv:.4f} seconds")
print(f"Speedup: {time_inv/time_solve:.2f}x")

# Check numerical accuracy
print(f"\nResidual with solve(): {np.linalg.norm(A @ x1 - b):.2e}")
print(f"Residual with inv(): {np.linalg.norm(A @ x2 - b):.2e}")

# Typical output:
# solve() time: 0.0124 seconds
# inv() time: 0.0198 seconds
# Speedup: 1.60x
# Residual with solve(): 3.45e-13
# Residual with inv(): 5.67e-13

Solving Tridiagonal Systems Efficiently

For special matrix structures like tridiagonal systems (common in finite difference methods), specialized solvers outperform general methods.

import numpy as np
from scipy.linalg import solve_banded

# Heat equation discretization: tridiagonal system
n = 1000
diagonal = np.full(n, 2.0)
off_diagonal = np.full(n-1, -1.0)

# General approach with np.linalg.solve
A = np.diag(diagonal) + np.diag(off_diagonal, 1) + np.diag(off_diagonal, -1)
b = np.ones(n)

x_general = np.linalg.solve(A, b)

# Specialized banded solver (much faster for large n)
# Banded storage for (1, 1) bands: row 0 = upper diagonal,
# row 1 = main diagonal, row 2 = lower diagonal
ab = np.zeros((3, n))
ab[0, 1:] = -1.0   # Upper diagonal
ab[1, :] = 2.0     # Main diagonal
ab[2, :-1] = -1.0  # Lower diagonal

x_banded = solve_banded((1, 1), ab, b)

print(f"Solutions match: {np.allclose(x_general, x_banded)}")

Real-World Application: Circuit Analysis

Kirchhoff’s voltage and current laws produce linear systems. Here’s a complete example analyzing a resistor network.

import numpy as np

def analyze_circuit():
    """
    Analyze a circuit with 3 loops using Kirchhoff's laws.
    
    Circuit: Three meshes with resistors and voltage sources
    Mesh 1: 10V source, R1=5Ω, R2=10Ω
    Mesh 2: R2=10Ω, R3=15Ω, R4=20Ω
    Mesh 3: R4=20Ω, R5=25Ω, 5V source
    """
    
    # Coefficient matrix from mesh equations
    # (R1+R2)*I1 - R2*I2 + 0*I3 = V1
    # -R2*I1 + (R2+R3+R4)*I2 - R4*I3 = 0
    # 0*I1 - R4*I2 + (R4+R5)*I3 = -V2
    
    R = np.array([
        [15, -10, 0],      # Mesh 1
        [-10, 45, -20],    # Mesh 2
        [0, -20, 45]       # Mesh 3
    ], dtype=float)
    
    V = np.array([10, 0, -5], dtype=float)
    
    # Solve for mesh currents
    currents = np.linalg.solve(R, V)
    
    print("Mesh Currents:")
    for i, I in enumerate(currents, 1):
        print(f"I{i} = {I:.4f} A")
    
    # Calculate power dissipation
    power = currents.T @ R @ currents
    print(f"\nTotal power dissipated: {power:.4f} W")
    
    return currents

currents = analyze_circuit()

# Output:
# Mesh Currents:
# I1 = 0.7673 A
# I2 = 0.1509 A
# I3 = -0.0440 A
# Total power dissipated: 7.8931 W
# (a negative current flows opposite to the assumed mesh direction)
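A side note on the power calculation: because solve() guarantees R @ currents equals V up to rounding, the quadratic form Iᵀ R I collapses to the dot product I · V. A quick sketch of that cross-check, reusing the mesh matrices above:

```python
import numpy as np

R = np.array([
    [15, -10, 0],
    [-10, 45, -20],
    [0, -20, 45]
], dtype=float)
V = np.array([10, 0, -5], dtype=float)

currents = np.linalg.solve(R, V)

# R @ currents ≈ V, so I^T R I equals I . V to numerical precision
p_quadratic = currents @ R @ currents
p_dot = currents @ V
print(np.isclose(p_quadratic, p_dot))  # True
```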

When Not to Use np.linalg.solve

For overdetermined (more equations than unknowns) or underdetermined systems, use np.linalg.lstsq() for least-squares solutions. For sparse systems (>90% zeros), use scipy.sparse.linalg.spsolve() for massive performance gains.

import numpy as np

# Overdetermined system (more equations than unknowns)
A_over = np.array([
    [1, 2],
    [3, 4],
    [5, 6],
    [7, 8]
])
b_over = np.array([1, 2, 3, 4])

# This would fail: np.linalg.solve(A_over, b_over)
# Use least squares instead
x_lstsq, residuals, rank, s = np.linalg.lstsq(A_over, b_over, rcond=None)
print(f"Least squares solution: {x_lstsq}")
# This particular b happens to lie in A's column space, so the residual is ~0
print(f"Residual: {residuals[0]:.4f}")
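The snippet above covers the least-squares case; for the sparse case, here is a sketch using scipy.sparse (assuming SciPy is available) on the same tridiagonal structure from the earlier heat-equation section:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 1000
# Tridiagonal matrix stored sparsely: ~3n nonzeros instead of n^2 entries
A_sparse = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                 shape=(n, n), format="csr")
b = np.ones(n)

x = spsolve(A_sparse, b)

# Cross-check against the dense solver
x_dense = np.linalg.solve(A_sparse.toarray(), b)
print(np.allclose(x, x_dense))  # True
```

For matrices this structured the banded solver shown earlier is competitive, but spsolve generalizes to arbitrary sparsity patterns where dense storage would not even fit in memory.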

NumPy’s linalg.solve() is the workhorse for linear systems in scientific Python. Master its proper use, understand its limitations, and validate your inputs to build robust numerical applications.
