NumPy - Matrix Inverse (np.linalg.inv)

NumPy provides `np.linalg.inv()` for computing the inverse of a square matrix. This article covers how it works, when to avoid it in favor of direct solvers, and how to handle singular and ill-conditioned cases.

Key Insights

  • Matrix inversion is computationally expensive with O(n³) complexity and should only be used when mathematically necessary—most linear systems solve faster with np.linalg.solve() instead
  • NumPy’s linalg.inv() uses LAPACK’s optimized routines but will raise LinAlgError for singular matrices, requiring condition number checks for numerical stability
  • Inverse matrices amplify numerical errors significantly; a matrix with condition number 10⁶ can lose 6 digits of precision, making iterative methods preferable for ill-conditioned systems
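
As a rough illustration of the last point, log₁₀ of the condition number estimates the decimal digits of precision lost when working with a matrix (a rule of thumb, not an exact bound):

```python
import numpy as np

# Rule of thumb: solving with a matrix of condition number kappa can lose
# about log10(kappa) decimal digits of precision.
A = np.array([[1.0, 1.0],
              [1.0, 1.000001]])  # nearly linearly dependent rows

cond = np.linalg.cond(A)
digits_lost = np.log10(cond)
print(f"Condition number: {cond:.2e}")
print(f"Approximate decimal digits lost: {digits_lost:.1f}")
```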

Understanding Matrix Inversion

The inverse of a square matrix A, denoted A⁻¹, satisfies the property AA⁻¹ = A⁻¹A = I, where I is the identity matrix. NumPy provides np.linalg.inv() for computing matrix inverses using LU decomposition with partial pivoting.

import numpy as np

# Basic matrix inversion
A = np.array([[4, 7],
              [2, 6]])

A_inv = np.linalg.inv(A)
print("Original matrix:\n", A)
print("\nInverse matrix:\n", A_inv)

# Verify: A @ A_inv should equal identity matrix
identity = A @ A_inv
print("\nVerification (A @ A_inv):\n", identity)
print("\nClose to identity?", np.allclose(identity, np.eye(2)))

Output demonstrates the fundamental property, though floating-point arithmetic introduces minor deviations from perfect identity.

When NOT to Use Matrix Inversion

The most common misuse of np.linalg.inv() occurs when solving linear systems. Computing x = A⁻¹b is both slower and less accurate than using specialized solvers.

# Bad approach: explicit inversion
A = np.random.rand(1000, 1000)
b = np.random.rand(1000)

import time

# Method 1: Explicit inverse (AVOID THIS)
start = time.time()
x_inv = np.linalg.inv(A) @ b
time_inv = time.time() - start

# Method 2: Direct solver (PREFER THIS)
start = time.time()
x_solve = np.linalg.solve(A, b)
time_solve = time.time() - start

print(f"Inverse method: {time_inv:.4f}s")
print(f"Solve method: {time_solve:.4f}s")
print(f"Speedup: {time_inv/time_solve:.2f}x")
print(f"Results match: {np.allclose(x_inv, x_solve)}")

The solver typically runs 2-3x faster and maintains better numerical stability by avoiding the intermediate inverse computation.

Handling Singular and Near-Singular Matrices

Singular matrices (determinant = 0) have no inverse. NumPy raises LinAlgError when attempting to invert them.

# Singular matrix example
singular = np.array([[1, 2],
                     [2, 4]])  # Second row is 2x first row

try:
    inv = np.linalg.inv(singular)
except np.linalg.LinAlgError as e:
    print(f"Error: {e}")
    print(f"Determinant: {np.linalg.det(singular)}")

More dangerous are near-singular matrices that technically have inverses but produce unreliable results:

# Near-singular matrix
near_singular = np.array([[1, 2],
                          [2, 4.0001]])

inv = np.linalg.inv(near_singular)
print("Inverse computed:\n", inv)

# Check condition number
cond = np.linalg.cond(near_singular)
print(f"\nCondition number: {cond:.2e}")

# Verify accuracy
identity_check = near_singular @ inv
print("\nA @ A_inv:\n", identity_check)
print(f"Max deviation from identity: {np.max(np.abs(identity_check - np.eye(2))):.2e}")

A condition number above 10¹⁵ indicates the matrix is effectively singular for double-precision arithmetic.
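
That threshold is not arbitrary: it is roughly the reciprocal of double-precision machine epsilon, which NumPy can report directly:

```python
import numpy as np

# The ~1e15 threshold is roughly the reciprocal of double-precision
# machine epsilon: beyond it, no digits of the inverse are trustworthy.
eps = np.finfo(np.float64).eps
print(f"Machine epsilon:      {eps:.2e}")
print(f"1/eps (cond ceiling): {1/eps:.2e}")
```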

Checking Matrix Invertibility

Before attempting inversion, validate the matrix condition:

def safe_inverse(A, rcond=1e-15):
    """
    Safely compute matrix inverse with condition number check.
    
    Parameters:
    -----------
    A : ndarray
        Square matrix to invert
    rcond : float
        Reciprocal condition number threshold
    
    Returns:
    --------
    A_inv : ndarray or None
        Inverse matrix, or None if ill-conditioned
    """
    if A.shape[0] != A.shape[1]:
        raise ValueError("Matrix must be square")
    
    # Check condition number
    cond = np.linalg.cond(A)
    
    if cond > 1/rcond:
        print(f"Warning: Matrix is ill-conditioned (cond={cond:.2e})")
        return None
    
    try:
        A_inv = np.linalg.inv(A)
        
        # Verify inversion accuracy
        error = np.max(np.abs(A @ A_inv - np.eye(A.shape[0])))
        if error > 1e-10:
            print(f"Warning: Large inversion error ({error:.2e})")
        
        return A_inv
    
    except np.linalg.LinAlgError:
        print("Error: Matrix is singular")
        return None

# Test with various matrices
test_matrices = [
    np.array([[4, 7], [2, 6]]),  # Well-conditioned
    np.array([[1, 2], [2, 4.0001]]),  # Ill-conditioned
    np.array([[1, 2], [2, 4]])  # Singular
]

for i, mat in enumerate(test_matrices, 1):
    print(f"\nTest {i}:")
    result = safe_inverse(mat)
    print("Inverse computed successfully" if result is not None else "Inversion failed/skipped")

Performance Considerations for Large Matrices

Matrix inversion scales poorly with size. For large matrices, consider alternatives:

# Compare methods for different matrix sizes
sizes = [100, 500, 1000, 2000]

for n in sizes:
    A = np.random.rand(n, n)
    A = A @ A.T  # Make symmetric positive definite
    b = np.random.rand(n)
    
    # Time inversion
    start = time.time()
    A_inv = np.linalg.inv(A)
    x1 = A_inv @ b
    time_inv = time.time() - start
    
    # Time direct solve
    start = time.time()
    x2 = np.linalg.solve(A, b)
    time_solve = time.time() - start
    
    print(f"\nMatrix size {n}x{n}:")
    print(f"  Inverse: {time_inv:.4f}s")
    print(f"  Solve:   {time_solve:.4f}s")
    print(f"  Ratio:   {time_inv/time_solve:.2f}x")

For matrices larger than 1000×1000, the performance gap widens significantly.
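
Because the benchmark matrices above are symmetric positive definite, a Cholesky factorization (roughly half the cost of LU) fits even better. A NumPy-only sketch; note that np.linalg.solve below does not exploit the triangular structure, so in practice scipy.linalg.cho_factor/cho_solve deliver the actual speed benefit:

```python
import numpy as np

# Sketch: solving a symmetric positive definite system via its Cholesky
# factor A = L L^T. The generic np.linalg.solve calls here do not exploit
# triangularity; prefer scipy.linalg.cho_factor / cho_solve in practice.
rng = np.random.default_rng(0)
n = 500
A = rng.random((n, n))
A = A @ A.T + n * np.eye(n)  # symmetric positive definite by construction
b = rng.random(n)

L = np.linalg.cholesky(A)    # lower-triangular factor
y = np.linalg.solve(L, b)    # forward solve: L y = b
x = np.linalg.solve(L.T, y)  # back solve: L^T x = y

print(f"Residual: {np.linalg.norm(A @ x - b):.2e}")
```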

Batch Matrix Inversion

NumPy supports vectorized operations for inverting multiple matrices simultaneously:

# Invert multiple matrices at once
batch_size = 100
matrix_size = 3

# Create batch of random matrices
matrices = np.random.rand(batch_size, matrix_size, matrix_size)

# Batch inversion
inverses = np.linalg.inv(matrices)

print(f"Input shape: {matrices.shape}")
print(f"Output shape: {inverses.shape}")

# Verify first matrix
verification = matrices[0] @ inverses[0]
print(f"\nFirst matrix verification:\n{verification}")
print(f"Is identity: {np.allclose(verification, np.eye(matrix_size))}")

# Check how many inversions succeeded
successes = 0
for i in range(batch_size):
    if np.allclose(matrices[i] @ inverses[i], np.eye(matrix_size)):
        successes += 1

print(f"\nSuccessful inversions: {successes}/{batch_size}")

This approach efficiently processes multiple small matrices, common in computer graphics and robotics applications.
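
When each matrix in the batch comes paired with a right-hand side, the earlier recommendation still applies: np.linalg.solve broadcasts over leading dimensions the same way, so explicit batch inversion is usually unnecessary. A minimal sketch:

```python
import numpy as np

# Batch solve: np.linalg.solve also broadcasts over stacked matrices,
# avoiding explicit inverses entirely.
rng = np.random.default_rng(42)
matrices = rng.random((100, 3, 3)) + 3 * np.eye(3)  # well-conditioned batch
rhs = rng.random((100, 3))

# Add a trailing axis so each right-hand side is an (3, 1) column vector
x = np.linalg.solve(matrices, rhs[:, :, None])

print(f"Solution shape: {x.shape}")  # (100, 3, 1)
print(f"Max residual: {np.max(np.abs(matrices @ x - rhs[:, :, None])):.2e}")
```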

Pseudo-Inverse for Non-Square Matrices

For rectangular or rank-deficient matrices, use np.linalg.pinv() to compute the Moore-Penrose pseudo-inverse:

# Non-square matrix
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

# Pseudo-inverse
A_pinv = np.linalg.pinv(A)

print(f"Original shape: {A.shape}")
print(f"Pseudo-inverse shape: {A_pinv.shape}")

# Properties of pseudo-inverse
print(f"\nA @ A_pinv @ A equals A: {np.allclose(A @ A_pinv @ A, A)}")
print(f"\nA_pinv @ A @ A_pinv equals A_pinv: {np.allclose(A_pinv @ A @ A_pinv, A_pinv)}")

# Solve overdetermined system
b = np.array([1, 2, 3])
x = A_pinv @ b
print(f"\nLeast-squares solution: {x}")
print(f"Residual: {np.linalg.norm(A @ x - b):.6f}")

The pseudo-inverse provides least-squares solutions for overdetermined systems and minimum-norm solutions for underdetermined systems.
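
For the overdetermined solve, np.linalg.lstsq reaches the same least-squares answer directly, without materializing the pseudo-inverse, and is generally the preferable route:

```python
import numpy as np

# np.linalg.lstsq solves the least-squares problem directly instead of
# forming A_pinv explicitly; both yield the same solution here.
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])
b = np.array([1., 2., 3.])

x_pinv = np.linalg.pinv(A) @ b
x_lstsq, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(f"pinv solution:  {x_pinv}")
print(f"lstsq solution: {x_lstsq}")
print(f"Solutions match: {np.allclose(x_pinv, x_lstsq)}")
```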

Numerical Stability in Practice

Real-world applications require careful handling of numerical precision:

def robust_inverse(A, method='lu'):
    """
    Compute inverse with numerical stability checks.
    
    Parameters:
    -----------
    A : ndarray
        Matrix to invert
    method : str
        'lu' for standard inversion, 'svd' for SVD-based approach
    """
    # Check condition number
    cond = np.linalg.cond(A)
    print(f"Condition number: {cond:.2e}")
    
    if cond > 1e12:
        print("Warning: Matrix is ill-conditioned")
    
    if method == 'svd':
        # Use SVD for better stability: zero out tiny singular values
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > 1e-10, 1/s, 0)
        return Vt.T @ np.diag(s_inv) @ U.T
    
    return np.linalg.inv(A)

# Test with ill-conditioned matrix
hilbert = np.array([[1/(i+j+1) for j in range(5)] for i in range(5)])
print("Standard inversion:")
inv_standard = robust_inverse(hilbert, method='lu')

print("\nSVD-based inversion:")
inv_svd = robust_inverse(hilbert, method='svd')

# Compare accuracy
I = np.eye(5)
error_standard = np.linalg.norm(hilbert @ inv_standard - I)
error_svd = np.linalg.norm(hilbert @ inv_svd - I)

print(f"\nStandard method error: {error_standard:.2e}")
print(f"SVD method error: {error_svd:.2e}")

For ill-conditioned problems, SVD-based approaches provide superior numerical stability at the cost of increased computation time.
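
The manual SVD truncation above is essentially what np.linalg.pinv does internally; its rcond parameter sets the cutoff relative to the largest singular value. A quick sketch on the same Hilbert matrix:

```python
import numpy as np

# np.linalg.pinv applies the same truncated-SVD inversion; rcond is the
# singular-value cutoff relative to the largest singular value.
hilbert = np.array([[1/(i + j + 1) for j in range(5)] for i in range(5)])

inv_pinv = np.linalg.pinv(hilbert, rcond=1e-10)
error = np.linalg.norm(hilbert @ inv_pinv - np.eye(5))
print(f"pinv-based inversion error: {error:.2e}")
```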
