NumPy - Outer Product (np.outer)

The outer product takes two vectors and produces a matrix by multiplying every element of the first vector with every element of the second. For vectors **a** of length m and **b** of length n, the result is an m×n matrix where each element (i, j) equals a[i] × b[j].

Key Insights

  • The outer product computes all possible products between elements of two vectors, producing a matrix where result[i,j] = a[i] * b[j], essential for tensor operations, coordinate grids, and polynomial expansions
  • np.outer flattens input arrays automatically and always returns a 2D matrix, making it fundamentally different from np.multiply or broadcasting operations that preserve input dimensions
  • np.outer, broadcasting, and np.einsum all materialize the full m×n result, so memory rather than speed is the main constraint for large vectors; when the outer product is only an intermediate, restructure the computation (e.g., a * (b @ x) instead of np.outer(a, b) @ x) to avoid building the matrix at all

Understanding the Outer Product

The outer product takes two vectors and produces a matrix by multiplying every element of the first vector with every element of the second. For vectors a of length m and b of length n, the result is an m×n matrix where each element (i,j) equals a[i] × b[j].

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6, 7])

result = np.outer(a, b)
print(result)
print(f"Shape: {result.shape}")
[[ 4  5  6  7]
 [ 8 10 12 14]
 [12 15 18 21]]
Shape: (3, 4)

The operation systematically multiplies each element: 1×4=4, 1×5=5, continuing through all combinations. This differs from the dot product, which produces a scalar, or element-wise multiplication, which requires matching dimensions.
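To make the contrast concrete, here is the same vector pushed through all three operations (a small sketch using the vector from above):

```python
import numpy as np

a = np.array([1, 2, 3])

# Dot product: sum of pairwise products -> a scalar
print(np.dot(a, a))   # 1*1 + 2*2 + 3*3 = 14

# Element-wise product: requires matching shapes -> same-shaped vector
print(a * a)          # [1 4 9]

# Outer product: all pairwise products -> a 3x3 matrix
print(np.outer(a, a))
```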

Flattening Behavior and Dimensionality

A critical characteristic of np.outer is its automatic flattening of inputs. Regardless of input shape, it treats arrays as 1D vectors:

# Multi-dimensional inputs are flattened
matrix_a = np.array([[1, 2], [3, 4]])
matrix_b = np.array([[5, 6], [7, 8]])

result = np.outer(matrix_a, matrix_b)
print(f"Input shapes: {matrix_a.shape}, {matrix_b.shape}")
print(f"Output shape: {result.shape}")
print(result)
Input shapes: (2, 2), (2, 2)
Output shape: (4, 4)
[[ 5  6  7  8]
 [10 12 14 16]
 [15 18 21 24]
 [20 24 28 32]]

This behavior makes np.outer predictable but requires awareness when working with structured data. The function essentially performs a.ravel() and b.ravel() internally before computing the product.
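If you need the shape-preserving variant instead, np.multiply.outer keeps each input's dimensions rather than flattening. A quick sketch of the contrast, reusing the matrices above:

```python
import numpy as np

matrix_a = np.array([[1, 2], [3, 4]])
matrix_b = np.array([[5, 6], [7, 8]])

# np.outer flattens both inputs: (2, 2) x (2, 2) -> (4, 4)
flat = np.outer(matrix_a, matrix_b)

# np.multiply.outer preserves input shapes: (2, 2) x (2, 2) -> (2, 2, 2, 2)
full = np.multiply.outer(matrix_a, matrix_b)

print(flat.shape)   # (4, 4)
print(full.shape)   # (2, 2, 2, 2)

# Both contain the same products; flat is just the reshaped version
print(np.array_equal(flat, full.reshape(4, 4)))  # True
```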

Practical Applications

Generating Multiplication Tables

The outer product naturally creates multiplication tables:

def multiplication_table(n):
    """Generate an n×n multiplication table."""
    numbers = np.arange(1, n + 1)
    return np.outer(numbers, numbers)

table = multiplication_table(10)
print(table)

# Format for readability
def print_table(table):
    for row in table:
        print(' '.join(f'{x:4d}' for x in row))

print_table(table[:5, :5])  # Print 5×5 subset
   1    2    3    4    5
   2    4    6    8   10
   3    6    9   12   15
   4    8   12   16   20
   5   10   15   20   25

Creating Coordinate Grids

Outer products efficiently generate coordinate grids for plotting and computation:

# Create a 2D grid for function evaluation
x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)

# Method 1: Using outer product
ones_x = np.ones_like(x)
ones_y = np.ones_like(y)
X = np.outer(x, ones_y)
Y = np.outer(ones_x, y)

# Compute z = x^2 + y^2 (distance from origin)
Z = X**2 + Y**2

print(f"Grid shapes: X={X.shape}, Y={Y.shape}, Z={Z.shape}")
print(f"Sample Z values:\n{Z[:3, :3]}")
Grid shapes: X=(50, 50), Y=(50, 50), Z=(50, 50)
Sample Z values:
[[50.         48.00083299 46.0849646 ]
 [48.00083299 46.00166597 44.08579758]
 [46.0849646  44.08579758 42.1699292 ]]
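These grids are more commonly built with np.meshgrid; the outer-product construction above is equivalent to meshgrid with matrix-style 'ij' indexing, as a quick check confirms:

```python
import numpy as np

x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)

# Outer-product construction: X1[i, j] = x[i], Y1[i, j] = y[j]
X1 = np.outer(x, np.ones_like(y))
Y1 = np.outer(np.ones_like(x), y)

# Equivalent meshgrid construction ('ij' indexing matches the layout)
X2, Y2 = np.meshgrid(x, y, indexing='ij')

print(np.array_equal(X1, X2))  # True
print(np.array_equal(Y1, Y2))  # True
```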

Polynomial Basis Functions

Generate polynomial basis matrices for regression:

def polynomial_basis(x, degrees):
    """Create a polynomial basis matrix: column j is x ** degrees[j]."""
    # np.power.outer applies ** to every pair (x[i], degrees[j])
    return np.power.outer(x, np.asarray(degrees))

# Alternative: build the Vandermonde-style matrix column by column
x_data = np.array([1, 2, 3, 4, 5])
coefficients = np.array([1, 0.5, 0.25])  # 1 + 0.5x + 0.25x^2

# Create Vandermonde-like matrix
powers = np.arange(len(coefficients))
X = np.column_stack([x_data**p for p in powers])

# Compute polynomial values
y = X @ coefficients
print(f"Polynomial values: {y}")
Polynomial values: [1.75 3.   4.75 7.   9.75]
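NumPy also ships a builtin for exactly this construction: np.vander with increasing=True produces the same [x^0, x^1, x^2] columns as the column_stack above.

```python
import numpy as np

x_data = np.array([1, 2, 3, 4, 5])
coefficients = np.array([1, 0.5, 0.25])  # 1 + 0.5x + 0.25x^2

# Columns are x^0, x^1, x^2 (increasing=True orders powers left to right)
V = np.vander(x_data, N=len(coefficients), increasing=True)
print(V)
print(V @ coefficients)
```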

Performance Considerations

Memory Usage

The outer product creates a full matrix, which can consume significant memory:

a = np.arange(1000)
b = np.arange(1000)

result = np.outer(a, b)
memory_mb = result.nbytes / (1024**2)
print(f"Memory usage: {memory_mb:.2f} MB")
print(f"Array size: {result.shape}")
Memory usage: 7.63 MB
Array size: (1000, 1000)

For large arrays, consider alternatives that avoid materializing the full matrix.
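For example, when the outer product is only an intermediate in a matrix-vector product, associativity lets you skip the m×n matrix entirely (a sketch; the sizes here are illustrative):

```python
import numpy as np

a = np.random.rand(10_000)
b = np.random.rand(10_000)
x = np.random.rand(10_000)

# Naive: materializes a 10000 x 10000 matrix (~760 MB) first
# y = np.outer(a, b) @ x

# Equivalent by associativity: (a b^T) x == a * (b . x)
# O(n) memory instead of O(n^2), same result
y = a * (b @ x)
print(y.shape)  # (10000,)
```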

Comparing Performance with Alternatives

import time

def benchmark_outer_product(size):
    a = np.random.rand(size)
    b = np.random.rand(size)
    
    # Method 1: np.outer
    start = time.perf_counter()
    result1 = np.outer(a, b)
    time1 = time.perf_counter() - start
    
    # Method 2: Broadcasting
    start = time.perf_counter()
    result2 = a[:, np.newaxis] * b[np.newaxis, :]
    time2 = time.perf_counter() - start
    
    # Method 3: einsum
    start = time.perf_counter()
    result3 = np.einsum('i,j->ij', a, b)
    time3 = time.perf_counter() - start
    
    print(f"Size: {size}")
    print(f"np.outer:     {time1*1000:.3f} ms")
    print(f"Broadcasting: {time2*1000:.3f} ms")
    print(f"einsum:       {time3*1000:.3f} ms")
    print()

benchmark_outer_product(100)
benchmark_outer_product(500)
benchmark_outer_product(1000)

For most use cases, np.outer provides the clearest syntax. Broadcasting offers similar performance with more flexibility, while einsum generalizes best to complex tensor contractions. Note that single perf_counter runs like these are noisy; use timeit for stable numbers.
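One further optimization when the same output shape recurs in a loop: np.outer accepts an out= buffer, so you can allocate the result once and reuse it each iteration (a minimal sketch):

```python
import numpy as np

a = np.random.rand(500)
b = np.random.rand(500)

# Preallocate once, then write into the same buffer each iteration
buf = np.empty((a.size, b.size))
for _ in range(10):
    np.outer(a, b, out=buf)
    # ... consume buf before the next iteration overwrites it ...

print(buf.shape)  # (500, 500)
```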

Advanced Patterns

Weighted Distance Matrices

Compute pairwise weighted distances:

# Points in 1D space
points = np.array([1.0, 2.5, 4.0, 5.5])
weights = np.array([1.0, 0.5, 0.8, 1.2])

# Weighted outer product for distance scaling
weight_matrix = np.outer(weights, weights)

# Compute pairwise distances
distances = np.abs(points[:, np.newaxis] - points[np.newaxis, :])

# Apply weights
weighted_distances = distances * weight_matrix
print("Weighted distance matrix:")
print(weighted_distances)
Weighted distance matrix:
[[0.   0.75 2.4  5.4 ]
 [0.75 0.   0.6  1.8 ]
 [2.4  0.6  0.   1.44]
 [5.4  1.8  1.44 0.  ]]

Tensor Construction

Build higher-dimensional tensors:

# Create 3D tensor from three vectors
a = np.array([1, 2])
b = np.array([3, 4, 5])
c = np.array([6, 7])

# Two-step outer product
temp = np.outer(a, b)  # Shape: (2, 3)
result = np.outer(temp.ravel(), c).reshape(2, 3, 2)

print(f"Tensor shape: {result.shape}")
print(f"Tensor:\n{result}")
Tensor shape: (2, 3, 2)
Tensor:
[[[ 18  21]
  [ 24  28]
  [ 30  35]]

 [[ 36  42]
  [ 48  56]
  [ 60  70]]]
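The same 3D tensor falls out of a single einsum call, or from chaining np.multiply.outer, either of which avoids the manual ravel/reshape bookkeeping:

```python
import numpy as np
from functools import reduce

a = np.array([1, 2])
b = np.array([3, 4, 5])
c = np.array([6, 7])

# One shot: result[i, j, k] = a[i] * b[j] * c[k]
t1 = np.einsum('i,j,k->ijk', a, b, c)

# Chained shape-preserving outer products: ((2,) x (3,)) x (2,) -> (2, 3, 2)
t2 = reduce(np.multiply.outer, [a, b, c])

print(t1.shape)                # (2, 3, 2)
print(np.array_equal(t1, t2))  # True
```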

Common Pitfalls

Type Coercion

The outer product respects NumPy’s type promotion rules:

int_array = np.array([1, 2, 3], dtype=np.int32)
float_array = np.array([1.5, 2.5], dtype=np.float64)

result = np.outer(int_array, float_array)
print(f"Result dtype: {result.dtype}")
print(result)
Result dtype: float64
[[ 1.5  2.5]
 [ 3.   5. ]
 [ 4.5  7.5]]

Empty Arrays

Handle edge cases explicitly:

empty = np.array([])
non_empty = np.array([1, 2, 3])

result = np.outer(empty, non_empty)
print(f"Shape with empty array: {result.shape}")
print(f"Result: {result}")
Shape with empty array: (0, 3)
Result: []

The outer product is a fundamental operation for matrix construction, coordinate generation, and tensor algebra. Understanding its flattening behavior and performance characteristics enables efficient implementation of algorithms requiring all pairwise products between vector elements.
