How to Implement LDA (Linear Discriminant Analysis) in Python

Key Insights

  • LDA maximizes class separability by finding linear combinations of features that best separate multiple classes, making it superior to PCA for supervised dimensionality reduction tasks
  • The technique works by maximizing the ratio of between-class variance to within-class variance, effectively creating new axes that emphasize differences between categories
  • LDA serves dual purposes as both a dimensionality reduction technique and a probabilistic classifier, often outperforming PCA when class labels are available during training

Introduction to Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) is a supervised machine learning technique that simultaneously performs dimensionality reduction and classification. Unlike Principal Component Analysis (PCA), which finds directions of maximum variance regardless of class labels, LDA explicitly searches for the feature subspace that maximizes class separability.

The fundamental difference is crucial: PCA is unsupervised and might project data onto axes where classes overlap significantly, while LDA uses class information to find projections where classes are maximally separated. This makes LDA particularly valuable when you have labeled training data and want to reduce dimensions while preserving discriminative information.

LDA assumes that data from each class follows a Gaussian distribution with class-specific means but a shared covariance matrix. While this assumption isn’t always perfectly met in practice, LDA remains remarkably effective across diverse applications including face recognition, medical diagnosis, and customer segmentation.

Mathematical Foundation

LDA’s objective is to find a projection that maximizes Fisher’s criterion: the ratio of between-class scatter to within-class scatter. Let’s break down these concepts:

Within-class scatter matrix (S_W) measures the spread of samples around their respective class means. It quantifies how scattered each class is internally.

Between-class scatter matrix (S_B) measures the separation between different class means. It quantifies how far apart the class centers are from each other.

Fisher’s criterion seeks to maximize S_B while minimizing S_W. Mathematically, we solve the generalized eigenvalue problem: S_B * w = λ * S_W * w, where w represents the discriminant vectors we’re searching for.
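
In symbols, with class means μ_c, class sizes n_c, and overall mean μ, these quantities take the standard form:

\[
S_W = \sum_{c} \sum_{x_i \in c} (x_i - \mu_c)(x_i - \mu_c)^\top, \qquad
S_B = \sum_{c} n_c\,(\mu_c - \mu)(\mu_c - \mu)^\top, \qquad
J(w) = \frac{w^\top S_B w}{w^\top S_W w}
\]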

Let’s visualize the problem with synthetic data:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

# Generate three classes with different means
np.random.seed(42)
mean1, mean2, mean3 = [0, 0], [3, 3], [6, 1]
cov = [[1, 0.5], [0.5, 1]]

class1 = np.random.multivariate_normal(mean1, cov, 100)
class2 = np.random.multivariate_normal(mean2, cov, 100)
class3 = np.random.multivariate_normal(mean3, cov, 100)

plt.figure(figsize=(10, 6))
plt.scatter(class1[:, 0], class1[:, 1], label='Class 1', alpha=0.6)
plt.scatter(class2[:, 0], class2[:, 1], label='Class 2', alpha=0.6)
plt.scatter(class3[:, 0], class3[:, 1], label='Class 3', alpha=0.6)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.title('Multi-Class Data Before LDA')
plt.grid(True, alpha=0.3)
plt.show()

This visualization shows three classes with some overlap. LDA will find the optimal direction to project this 2D data onto a 1D line that maximizes separation between classes.

Implementing LDA from Scratch

Building LDA from scratch solidifies understanding of its mechanics. Here’s a complete implementation:

import numpy as np

class LDAFromScratch:
    def __init__(self, n_components):
        self.n_components = n_components
        self.linear_discriminants = None
        
    def fit(self, X, y):
        n_features = X.shape[1]
        class_labels = np.unique(y)
        
        # Calculate overall mean and class means
        mean_overall = np.mean(X, axis=0)
        S_W = np.zeros((n_features, n_features))
        S_B = np.zeros((n_features, n_features))
        
        for c in class_labels:
            X_c = X[y == c]
            mean_c = np.mean(X_c, axis=0)
            
            # Within-class scatter matrix
            S_W += (X_c - mean_c).T.dot(X_c - mean_c)
            
            # Between-class scatter matrix
            n_c = X_c.shape[0]
            mean_diff = (mean_c - mean_overall).reshape(n_features, 1)
            S_B += n_c * (mean_diff).dot(mean_diff.T)
        
        # Solve the generalized eigenvalue problem: S_W^-1 * S_B * w = lambda * w
        # (pinv is used instead of inv for robustness if S_W is near-singular)
        A = np.linalg.pinv(S_W).dot(S_B)
        eigenvalues, eigenvectors = np.linalg.eig(A)

        # eig may return values with negligible imaginary parts; keep the real components
        eigenvalues, eigenvectors = eigenvalues.real, eigenvectors.real

        # Sort eigenvectors by eigenvalue magnitude in descending order
        eigenvectors = eigenvectors.T
        idxs = np.argsort(np.abs(eigenvalues))[::-1]
        eigenvalues = eigenvalues[idxs]
        eigenvectors = eigenvectors[idxs]
        
        # Store first n_components eigenvectors
        self.linear_discriminants = eigenvectors[0:self.n_components]
        
    def transform(self, X):
        return np.dot(X, self.linear_discriminants.T)

# Test the implementation
X = np.vstack([class1, class2, class3])
y = np.array([0]*100 + [1]*100 + [2]*100)

lda = LDAFromScratch(n_components=1)
lda.fit(X, y)
X_projected = lda.transform(X)

plt.figure(figsize=(10, 6))
plt.scatter(X_projected[y==0], np.zeros(100), label='Class 1', alpha=0.6)
plt.scatter(X_projected[y==1], np.zeros(100), label='Class 2', alpha=0.6)
plt.scatter(X_projected[y==2], np.zeros(100), label='Class 3', alpha=0.6)
plt.xlabel('LD1')
plt.yticks([])
plt.legend()
plt.title('Data Projected onto First Linear Discriminant')
plt.grid(True, alpha=0.3)
plt.show()

This implementation demonstrates the core steps: computing scatter matrices, solving the eigenvalue problem, and projecting data onto the discriminant axes. The projected data shows clear separation between classes along a single dimension.
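
As a quick sanity check, the scratch projection should agree with scikit-learn’s (introduced in the next section) up to sign and scale, so the two sets of projected values should be almost perfectly correlated. A short sketch:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Project the same data with scikit-learn's LDA
sk_lda = LinearDiscriminantAnalysis(n_components=1)
X_sklearn = sk_lda.fit_transform(X, y)

# The two 1D projections should be (almost) perfectly correlated;
# the sign may differ because the discriminant direction is only defined up to a flip
corr = np.corrcoef(X_projected.ravel(), X_sklearn.ravel())[0, 1]
print(f"Correlation with scikit-learn's projection: {abs(corr):.4f}")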

Using Scikit-learn’s LDA Implementation

For production code, use scikit-learn’s optimized implementation. Note that LDA can produce at most n_classes - 1 discriminant components, so two components is the maximum for the three-class iris dataset:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Load iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Initialize and fit LDA
lda = LinearDiscriminantAnalysis(n_components=2)
X_train_lda = lda.fit_transform(X_train, y_train)
X_test_lda = lda.transform(X_test)

print(f"Original shape: {X_train.shape}")
print(f"Reduced shape: {X_train_lda.shape}")
print(f"Explained variance ratio: {lda.explained_variance_ratio_}")

# Visualize the transformed data
plt.figure(figsize=(10, 6))
for i, target_name in enumerate(iris.target_names):
    plt.scatter(
        X_train_lda[y_train == i, 0],
        X_train_lda[y_train == i, 1],
        label=target_name,
        alpha=0.6
    )
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.title('Iris Dataset in LDA Space')
plt.grid(True, alpha=0.3)
plt.show()

# Use LDA as a classifier
lda_classifier = LinearDiscriminantAnalysis()
lda_classifier.fit(X_train, y_train)
y_pred = lda_classifier.predict(X_test)

print(f"\nClassification Accuracy: {accuracy_score(y_test, y_pred):.3f}")
print("\nClassification Report:")
print(classification_report(y_test, y_pred, target_names=iris.target_names))
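
Because LDA models each class as a Gaussian, the fitted classifier also exposes posterior class probabilities via predict_proba, not just hard labels. A quick look at a few test samples:

# Inspect posterior probabilities for the first few test samples
probs = lda_classifier.predict_proba(X_test[:5])
print(np.round(probs, 3))
print(f"Predicted classes: {lda_classifier.predict(X_test[:5])}")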

Scikit-learn’s implementation handles numerical stability issues, provides multiple solver options, and integrates seamlessly with other sklearn components.

Real-World Application: Multi-Class Classification

Let’s apply LDA to the wine dataset and compare it against PCA:

from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
import time

# Load and preprocess wine dataset
wine = load_wine()
X, y = wine.data, wine.target

# Standardize features (essential for the PCA comparison; good practice for LDA)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.3, random_state=42, stratify=y
)

# Approach 1: LDA for dimensionality reduction + classification
lda = LinearDiscriminantAnalysis(n_components=2)
X_train_lda = lda.fit_transform(X_train, y_train)
X_test_lda = lda.transform(X_test)

lr_lda = LogisticRegression(max_iter=1000)
start = time.time()
lr_lda.fit(X_train_lda, y_train)
lda_time = time.time() - start
lda_score = lr_lda.score(X_test_lda, y_test)

# Approach 2: PCA for dimensionality reduction + classification
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)

lr_pca = LogisticRegression(max_iter=1000)
start = time.time()
lr_pca.fit(X_train_pca, y_train)
pca_time = time.time() - start
pca_score = lr_pca.score(X_test_pca, y_test)

# Approach 3: Direct LDA classification
lda_direct = LinearDiscriminantAnalysis()
start = time.time()
lda_direct.fit(X_train, y_train)
direct_time = time.time() - start
direct_score = lda_direct.score(X_test, y_test)

print("Performance Comparison:")
print(f"LDA (2D) + LogReg: {lda_score:.3f} (time: {lda_time:.4f}s)")
print(f"PCA (2D) + LogReg: {pca_score:.3f} (time: {pca_time:.4f}s)")
print(f"LDA Direct:        {direct_score:.3f} (time: {direct_time:.4f}s)")

This comparison typically shows LDA outperforming PCA for classification tasks because it leverages label information during dimensionality reduction.
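
To see why, you can plot the two 2D projections side by side; the LDA projection of the training data usually shows tighter, better-separated clusters than the PCA projection. A quick sketch reusing the variables above:

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, (X_2d, title) in zip(axes, [(X_train_lda, 'LDA projection'),
                                    (X_train_pca, 'PCA projection')]):
    for c in np.unique(y_train):
        ax.scatter(X_2d[y_train == c, 0], X_2d[y_train == c, 1],
                   label=wine.target_names[c], alpha=0.6)
    ax.set_title(title)
    ax.legend()
plt.tight_layout()
plt.show()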

Performance Tuning and Best Practices

Solver Selection: Scikit-learn offers multiple solvers:

# Compare different solvers
solvers = ['svd', 'lsqr', 'eigen']
results = {}

for solver in solvers:
    try:
        lda = LinearDiscriminantAnalysis(solver=solver)
        lda.fit(X_train, y_train)
        score = lda.score(X_test, y_test)
        results[solver] = score
        print(f"{solver}: {score:.3f}")
    except Exception as e:
        print(f"{solver}: Failed - {e}")

  • svd: The default; avoids computing the covariance matrix, so it stays stable even with many features
  • lsqr: A least-squares solver; supports shrinkage, but can only be used for classification, not for transform
  • eigen: Eigendecomposition of the scatter matrices; supports shrinkage (see the sketch below) but requires the full covariance matrix
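
When samples are scarce relative to features, shrinkage regularizes the covariance estimate; it is available with the lsqr and eigen solvers. A minimal sketch, reusing the wine split from above:

# Shrinkage-regularized LDA; 'auto' picks the Ledoit-Wolf shrinkage estimate
lda_shrink = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
lda_shrink.fit(X_train, y_train)
print(f"Shrinkage LDA accuracy: {lda_shrink.score(X_test, y_test):.3f}")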

Feature Scaling: In exact arithmetic LDA’s decision rule is unaffected by linear rescaling of the features, but standardizing is still good practice: it improves numerical conditioning and makes the discriminant coefficients directly comparable. A pipeline keeps the scaling step tied to the model:

from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('lda', LinearDiscriminantAnalysis())
])
pipeline.fit(X_train, y_train)

Handling Imbalanced Classes: Use class priors or resampling:

# Specify class priors explicitly (one value per class, in class order, summing to 1)
lda = LinearDiscriminantAnalysis(priors=[0.2, 0.3, 0.5])
lda.fit(X_train, y_train)

When to Use LDA:

  • You have labeled data (supervised learning)
  • Number of samples exceeds number of features
  • Classes are roughly Gaussian distributed
  • You need interpretable linear decision boundaries
  • You want both dimensionality reduction and classification

When to Avoid LDA:

  • Highly non-linear class boundaries exist
  • Sample size is smaller than feature count (consider regularized LDA)
  • Classes have significantly different covariance matrices (use Quadratic Discriminant Analysis instead; a sketch follows below)
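
For that last case, scikit-learn’s QuadraticDiscriminantAnalysis is a drop-in alternative that fits a separate covariance matrix per class. A minimal sketch on the wine split from earlier:

from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# QDA relaxes the shared-covariance assumption of LDA
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
print(f"QDA accuracy: {qda.score(X_test, y_test):.3f}")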

Conclusion

Linear Discriminant Analysis remains a powerful tool in the machine learning toolkit, particularly when you need interpretable dimensionality reduction with supervised learning. Its mathematical elegance—maximizing between-class variance while minimizing within-class variance—translates directly into practical performance gains over unsupervised alternatives like PCA.

The key to successful LDA implementation lies in understanding your data characteristics: ensure adequate sample sizes, properly scale features, and verify that the Gaussian assumption isn’t drastically violated. When these conditions are met, LDA provides an efficient, interpretable solution that often outperforms more complex alternatives.

Start with scikit-learn’s implementation for production work, but understanding the underlying mathematics and scratch implementation helps you diagnose issues and make informed decisions about when LDA is the right choice for your specific problem.
