How to Perform K-Fold Cross-Validation in Python
Key Insights
- K-fold cross-validation splits your data into k subsets and trains k models, each using a different fold as validation, giving you a more reliable estimate of model performance than a single train-test split
- Stratified k-fold maintains class distribution across folds and should be your default choice for classification problems, especially with imbalanced datasets
- Use cross_val_score() for quick validation during exploration, but implement manual k-fold loops when you need fine-grained control over preprocessing or want to inspect individual fold performance
Introduction to Cross-Validation
A single train-test split is a gamble. You might get lucky and split your data in a way that makes your model look great, or you might get unlucky and end up with a pessimistic performance estimate. Worse, you have no idea which scenario you’re in.
K-fold cross-validation solves this problem by performing multiple train-test splits and averaging the results. Instead of relying on one arbitrary split, you get a robust estimate of how your model performs across different subsets of your data. This is critical for detecting overfitting—a model that memorizes training data will show dramatically worse performance when validated across multiple folds.
The concept is straightforward: divide your dataset into k equal-sized folds, train k different models (each using k-1 folds for training and one fold for validation), then average the performance metrics. This gives you both a mean performance estimate and a standard deviation that tells you how stable your model is across different data subsets.
Understanding K-Fold Mechanics
K-fold cross-validation works through a rotation process. If you choose k=5, your data gets split into five equal parts. In the first iteration, folds 2-5 train the model while fold 1 validates. In the second iteration, folds 1, 3, 4, and 5 train while fold 2 validates. This continues until each fold has served as the validation set exactly once.
Here’s how 5-fold cross-validation splits a dataset with 100 samples:
# Fold 1: Train on samples [20-99], Test on samples [0-19]
# Fold 2: Train on samples [0-19, 40-99], Test on samples [20-39]
# Fold 3: Train on samples [0-39, 60-99], Test on samples [40-59]
# Fold 4: Train on samples [0-59, 80-99], Test on samples [60-79]
# Fold 5: Train on samples [0-79], Test on samples [80-99]
# Each sample appears in exactly 4 training sets and 1 test set
# Final score = average of 5 test scores
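This index layout can be reproduced directly with scikit-learn's KFold splitter. A minimal sketch (unshuffled, so folds follow row order; it prints only each fold's test range and training size):

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(100).reshape(-1, 1)          # 100 dummy samples
kfold = KFold(n_splits=5, shuffle=False)   # no shuffling, folds follow row order

for fold, (train_idx, val_idx) in enumerate(kfold.split(X), 1):
    print(f"Fold {fold}: test samples {val_idx[0]}-{val_idx[-1]}, "
          f"{len(train_idx)} training samples")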
The final performance metric is typically the mean of all k validation scores, with the standard deviation indicating consistency. A high standard deviation suggests your model’s performance varies significantly depending on which data it sees—a red flag for production deployment.
Basic K-Fold Implementation with Scikit-learn
The KFold class from scikit-learn handles the splitting logic. Here’s a complete implementation:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Load dataset
X, y = load_iris(return_X_y=True)
# Initialize k-fold splitter
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
# Store scores from each fold
scores = []
# Manual k-fold loop
for fold, (train_idx, val_idx) in enumerate(kfold.split(X), 1):
    # Split data
    X_train, X_val = X[train_idx], X[val_idx]
    y_train, y_val = y[train_idx], y[val_idx]

    # Train model
    model = LogisticRegression(max_iter=200)
    model.fit(X_train, y_train)

    # Evaluate
    y_pred = model.predict(X_val)
    score = accuracy_score(y_val, y_pred)
    scores.append(score)
    print(f"Fold {fold}: {score:.4f}")

print(f"\nMean Accuracy: {np.mean(scores):.4f} (+/- {np.std(scores):.4f})")
Always set shuffle=True unless you’re working with time-series data. Without shuffling, your folds might contain non-representative samples, especially if your dataset has any inherent ordering.
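To see why shuffling matters, consider iris: its rows are stored in class order, so unshuffled folds can hold out an entire class that the model never sees during training. A small sketch comparing both settings, using cross_val_score() (covered in more detail below):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)  # rows are ordered by class
model = LogisticRegression(max_iter=200)

for shuffle in (False, True):
    kfold = KFold(n_splits=3, shuffle=shuffle,
                  random_state=42 if shuffle else None)
    # With shuffle=False, each validation fold is a single class the model never saw
    scores = cross_val_score(model, X, y, cv=kfold)
    print(f"shuffle={shuffle}: fold scores = {scores.round(3)}")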
Stratified K-Fold for Classification
Standard k-fold doesn’t consider class distribution. If you have 90 samples of class A and 10 of class B, random splitting might create folds where class B barely appears or doesn’t appear at all. Folds that lack the minority class can’t evaluate it, and folds that barely include it give the model too few examples to learn from, so both training and your performance estimates suffer.
Stratified k-fold maintains the same class distribution in each fold as in the overall dataset. For classification tasks, this should be your default choice:
from sklearn.model_selection import StratifiedKFold
from sklearn.datasets import make_classification
# Create imbalanced dataset
X, y = make_classification(n_samples=1000, n_classes=3,
                           weights=[0.7, 0.2, 0.1],
                           n_informative=10, random_state=42)
# Compare class distributions
def check_distribution(name, kfold_obj):
    print(f"\n{name}:")
    for fold, (train_idx, val_idx) in enumerate(kfold_obj.split(X, y), 1):
        y_val = y[val_idx]
        unique, counts = np.unique(y_val, return_counts=True)
        print(f" Fold {fold}: {dict(zip(unique, counts))}")
# Regular K-Fold
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
check_distribution("Regular K-Fold", kfold)
# Stratified K-Fold
stratified = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
check_distribution("Stratified K-Fold", stratified)
# Compare performance
def evaluate_cv(kfold_obj, X, y):
    scores = []
    model = LogisticRegression(max_iter=200)
    for train_idx, val_idx in kfold_obj.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        score = model.score(X[val_idx], y[val_idx])
        scores.append(score)
    return np.mean(scores), np.std(scores)
regular_mean, regular_std = evaluate_cv(kfold, X, y)
stratified_mean, stratified_std = evaluate_cv(stratified, X, y)
print(f"\nRegular K-Fold: {regular_mean:.4f} (+/- {regular_std:.4f})")
print(f"Stratified K-Fold: {stratified_mean:.4f} (+/- {stratified_std:.4f})")
Stratified k-fold typically shows lower standard deviation because each fold sees a representative sample of all classes, leading to more consistent performance across folds.
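As a side note, cross_val_score(), covered in the next section, already applies stratified splitting when you pass an integer cv together with a classifier; passing a splitter object makes the stratification and shuffling explicit. A minimal sketch, assuming the X, y and imports from the snippets above:

from sklearn.model_selection import cross_val_score

# Passing the splitter object makes stratification and shuffling explicit
stratified = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=200), X, y, cv=stratified)
print(f"Stratified CV via cross_val_score: {scores.mean():.4f} (+/- {scores.std():.4f})")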
Using cross_val_score for Simplified Validation
Writing manual k-fold loops gets tedious. Scikit-learn provides cross_val_score() and cross_validate() for streamlined cross-validation:
from sklearn.model_selection import cross_val_score, cross_validate
from sklearn.ensemble import RandomForestClassifier
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=42)
# Simple cross-validation with single metric
scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
print(f"Accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
# Multiple metrics with cross_validate
scoring = ['accuracy', 'precision_macro', 'recall_macro', 'f1_macro']
results = cross_validate(model, X, y, cv=5, scoring=scoring,
                         return_train_score=True)
for metric in scoring:
    test_scores = results[f'test_{metric}']
    train_scores = results[f'train_{metric}']
    print(f"{metric}:")
    print(f" Test: {test_scores.mean():.4f} (+/- {test_scores.std():.4f})")
    print(f" Train: {train_scores.mean():.4f} (+/- {train_scores.std():.4f})")
# Check for overfitting by comparing train vs test scores
print(f"\nFit time: {results['fit_time'].mean():.3f}s per fold")
print(f"Score time: {results['score_time'].mean():.3f}s per fold")
The return_train_score=True parameter is invaluable for detecting overfitting. If training scores are much higher than test scores, your model is memorizing rather than learning.
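One way to make that comparison concrete is to compute the gap between mean train and test scores from the results dictionary returned above. The sketch below continues from that snippet; the 0.05 threshold is only an arbitrary illustration, not a standard cutoff:

# Continues from the cross_validate() results above
for metric in scoring:
    gap = results[f'train_{metric}'].mean() - results[f'test_{metric}'].mean()
    flag = "possible overfitting" if gap > 0.05 else "looks OK"
    print(f"{metric}: train-test gap = {gap:.4f} ({flag})")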
Advanced Techniques and Best Practices
Time-series data requires special handling because shuffling destroys temporal relationships. Use TimeSeriesSplit to maintain chronological order:
from sklearn.model_selection import TimeSeriesSplit
import pandas as pd
# Simulate time-series data
dates = pd.date_range('2020-01-01', periods=100, freq='D')
X = np.random.randn(100, 5)
y = np.random.randn(100)
tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(tscv.split(X), 1):
    print(f"Fold {fold}:")
    print(f" Train: {dates[train_idx[0]]} to {dates[train_idx[-1]]}")
    print(f" Val: {dates[val_idx[0]]} to {dates[val_idx[-1]]}")
For hyperparameter tuning, use nested cross-validation to avoid data leakage. The outer loop estimates generalization performance while the inner loop selects hyperparameters:
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Reload a classification dataset (X and y were overwritten by the
# time-series example above)
X, y = load_iris(return_X_y=True)

# Define parameter grid
param_grid = {
    'C': [0.1, 1, 10],
    'gamma': ['scale', 'auto', 0.01, 0.1]
}

# GridSearchCV performs inner cross-validation
grid_search = GridSearchCV(
    SVC(),
    param_grid,
    cv=5,  # Inner CV
    scoring='accuracy',
    n_jobs=-1
)

# Outer cross-validation estimates true performance
outer_scores = cross_val_score(
    grid_search,
    X,
    y,
    cv=5,  # Outer CV
    scoring='accuracy'
)

print(f"Nested CV Accuracy: {outer_scores.mean():.4f} (+/- {outer_scores.std():.4f})")

# Fit on full data to get best parameters
grid_search.fit(X, y)
print(f"Best parameters: {grid_search.best_params_}")
Conclusion
Choose k=5 for most applications—it provides a good balance between computational cost and reliable performance estimates. Use k=10 if you have abundant computational resources and want more stable estimates, or k=3 for quick experimentation with large datasets.
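If you want to see this trade-off on your own data, a quick sketch that times a few values of k (assuming the iris X, y and imports from the earlier snippets):

import time

for k in (3, 5, 10):
    start = time.perf_counter()
    scores = cross_val_score(LogisticRegression(max_iter=200), X, y, cv=k)
    elapsed = time.perf_counter() - start
    print(f"k={k:2d}: {scores.mean():.4f} (+/- {scores.std():.4f}), {elapsed:.2f}s total")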
Always use StratifiedKFold for classification unless you have a specific reason not to. For time-series, use TimeSeriesSplit. For regression with no special considerations, standard KFold works fine.
Remember that cross-validation is computationally expensive—you’re training k models instead of one. If you’re iterating quickly during initial exploration, start with a simple train-test split, then validate your final model choices with proper k-fold cross-validation before deployment.
The standard deviation from cross-validation tells you as much as the mean. A model with 85% accuracy and 1% standard deviation is often preferable to one with 87% accuracy and 5% standard deviation—consistency matters in production.