Negative Binomial Distribution in Python: Complete Guide
Key Insights
- The negative binomial distribution handles overdispersed count data where variance exceeds the mean—a common real-world scenario that Poisson models fail to capture
- SciPy’s nbinom uses a different parameterization than many textbooks; understanding the (n, p) convention prevents frustrating bugs
- When your count data shows overdispersion, negative binomial regression via statsmodels provides a drop-in replacement for Poisson GLMs with dramatically better fits
Introduction to the Negative Binomial Distribution
The negative binomial distribution answers a simple question: how many failures occur before achieving a fixed number of successes? If you’re flipping a biased coin and want to know how many tails you’ll see before getting 5 heads, negative binomial gives you that answer.
This makes it invaluable for modeling real-world count data: customer support tickets before a subscription cancellation, insurance claims per policy holder, or bugs found before a release milestone. Unlike the Poisson distribution, which assumes mean equals variance, negative binomial accommodates overdispersion—when variance exceeds the mean, which happens constantly in practice.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# Compare Poisson vs Negative Binomial with same mean
mean = 5
x = np.arange(0, 20)
# Poisson: variance = mean = 5
poisson_pmf = stats.poisson.pmf(x, mu=mean)
# Negative Binomial: same mean, but variance = 10 (overdispersed)
# For NB: mean = n*(1-p)/p, variance = n*(1-p)/p^2
# With n=5, p=0.5: mean=5, variance=10
nb_pmf = stats.nbinom.pmf(x, n=5, p=0.5)
fig, ax = plt.subplots(figsize=(10, 6))
width = 0.35
ax.bar(x - width/2, poisson_pmf, width, label='Poisson (var=5)', alpha=0.8)
ax.bar(x + width/2, nb_pmf, width, label='Neg. Binomial (var=10)', alpha=0.8)
ax.set_xlabel('Count')
ax.set_ylabel('Probability')
ax.set_title('Poisson vs Negative Binomial: Same Mean, Different Variance')
ax.legend()
plt.tight_layout()
plt.show()
The negative binomial’s heavier tail captures the reality that extreme values occur more frequently than Poisson predicts.
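A quick survival-function check quantifies that difference in the tails, using the same parameters as the plot above (this comparison is my own illustration):

```python
from scipy import stats

# P(X >= 15) for both distributions with mean 5
tail_poisson = stats.poisson.sf(14, mu=5)
tail_nb = stats.nbinom.sf(14, n=5, p=0.5)

print(f"Poisson tail P(X>=15): {tail_poisson:.6f}")
print(f"NB tail P(X>=15):      {tail_nb:.6f}")
print(f"Ratio: {tail_nb / tail_poisson:.0f}x")
```

Even though both distributions share the same mean, the negative binomial assigns orders of magnitude more probability to counts of 15 or above.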
Mathematical Foundation
The probability mass function for the negative binomial distribution gives the probability of observing exactly k failures before r successes:
$$P(X = k) = \binom{k + r - 1}{k} p^r (1-p)^k$$
Where r is the number of successes required and p is the probability of success on each trial.
Key moments:
- Mean: μ = r(1-p)/p
- Variance: σ² = r(1-p)/p²
- Skewness: (2-p)/√(r(1-p))
Notice that variance equals mean divided by p. Since 0 < p ≤ 1, variance always exceeds or equals the mean—this is the overdispersion property.
def manual_nbinom_pmf(k, r, p):
    """Calculate negative binomial PMF manually: C(k+r-1, k) * p^r * (1-p)^k."""
    from math import comb
    return comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)
# Compare manual calculation with SciPy
r, p = 5, 0.4
k_values = range(10)
manual_probs = [manual_nbinom_pmf(k, r, p) for k in k_values]
scipy_probs = stats.nbinom.pmf(k_values, n=r, p=p)
print("k\tManual\t\tSciPy\t\tMatch")
print("-" * 50)
for k, m, s in zip(k_values, manual_probs, scipy_probs):
print(f"{k}\t{m:.6f}\t{s:.6f}\t{np.isclose(m, s)}")
# Verify mean and variance formulas
samples = stats.nbinom.rvs(n=r, p=p, size=100000)
theoretical_mean = r * (1 - p) / p
theoretical_var = r * (1 - p) / p**2
print(f"\nTheoretical mean: {theoretical_mean:.4f}, Sample mean: {samples.mean():.4f}")
print(f"Theoretical var: {theoretical_var:.4f}, Sample var: {samples.var():.4f}")
Implementation with SciPy
SciPy’s scipy.stats.nbinom is your primary tool for working with negative binomial distributions. However, its parameterization trips up many developers.
Critical: SciPy uses (n, p) where n is the number of successes and p is the success probability. The distribution models the number of failures before n successes. Some textbooks parameterize differently, using failure probability or total trials.
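If your data arrives in the mean/dispersion form common in GLM texts (var = μ + αμ²), a small conversion helper bridges the two conventions. The function name mu_alpha_to_n_p is mine, not SciPy's:

```python
from scipy.stats import nbinom

def mu_alpha_to_n_p(mu, alpha):
    """Convert (mean, dispersion) parameters to SciPy's (n, p).

    Under the convention var = mu + alpha * mu**2:
    n = 1/alpha and p = n / (n + mu).
    """
    n = 1.0 / alpha
    p = n / (n + mu)
    return n, p

# Round-trip check: mean 5, dispersion 0.5 implies variance 5 + 0.5*25 = 17.5
n, p = mu_alpha_to_n_p(mu=5, alpha=0.5)
dist = nbinom(n=n, p=p)
print(f"n={n:.3f}, p={p:.3f}, mean={dist.mean():.3f}, var={dist.var():.3f}")
```

Keeping a converter like this at the boundary of your code avoids silently mixing conventions.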
from scipy.stats import nbinom
# Define distribution: 3 successes needed, 40% success probability
n, p = 3, 0.4
nb = nbinom(n=n, p=p)
# Core functions
x = np.arange(0, 20)
# PMF: probability of exactly x failures
pmf_values = nb.pmf(x)
# CDF: probability of at most x failures
cdf_values = nb.cdf(x)
# Survival function: probability of more than x failures
sf_values = nb.sf(x)
# Quantiles (inverse CDF)
quantiles = [0.25, 0.5, 0.75, 0.95]
ppf_values = nb.ppf(quantiles)
print(f"Quantiles {quantiles}: {ppf_values}")
# Summary statistics
print(f"Mean: {nb.mean():.2f}")
print(f"Variance: {nb.var():.2f}")
print(f"Std Dev: {nb.std():.2f}")
# Random sampling
samples = nb.rvs(size=10000, random_state=42)
# Visualization
fig, axes = plt.subplots(1, 3, figsize=(14, 4))
axes[0].bar(x, pmf_values, color='steelblue', alpha=0.7)
axes[0].set_title('PMF')
axes[0].set_xlabel('Failures before 3 successes')
axes[1].step(x, cdf_values, where='mid', color='darkgreen', linewidth=2)
axes[1].set_title('CDF')
axes[1].set_xlabel('Failures')
axes[2].hist(samples, bins=range(20), density=True, alpha=0.7, color='coral')
axes[2].bar(x, pmf_values, alpha=0.5, color='steelblue', label='Theoretical PMF')
axes[2].set_title('Samples vs PMF')
axes[2].legend()
plt.tight_layout()
plt.show()
Practical Application: Modeling Count Data
Real datasets rarely follow Poisson distributions perfectly. Here’s how to detect overdispersion and fit a negative binomial model.
from scipy.stats import nbinom, poisson
from scipy.optimize import minimize
# Simulated daily support tickets (overdispersed)
np.random.seed(42)
tickets = nbinom.rvs(n=3, p=0.3, size=365) # One year of data
# Check for overdispersion
mean_tickets = tickets.mean()
var_tickets = tickets.var()
dispersion_ratio = var_tickets / mean_tickets
print(f"Mean: {mean_tickets:.2f}")
print(f"Variance: {var_tickets:.2f}")
print(f"Dispersion ratio: {dispersion_ratio:.2f}")
print(f"Overdispersed: {dispersion_ratio > 1}")
# Fit negative binomial using MLE
def neg_log_likelihood(params, data):
n, p = params
if n <= 0 or p <= 0 or p >= 1:
return np.inf
return -np.sum(nbinom.logpmf(data, n=n, p=p))
# Initial guess based on method of moments
init_p = mean_tickets / var_tickets
init_n = mean_tickets * init_p / (1 - init_p)
result = minimize(
neg_log_likelihood,
x0=[init_n, init_p],
args=(tickets,),
method='L-BFGS-B',
bounds=[(0.01, 100), (0.01, 0.99)]
)
fitted_n, fitted_p = result.x
print(f"\nFitted parameters: n={fitted_n:.3f}, p={fitted_p:.3f}")
# Goodness-of-fit comparison
x_range = np.arange(0, tickets.max() + 1)
observed_freq, _ = np.histogram(tickets, bins=np.arange(-0.5, tickets.max() + 1.5))
nb_expected = nbinom.pmf(x_range, n=fitted_n, p=fitted_p) * len(tickets)
poisson_expected = poisson.pmf(x_range, mu=mean_tickets) * len(tickets)
fig, ax = plt.subplots(figsize=(10, 6))
ax.bar(x_range - 0.2, observed_freq, width=0.4, label='Observed', alpha=0.7)
ax.plot(x_range, nb_expected, 'ro-', label='Neg. Binomial fit', markersize=8)
ax.plot(x_range, poisson_expected, 'g^--', label='Poisson fit', markersize=8)
ax.set_xlabel('Daily Tickets')
ax.set_ylabel('Frequency')
ax.legend()
ax.set_title('Fitting Distributions to Support Ticket Data')
plt.show()
Negative Binomial Regression with Statsmodels
When count data depends on covariates, negative binomial regression outperforms Poisson regression for overdispersed data.
import statsmodels.api as sm
import pandas as pd
# Generate synthetic data: website visits affecting purchases
np.random.seed(42)
n_customers = 500
data = pd.DataFrame({
'visits': np.random.poisson(10, n_customers),
'account_age_months': np.random.exponential(12, n_customers),
'premium_member': np.random.binomial(1, 0.3, n_customers)
})
# True relationship with overdispersion
linear_pred = (0.5 +
0.08 * data['visits'] +
0.02 * data['account_age_months'] +
0.4 * data['premium_member'])
mu = np.exp(linear_pred)
# Generate overdispersed counts
alpha = 0.5 # Dispersion parameter
data['purchases'] = nbinom.rvs(n=1/alpha, p=1/(1 + alpha*mu))
# Fit Poisson model
poisson_model = sm.GLM(
data['purchases'],
sm.add_constant(data[['visits', 'account_age_months', 'premium_member']]),
family=sm.families.Poisson()
).fit()
# Fit Negative Binomial model (alpha fixed at 1.0 here; statsmodels'
# discrete NegativeBinomial model can estimate alpha by MLE instead)
nb_model = sm.GLM(
data['purchases'],
sm.add_constant(data[['visits', 'account_age_months', 'premium_member']]),
family=sm.families.NegativeBinomial(alpha=1.0)
).fit()
print("Poisson Model Summary:")
print(poisson_model.summary().tables[1])
print(f"\nPoisson AIC: {poisson_model.aic:.2f}")
print("\n" + "="*60)
print("\nNegative Binomial Model Summary:")
print(nb_model.summary().tables[1])
print(f"\nNegative Binomial AIC: {nb_model.aic:.2f}")
# Interpret coefficients
print("\nCoefficient Interpretation (NB model):")
for name, coef in zip(['const', 'visits', 'account_age_months', 'premium_member'],
nb_model.params):
rate_ratio = np.exp(coef)
print(f" {name}: exp({coef:.3f}) = {rate_ratio:.3f} rate ratio")
The negative binomial model typically shows larger standard errors (more honest uncertainty) and better AIC scores for overdispersed data.
Performance and NumPy Optimization
For Monte Carlo simulations or large-scale analyses, vectorized operations dramatically outperform loops.
import time
def monte_carlo_loop(n_simulations, n_param, p_param, threshold):
"""Slow: Python loop approach."""
exceeds_threshold = 0
for _ in range(n_simulations):
sample = nbinom.rvs(n=n_param, p=p_param)
if sample > threshold:
exceeds_threshold += 1
return exceeds_threshold / n_simulations
def monte_carlo_vectorized(n_simulations, n_param, p_param, threshold):
"""Fast: Vectorized NumPy approach."""
samples = nbinom.rvs(n=n_param, p=p_param, size=n_simulations)
return (samples > threshold).mean()
# Benchmark
n_sims = 100000
n_param, p_param, threshold = 5, 0.3, 15
start = time.time()
result_loop = monte_carlo_loop(n_sims, n_param, p_param, threshold)
loop_time = time.time() - start
start = time.time()
result_vec = monte_carlo_vectorized(n_sims, n_param, p_param, threshold)
vec_time = time.time() - start
print(f"Loop result: {result_loop:.4f} in {loop_time:.3f}s")
print(f"Vectorized result: {result_vec:.4f} in {vec_time:.3f}s")
print(f"Speedup: {loop_time/vec_time:.1f}x")
# Large-scale simulation: estimating tail probabilities
n_bootstrap = 10000
sample_size = 1000
# Vectorized bootstrap for confidence interval on mean
all_samples = nbinom.rvs(n=5, p=0.4, size=(n_bootstrap, sample_size))
bootstrap_means = all_samples.mean(axis=1)
ci_lower, ci_upper = np.percentile(bootstrap_means, [2.5, 97.5])
print(f"\n95% CI for mean: [{ci_lower:.3f}, {ci_upper:.3f}]")
Summary and Best Practices
Parameter Selection Guidelines:
- Start with method of moments: estimate p = mean/variance, then n = mean × p/(1-p)
- Use MLE for final fitting when accuracy matters
- For regression, let statsmodels estimate the dispersion parameter
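The method-of-moments starting point above can be sketched as a small helper (nbinom_mom is my own name, not a library function; it assumes the sample variance exceeds the mean):

```python
import numpy as np
from scipy.stats import nbinom

def nbinom_mom(data):
    """Method-of-moments estimates of SciPy's (n, p); assumes var > mean."""
    m, v = np.mean(data), np.var(data)
    p = m / v
    n = m * p / (1 - p)
    return n, p

# Recover known parameters from simulated data
data = nbinom.rvs(n=4, p=0.35, size=50_000, random_state=1)
n_hat, p_hat = nbinom_mom(data)
print(f"n_hat = {n_hat:.2f}, p_hat = {p_hat:.2f}")  # close to (4, 0.35)
```

These estimates make good x0 values for the MLE fit shown earlier.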
Common Mistakes to Avoid:
- Confusing SciPy’s (n, p) parameterization with alternative forms
- Using Poisson when dispersion ratio exceeds 1.5
- Ignoring convergence warnings during MLE fitting
- Forgetting that SciPy’s nbinom models failures, not total trials
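To make the last pitfall concrete: if X counts failures before n successes, the total number of trials is X + n, with mean n/p rather than n(1-p)/p. A quick check:

```python
from scipy.stats import nbinom

n, p = 5, 0.4
failures = nbinom.rvs(n=n, p=p, size=200_000, random_state=0)

# SciPy counts failures only; total trials = failures + the n successes
trials = failures + n
print(f"Mean failures: {failures.mean():.3f} (theory: {n * (1 - p) / p:.3f})")
print(f"Mean trials:   {trials.mean():.3f} (theory: {n / p:.3f})")
```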
Quick Reference:
| Task | Code |
|---|---|
| Create distribution | nbinom(n=5, p=0.4) |
| PMF at k | nbinom.pmf(k, n, p) |
| CDF at k | nbinom.cdf(k, n, p) |
| Random samples | nbinom.rvs(n, p, size=1000) |
| Mean/Variance | nb.mean(), nb.var() |
| Fit to data | minimize(neg_log_likelihood, ...) |
| Regression | sm.GLM(..., family=NegativeBinomial()) |
The negative binomial distribution bridges the gap between theoretical simplicity and real-world messiness. When your count data shows overdispersion—and it usually will—reach for negative binomial instead of forcing a Poisson fit.