F Distribution in R: Complete Guide

Key Insights

  • The F distribution in R is controlled by four core functions (df(), pf(), qf(), rf()) that handle density, probability, quantiles, and random sampling; mastering these covers the vast majority of practical use cases.
  • Always specify the lower.tail parameter explicitly in pf() when calculating p-values; the default TRUE gives left-tail probability, but F-tests typically need right-tail values.
  • The F distribution’s shape depends entirely on two degrees of freedom parameters (df1 and df2), and understanding this relationship is essential for interpreting ANOVA and variance comparison results.

Introduction to the F Distribution

The F distribution emerges from the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom. If you have two chi-squared distributions with df1 and df2 degrees of freedom, their ratio follows an F distribution with parameters (df1, df2).
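This definition is easy to check by simulation: build the ratio yourself from two chi-squared samples and compare its quantiles to R's built-in functions (the degrees of freedom below are arbitrary):

```r
# Construct F-distributed values as a ratio of scaled chi-squared draws
set.seed(1)
df1 <- 5
df2 <- 20
ratio <- (rchisq(1e5, df1) / df1) / (rchisq(1e5, df2) / df2)

# The empirical quantiles should closely match the theoretical ones
quantile(ratio, c(0.5, 0.95))
qf(c(0.5, 0.95), df1, df2)
```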

This might sound abstract, but the practical applications are concrete. You encounter the F distribution whenever you’re comparing variances: ANOVA tests use it to determine if group means differ significantly, regression analysis uses it to assess overall model significance, and variance ratio tests use it directly to compare population spreads.

The distribution is right-skewed and bounded below at zero. As both degrees of freedom increase, it becomes more symmetric and concentrates around 1. Understanding these properties helps you interpret test results and spot anomalies in your analyses.
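One concrete way to anchor these properties: with df1 = 1, the F distribution is exactly the square of a t distribution, which gives you an easy sanity check on values from qf():

```r
# F(1, df2) is the distribution of a squared t(df2) variable, so the
# 95th percentile of F(1, 30) equals the squared 97.5th percentile of t(30)
qf(0.95, df1 = 1, df2 = 30)
qt(0.975, df = 30)^2
# both are approximately 4.17
```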

R’s Built-in F Distribution Functions

R provides four functions for working with the F distribution, following the standard naming convention used across all probability distributions:

Function   Purpose                  Returns
df()       Probability density      Height of the PDF at a given point
pf()       Cumulative probability   P(X ≤ x) or P(X > x)
qf()       Quantile function        x value for a given probability
rf()       Random generation        Random samples from F distribution

Each function requires two degrees of freedom parameters: df1 (numerator) and df2 (denominator). An optional ncp parameter specifies the non-centrality parameter for non-central F distributions, though you’ll rarely need this outside power analysis.
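One place ncp does appear is power analysis: under an alternative hypothesis the test statistic follows a non-central F, so power is the probability that this non-central distribution exceeds the central critical value. A sketch with a made-up non-centrality parameter:

```r
# Power of an F-test with df1 = 3, df2 = 36 at alpha = 0.05,
# assuming a hypothetical non-centrality parameter of 8
crit <- qf(0.95, df1 = 3, df2 = 36)  # critical value under the null
power <- pf(crit, df1 = 3, df2 = 36, ncp = 8, lower.tail = FALSE)
power
```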

# Basic usage of each function
# Density at F = 2 with df1 = 5, df2 = 20
df(2, df1 = 5, df2 = 20)
# [1] 0.1332092

# Cumulative probability: P(F <= 2)
pf(2, df1 = 5, df2 = 20)
# [1] 0.8789669

# Quantile: What F value has 95% of distribution below it?
qf(0.95, df1 = 5, df2 = 20)
# [1] 2.710891

# Generate 5 random F values
set.seed(42)
rf(5, df1 = 5, df2 = 20)
# [1] 0.4979025 1.3476076 0.7528052 1.0849792 0.3737338

The order matters: df1 always comes before df2. Swapping them gives different results because the F distribution is not symmetric in its parameters.
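The two orderings are linked through the reciprocal: if X follows F(df1, df2), then 1/X follows F(df2, df1). A quick check:

```r
# Swapping the parameters gives a different probability...
pf(2, df1 = 5, df2 = 20)
pf(2, df1 = 20, df2 = 5)

# ...but the reciprocal identity connects them:
# P(F(5, 20) <= 2) equals P(F(20, 5) >= 1/2)
pf(2, df1 = 5, df2 = 20)
pf(1/2, df1 = 20, df2 = 5, lower.tail = FALSE)
# both print 0.8789669
```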

Calculating F Distribution Probabilities

The pf() function is your workhorse for hypothesis testing. By default, it returns the left-tail probability (area under the curve to the left of your value). For F-tests, you typically want the right-tail probability—the area to the right.

# Suppose you calculated an F-statistic of 3.5 with df1 = 3, df2 = 36
f_stat <- 3.5
df1 <- 3
df2 <- 36

# Wrong approach (left tail) - this is NOT your p-value
pf(f_stat, df1, df2)
# [1] 0.9749451

# Correct approach: right-tail probability
p_value <- pf(f_stat, df1, df2, lower.tail = FALSE)
p_value
# [1] 0.02505488

# Alternative: subtract from 1
1 - pf(f_stat, df1, df2)
# [1] 0.02505488

I recommend always using lower.tail = FALSE rather than subtracting from 1. It’s more explicit about your intent and avoids floating-point precision issues with very small p-values.
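The precision issue is easy to demonstrate with an extreme F-statistic:

```r
# For very large F values, subtracting from 1 underflows to exactly 0...
1 - pf(200, df1 = 3, df2 = 36)
# [1] 0

# ...while lower.tail = FALSE preserves the tiny tail probability
pf(200, df1 = 3, df2 = 36, lower.tail = FALSE)

# and log.p = TRUE keeps even smaller values usable downstream
pf(200, df1 = 3, df2 = 36, lower.tail = FALSE, log.p = TRUE)
```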

For critical values, use qf() to find the threshold F value for your chosen significance level:

# Critical values for common significance levels
# df1 = 3, df2 = 36 (matching our example above)

# Alpha = 0.05 (95th percentile)
qf(0.95, df1 = 3, df2 = 36)
# [1] 2.866266

# Alpha = 0.01 (99th percentile)
qf(0.99, df1 = 3, df2 = 36)
# [1] 4.377097

# Alpha = 0.10 (90th percentile)
qf(0.90, df1 = 3, df2 = 36)
# [1] 2.242656

Your F-statistic of 3.5 exceeds the critical value of 2.87 at α = 0.05, confirming the significant result we found with the p-value approach.

Visualizing the F Distribution

Visualization reveals how degrees of freedom shape the distribution. Low df1 creates extreme right skew; as both parameters increase, the distribution becomes more symmetric and concentrated.

library(ggplot2)

# Create data for multiple F distributions
x <- seq(0, 5, length.out = 500)

f_data <- data.frame(
  x = rep(x, 4),
  density = c(
    df(x, df1 = 1, df2 = 10),
    df(x, df1 = 5, df2 = 10),
    df(x, df1 = 10, df2 = 30),
    df(x, df1 = 30, df2 = 100)
  ),
  params = rep(c("df1=1, df2=10", "df1=5, df2=10", 
                 "df1=10, df2=30", "df1=30, df2=100"), each = 500)
)

ggplot(f_data, aes(x = x, y = density, color = params)) +
  geom_line(linewidth = 1) +
  labs(
    title = "F Distribution with Varying Degrees of Freedom",
    x = "F value",
    y = "Density",
    color = "Parameters"
  ) +
  theme_minimal() +
  scale_color_brewer(palette = "Set1") +
  coord_cartesian(ylim = c(0, 1))

For hypothesis testing visualizations, shading the critical region helps communicate results:

# Visualize a specific F-test result
df1 <- 3
df2 <- 36
f_stat <- 3.5
critical_value <- qf(0.95, df1, df2)

x <- seq(0, 6, length.out = 500)
y <- df(x, df1, df2)

plot_data <- data.frame(x = x, y = y)

ggplot(plot_data, aes(x = x, y = y)) +
  geom_line(linewidth = 1) +
  geom_area(data = subset(plot_data, x >= critical_value),
            aes(x = x, y = y), fill = "red", alpha = 0.3) +
  geom_vline(xintercept = f_stat, linetype = "dashed", 
             color = "blue", linewidth = 1) +
  annotate("text", x = f_stat + 0.3, y = 0.5, 
           label = paste("F =", f_stat), color = "blue") +
  annotate("text", x = 4.5, y = 0.1, 
           label = "α = 0.05\nrejection region", color = "red") +
  labs(
    title = "F Distribution (df1=3, df2=36) with Critical Region",
    x = "F value",
    y = "Density"
  ) +
  theme_minimal()

Practical Application: ANOVA F-Tests

One-way ANOVA tests whether group means differ by comparing between-group variance to within-group variance. The ratio follows an F distribution under the null hypothesis.

# Sample data: plant growth under three fertilizer types
set.seed(123)
fertilizer_a <- rnorm(10, mean = 20, sd = 3)
fertilizer_b <- rnorm(10, mean = 23, sd = 3)
fertilizer_c <- rnorm(10, mean = 22, sd = 3)

growth <- c(fertilizer_a, fertilizer_b, fertilizer_c)
treatment <- factor(rep(c("A", "B", "C"), each = 10))

# Using aov()
model <- aov(growth ~ treatment)
summary(model)
#             Df Sum Sq Mean Sq F value Pr(>F)  
# treatment    2  56.55  28.277   3.708 0.0375 *
# Residuals   27 205.91   7.626

# Manual calculation to understand the mechanics
group_means <- tapply(growth, treatment, mean)
grand_mean <- mean(growth)
n_per_group <- 10
k <- 3  # number of groups

# Between-group sum of squares
ss_between <- n_per_group * sum((group_means - grand_mean)^2)

# Within-group sum of squares
ss_within <- sum(tapply(growth, treatment, function(x) sum((x - mean(x))^2)))

# Degrees of freedom
df_between <- k - 1
df_within <- length(growth) - k

# Mean squares
ms_between <- ss_between / df_between
ms_within <- ss_within / df_within

# F-statistic
f_manual <- ms_between / ms_within
f_manual
# [1] 3.708073

# P-value
pf(f_manual, df_between, df_within, lower.tail = FALSE)
# [1] 0.03746297

The manual calculation matches aov() output exactly. This confirms your understanding and helps when you need to compute F-statistics from summary statistics alone.
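The same mechanics work when you only have published summary statistics rather than raw data. A sketch with invented numbers for three groups:

```r
# Hypothetical reported summaries: group means, SDs, and sample sizes
means <- c(20.1, 23.4, 21.8)
sds   <- c(2.9, 3.2, 2.7)
n     <- c(10, 10, 10)
k     <- length(means)

grand_mean <- sum(n * means) / sum(n)
ss_between <- sum(n * (means - grand_mean)^2)
ss_within  <- sum((n - 1) * sds^2)   # (n - 1) * s^2 recovers each group's SS

f_stat <- (ss_between / (k - 1)) / (ss_within / (sum(n) - k))
p_val  <- pf(f_stat, k - 1, sum(n) - k, lower.tail = FALSE)
c(F = f_stat, p = p_val)
```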

Practical Application: Comparing Variances

The F-test for comparing two population variances uses the ratio of sample variances directly. R’s var.test() handles this automatically.

# Compare variance between two manufacturing processes
set.seed(456)
process_a <- rnorm(25, mean = 100, sd = 5)
process_b <- rnorm(30, mean = 100, sd = 8)

# Built-in function
var.test(process_a, process_b)
# F test to compare two variances
# 
# data:  process_a and process_b
# F = 0.38384, num df = 24, denom df = 29, p-value = 0.01...
# alternative hypothesis: true ratio of variances is not equal to 1
# 95 percent confidence interval:
#  0.1768024 0.8580889
# sample estimates:
# ratio of variances 
#          0.3838421

# Manual verification
var_a <- var(process_a)
var_b <- var(process_b)
f_stat <- var_a / var_b
df1 <- length(process_a) - 1
df2 <- length(process_b) - 1

# Two-tailed p-value
p_lower <- pf(f_stat, df1, df2)
p_value <- 2 * min(p_lower, 1 - p_lower)

cat("F-statistic:", round(f_stat, 5), "\n")
cat("P-value:", round(p_value, 5), "\n")

The significant result (p < 0.05) indicates the variances differ—Process B has more variability than Process A.
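You can also reconstruct var.test()'s confidence interval by hand: divide the observed variance ratio by the upper and lower F quantiles. Recomputing the inputs so the snippet stands alone:

```r
# Same simulated processes as above
set.seed(456)
process_a <- rnorm(25, mean = 100, sd = 5)
process_b <- rnorm(30, mean = 100, sd = 8)

f_stat <- var(process_a) / var(process_b)
df1 <- length(process_a) - 1
df2 <- length(process_b) - 1

# 95% CI for the true variance ratio
ci_manual <- c(lower = f_stat / qf(0.975, df1, df2),
               upper = f_stat / qf(0.025, df1, df2))
ci_manual
```

This matches the interval var.test() reports.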

Summary and Quick Reference

The F distribution functions in R follow predictable patterns. Here’s a quick reference for common tasks:

Task                  Code                                  Notes
P-value from F-stat   pf(f, df1, df2, lower.tail = FALSE)   Right-tail for standard F-tests
Critical value        qf(1 - alpha, df1, df2)               Or qf(alpha, df1, df2, lower.tail = FALSE)
Density at point      df(x, df1, df2)                       For plotting PDFs
Random samples        rf(n, df1, df2)                       For simulation studies
ANOVA F-test          summary(aov(y ~ group))               Reports F and p-value automatically
Variance comparison   var.test(x, y)                        Two-sample F-test

Remember that df1 corresponds to the numerator (between-group or model variance) and df2 to the denominator (within-group or residual variance). Getting these reversed produces incorrect p-values—a common mistake that’s easy to avoid once you understand the underlying ratio structure.
