R - ANOVA (Analysis of Variance)


Key Insights

  • ANOVA tests whether means across three or more groups differ significantly, extending t-tests beyond two-group comparisons while controlling family-wise error rate
  • R provides multiple ANOVA implementations: aov() for balanced designs, lm() for flexibility, and car::Anova() for unbalanced data with Type II/III sums of squares
  • Post-hoc tests and assumption checking (normality, homogeneity of variance) are critical steps that determine whether ANOVA results are valid and interpretable

Understanding ANOVA Fundamentals

ANOVA partitions total variance into between-group and within-group components. The F-statistic compares these variances: if between-group variance significantly exceeds within-group variance, at least one group mean differs from others.

The null hypothesis states all group means are equal. Rejecting this only tells you differences exist—not which groups differ.

# Generate sample data: crop yields under different fertilizers
set.seed(123)
fertilizer_a <- rnorm(30, mean = 25, sd = 3)
fertilizer_b <- rnorm(30, mean = 28, sd = 3)
fertilizer_c <- rnorm(30, mean = 26, sd = 3)
fertilizer_d <- rnorm(30, mean = 30, sd = 3)

# Create data frame
crop_data <- data.frame(
  yield = c(fertilizer_a, fertilizer_b, fertilizer_c, fertilizer_d),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)

# Basic ANOVA
model <- aov(yield ~ fertilizer, data = crop_data)
summary(model)

The output shows degrees of freedom, sum of squares, mean squares, F-value, and p-value. A significant p-value (typically < 0.05) indicates group means differ.
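When the F statistic or p-value is needed downstream rather than just printed, it can be pulled out of the summary table. A minimal sketch, rebuilding the simulated crop data so the snippet stands alone:

```r
# Rebuild the simulated crop data and fit the one-way model
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)
model <- aov(yield ~ fertilizer, data = crop_data)

# summary() returns a list; the first element is the ANOVA table
anova_table <- summary(model)[[1]]
f_value <- anova_table[["F value"]][1]   # F statistic for fertilizer
p_value <- anova_table[["Pr(>F)"]][1]    # its p-value
```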

One-Way ANOVA Implementation

One-way ANOVA examines one independent variable (factor) with multiple levels. Use aov() for straightforward analysis or lm() when you need model flexibility.

# Using aov()
aov_model <- aov(yield ~ fertilizer, data = crop_data)
summary(aov_model)

# Using lm() - identical results, different object type
lm_model <- lm(yield ~ fertilizer, data = crop_data)
anova(lm_model)

# Extract coefficients
summary(lm_model)$coefficients

# Group means
aggregate(yield ~ fertilizer, data = crop_data, FUN = mean)

The lm() approach treats the first factor level (alphabetical by default) as the reference group. Coefficients represent differences from this baseline.
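If another group is the natural baseline, relevel() changes which level lm() uses as the reference. A sketch, again rebuilding the simulated data:

```r
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)

# Make fertilizer D the reference group instead of A
crop_data$fertilizer <- relevel(crop_data$fertilizer, ref = "D")
lm_model_d <- lm(yield ~ fertilizer, data = crop_data)
coef(lm_model_d)  # intercept = mean of D; other terms = differences from D
```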

Two-Way ANOVA with Interactions

Two-way ANOVA examines two factors simultaneously and can test for interaction effects—whether one factor’s effect depends on the other factor’s level.

# Add second factor: irrigation method
crop_data$irrigation <- factor(rep(rep(c("Drip", "Spray"), each = 15), 4))

# Two-way ANOVA with interaction
model_2way <- aov(yield ~ fertilizer * irrigation, data = crop_data)
summary(model_2way)

# Alternative notation (explicit)
model_2way_alt <- aov(yield ~ fertilizer + irrigation + fertilizer:irrigation, 
                      data = crop_data)

# Visualize interaction
interaction.plot(
  x.factor = crop_data$fertilizer,
  trace.factor = crop_data$irrigation,
  response = crop_data$yield,
  fun = mean,
  type = "b",
  legend = TRUE,
  xlab = "Fertilizer Type",
  ylab = "Mean Yield",
  pch = c(1, 19),
  col = c("blue", "red")
)

Non-parallel lines in the interaction plot suggest interaction effects. Significant interactions mean you cannot interpret main effects independently.
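One common follow-up to a significant interaction is a simple-effects analysis: test the fertilizer effect separately within each irrigation level. A base-R sketch; the per-level p-values should still be corrected for the number of simple effects tested:

```r
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)
crop_data$irrigation <- factor(rep(rep(c("Drip", "Spray"), each = 15), 4))

# One one-way ANOVA per irrigation level
simple_effects <- lapply(
  split(crop_data, crop_data$irrigation),
  function(d) summary(aov(yield ~ fertilizer, data = d))
)
simple_effects
```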

Assumption Testing

ANOVA assumes normality of residuals, homogeneity of variance, and independence. Serious violations can invalidate the F-test, so check them before trusting the results.

# Normality: Shapiro-Wilk test
shapiro.test(residuals(model))

# Normality: Q-Q plot
qqnorm(residuals(model))
qqline(residuals(model), col = "red")

# Homogeneity of variance: Levene's test
library(car)
leveneTest(yield ~ fertilizer, data = crop_data)

# Homogeneity of variance: Bartlett's test (sensitive to non-normality)
bartlett.test(yield ~ fertilizer, data = crop_data)

# Visual check: residuals vs fitted
plot(model, which = 1)

# Visual check: boxplots by group
boxplot(yield ~ fertilizer, data = crop_data,
        xlab = "Fertilizer", ylab = "Yield")

If assumptions fail, consider transformations (log, square root) or non-parametric alternatives like the Kruskal-Wallis test.
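A sketch of both fallbacks on the simulated data: Kruskal-Wallis is rank-based, so it drops the normality assumption, and a transformation is worth re-checking rather than assuming it helped:

```r
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)

# Non-parametric alternative: compares rank distributions across groups
kw <- kruskal.test(yield ~ fertilizer, data = crop_data)
kw

# Transform, refit, and re-check residual normality
model_log <- aov(log(yield) ~ fertilizer, data = crop_data)
shapiro.test(residuals(model_log))
```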

Post-Hoc Testing

ANOVA only indicates differences exist. Post-hoc tests identify which specific groups differ while controlling for multiple comparisons.

# Tukey's HSD (Honest Significant Difference)
tukey_results <- TukeyHSD(model)
tukey_results

# Plot Tukey results
plot(tukey_results)

# Pairwise t-tests with Bonferroni correction
pairwise.t.test(crop_data$yield, crop_data$fertilizer, 
                p.adjust.method = "bonferroni")

# Dunnett's test (compare all to control)
library(multcomp)
dunnett_test <- glht(model, linfct = mcp(fertilizer = "Dunnett"))
summary(dunnett_test)

Tukey’s HSD is most common for balanced designs. Bonferroni is conservative. Choose based on your experimental design and research questions.
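In practice you often just want the pairs that survived the correction. TukeyHSD() output converts cleanly to a data frame for filtering (sketch on the simulated data):

```r
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)
model <- aov(yield ~ fertilizer, data = crop_data)

# All six pairwise comparisons, then only the significant ones
tukey_df <- as.data.frame(TukeyHSD(model)$fertilizer)
subset(tukey_df, `p adj` < 0.05)
```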

Type I, II, and III Sums of Squares

For unbalanced designs (unequal group sizes), the sums-of-squares calculation matters. Type I is sequential (each term adjusted only for terms entered before it), Type II adjusts each main effect for the other main effects while assuming no interaction, and Type III adjusts each effect for all other terms, including interactions.

# Create unbalanced data
unbalanced_data <- crop_data[c(1:25, 31:50, 61:85, 91:120), ]

# Type I (default in base R)
model_unbal <- aov(yield ~ fertilizer + irrigation, data = unbalanced_data)
summary(model_unbal)

# Type II - main effects adjusted for each other
library(car)
Anova(model_unbal, type = "II")

# Type III - each effect adjusted for all others
# (only meaningful with sum-to-zero contrasts; see below)
Anova(model_unbal, type = "III")

# For Type III, set sum-to-zero contrasts before fitting
options(contrasts = c("contr.sum", "contr.poly"))
model_unbal_t3 <- aov(yield ~ fertilizer + irrigation, data = unbalanced_data)
Anova(model_unbal_t3, type = "III")

# Restore R's default treatment contrasts afterwards
options(contrasts = c("contr.treatment", "contr.poly"))

Type II is generally recommended unless you have specific reasons to use Type III (common in medical/pharmaceutical research).
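Before choosing a sums-of-squares type, it helps to see how unbalanced the design actually is; a cross-tabulation of cell counts makes it obvious (sketch, rebuilding the unbalanced subset from above):

```r
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)
crop_data$irrigation <- factor(rep(rep(c("Drip", "Spray"), each = 15), 4))
unbalanced_data <- crop_data[c(1:25, 31:50, 61:85, 91:120), ]

# Cell counts: equal counts everywhere would mean Type I/II/III agree
table(unbalanced_data$fertilizer, unbalanced_data$irrigation)
```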

Repeated Measures ANOVA

When measurements come from the same subjects across conditions, observations aren’t independent. Repeated measures ANOVA accounts for within-subject correlation.

# Simulated repeated measures: plant growth over time
set.seed(123)
plant_id <- rep(1:20, times = 4)
time_point <- factor(rep(c("Week1", "Week2", "Week3", "Week4"), each = 20))
growth <- c(rnorm(20, 5, 1), rnorm(20, 8, 1), 
            rnorm(20, 12, 1), rnorm(20, 15, 1))

rm_data <- data.frame(plant_id = factor(plant_id), 
                      time = time_point, 
                      growth = growth)

# Repeated measures ANOVA
library(ez)
rm_model <- ezANOVA(
  data = rm_data,
  dv = growth,
  wid = plant_id,
  within = time,
  detailed = TRUE
)
print(rm_model)

# Alternative: aov with Error term
rm_aov <- aov(growth ~ time + Error(plant_id/time), data = rm_data)
summary(rm_aov)

The Error() term specifies the within-subject structure. Sphericity assumption (equal variances of differences) must be checked; Greenhouse-Geisser or Huynh-Feldt corrections adjust for violations.
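With detailed = TRUE, ezANOVA() already returns Mauchly's test and the corrected p-values as list components; a sketch of pulling them out (assumes the ez package and the rm_data simulation above):

```r
library(ez)

set.seed(123)
rm_data <- data.frame(
  plant_id = factor(rep(1:20, times = 4)),
  time = factor(rep(c("Week1", "Week2", "Week3", "Week4"), each = 20)),
  growth = c(rnorm(20, 5, 1), rnorm(20, 8, 1),
             rnorm(20, 12, 1), rnorm(20, 15, 1))
)

rm_model <- ezANOVA(data = rm_data, dv = growth, wid = plant_id,
                    within = time, detailed = TRUE)

# Mauchly's test: a small p-value signals a sphericity violation
rm_model$`Mauchly's Test for Sphericity`

# Greenhouse-Geisser and Huynh-Feldt corrected p-values
rm_model$`Sphericity Corrections`
```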

Effect Size and Power

Statistical significance doesn’t indicate practical importance. Effect sizes quantify magnitude; power analysis determines sample size requirements.

# Eta-squared (proportion of variance explained)
library(effectsize)
eta_squared(model)

# Omega-squared (less biased than eta-squared)
omega_squared(model)

# Cohen's f
cohens_f(model)

# Power analysis for ANOVA
library(pwr)
pwr.anova.test(k = 4,              # number of groups
               n = 30,             # sample size per group
               f = 0.25,           # effect size
               sig.level = 0.05)   # alpha

# Determine required sample size
pwr.anova.test(k = 4, 
               f = 0.25, 
               sig.level = 0.05, 
               power = 0.80)

Report effect sizes alongside p-values. Cohen’s f values: 0.10 (small), 0.25 (medium), 0.40 (large).
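Because pwr.anova.test() wants Cohen's f while many tools report eta-squared, the conversion f = sqrt(eta² / (1 − eta²)) comes up constantly. A quick sketch with an illustrative eta² value (not a real result):

```r
# Convert eta-squared to Cohen's f (illustrative eta2, not a fitted value)
eta2 <- 0.12
f <- sqrt(eta2 / (1 - eta2))
round(f, 2)  # 0.37: between "medium" (0.25) and "large" (0.40)
```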

Practical Considerations

Always visualize data before running ANOVA. Outliers and distributional issues become obvious in plots but hidden in summary statistics.

# Comprehensive diagnostic plot
par(mfrow = c(2, 2))
plot(model)
par(mfrow = c(1, 1))

# Modern visualization with ggplot2
library(ggplot2)
ggplot(crop_data, aes(x = fertilizer, y = yield, fill = fertilizer)) +
  geom_boxplot(alpha = 0.7) +
  geom_jitter(width = 0.2, alpha = 0.3) +
  theme_minimal() +
  labs(title = "Yield Distribution by Fertilizer Type",
       x = "Fertilizer", y = "Yield (kg/hectare)")

ANOVA is robust to moderate violations of normality with large samples, but severely skewed data or extreme outliers require attention. When assumptions fail consistently, consider generalized linear models or non-parametric alternatives.
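One such alternative worth knowing is Welch's one-way ANOVA, built into base R, which keeps the mean comparison but drops the equal-variance assumption (sketch on the simulated data):

```r
set.seed(123)
crop_data <- data.frame(
  yield = rnorm(120, mean = rep(c(25, 28, 26, 30), each = 30), sd = 3),
  fertilizer = factor(rep(c("A", "B", "C", "D"), each = 30))
)

# var.equal = FALSE gives the Welch-corrected F test
welch <- oneway.test(yield ~ fertilizer, data = crop_data, var.equal = FALSE)
welch
```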
