How to Perform a F-Test in Excel

Key Insights

  • Excel provides three distinct methods for F-tests: the F.TEST() function for quick p-values, the Data Analysis ToolPak for comprehensive output, and manual calculation using VAR.S() for full control and verification.
  • The F-test compares variances between two datasets, not means—use it before running a t-test to determine whether you should assume equal or unequal variances.
  • Always place the larger variance in the numerator when calculating manually, and remember that F-tests assume normally distributed data—violating this assumption can produce misleading results.

Introduction to F-Tests

The F-test is a statistical method for comparing the variances of two populations. While t-tests get most of the attention for comparing group means, the F-test answers a different question: are the spreads of two datasets significantly different?

This matters more than you might think. Before running a two-sample t-test, you need to know whether to assume equal variances. Manufacturing processes use F-tests to compare consistency between production lines. Financial analysts compare volatility between investment portfolios. Quality control teams verify that new equipment produces results as consistent as the old equipment.

The null hypothesis of an F-test states that the two population variances are equal. The alternative hypothesis states they differ. The test produces an F-statistic—the ratio of the two sample variances—and a corresponding p-value that tells you how likely you’d observe such a ratio if the variances were truly equal.

Excel gives you multiple ways to perform this test. Let’s work through each approach.

Prerequisites and Data Preparation

Before running any F-test, you need two sets of numerical data arranged in columns or rows. The data should represent independent samples from two populations you want to compare.

Here’s a sample dataset structure comparing test scores from two different teaching methods:

     A              B
1    Method A       Method B
2    78             82
3    85             79
4    72             88
5    91             85
6    68             90
7    83             76
8    77             84
9    89             81
10   74             87
11   80             83
12   86             78
13   71             92
14   88             80
15   79             86
16   82             75
17   75             89
18   90             84
19   73             77
20   84             91
21   81             82

A few data preparation requirements:

  • Remove any blank cells within your data ranges—they’ll cause errors or be ignored inconsistently
  • Ensure all values are numeric, not text that looks like numbers
  • Check for outliers that might skew variance calculations
  • Label your columns clearly so you remember which group is which

The sample sizes don’t need to be equal, but both groups should have enough observations for meaningful variance estimates. Generally, aim for at least 10 observations per group, though more is better.
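If you want to sanity-check the raw data programmatically before touching formulas, the checklist above fits in a short Python sketch (the function name is illustrative, not an Excel feature):

```python
# Sample scores from the table above.
method_a = [78, 85, 72, 91, 68, 83, 77, 89, 74, 80,
            86, 71, 88, 79, 82, 75, 90, 73, 84, 81]
method_b = [82, 79, 88, 85, 90, 76, 84, 81, 87, 83,
            78, 92, 80, 86, 75, 89, 84, 77, 91, 82]

def check_group(values, min_n=10):
    """Apply the prep checklist: no gaps, numeric values, enough observations."""
    assert all(v is not None for v in values), "blank cell in range"
    assert all(isinstance(v, (int, float)) for v in values), "non-numeric value"
    assert len(values) >= min_n, f"need at least {min_n} observations"
    return len(values)

print(check_group(method_a), check_group(method_b))  # 20 20
```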

Using Excel’s Built-in F.TEST Function

The fastest way to perform an F-test in Excel is the F.TEST() function. It returns the two-tailed probability that the variances in your two arrays are not significantly different.

The syntax is straightforward:

=F.TEST(array1, array2)

Using our sample data:

=F.TEST(A2:A21, B2:B21)

This returns approximately 0.216, which is your two-tailed p-value.

Here’s what’s happening under the hood: Excel calculates the variances of both arrays, computes the F-statistic as the ratio of these variances, then returns the two-tailed probability from the F-distribution.

Important details about F.TEST():

' Basic usage
=F.TEST(A2:A21, B2:B21)    ' Returns: 0.216 (approximately)

' The function automatically handles which variance goes in numerator
' It returns the same p-value regardless of argument order
=F.TEST(B2:B21, A2:A21)    ' Returns: 0.216 (same result)

' Works with named ranges
=F.TEST(MethodA, MethodB)   ' If you've defined named ranges

The F.TEST() function is convenient but limited. It only gives you the p-value—not the F-statistic, degrees of freedom, or critical values. For complete analysis, you’ll need the Data Analysis ToolPak or manual calculations.
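For comparison outside the spreadsheet, here is a Python sketch of those under-the-hood steps, using the standard library's statistics.variance (the VAR.S equivalent):

```python
from statistics import variance  # sample variance, like Excel's VAR.S

method_a = [78, 85, 72, 91, 68, 83, 77, 89, 74, 80,
            86, 71, 88, 79, 82, 75, 90, 73, 84, 81]
method_b = [82, 79, 88, 85, 90, 76, 84, 81, 87, 83,
            78, 92, 80, 86, 75, 89, 84, 77, 91, 82]

var_a = variance(method_a)
var_b = variance(method_b)

# Taking larger/smaller makes the ratio independent of argument order,
# which is why F.TEST(A, B) and F.TEST(B, A) agree.
f_stat = max(var_a, var_b) / min(var_a, var_b)
print(round(var_a, 2), round(var_b, 2), round(f_stat, 3))  # 45.69 25.63 1.783
```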

Using the Data Analysis ToolPak

The Data Analysis ToolPak provides comprehensive F-test output including the F-statistic, critical values, and both one-tail and two-tail p-values.

First, enable the ToolPak if you haven’t already:

  1. Click File → Options → Add-ins
  2. In the Manage dropdown, select Excel Add-ins and click Go
  3. Check Analysis ToolPak and click OK

Now run the F-test:

  1. Click Data tab → Data Analysis (in the Analysis group)
  2. Select F-Test Two-Sample for Variances and click OK
  3. Configure the dialog:
Variable 1 Range: $A$1:$A$21
Variable 2 Range: $B$1:$B$21
Labels: ✓ (checked, since row 1 contains headers)
Alpha: 0.05
Output Range: $D$1

Click OK and Excel produces this output table:

     D                    E              F
1    F-Test Two-Sample for Variances
2
3                         Method A       Method B
4    Mean                 80.30          83.45
5    Variance             45.69          25.63
6    Observations         20             20
7    df                   19             19
8    F                    1.783
9    P(F<=f) one-tail     0.108
10   F Critical one-tail  2.168

Interpreting this output:

  • Variance: Method A (45.69) has higher variance than Method B (25.63)
  • F-statistic (1.783): The ratio of the larger variance to the smaller
  • P(F<=f) one-tail (0.108): Probability of observing an F-ratio at least this large if the variances are equal
  • F Critical one-tail (2.168): The threshold F-value at α = 0.05

Since 1.783 < 2.168 (F < F Critical), we fail to reject the null hypothesis. The variances are not significantly different at the 0.05 level.

Note: The ToolPak performs a one-tailed test. For a two-tailed test, double the p-value (0.108 × 2 = 0.216) or halve your alpha before comparing.
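The compare-F-to-critical decision can be sketched outside Excel as well. Here the critical value 2.168 is taken as given from the output table (it is what F.INV.RT(0.05, 19, 19) returns):

```python
from statistics import variance  # sample variance (n - 1 denominator)

method_a = [78, 85, 72, 91, 68, 83, 77, 89, 74, 80,
            86, 71, 88, 79, 82, 75, 90, 73, 84, 81]
method_b = [82, 79, 88, 85, 90, 76, 84, 81, 87, 83,
            78, 92, 80, 86, 75, 89, 84, 77, 91, 82]

# F-statistic: larger sample variance over smaller.
f_stat = (max(variance(method_a), variance(method_b))
          / min(variance(method_a), variance(method_b)))

# One-tail critical value at alpha = 0.05 with df (19, 19).
f_critical = 2.168

decision = "reject H0" if f_stat > f_critical else "fail to reject H0"
print(round(f_stat, 3), decision)  # 1.783 fail to reject H0
```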

Manual F-Test Calculation

Calculating the F-test manually gives you complete control and helps verify automated results. The process involves three steps: calculate variances, compute the F-ratio, and find the p-value.

' Step 1: Calculate sample variances using VAR.S (sample variance)
' Cell D2:
=VAR.S(A2:A21)    ' Returns: 45.69 (Method A variance)

' Cell E2:
=VAR.S(B2:B21)    ' Returns: 25.63 (Method B variance)

' Step 2: Calculate F-statistic (larger variance / smaller variance)
' Cell D4:
=MAX(D2,E2)/MIN(D2,E2)    ' Returns: 1.783

' Alternative if you know which is larger:
=D2/E2    ' Returns: 1.783 (since Method A has larger variance)

' Step 3: Calculate degrees of freedom
' Cell D5:
=COUNT(A2:A21)-1    ' Returns: 19 (df for Method A)

' Cell E5:
=COUNT(B2:B21)-1    ' Returns: 19 (df for Method B)

' Step 4: Calculate p-value using F.DIST.RT (right-tail)
' Cell D7:
=F.DIST.RT(D4, D5, E5)    ' Returns: 0.108 (one-tailed p-value)

' For two-tailed p-value:
=2*MIN(F.DIST.RT(D4, D5, E5), 1-F.DIST.RT(D4, D5, E5))

You can also find the critical F-value for comparison:

' Critical F-value at alpha = 0.05, one-tailed
=F.INV.RT(0.05, 19, 19)    ' Returns: 2.168

' Critical F-value at alpha = 0.05, two-tailed (use alpha/2)
=F.INV.RT(0.025, 19, 19)   ' Returns: 2.526

Why use VAR.S() instead of VAR.P()? The VAR.S() function calculates sample variance (dividing by n-1), which is appropriate when your data represents a sample from a larger population. Use VAR.P() only when you have the entire population, which is rare in practice.
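If you want to verify the whole manual pipeline outside Excel, it fits in a standard-library Python sketch. The only nontrivial piece is the right-tail F probability (the quantity F.DIST.RT returns), computed here via the regularized incomplete beta function with a textbook continued-fraction routine; treat this as a verification sketch, not production numerics:

```python
from math import exp, lgamma, log
from statistics import variance

def _betacf(a, b, x, max_iter=300, eps=3e-12):
    """Continued fraction for the incomplete beta function (modified Lentz's method)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < 1e-30:
        d = 1e-30
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            if abs(d) < 1e-30:
                d = 1e-30
            c = 1.0 + aa / c
            if abs(c) < 1e-30:
                c = 1e-30
            d = 1.0 / d
            h *= d * c
        if abs(d * c - 1.0) < eps:
            break
    return h

def reg_inc_beta(a, b, x):
    """Regularized incomplete beta I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    front = exp(lgamma(a + b) - lgamma(a) - lgamma(b)
                + a * log(x) + b * log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_dist_rt(f, df1, df2):
    """Right-tail probability of the F distribution (Excel's F.DIST.RT)."""
    return reg_inc_beta(df2 / 2.0, df1 / 2.0, df2 / (df2 + df1 * f))

method_a = [78, 85, 72, 91, 68, 83, 77, 89, 74, 80,
            86, 71, 88, 79, 82, 75, 90, 73, 84, 81]
method_b = [82, 79, 88, 85, 90, 76, 84, 81, 87, 83,
            78, 92, 80, 86, 75, 89, 84, 77, 91, 82]

var_a, var_b = variance(method_a), variance(method_b)   # VAR.S equivalents
f_stat = max(var_a, var_b) / min(var_a, var_b)          # larger over smaller
df1 = df2 = len(method_a) - 1                           # n - 1 per group

p_one = f_dist_rt(f_stat, df1, df2)                     # one-tailed p-value
p_two = 2 * min(p_one, 1.0 - p_one)                     # two-tailed p-value
print(round(f_stat, 3), round(p_one, 3), round(p_two, 3))
```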

Interpreting Results and Making Decisions

The p-value from an F-test tells you the probability of observing your calculated F-ratio (or a more extreme value) if the null hypothesis is true—if the population variances are actually equal.

Standard interpretation at α = 0.05:

  • p-value < 0.05: Reject the null hypothesis. The variances are significantly different.
  • p-value ≥ 0.05: Fail to reject the null hypothesis. No significant difference in variances.

Build a decision formula directly in Excel:

' One-tailed decision
=IF(F.DIST.RT(1.783, 19, 19) < 0.05, "Variances Differ", "Variances Equal")

' Using F.TEST for two-tailed decision
=IF(F.TEST(A2:A21, B2:B21) < 0.05, "Reject Null", "Fail to Reject")

' Complete decision cell with dynamic calculation
=IF(F.TEST(A2:A21, B2:B21) < 0.05, 
    "Significant difference in variances (use unequal variance t-test)", 
    "No significant difference (equal variance assumption OK)")

For our sample data, the two-tailed p-value of 0.216 comfortably exceeds 0.05. We fail to reject the null hypothesis—there’s no statistically significant difference between the variances of the two teaching methods.

This result has practical implications. If you’re planning a t-test to compare the means of these groups, you can reasonably assume equal variances and use a standard two-sample t-test rather than Welch’s t-test.
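The same decision logic, sketched in Python with illustrative p-values (the function name is made up for this example):

```python
def choose_t_test(p_value, alpha=0.05):
    """Map an F-test p-value to the appropriate two-sample t-test."""
    if p_value < alpha:
        return "Welch's t-test (unequal variances)"
    return "pooled t-test (equal variances)"

print(choose_t_test(0.40))  # pooled t-test (equal variances)
print(choose_t_test(0.01))  # Welch's t-test (unequal variances)
```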

Common Errors and Best Practices

Handling unequal sample sizes: The F-test works with unequal groups, but the degrees of freedom change accordingly. Excel handles this automatically, but be aware that very unequal sample sizes reduce statistical power.

' This works fine with different sized arrays
=F.TEST(A2:A21, B2:B31)    ' 20 vs 30 observations

Dealing with non-numeric data: F-tests require numbers. Text values cause #VALUE! errors.

' Check for non-numeric values before testing
=SUMPRODUCT(--(ISNUMBER(A2:A21)))    ' Should equal COUNT(A2:A21)
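The same cleanup can be done outside Excel. A short Python sketch with a deliberately messy, hypothetical input column:

```python
raw = [78, 85, "72", 91, None, 83, "n/a", 77]  # hypothetical messy column

cleaned, rejected = [], []
for v in raw:
    try:
        cleaned.append(float(v))   # real numbers and numeric text ("72")
    except (TypeError, ValueError):
        rejected.append(v)         # blanks (None) and text like "n/a"

print(len(cleaned), rejected)  # 6 [None, 'n/a']
```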

The normality assumption: F-tests assume both populations are normally distributed. This assumption matters more than many analysts realize—the F-test is sensitive to departures from normality. If your data is clearly non-normal, consider Levene’s test instead (though Excel doesn’t have a built-in function for it).

One-tailed vs. two-tailed: Use a one-tailed test only when you have a directional hypothesis before collecting data (“Method A will have greater variance than Method B”). For most exploratory analyses, the two-tailed test is appropriate.

Common formula errors:

' Wrong: Using population variance
=VAR.P(A2:A21)/VAR.P(B2:B21)    ' Divides by n, understating each sample variance

' Right: Using sample variance  
=VAR.S(A2:A21)/VAR.S(B2:B21)    ' Correct for samples

' Wrong: Smaller variance in numerator (gives F < 1)
=25.63/45.69    ' Returns 0.561

' Right: Larger variance in numerator
=45.69/25.63    ' Returns 1.783

When to use alternatives: If your data violates normality assumptions, consider Bartlett’s test (sensitive to normality but handles multiple groups) or Levene’s test (robust to non-normality). For comparing more than two groups, you’ll need ANOVA-based approaches rather than the two-sample F-test.
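Excel has no built-in Levene's test, but the Brown–Forsythe variant of its statistic is straightforward to compute by hand: take absolute deviations from each group's median, then run a one-way ANOVA on those deviations. A minimal two-group sketch in Python, using the sample scores:

```python
from statistics import mean, median

method_a = [78, 85, 72, 91, 68, 83, 77, 89, 74, 80,
            86, 71, 88, 79, 82, 75, 90, 73, 84, 81]
method_b = [82, 79, 88, 85, 90, 76, 84, 81, 87, 83,
            78, 92, 80, 86, 75, 89, 84, 77, 91, 82]

# Absolute deviations from each group's median (robust to non-normality).
z_a = [abs(v - median(method_a)) for v in method_a]
z_b = [abs(v - median(method_b)) for v in method_b]

n_a, n_b = len(z_a), len(z_b)
mean_a, mean_b = mean(z_a), mean(z_b)
grand = mean(z_a + z_b)

# One-way ANOVA on the deviations: between-group vs within-group variability.
ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
ss_within = (sum((z - mean_a) ** 2 for z in z_a)
             + sum((z - mean_b) ** 2 for z in z_b))

# Under H0 (equal variances), W follows F(1, n_a + n_b - 2).
w_stat = (ss_between / 1) / (ss_within / (n_a + n_b - 2))
print(round(w_stat, 3))
```

A large W (compared against the F(1, 38) critical value here) would signal unequal spread without leaning on the normality assumption.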

The F-test is a foundational tool for variance comparison. Master these Excel techniques, understand the assumptions, and you’ll make better decisions about which subsequent analyses are appropriate for your data.
