The F-ratio is a statistic used to test whether two variances (or standard deviations) are significantly different; it is obtained by dividing one variance by the other.
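As a quick illustration, here is a minimal sketch of that division in Python; the two samples and the variable names are made up for the example.

```python
import numpy as np

# Two made-up samples whose spreads we want to compare.
sample_a = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4])
sample_b = np.array([3.9, 4.0, 4.1, 3.8, 4.2, 4.0])

# Sample variances (ddof=1 gives the unbiased estimator s^2).
var_a = np.var(sample_a, ddof=1)
var_b = np.var(sample_b, ddof=1)

# F-ratio: one variance divided by the other.
f_ratio = var_a / var_b
print(f"s_a^2 = {var_a:.3f}, s_b^2 = {var_b:.3f}, F = {f_ratio:.2f}")
```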

The F-ratio is the ratio of two mean square values. If the null hypothesis is true, you expect F to be close to 1.0 most of the time. A large F-ratio means that the variation among group means is greater than you would expect to see by chance. You will see a large F-ratio both when the null hypothesis is false (the data are not sampled from populations with the same mean) and when random sampling happens to put large values in some groups and small values in others.
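To see why F hovers around 1.0 when the null hypothesis is true, the sketch below repeatedly draws three groups from the same population and records the F-ratio each time; the group sizes and distribution parameters are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f_values = []
for _ in range(10_000):
    # Three groups drawn from the SAME population, so the null hypothesis is true.
    groups = [rng.normal(loc=50, scale=10, size=20) for _ in range(3)]
    f_stat, _ = stats.f_oneway(*groups)
    f_values.append(f_stat)

f_values = np.array(f_values)
# Under the null hypothesis the F-ratio is usually close to 1; large values are rare.
print(f"mean F = {f_values.mean():.2f}")
print(f"share of simulations with F > 3: {np.mean(f_values > 3):.3f}")
```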

The P value is determined from the F-ratio and the two values for degrees of freedom shown in the ANOVA table.
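Concretely, the P value is the right-tail area of the F distribution beyond the observed F-ratio. The sketch below assumes a hypothetical F-ratio of 4.5 with 3 and 36 degrees of freedom (made-up numbers standing in for the ANOVA table entries).

```python
from scipy import stats

f_ratio = 4.5   # hypothetical F-ratio from an ANOVA table
df_between = 3  # numerator degrees of freedom (number of groups minus 1)
df_within = 36  # denominator degrees of freedom (total observations minus number of groups)

# P value = probability of an F this large or larger under the null hypothesis
# (the survival function gives the right-tail area).
p_value = stats.f.sf(f_ratio, df_between, df_within)
print(f"P value = {p_value:.4f}")
```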

To calculate the F-ratio, two estimates of the population variance σ² are made (a worked sketch follows the list below).

  1. Variance between samples: an estimate of σ² equal to the variance of the sample means multiplied by n when the sample sizes are the same. If the samples are of different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called the variation due to treatment or explained variation.
  2. Variance within samples: an estimate of σ² equal to the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. This variance is also called the variation due to error or unexplained variation.
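The sketch below puts the two estimates together for three hypothetical groups of unequal size: the between-samples estimate weights each group mean's squared deviation from the grand mean by its sample size, the within-samples estimate pools the sample variances weighted by their degrees of freedom, and their ratio is the F-ratio. The result is cross-checked against scipy.stats.f_oneway; the data are invented for the example.

```python
import numpy as np
from scipy import stats

# Three made-up groups with different sample sizes.
groups = [
    np.array([82.0, 78.0, 85.0, 90.0, 88.0]),
    np.array([75.0, 72.0, 80.0, 77.0]),
    np.array([91.0, 89.0, 94.0, 92.0, 90.0, 93.0]),
]

k = len(groups)                           # number of groups
n_i = np.array([len(g) for g in groups])  # size of each group
N = n_i.sum()                             # total number of observations
means = np.array([g.mean() for g in groups])
grand_mean = np.concatenate(groups).mean()

# Variance between samples (mean square for treatment):
# each group mean's squared deviation is weighted by its sample size.
ss_between = np.sum(n_i * (means - grand_mean) ** 2)
ms_between = ss_between / (k - 1)

# Variance within samples (pooled mean square for error):
# each sample variance is weighted by its degrees of freedom.
ss_within = np.sum([(len(g) - 1) * g.var(ddof=1) for g in groups])
ms_within = ss_within / (N - k)

f_ratio = ms_between / ms_within
print(f"MS_between = {ms_between:.2f}, MS_within = {ms_within:.2f}, F = {f_ratio:.2f}")

# Cross-check against scipy's one-way ANOVA.
f_check, p_value = stats.f_oneway(*groups)
print(f"scipy F = {f_check:.2f}, P value = {p_value:.4f}")
```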
