
What is the Family-wise Error Rate?

by Erma Khan

In a hypothesis test, there is always a type I error rate, which tells us the probability of rejecting a null hypothesis that is actually true. In other words, it's the probability of getting a "false positive": claiming there is a statistically significant effect when there actually isn't.

When we perform one hypothesis test, the type I error rate is equal to the significance level (α), which is commonly chosen to be 0.01, 0.05, or 0.10. However, when we conduct multiple hypothesis tests at once, the probability of getting a false positive increases.

For example, imagine that we roll a 20-sided die. The probability that the die lands on a "1" is just 5%. But if we roll two of these dice at once, the probability that at least one of them lands on a "1" increases to 9.75%. If we roll five dice at once, the probability increases to 22.6%.

The more dice we roll, the higher the probability that at least one of them lands on a "1". Similarly, if we conduct several hypothesis tests at once using a significance level of .05, the probability of getting at least one false positive rises above 0.05.
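
To make this concrete, here is a minimal Python sketch (the function name prob_at_least_one is just for illustration) that evaluates 1 – (1-α)^n for several numbers of tests:

# Probability of at least one "success" (a die showing "1", or a false positive)
# across n independent trials, each with probability p.
def prob_at_least_one(n, p=0.05):
    return 1 - (1 - p) ** n

for n in [1, 2, 5, 10, 20]:
    print(f"{n} tests: {prob_at_least_one(n):.4f}")

# Expected output:
# 1 tests: 0.0500
# 2 tests: 0.0975
# 5 tests: 0.2262
# 10 tests: 0.4013
# 20 tests: 0.6415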

How to Estimate the Family-wise Error Rate

The formula to estimate the family-wise error rate is as follows:

Family-wise error rate = 1 – (1-α)^n

where:

  • α: The significance level for a single hypothesis test
  • n: The total number of tests

For example, suppose we conduct 5 different comparisons using an alpha level of α = .05. The family-wise error rate would be calculated as:

Family-wise error rate = 1 – (1-α)^n = 1 – (1-.05)^5 = 0.2262.

In other words, the probability of getting a type I error on at least one of the hypothesis tests is over 22%!
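
As a sanity check on this number, the following simulation sketch assumes 5 independent tests whose null hypotheses are all true (so the p-values are uniform on [0, 1]) and estimates how often at least one p-value falls below .05:

import numpy as np

rng = np.random.default_rng(0)
n_tests, alpha, n_sims = 5, 0.05, 100_000

# Under true null hypotheses, p-values are uniformly distributed on [0, 1].
p_values = rng.uniform(size=(n_sims, n_tests))

# A "family" commits a type I error if any of its 5 p-values is below alpha.
empirical_fwer = (p_values < alpha).any(axis=1).mean()

print(f"Analytic FWER:  {1 - (1 - alpha) ** n_tests:.4f}")  # 0.2262
print(f"Empirical FWER: {empirical_fwer:.4f}")              # close to 0.2262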

How to Control the Family-wise Error Rate

There are several methods that can be used to control the family-wise error rate, including:

1. The Bonferroni Correction.

Adjust the α value used to assess significance such that:

α_new = α_old / n

For example, if we conduct 5 different comparisons using an alpha level of α = .05, then using the Bonferroni Correction our new alpha level would be:

α_new = α_old / n = .05 / 5 = .01.
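
A minimal Python sketch of applying this cutoff (the p-values below are hypothetical, purely for illustration):

alpha = 0.05
p_values = [0.012, 0.028, 0.003, 0.041, 0.150]  # hypothetical p-values from 5 comparisons

alpha_bonferroni = alpha / len(p_values)        # .05 / 5 = .01

for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < alpha_bonferroni else "not significant"
    print(f"test {i}: p = {p:.3f} -> {verdict}")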

2. The Sidak Correction.

Adjust the α value used to assess significance such that:

α_new = 1 – (1-α_old)^(1/n)

For example, if we conduct 5 different comparisons using an alpha level of α = .05, then using the Sidak Correction our new alpha level would be:

α_new = 1 – (1-α_old)^(1/n) = 1 – (1-.05)^(1/5) = .010206.
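
The same kind of sketch with the Sidak-adjusted alpha (again using hypothetical p-values):

alpha, n = 0.05, 5
alpha_sidak = 1 - (1 - alpha) ** (1 / n)        # about .010206

p_values = [0.012, 0.028, 0.003, 0.041, 0.150]  # hypothetical p-values
significant = [p for p in p_values if p < alpha_sidak]

print(f"Sidak-adjusted alpha: {alpha_sidak:.6f}")
print(f"Significant p-values: {significant}")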

3. The Bonferroni-Holm Correction.

This procedure works as follows (see the sketch after the steps):

  1. Perform each hypothesis test and order the p-values from all n tests from smallest to largest.
  2. Compare the smallest p-value to α/n (the Bonferroni level). If it is greater than or equal to α/n, stop the procedure. No p-values are significant.
  3. If it is less than α/n, it is significant. Next compare the second-smallest p-value to α/(n-1), the third-smallest to α/(n-2), and so on.
  4. Stop at the first p-value that is greater than or equal to its adjusted level; that p-value and all larger p-values are not significant.
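
Here is a short Python sketch of this stepwise procedure, again using hypothetical p-values; with this particular data it rejects one more hypothesis than the plain Bonferroni cutoff would:

alpha = 0.05
p_values = [0.012, 0.028, 0.003, 0.041, 0.150]  # hypothetical p-values
n = len(p_values)

# Work through the tests in order of increasing p-value.
order = sorted(range(n), key=lambda i: p_values[i])

significant = set()
for rank, i in enumerate(order):
    threshold = alpha / (n - rank)              # alpha/n, then alpha/(n-1), ...
    if p_values[i] >= threshold:
        break                                   # this and all larger p-values are not significant
    significant.add(i)

for i, p in enumerate(p_values):
    verdict = "significant" if i in significant else "not significant"
    print(f"test {i + 1}: p = {p:.3f} -> {verdict}")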

By applying one of these corrections to the significance level, we can dramatically reduce the probability of committing at least one type I error across a family of hypothesis tests.
