
Standard Deviation vs. Standard Error: What’s the Difference?

by Erma Khan

Two terms that students often confuse in statistics are standard deviation and standard error.

The standard deviation measures how spread out values are in a dataset.

The standard error is the standard deviation of the sample mean across repeated samples from a population.

Let’s check out an example to clearly illustrate this idea.

Example: Standard Deviation vs. Standard Error

Suppose we measure the weight of 10 different turtles.

For this sample of 10 turtles, we can calculate the sample mean and the sample standard deviation.

Suppose the standard deviation turns out to be 8.68. This gives us an idea of how spread out the weights of these turtles are.
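As a minimal sketch in Python, we could compute these two values for a hypothetical sample of 10 turtle weights (the values below are made up for illustration):

```python
import statistics

# Hypothetical weights (in pounds) for one sample of 10 turtles
weights = [70, 65, 72, 78, 66, 84, 59, 75, 68, 63]

sample_mean = statistics.mean(weights)  # sample mean
sample_sd = statistics.stdev(weights)   # sample standard deviation (n - 1 denominator)

print(f"Sample mean: {sample_mean:.2f}")
print(f"Sample standard deviation: {sample_sd:.2f}")
```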

But suppose we collect another simple random sample of 10 turtles and take their measurements as well.

More than likely, this sample of 10 turtles will have a slightly different mean and standard deviation, even if they’re taken from the same population.

Now imagine that we take repeated samples from the same population and record the sample mean and sample standard deviation for each sample.

Then imagine that we plot each of the sample means on the same number line.

The standard deviation of these sample means is known as the standard error of the mean.

The formula to actually calculate the standard error is:

Standard Error = s / √n

where:

  • s: sample standard deviation
  • n: sample size

What’s the Point of Using the Standard Error?

When we calculate the mean of a given sample, we’re not actually interested in knowing the mean of that particular sample, but rather the mean of the larger population that the sample comes from.

However, we use samples because they’re much easier to collect data for compared to an entire population.

And of course the sample mean will vary from sample to sample, so we use the standard error of the mean to measure how precise our estimate of the population mean is.

You’ll notice from the formula to calculate the standard error that as the sample size (n) increases, the standard error decreases:

Standard Error = s / √n

This should make sense, since larger sample sizes reduce the variability of the sample mean and increase the chance that our sample mean is close to the actual population mean.
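A quick sketch, reusing the sample standard deviation of 8.68 from the example above, shows how the standard error shrinks as n grows:

```python
import math

s = 8.68  # sample standard deviation from the turtle example

# Standard error shrinks as the sample size grows
for n in [10, 50, 100, 500]:
    print(f"n = {n:>3}: standard error = {s / math.sqrt(n):.3f}")
```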

When to Use Standard Deviation vs. Standard Error

If we are simply interested in measuring how spread out values are in a dataset, we can use the standard deviation.

However, if we’re interested in quantifying the uncertainty around an estimate of the mean, we can use the standard error of the mean.

Depending on your specific scenario and what you’re trying to accomplish, you may choose to use either the standard deviation or the standard error.
