Standard Deviation of Sample Mean Calculator

Enter your population standard deviation (σ) and sample size (n) to calculate the standard deviation of the sample mean — also known as the standard error of the mean. The result tells you how much sample means are expected to vary around the true population mean, which is essential for building confidence intervals and understanding sampling error.

The standard deviation of the full population. If unknown, use a sample estimate.

The number of observations in each sample. Must be a positive integer.

Results

Standard Deviation of Sample Mean (σx̄)

--

Variance of Sample Mean (σ²/n)

--

Square Root of Sample Size (√n)

--

Population Variance (σ²)

--

Population SD vs Standard Error of the Mean

Results Table

Frequently Asked Questions

What is the standard deviation of the distribution of the sample mean?

It is the standard deviation of all possible sample means for a given sample size, also called the standard error of the mean (SEM). It equals the population standard deviation (σ) divided by the square root of the sample size (n). It quantifies how much sample means are expected to vary around the true population mean.

What is the difference between sample distribution and sampling distribution?

A sample distribution refers to the distribution of individual data values within a single sample. A sampling distribution is the distribution of a statistic — such as the mean — computed from many repeated samples of the same size drawn from the same population. The two concepts are distinct and should not be confused.
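The distinction can be made concrete with a small simulation. The sketch below (with illustrative values: a hypothetical normal population with μ = 50, σ = 10, and samples of size n = 25) draws one sample to show a sample distribution, then computes the mean of many repeated samples to approximate the sampling distribution of the mean:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical population parameters (illustrative values only)
MU, SIGMA, N = 50, 10, 25

# A *sample distribution*: the spread of individual values within one sample.
one_sample = [random.gauss(MU, SIGMA) for _ in range(N)]
print("SD within one sample:", round(statistics.stdev(one_sample), 2))

# A *sampling distribution*: the spread of the mean across many repeated samples.
sample_means = [
    statistics.fmean(random.gauss(MU, SIGMA) for _ in range(N))
    for _ in range(10_000)
]

# The SD of the sample means should approximate sigma / sqrt(n) = 10 / 5 = 2.
print("SD of sample means:", round(statistics.stdev(sample_means), 2))
```

With enough repeated samples, the standard deviation of the sample means converges toward σ / √n, while the spread within any single sample stays near σ.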

How do you calculate the standard deviation of the sample mean?

Use the formula σx̄ = σ / √n, where σ is the population standard deviation and n is the sample size. Simply divide the population standard deviation by the square root of the number of observations in your sample. The result is the standard error of the mean.
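The formula translates directly into a few lines of code. This is a minimal sketch; the function name `sem` and the example values (σ = 12, n = 36) are illustrative, not part of the calculator:

```python
import math

def sem(sigma: float, n: int) -> float:
    """Standard error of the mean: sigma / sqrt(n)."""
    if sigma < 0 or n < 1:
        raise ValueError("sigma must be non-negative and n a positive integer")
    return sigma / math.sqrt(n)

# Example: population SD of 12, sample size of 36
print(sem(12, 36))  # 12 / sqrt(36) = 12 / 6 = 2.0
```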

What is the standard deviation of the sample means called?

It is most commonly called the Standard Error of the Mean (SEM), though it also goes by 'standard deviation of the sampling distribution of the mean' or simply 'standard deviation of the sample mean.' All these terms refer to the same value: σ / √n.

How do I find the mean and standard deviation of the sampling distribution?

The mean of the sampling distribution equals the population mean (μ). The standard deviation of the sampling distribution equals σ / √n. So to fully describe the sampling distribution of the mean, you need the population mean, the population standard deviation, and your sample size.

How does sample size affect the standard error of the mean?

As sample size increases, the standard error decreases — specifically, it is inversely proportional to √n. Larger samples produce sample means that cluster more tightly around the true population mean, reducing estimation error. Quadrupling the sample size cuts the standard error in half, since √(4n) = 2√n.
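A quick numerical check of this relationship, using an illustrative σ of 8:

```python
import math

def sem(sigma: float, n: int) -> float:
    """Standard error of the mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

sigma = 8.0
print(sem(sigma, 25))   # 8 / 5  = 1.6
print(sem(sigma, 100))  # 8 / 10 = 0.8  -> 4x the sample size, half the SE
```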

What does a standard error of 1 mean?

A standard error of 1 means that on average, a sample mean is expected to deviate from the true population mean by 1 unit. Whether that is considered large or small depends entirely on the scale and units of your data. Context and the magnitude of the population standard deviation are key.

How is the standard error used in confidence intervals?

To build a confidence interval, multiply the standard error by a critical value such as a z-score (e.g. 1.96 for 95% confidence) or a t-statistic. The result is the margin of error. Add and subtract this margin from the sample mean to get the lower and upper bounds of the confidence interval.
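The steps above can be sketched as a small function. The inputs here (sample mean 100, σ = 15, n = 36) are hypothetical example values:

```python
import math

def confidence_interval(mean: float, sigma: float, n: int, z: float = 1.96):
    """z-based confidence interval: mean +/- z * (sigma / sqrt(n))."""
    margin = z * sigma / math.sqrt(n)  # critical value times the standard error
    return mean - margin, mean + margin

# Example: sample mean 100, population SD 15, sample size 36, 95% confidence
lo, hi = confidence_interval(mean=100.0, sigma=15.0, n=36)
print(round(lo, 2), round(hi, 2))  # margin = 1.96 * 15 / 6 = 4.9
```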

More Statistics Tools