t-Statistic Calculator

Enter your sample mean, population mean, sample standard deviation, and sample size to compute the t-statistic (t-value) for a one-sample t-test. The t-Statistic Calculator returns the t-value, degrees of freedom, and the standard error — everything you need to evaluate your null hypothesis against a population mean.

Sample Mean (x̄): The average value computed from your sample data.

Population Mean (μ): The known or hypothesised population mean (null hypothesis value).

Sample Standard Deviation (s): The standard deviation calculated from your sample (not the population).

Sample Size (n): The total number of observations in your sample.

Results

The calculator reports:

t-Statistic (t-value)

Degrees of Freedom (df)

Standard Error (SE)

Mean Difference (x̄ − μ)

[Chart: Sample Mean vs Population Mean]

Frequently Asked Questions

What is a t-statistic?

The t-statistic (or t-value) is a measure of how many standard errors the sample mean is away from the population mean. It is calculated as t = (x̄ − μ) / (s / √n), where x̄ is the sample mean, μ is the population mean, s is the sample standard deviation, and n is the sample size. A larger absolute t-value indicates a greater difference between your sample and the hypothesised population mean.

What is the t-statistic formula?

The one-sample t-statistic formula is: t = (x̄ − μ) / (s / √n). Here, x̄ is your sample mean, μ is the population mean under the null hypothesis, s is the sample standard deviation, and n is the sample size. The denominator (s / √n) is called the standard error of the mean.
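As a quick sketch, the formula above can be computed directly in plain Python. The function name and the example numbers are illustrative, not part of the calculator itself:

```python
import math

def one_sample_t(sample_mean, pop_mean, sample_sd, n):
    """Return (t, df, se) for a one-sample t-test."""
    se = sample_sd / math.sqrt(n)        # standard error of the mean, s / √n
    t = (sample_mean - pop_mean) / se    # t = (x̄ − μ) / SE
    df = n - 1                           # degrees of freedom
    return t, df, se

# Example: x̄ = 52, μ = 50, s = 4, n = 16
t, df, se = one_sample_t(52, 50, 4, 16)
# se = 4 / √16 = 1.0, so t = (52 − 50) / 1.0 = 2.0, with df = 15
```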

What are degrees of freedom in a t-test?

Degrees of freedom (df) represent the number of independent pieces of information available to estimate a statistic. For a one-sample t-test, df = n − 1, where n is the sample size. Degrees of freedom affect the shape of the t-distribution and are used alongside the t-value to determine the p-value.

What is the difference between a t-score and a z-score?

Both t-scores and z-scores measure how far a data point or sample mean lies from the mean in standard deviation units. The key difference is that z-scores are used when the population standard deviation is known, while t-scores are used when it must be estimated from sample data. t-scores are generally preferred for small samples (n < 30) because the t-distribution has heavier tails, accounting for extra uncertainty.
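The heavier tails are visible in the critical values themselves. The snippet below compares hard-coded two-tailed critical values at α = 0.05, taken from standard tables (the script is a sketch for illustration):

```python
# Two-tailed critical values at α = 0.05, read from standard tables.
z_critical = 1.960  # standard normal

# t critical values for selected degrees of freedom
t_critical = {5: 2.571, 10: 2.228, 30: 2.042, 100: 1.984}

for df, t_crit in t_critical.items():
    print(f"df={df:>3}: t critical {t_crit:.3f} vs z critical {z_critical:.3f}")
# Every t critical value exceeds 1.960, and the gap shrinks as df grows —
# with more data, the t-distribution converges to the standard normal.
```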

What are the assumptions of a one-sample t-test?

The main assumptions are: (1) the data are continuous and roughly normally distributed (or the sample size is large enough for the Central Limit Theorem to apply), (2) the observations are independent of one another, (3) there are no extreme outliers that could distort the mean, and (4) the sample is randomly drawn from the population of interest.

How do I interpret the t-statistic result?

After computing the t-value, compare it against a critical t-value from a t-distribution table using your degrees of freedom and chosen significance level (e.g. α = 0.05). If |t| exceeds the critical value, you reject the null hypothesis. Alternatively, use the t-value to find the p-value — if p < α, the result is statistically significant.
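The decision rule can be sketched in a few lines. The numbers here are illustrative; 2.131 is the two-tailed critical value for df = 15 at α = 0.05, read from a standard t-table:

```python
# Compare |t| against a table critical value to decide on H0.
t_value = 2.0        # e.g. the t-statistic computed for your sample
t_critical = 2.131   # two-tailed critical value, df = 15, α = 0.05

if abs(t_value) > t_critical:
    decision = "reject H0"
else:
    decision = "fail to reject H0"

print(decision)  # "fail to reject H0", since 2.0 < 2.131
```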

When should I use a one-sample t-test vs. a two-sample t-test?

Use a one-sample t-test when comparing a single group's mean against a known or hypothesised population value (e.g. testing whether average exam scores equal 50). Use a two-sample (independent) t-test when comparing the means of two separate, unrelated groups. Use a paired t-test when the same subjects are measured twice (e.g. before and after an intervention).
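The paired case illustrates why these tests are related: a paired t-test is just a one-sample t-test on the differences, with μ = 0 as the null value. A sketch using Python's standard-library statistics module, with hypothetical before/after scores:

```python
import math
import statistics

# Hypothetical before/after scores for five subjects.
before = [50, 47, 52, 44, 49]
after  = [52, 50, 53, 48, 51]

diffs = [a - b for a, b in zip(after, before)]  # per-subject differences

# Paired t-test = one-sample t-test on the differences against μ = 0.
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)        # sample standard deviation
se = sd_d / math.sqrt(len(diffs))     # standard error of the mean difference
t = (mean_d - 0) / se
df = len(diffs) - 1
```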

What is the difference between a t-test and a z-test?

A t-test is appropriate when the population standard deviation is unknown and must be estimated from the sample, which is almost always the case in practice. A z-test is used when the population standard deviation is known or when the sample size is very large (n > 30 is a common rule of thumb). For small samples with unknown population variance, the t-test is the correct choice.
