t-Distribution Calculator

Enter your degrees of freedom and a t-score, then choose a probability type (left-tail, right-tail, or two-tailed) to get the corresponding p-value from the Student's t-distribution. You can also work in reverse — enter a probability to find the critical t-value. Perfect for hypothesis testing, confidence intervals, and statistical inference.

Typically sample size minus 1 (n − 1) for a one-sample t-test.

The observed t-statistic from your data or hypothesis test.

Results

p-value

--

Left-Tail P(X ≤ t)

--

Right-Tail P(X ≥ t)

--

Two-Tailed P(|X| ≥ |t|)

--

Between P(−|t| ≤ X ≤ |t|)

--

Probability Breakdown

Frequently Asked Questions

What is a t-score?

A t-score (or t-statistic) measures how many standard errors a sample mean is from the population mean. It is calculated as t = (x̄ − μ) / (s / √n), where x̄ is the sample mean, μ is the population mean, s is the sample standard deviation, and n is the sample size. Larger absolute t-scores indicate greater evidence against the null hypothesis.
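As an illustrative sketch, the formula above can be computed directly in plain Python. The sample values and hypothesized mean below are made up for the example:

```python
import math

def t_statistic(sample, mu):
    """One-sample t-statistic: t = (x_bar - mu) / (s / sqrt(n))."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample standard deviation uses the n - 1 (Bessel) correction.
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return (mean - mu) / (s / math.sqrt(n))

# Hypothetical sample, testing H0: mu = 5.0
data = [5.2, 5.8, 4.9, 5.5, 5.1, 5.6]
print(round(t_statistic(data, 5.0), 3))  # about 2.528
```

Note how the standard error s / √n sits in the denominator: the same mean difference yields a larger t when the sample is bigger or less spread out.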

What are degrees of freedom?

Degrees of freedom (ν or df) represent the number of independent values that can vary in a statistical calculation. For a one-sample t-test, df = n − 1, where n is the sample size. For a two-sample t-test, the formula is more complex. Degrees of freedom determine the exact shape of the t-distribution — as df increases, the distribution approaches the standard normal distribution.

What is the difference between a left-tail, right-tail, and two-tailed probability?

A left-tail probability P(X ≤ t) gives the chance of observing a t-score at or below your value. A right-tail probability P(X ≥ t) gives the chance of observing a t-score at or above your value. A two-tailed probability P(|X| ≥ |t|) combines both extremes and is used when your alternative hypothesis is non-directional (simply 'not equal to').
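The three probabilities can be sketched from the t-density alone. The snippet below uses crude trapezoidal integration of the Student's t density (a teaching sketch, not library-grade numerics; a real implementation would use the regularized incomplete beta function):

```python
import math

def t_pdf(x, df):
    """Density of Student's t with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def left_tail(t, df, steps=20000, lo=-50.0):
    """P(X <= t) by trapezoidal integration from a far-left cutoff."""
    h = (t - lo) / steps
    total = 0.5 * (t_pdf(lo, df) + t_pdf(t, df))
    total += sum(t_pdf(lo + i * h, df) for i in range(1, steps))
    return total * h

def right_tail(t, df):
    return 1.0 - left_tail(t, df)

def two_tailed(t, df):
    # Combines both extremes; valid because the t-distribution is symmetric.
    return 2.0 * right_tail(abs(t), df)

# df = 10, t = 2.228 is the classic 5% two-tailed critical value
print(round(two_tailed(2.228, 10), 3))  # about 0.05
```

The two-tailed value is simply twice the right-tail value at |t|, which is why a two-tailed test demands a larger |t| to reach the same significance level.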

What is a p-value?

A p-value is the probability of observing a test statistic as extreme as (or more extreme than) your result, assuming the null hypothesis is true. A small p-value (typically below 0.05) suggests the result is statistically significant and provides evidence to reject the null hypothesis. The p-value alone does not measure effect size or practical importance.

How does the t-distribution differ from the normal distribution?

The t-distribution has heavier tails than the standard normal distribution, which accounts for the additional uncertainty that comes from estimating the population standard deviation from a small sample. As the degrees of freedom increase, the t-distribution converges toward the standard normal (Z) distribution. For df greater than 30, the two are very similar in practice.
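The convergence is easy to see at the peak of the density: the t-density at x = 0 climbs toward the standard normal's peak value 1/√(2π) ≈ 0.3989 as df grows. A minimal check:

```python
import math

def t_pdf_at_zero(df):
    """Peak (x = 0) density of Student's t with df degrees of freedom."""
    return math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

normal_pdf_at_zero = 1 / math.sqrt(2 * math.pi)  # about 0.3989

# Gap between the t peak and the normal peak shrinks as df increases
for df in (1, 5, 30, 100):
    print(df, round(normal_pdf_at_zero - t_pdf_at_zero(df), 5))
```

The heavier tails show up as the complement of this lower peak: probability mass pushed out of the center has to land in the tails, which is exactly the extra uncertainty from estimating s.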

When should I use the t-distribution instead of the normal distribution?

Use the t-distribution when your sample size is small (typically n < 30) and the population standard deviation is unknown — which is almost always the case in practice. Use the normal distribution only when the population standard deviation is known or when the sample size is very large, making the t-distribution essentially identical to the normal.

How do I calculate degrees of freedom for a two-sample t-test?

For a two-sample t-test with equal variances, df = n₁ + n₂ − 2. For a Welch's t-test (unequal variances), the degrees of freedom are estimated using the Welch–Satterthwaite equation: df = (s₁²/n₁ + s₂²/n₂)² / [(s₁²/n₁)²/(n₁−1) + (s₂²/n₂)²/(n₂−1)]. Most statistical software computes this automatically.
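The Welch–Satterthwaite equation is straightforward to evaluate by hand; the summary statistics below are hypothetical:

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite degrees of freedom for a two-sample t-test
    with unequal variances (s1, s2 are sample standard deviations)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Hypothetical groups: s1 = 2.1, n1 = 12 and s2 = 3.4, n2 = 15
print(round(welch_df(2.1, 12, 3.4, 15), 2))  # about 23.68
```

The result is usually not an integer, and it reduces to n₁ + n₂ − 2 when the two groups have equal variances and equal sizes, matching the pooled formula above.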

What is a standard deviation in the context of the t-distribution?

In this context, the standard deviation (s) is the sample standard deviation — a measure of how spread out the data values are around the sample mean. It is used in the denominator of the t-statistic formula along with the square root of the sample size to form the standard error. A larger standard deviation relative to the sample size results in a smaller absolute t-score, meaning weaker evidence against the null hypothesis.

More Statistics Tools