Chi-Square Distribution Calculator

Enter your chi-square statistic (χ²) and degrees of freedom to compute the corresponding p-value, or enter a p-value to find the critical χ² value. Choose right-tail, left-tail, or two-tail probability mode. Results include the cumulative probability P(χ² ≤ x) and the upper-tail probability P(χ² > x), making the calculator useful for hypothesis testing and goodness-of-fit analysis.
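Under the hood, a chi-square p-value is a regularized incomplete gamma function: P(χ² ≤ x) = P(df/2, x/2). The sketch below is a minimal, illustrative pure-Python implementation (the function names `chi2_cdf` and `chi2_sf` are ours, not the calculator's actual code), using the classic series/continued-fraction split:

```python
import math

def _reg_lower_gamma(a, x):
    """Regularized lower incomplete gamma P(a, x):
    power series for x < a + 1, Lentz continued fraction otherwise."""
    if x <= 0.0:
        return 0.0
    log_prefactor = -x + a * math.log(x) - math.lgamma(a)
    if x < a + 1.0:
        # Series: P(a, x) = prefactor * sum_n x^n / (a (a+1) ... (a+n))
        term = 1.0 / a
        total = term
        denom = a
        for _ in range(500):
            denom += 1.0
            term *= x / denom
            total += term
            if abs(term) < abs(total) * 1e-15:
                break
        return total * math.exp(log_prefactor)
    # Continued fraction gives the upper tail Q(a, x); then P = 1 - Q.
    tiny = 1e-300
    b = x + 1.0 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, 500):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < 1e-15:
            break
    return 1.0 - math.exp(log_prefactor) * h

def chi2_cdf(x, df):
    """Cumulative probability P(chi-square <= x) with df degrees of freedom."""
    return _reg_lower_gamma(df / 2.0, x / 2.0)

def chi2_sf(x, df):
    """Right-tail p-value P(chi-square > x)."""
    return 1.0 - chi2_cdf(x, df)

# df = 2 has the closed form P(chi-square > x) = exp(-x/2), a handy sanity check:
print(chi2_sf(2.0, 2))    # ≈ exp(-1) ≈ 0.3679
print(chi2_sf(11.07, 5))  # ≈ 0.05 (11.07 is the α = 0.05 critical value for df = 5)
```

In production code you would normally reach for a vetted library routine (e.g. `scipy.stats.chi2`) rather than hand-rolling the incomplete gamma function; the sketch just makes the calculation transparent.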

Number of independent values free to vary. For a contingency table: (rows−1)×(columns−1).

Leave blank if you want to find the χ² from a p-value.

Optional: enter a p-value (0–1) to compute the corresponding critical χ² value.

Results

P-Value (Right Tail)

Critical χ² Value

Cumulative Probability P(χ² ≤ x)

Upper Tail P(χ² > x)

Two-Tailed P-Value

Probability Distribution

Results Table

Frequently Asked Questions

What is a chi-square distribution?

The chi-square (χ²) distribution is a probability distribution used in statistical hypothesis testing. It arises when you sum the squares of independent standard normal random variables. Its shape depends on the degrees of freedom — with more degrees of freedom, the distribution becomes more symmetric and bell-shaped.
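The "sum of squared standard normals" definition can be checked by simulation: summing df squared draws from N(0, 1) should give samples with mean ≈ df and variance ≈ 2·df. A quick illustrative sketch (sample size and seed are arbitrary choices):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
df = 5
n = 20_000

# Each sample is the sum of df squared standard normal draws,
# which by definition follows a chi-square distribution with df degrees of freedom.
samples = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df)) for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(round(mean, 2), round(var, 2))  # mean ≈ df = 5, variance ≈ 2 * df = 10
```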

What are degrees of freedom in a chi-square test?

Degrees of freedom (df) represent the number of independent pieces of information available to estimate a parameter. For a goodness-of-fit test with k categories, df = k − 1. For a contingency table with r rows and c columns, df = (r − 1) × (c − 1). The degrees of freedom determine the shape of the chi-square distribution.
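Both formulas translate directly to code (a trivial sketch; the function names are illustrative):

```python
def df_goodness_of_fit(k):
    """Goodness-of-fit test with k categories: df = k - 1."""
    return k - 1

def df_contingency(rows, cols):
    """Test of independence on an r x c contingency table: df = (r - 1)(c - 1)."""
    return (rows - 1) * (cols - 1)

print(df_goodness_of_fit(6))   # 5
print(df_contingency(3, 4))    # 6
```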

What is a chi-square critical value?

A critical value is the threshold χ² statistic that corresponds to a given significance level (α). If your computed chi-square statistic exceeds this critical value, you reject the null hypothesis. For example, at α = 0.05 with 5 degrees of freedom, the right-tail critical value is approximately 11.07.
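Finding a critical value means inverting the tail probability: solve P(χ² > x) = α for x. As a simplified sketch, the survival function has a closed form for even df (odd df needs the incomplete gamma function), so bisection suffices. With df = 4 and α = 0.05 the known critical value is about 9.488:

```python
import math

def chi2_sf_even(x, df):
    """P(chi-square > x) for EVEN df only (closed form):
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!"""
    assert df > 0 and df % 2 == 0
    h = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= h / k
        total += term
    return math.exp(-h) * total

def chi2_critical(alpha, df, lo=0.0, hi=1000.0):
    """Right-tail critical value: solve P(chi-square > x) = alpha by bisection.
    The survival function is decreasing in x, so bisection converges."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_sf_even(mid, df) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(chi2_critical(0.05, 4), 3))  # ≈ 9.488
```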

What is a p-value and how do I interpret it?

The p-value is the probability of observing a chi-square statistic as extreme as — or more extreme than — the one calculated, assuming the null hypothesis is true. A p-value below your chosen significance level (commonly 0.05) suggests you should reject the null hypothesis. A larger p-value indicates the data is consistent with the null hypothesis.

What is the difference between left-tail and right-tail probabilities?

The right-tail probability P(χ² > x) gives the likelihood of getting a statistic larger than your observed value; this is the standard p-value used in most chi-square tests. The left-tail probability P(χ² ≤ x) is the cumulative probability up to your value. Most hypothesis tests use the right tail because large χ² values indicate poor fit or significant association.

When should I use a chi-square test?

Use a chi-square test when you are working with count (frequency) data organized into categories. Common applications include goodness-of-fit tests (does data follow an expected distribution?), tests of independence (are two categorical variables related?), and tests of homogeneity (do different groups share the same distribution?). It is not appropriate for continuous data or rates/percentages directly.

What are the assumptions of the chi-square test?

The main assumptions are: (1) observations are independent, (2) data consists of frequencies or counts — not proportions or rates, (3) each expected cell frequency should be at least 5 (some guidelines say at least 1, with no more than 20% of cells below 5), and (4) the sample is drawn randomly from the population of interest.
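The expected-count rule of thumb in assumption (3) is easy to check programmatically. A minimal sketch (the function name and threshold constants mirror the guideline stated above):

```python
def check_expected_counts(expected):
    """Check the common rule of thumb: every expected count >= 1,
    and no more than 20% of cells below 5.
    `expected` is a table (list of rows) of expected frequencies."""
    cells = [e for row in expected for e in row]
    below_5 = sum(1 for e in cells if e < 5)
    ok = min(cells) >= 1 and below_5 <= 0.2 * len(cells)
    return ok, below_5

# Two of six cells fall below 5 (more than 20%), so the rule is violated:
expected = [[12.4, 3.6, 8.0],
            [9.6, 2.8, 6.2]]
print(check_expected_counts(expected))  # (False, 2)
```

When the rule is violated, common remedies are to pool sparse categories or to use Fisher's exact test instead.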

How do I find expected frequencies for a chi-square test?

Expected frequencies depend on your null hypothesis. For a test of independence in a contingency table, the expected count for each cell = (row total × column total) / grand total. For a goodness-of-fit test, multiply the total sample size by the theoretically expected proportion for each category. Expected frequencies reflect what you would observe if the null hypothesis were true.
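The (row total × column total) / grand total rule for a test of independence can be sketched in a few lines (the function name is illustrative):

```python
def expected_counts(observed):
    """Expected cell counts under the independence null hypothesis:
    each cell = (row total * column total) / grand total.
    `observed` is a contingency table as a list of rows."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    return [[r * c / grand for c in col_totals] for r in row_totals]

# A balanced 2x2 table: every expected count is 100 * (50/100) * (50/100) = 25.
observed = [[20, 30],
            [30, 20]]
print(expected_counts(observed))  # [[25.0, 25.0], [25.0, 25.0]]
```

The chi-square statistic then sums (observed − expected)² / expected over all cells.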
