Lilliefors Test Calculator

Enter your sample data into the Lilliefors Test Calculator to test whether your data follows a normal distribution when population parameters are unknown. Input a comma- or space-separated data series and set your significance level (α) — the calculator estimates the mean and variance from your sample, computes the Lilliefors test statistic (D), compares it against the critical value, and returns a clear pass/fail normality decision along with the p-value.

Enter numeric values separated by commas, spaces, or new lines. You may paste directly from Excel or Google Sheets.

A common choice is 0.05 (5%). Lower values require stronger evidence to reject normality.

Choose whether to include or exclude outliers (values beyond 1.5×IQR from quartiles).
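The outlier rule described above (Tukey's 1.5×IQR fences) can be sketched as follows. This is an illustrative implementation, not necessarily the calculator's exact code; `filter_outliers` is a hypothetical helper name.

```python
import numpy as np

def filter_outliers(data):
    """Drop values beyond 1.5*IQR from the quartiles (Tukey's fences)."""
    x = np.asarray(data, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return x[(x >= lo) & (x <= hi)]

sample = [2.1, 2.4, 2.2, 2.3, 2.5, 9.7]  # 9.7 lies far beyond the upper fence
print(filter_outliers(sample))           # the outlier 9.7 is excluded
```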

Results

The calculator reports the Lilliefors test statistic (D), an approximate p-value, the critical value at your chosen α, the sample mean (μ̂), sample standard deviation (σ̂), and sample size (n), together with a normality decision, a chart of the observed vs. expected cumulative distribution, and a results table.

Frequently Asked Questions

What is the Lilliefors test?

The Lilliefors test is a normality test derived from the Kolmogorov–Smirnov test. Unlike the standard KS test, it accounts for the fact that the population mean and variance are unknown and must be estimated from the sample data. This makes it more realistic for practical use and reduces the conservatism of the original KS test.

What is the difference between the Lilliefors test and the Kolmogorov–Smirnov test?

The standard Kolmogorov–Smirnov test requires that the population parameters (mean and standard deviation) be fully specified in advance. The Lilliefors test relaxes this requirement by estimating those parameters from the sample, then using adjusted critical values that account for the additional uncertainty. This makes Lilliefors more appropriate when you only have sample data.
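The distinction can be seen side by side in Python, assuming SciPy and statsmodels are available. The standard KS test is given the parameters up front, while `lilliefors` estimates them internally and uses adjusted critical values:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=2, size=50)

# Standard KS test: mean and standard deviation specified in advance.
ks_stat, ks_p = stats.kstest(x, 'norm', args=(10, 2))

# Lilliefors: parameters estimated from the sample, with
# critical values adjusted for that extra uncertainty.
lf_stat, lf_p = lilliefors(x, dist='norm')

print(f"KS (known params):      D = {ks_stat:.4f}, p = {ks_p:.4f}")
print(f"Lilliefors (estimated): D = {lf_stat:.4f}, p = {lf_p:.4f}")
```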

What are the hypotheses of the Lilliefors test?

The null hypothesis (H₀) states that the sample comes from a normal distribution with unknown mean and variance. The alternative hypothesis (H₁) states that the data do not follow a normal distribution. A small p-value (below your chosen α) leads you to reject H₀ and conclude non-normality.

How is the Lilliefors test statistic (D) calculated?

The test statistic D is the maximum absolute difference between the empirical cumulative distribution function Fn(x) and the theoretical normal CDF Φ((x − μ̂)/σ̂), where μ̂ and σ̂ are estimated from the sample. Formally: D = sup|Fn(x) − Φ((x − μ̂)/σ̂)|. A larger D indicates greater departure from normality.
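The formula above can be computed directly. The sketch below, one plausible implementation, sorts the data, standardizes with the sample estimates, and checks the empirical CDF on both sides of each jump (since Fn is a step function, the supremum occurs at an order statistic):

```python
import numpy as np
from scipy.stats import norm

def lilliefors_d(data):
    """D = sup |Fn(x) - Phi((x - mean)/sd)|, with mean and sd
    estimated from the sample (ddof=1 for the standard deviation)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    cdf = norm.cdf(z)
    # The empirical CDF jumps at each order statistic, so the largest
    # gap may occur just before or just after a data point.
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

x = [4.1, 5.2, 6.3, 4.8, 5.5, 5.9, 4.4, 6.1]
print(f"D = {lilliefors_d(x):.4f}")
```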

How do I interpret the Lilliefors test result?

If the p-value is less than your significance level α (e.g., 0.05), you reject the null hypothesis and conclude the data are likely not normally distributed. If the p-value is greater than α, you fail to reject H₀ — meaning there is insufficient evidence of non-normality. Keep in mind that with small samples, the test has low power.
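The decision rule can be wrapped in a few lines using statsmodels, assuming it is installed; `normality_decision` is a hypothetical helper name:

```python
from statsmodels.stats.diagnostic import lilliefors

def normality_decision(data, alpha=0.05):
    """Run the Lilliefors test and return (D, p, verdict)."""
    stat, p = lilliefors(data, dist='norm')
    if p < alpha:
        return stat, p, "reject H0: data are likely not normal"
    return stat, p, "fail to reject H0: no evidence of non-normality"

d, p, verdict = normality_decision([4.1, 5.2, 6.3, 4.8, 5.5, 5.9, 4.4, 6.1])
print(f"D = {d:.4f}, p = {p:.4f} -> {verdict}")
```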

What sample size does the Lilliefors test require?

The Lilliefors test can be applied to samples as small as n = 4 or 5, but its power increases substantially with larger samples. For very small samples (n < 10), the test may not detect non-normality reliably. For large samples (n > 200), even trivial deviations from normality may become statistically significant.

When should I use the Lilliefors test instead of Shapiro–Wilk?

The Shapiro–Wilk test is generally considered more powerful for small to medium sample sizes and is often the preferred normality test. The Lilliefors test is a good alternative when your data contain many tied (repeated) values, which can affect Shapiro–Wilk, or when you specifically need a KS-based approach with estimated parameters.
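One way to compare the two tests on tied data, assuming SciPy and statsmodels are available, is to round a sample so that values repeat and run both:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

# Heavily rounded measurements produce many tied values.
rng = np.random.default_rng(1)
x = np.round(rng.normal(loc=50, scale=5, size=100))

sw_stat, sw_p = stats.shapiro(x)
lf_stat, lf_p = lilliefors(x, dist='norm')
print(f"Shapiro-Wilk: W = {sw_stat:.4f}, p = {sw_p:.4f}")
print(f"Lilliefors:   D = {lf_stat:.4f}, p = {lf_p:.4f}")
```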

What are the limitations of the Lilliefors test?

The Lilliefors test is less powerful than Shapiro–Wilk for detecting non-normality in small samples. It is sensitive to outliers, and critical values are tabulated for specific sample sizes. Additionally, like all normality tests, it only tells you whether to reject normality — not how far from normal the data are. Consider complementing the test with a Q–Q plot or histogram.
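A quick way to complement the test with a Q–Q check, assuming SciPy is available, is `scipy.stats.probplot`, which returns the ordered quantile pairs and the correlation of the fitted line (r close to 1 is consistent with normality):

```python
import numpy as np
from scipy import stats

x = np.random.default_rng(2).normal(size=60)
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist='norm')
# osm: theoretical normal quantiles, osr: ordered sample values.
print(f"Q-Q line correlation r = {r:.4f}")
```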
