Multinomial Distribution Calculator

Calculate multinomial probabilities for experiments with multiple outcome categories. Enter the probability and observed frequency for each outcome (up to 6 categories), and get the multinomial probability — the likelihood of observing that exact combination of frequencies across all trials.

Select how many distinct outcome categories your experiment has.

Probability of Outcome 1 occurring on any single trial (0 to 1).

Number of times Outcome 1 was observed.

Probability of Outcome 2 occurring on any single trial (0 to 1).

Number of times Outcome 2 was observed.

Probability of Outcome 3 occurring on any single trial (0 to 1).

Number of times Outcome 3 was observed.

Probability of Outcome 4 occurring on any single trial (0 to 1).

Number of times Outcome 4 was observed.

Probability of Outcome 5 occurring on any single trial (0 to 1).

Number of times Outcome 5 was observed.

Probability of Outcome 6 occurring on any single trial (0 to 1).

Number of times Outcome 6 was observed.

Results

Multinomial Probability

--

Total Trials (n)

--

Sum of Probabilities

--

Multinomial Coefficient

--

Log Probability (ln)

--

Outcome Probabilities vs. Observed Frequencies

Results Table

Frequently Asked Questions

What is a multinomial experiment?

A multinomial experiment is a statistical experiment consisting of repeated independent trials, where each trial results in one of several discrete outcomes. The probability of each outcome remains constant across all trials. Rolling a die multiple times is a classic example — each roll can result in one of six outcomes with fixed probabilities.

What is the multinomial distribution?

The multinomial distribution describes the probability of observing a specific combination of outcome frequencies across n total trials when each trial independently produces one of k possible outcomes. It generalizes the binomial distribution to more than two outcomes. The formula is: P = (n! / (f1! × f2! × … × fk!)) × p1^f1 × p2^f2 × … × pk^fk.
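The formula above can be sketched directly with Python's standard library. This is an illustrative implementation, not the calculator's own code; the function name `multinomial_pmf` is an assumption:

```python
from math import factorial, prod

def multinomial_pmf(freqs, probs):
    """P = (n! / (f1! * ... * fk!)) * p1^f1 * ... * pk^fk."""
    n = sum(freqs)
    coeff = factorial(n)
    for f in freqs:
        coeff //= factorial(f)  # divide out each f! to get the multinomial coefficient
    return coeff * prod(p ** f for p, f in zip(probs, freqs))

# Example: 10 coin flips, 6 heads and 4 tails
print(multinomial_pmf([6, 4], [0.5, 0.5]))  # 0.205078125
```

Here `freqs` are the observed counts f1…fk and `probs` the per-trial probabilities p1…pk, matching the inputs this calculator accepts.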

How is the multinomial distribution related to the binomial distribution?

The binomial distribution is a special case of the multinomial distribution where there are exactly two outcome categories (success and failure). When k=2, the multinomial formula reduces exactly to the binomial probability formula. The multinomial distribution simply extends the concept to any number of categories.
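The k=2 reduction can be checked numerically. A minimal sketch, assuming an arbitrary example of n=10 trials with success probability 0.3:

```python
from math import comb, factorial

n, f1, p = 10, 6, 0.3   # illustrative values
f2 = n - f1

# Binomial form: C(n, f1) * p^f1 * (1-p)^f2
binom = comb(n, f1) * p**f1 * (1 - p)**f2

# Multinomial form with k = 2 categories
multi = (factorial(n) // (factorial(f1) * factorial(f2))) * p**f1 * (1 - p)**f2

print(binom, multi)  # the two forms agree
```

Because n! / (f1! × f2!) is exactly C(n, f1), the two expressions are identical term by term.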

Why must the probabilities add up to 1?

Each trial must result in exactly one of the k outcomes, so the probabilities of all outcomes must collectively account for 100% of the possibilities. If your probabilities don't sum to 1, the inputs don't represent a valid probability distribution and the multinomial formula won't yield meaningful results.
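A validity check like the one this page performs could be sketched as follows; the function name and the floating-point tolerance are assumptions, not the calculator's actual implementation:

```python
def check_distribution(probs, tol=1e-9):
    """Raise ValueError unless probs is a valid probability distribution."""
    if any(p < 0 or p > 1 for p in probs):
        raise ValueError("each probability must lie in [0, 1]")
    total = sum(probs)
    if abs(total - 1.0) > tol:  # tolerance absorbs floating-point rounding
        raise ValueError(f"probabilities sum to {total}, not 1")

check_distribution([0.2, 0.3, 0.5])  # passes silently
```

A small tolerance is used because entries like 1/3 + 1/3 + 1/3 may not sum to exactly 1.0 in floating point.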

What does 'frequency of an outcome' mean?

The frequency of an outcome is the number of times that outcome was observed across all trials of the experiment. For example, if you flip a coin 10 times and get heads 6 times and tails 4 times, the frequency of heads is 6 and the frequency of tails is 4, giving a total of n=10 trials.
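The coin-flip example above can be tallied with the standard library; the flip sequence here is one arbitrary arrangement of 6 heads and 4 tails:

```python
from collections import Counter

# 10 coin flips: 6 heads ("H"), 4 tails ("T")
flips = ["H", "H", "T", "H", "T", "H", "H", "T", "H", "T"]
freqs = Counter(flips)

print(freqs["H"], freqs["T"], sum(freqs.values()))  # 6 4 10
```

The per-outcome counts are the frequencies f1…fk, and their sum is the total trial count n.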

What is the multinomial coefficient?

The multinomial coefficient is the term n! / (f1! × f2! × … × fk!) in the multinomial formula. It counts the number of distinct ways the observed frequencies could have been arranged across the n total trials. It generalizes the binomial coefficient n! / (f1! × f2!) — "n choose f1" — to any number of outcome categories.
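One way to compute the coefficient, sketched here as an illustration rather than the calculator's actual method, is as a running product of binomial coefficients — choosing each category's slots among the trials counted so far:

```python
from math import comb

def multinomial_coeff(freqs):
    """n! / (f1! * ... * fk!) as a product of binomial coefficients."""
    total, result = 0, 1
    for f in freqs:
        total += f
        result *= comb(total, f)  # choose f of the first `total` trial slots
    return result

print(multinomial_coeff([6, 4]))  # 210, i.e. the binomial coefficient C(10, 6)
```

For example, multinomial_coeff([2, 3, 4]) gives 9! / (2! × 3! × 4!) = 1260.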

When should I use the multinomial distribution?

Use the multinomial distribution when you have a fixed number of independent trials, each trial has the same set of possible outcomes with constant probabilities, and you want to find the probability of a specific combination of outcome counts. Common applications include genetics, survey analysis, dice problems, and quality control with multiple defect types.

Why is the log probability shown as an output?

Multinomial probabilities can be extremely small numbers, especially with many trials or categories, making them difficult to read or compare directly. The natural log of the probability (ln P) avoids floating-point underflow issues and is often more useful for computations in statistics and machine learning, where log-likelihoods are standard.
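Working in log space can be sketched with lgamma, which computes ln(x!) as ln Γ(x+1) without ever forming the huge factorials; this assumes every probability is strictly positive:

```python
from math import lgamma, log, exp

def multinomial_log_pmf(freqs, probs):
    """ln P computed term by term in log space (assumes every p > 0)."""
    n = sum(freqs)
    ln_p = lgamma(n + 1)                     # ln n!
    for f, p in zip(freqs, probs):
        ln_p += f * log(p) - lgamma(f + 1)   # f*ln(p) - ln f!
    return ln_p

# 10,000 trials over 4 equally likely categories, 2,500 observations each:
# the probability itself would underflow, but its log is perfectly representable.
print(multinomial_log_pmf([2500] * 4, [0.25] * 4))
```

For small cases, exp of the result matches the direct probability, e.g. exp(multinomial_log_pmf([6, 4], [0.5, 0.5])) recovers 0.205078125.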

More Statistics Tools