

Discrete Probability Distribution Visualizer


Interactive visualization of six fundamental discrete distributions

Discrete Uniform

Equal probability for finite outcomes

Explanation

A discrete uniform distribution assigns equal probability to each value in a finite range. The probability mass function is $P(X = k) = \frac{1}{b - a + 1}$ for $a \leq k \leq b$. The expected value is $E[X] = \frac{a + b}{2}$, and the variance is $\text{Var}(X) = \frac{n^2 - 1}{12}$, where $n = b - a + 1$. Common examples include rolling a fair die, selecting a random card from a deck, or generating a random number from a finite range.
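The formulas above can be checked directly in a few lines of Python; this is a minimal sketch using a fair six-sided die, not the visualizer's own code:

```python
# Discrete uniform on {a, ..., b}, illustrated with a fair six-sided die.
a, b = 1, 6
n = b - a + 1                                # number of outcomes

pmf = {k: 1 / n for k in range(a, b + 1)}    # P(X = k) = 1/n for each k
mean = (a + b) / 2                           # E[X] = (a + b)/2
var = (n * n - 1) / 12                       # Var(X) = (n^2 - 1)/12

# All bar heights sum to 1, as any valid PMF must.
assert abs(sum(pmf.values()) - 1.0) < 1e-12
```

For the die, this gives a mean of 3.5 and a variance of 35/12, matching the closed-form expressions.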

Selecting a Distribution Type

Click the distribution tabs at the top to switch between six discrete distributions. Each tab shows the distribution name: Discrete Uniform, Binomial, Geometric, Negative Binomial, Hypergeometric, and Poisson. The selected distribution displays in blue with its parameter controls below. Start with Discrete Uniform for the simplest case, or jump directly to the distribution relevant to your problem. Each distribution has a brief description underneath its name explaining what it models. The tool remembers your last selection, making it easy to compare distributions by switching back and forth.

Adjusting Distribution Parameters

Use the sliders to change distribution parameters and watch the bar chart update instantly. Each distribution has different parameters: Binomial uses $n$ (number of trials) and $p$ (success probability), while Poisson uses only $\lambda$ (rate parameter). Drag sliders smoothly or click anywhere on the slider track to jump to that value. The current parameter values display next to each slider label. For the Hypergeometric distribution, note that the number of draws and success states cannot exceed the population size. The tool automatically constrains parameters to valid ranges. Experiment with extreme parameter values to see how distributions behave at their limits.
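The Hypergeometric constraint mentioned above can be expressed as a small clamping step. This is a hypothetical sketch of how such a constraint might work, with made-up function and parameter names, not the tool's actual implementation:

```python
# Hypothetical parameter clamping for a Hypergeometric(N, K, n) control:
# success states K and draws n may not exceed the population size N.
def clamp_hypergeometric(N, K, n):
    K = min(max(K, 0), N)   # success states stay within [0, N]
    n = min(max(n, 0), N)   # draws stay within [0, N]
    return N, K, n

print(clamp_hypergeometric(20, 25, 30))  # -> (20, 20, 20)
```

Valid settings pass through unchanged, while out-of-range slider values snap back to the population size.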

Reading the Bar Chart

The bar chart displays probability values on the vertical axis and possible outcome values on the horizontal axis. Each bar height represents the probability of that specific outcome occurring. Taller bars indicate more likely values. The chart automatically scales to show all bars clearly, adjusting the vertical axis as you change parameters. Hover over any bar to see the exact probability value displayed in a tooltip. The tooltip shows both the outcome value $k$ and its probability $P(X = k)$ to six decimal places. For distributions with many possible values, the chart shows only values with non-negligible probability. Compare bar heights directly to understand relative likelihoods.

Understanding Probability Values

All probability values fall between 0 and 1, where 0 means impossible and 1 means certain. Values close to 0 indicate unlikely outcomes, while values near 1 indicate highly likely outcomes. The sum of all bar heights always equals exactly 1, representing certainty that one of the outcomes will occur. For symmetric distributions like Binomial with $p = 0.5$, the highest bars cluster around the center. For skewed distributions like Geometric, the highest bar appears on one side with a long tail extending in the other direction. Notice how probability spreads out as you increase variance parameters. When comparing distributions, similar shapes suggest similar probabilistic behavior.
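Both properties described above, that the bars sum to 1 and that a symmetric Binomial peaks at the center, can be verified numerically. A minimal check for Binomial with $n = 10$, $p = 0.5$:

```python
from math import comb

# Bar heights for Binomial(n=10, p=0.5).
n, p = 10, 0.5
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Total probability is exactly 1, and the tallest bar sits at the center k = 5.
assert abs(sum(pmf) - 1.0) < 1e-12
assert max(range(n + 1), key=lambda k: pmf[k]) == n // 2
```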

Comparing Distribution Shapes

Switch between distributions to observe different probability patterns. The Discrete Uniform creates equal-height bars across all values in its range. Binomial distributions form bell-shaped patterns that become more symmetric as $n$ increases. Geometric distributions always show exponential decay with the highest probability at the first value. Poisson distributions shift rightward as $\lambda$ increases, transitioning from highly skewed to approximately symmetric. Negative Binomial distributions extend the geometric pattern, showing where the $r$-th success is likely to occur. Hypergeometric distributions resemble Binomial but account for sampling without replacement. Understanding these shape differences helps you select the appropriate distribution for your scenario.

Working with Different Scenarios

Each distribution models specific real-world scenarios. Use Binomial for fixed-trial experiments like flipping a coin 10 times or testing 50 items for defects. Choose Poisson for rare events occurring over time, such as website visits per hour or earthquakes per year. Select Geometric when counting trials until the first success, like rolling dice until you get a six. Negative Binomial extends this to count trials until the $r$-th success. Use Hypergeometric when sampling without replacement from a finite population, such as drawing cards or selecting items from a batch. The parameter controls let you adjust scenarios to match your specific situation. The explanation panel provides formulas and typical applications for the selected distribution.

What is a Probability Mass Function?

The probability mass function (PMF) assigns a probability to each discrete value a random variable can take. Written as $P(X = k)$, it gives the probability that random variable $X$ equals exactly $k$. The PMF must satisfy two properties: each probability is between 0 and 1, and all probabilities sum to 1. For example, a fair six-sided die has PMF $P(X = k) = 1/6$ for $k = 1, 2, 3, 4, 5, 6$. The PMF differs from the probability density function used for continuous variables. For comprehensive coverage of random variables, expectation, and variance calculations using PMFs, see our detailed probability theory pages.
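The two PMF properties translate into a short validity check. A sketch with a hypothetical helper name (`is_valid_pmf` is illustrative, not part of any library):

```python
def is_valid_pmf(pmf, tol=1e-12):
    """Check the two PMF properties: each value in [0, 1], total equal to 1."""
    probs = list(pmf.values())
    in_range = all(0.0 <= q <= 1.0 for q in probs)
    sums_to_one = abs(sum(probs) - 1.0) <= tol
    return in_range and sums_to_one

die = {k: 1 / 6 for k in range(1, 7)}       # fair six-sided die
print(is_valid_pmf(die))                    # True
print(is_valid_pmf({1: 0.5, 2: 0.6}))       # False: total exceeds 1
```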

Binomial Distribution Fundamentals

The Binomial distribution models the number of successes in $n$ independent trials, each with success probability $p$. Its PMF is $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$, where $\binom{n}{k}$ is the binomial coefficient. The expected value is $E[X] = np$ and variance is $\text{Var}(X) = np(1-p)$. As $n$ increases with $p$ fixed, the distribution becomes more bell-shaped and approximates a Normal distribution. When $n$ is large and $p$ is small such that $np$ remains moderate, it approximates a Poisson distribution. For detailed derivations of Binomial properties, applications in hypothesis testing, and connections to combinatorics, see our Binomial distribution theory page.
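The closed forms $E[X] = np$ and $\text{Var}(X) = np(1-p)$ can be confirmed by summing over the PMF directly. A minimal sketch for $n = 50$, $p = 0.1$ (the defect-testing scenario mentioned earlier):

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 50, 0.1
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))

assert abs(mean - n * p) < 1e-9            # E[X] = np = 5
assert abs(var - n * p * (1 - p)) < 1e-9   # Var(X) = np(1-p) = 4.5
```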

Poisson Distribution Basics

The Poisson distribution describes the number of events occurring in a fixed interval when events happen at a constant average rate $\lambda$. Its PMF is $P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}$. Both the mean and variance equal $\lambda$. This distribution applies when events occur independently and the average rate stays constant. The Poisson approximates Binomial when $n$ is large, $p$ is small, and $np \approx \lambda$. Common applications include modeling rare events, arrivals in queueing systems, and defects in manufacturing. For in-depth coverage of Poisson process theory, exponential distribution connections, and statistical inference with Poisson data, see our detailed Poisson distribution page.
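The fact that the mean and variance both equal $\lambda$ can be verified by summing the PMF over enough terms that the truncated tail is negligible; here $\lambda = 4$ is an arbitrary example value:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = lambda^k * e^(-lambda) / k!
    return lam**k * exp(-lam) / factorial(k)

lam = 4.0
ks = range(61)                       # tail mass beyond k = 60 is negligible
pmf = [poisson_pmf(k, lam) for k in ks]
mean = sum(k * q for k, q in zip(ks, pmf))
var = sum((k - mean) ** 2 * q for k, q in zip(ks, pmf))

assert abs(mean - lam) < 1e-9        # mean equals lambda
assert abs(var - lam) < 1e-9         # variance equals lambda
```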

Geometric vs Negative Binomial

The Geometric distribution models trials until the first success, with PMF $P(X = k) = (1-p)^{k-1} p$. It has the memoryless property: the probability of success in future trials doesn't depend on past failures. The Negative Binomial generalizes this to count trials until the $r$-th success. Its PMF is $P(X = k) = \binom{k-1}{r-1} p^r (1-p)^{k-r}$. When $r = 1$, Negative Binomial reduces to Geometric. Both distributions are right-skewed with exponential decay. For comprehensive theory on these distributions, their role in survival analysis, and applications in sequential experiments, see our detailed pages on Geometric and Negative Binomial distributions.
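The reduction at $r = 1$ follows directly from the two PMFs, since $\binom{k-1}{0} = 1$. A short numerical confirmation, with $p = 0.3$ chosen arbitrarily:

```python
from math import comb

def geom_pmf(k, p):
    # P(X = k) = (1 - p)^(k-1) * p, for k = 1, 2, ...
    return (1 - p) ** (k - 1) * p

def negbinom_pmf(k, r, p):
    # P(X = k) = C(k-1, r-1) * p^r * (1 - p)^(k-r), for k = r, r+1, ...
    return comb(k - 1, r - 1) * p**r * (1 - p) ** (k - r)

p = 0.3
# With r = 1, the Negative Binomial PMF matches the Geometric PMF at every k.
assert all(abs(geom_pmf(k, p) - negbinom_pmf(k, 1, p)) < 1e-15
           for k in range(1, 30))
```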

When to Use Each Distribution

Choose Discrete Uniform when all outcomes in a range are equally likely, such as rolling a fair die or selecting a random integer. Use Binomial for counting successes in a fixed number of independent trials with constant success probability. Select Poisson when counting rare events over time or space intervals with a constant average rate. Choose Geometric when counting trials until the first success occurs. Use Negative Binomial when counting trials until $r$ successes occur, generalizing the geometric case. Select Hypergeometric when sampling without replacement from a finite population, where the composition of remaining items changes after each draw. The key distinction is whether sampling is with replacement (Binomial) or without replacement (Hypergeometric).
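The with-versus-without-replacement distinction can be made concrete by computing both PMFs for the same sampling scenario. A sketch with example numbers chosen for illustration (a batch of $N = 20$ items, $K = 8$ defective, $n = 5$ drawn):

```python
from math import comb

N, K, n = 20, 8, 5   # population, success states, draws
p = K / N            # matching success probability for the Binomial

def binom_pmf(k):
    # with replacement: trials stay independent with probability p
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def hyper_pmf(k):
    # without replacement: remaining composition changes after each draw
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

for k in range(n + 1):
    print(k, round(binom_pmf(k), 4), round(hyper_pmf(k), 4))
```

Both columns sum to 1, but the Hypergeometric probabilities concentrate more tightly around the mean, reflecting the reduced variance of sampling without replacement.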

Related Probability Concepts

Expected Value Calculator - Compute the mean of discrete probability distributions to find the long-run average outcome.

Variance Calculator - Calculate the spread of probability distributions to quantify uncertainty around the expected value.

Cumulative Distribution Function - The CDF gives probabilities of being less than or equal to a value, computed by summing the PMF.

Continuous Probability Distributions - For uncountable outcomes like measurements, explore probability density functions instead.

Combinatorics - Understanding combinations and permutations is essential for calculating binomial and hypergeometric probabilities.

Random Variables - Learn about discrete random variables, their properties, and how probability mass functions define their behavior.