Discrete Distributions


Discrete distributions are probability models for random variables that can take on a countable set of values—typically integers or a finite set of outcomes. Unlike continuous distributions, which describe phenomena like heights or temperatures that can take any value within a range, discrete distributions characterize scenarios with distinct, separable outcomes: the number of successes in a series of trials, the count of events in a time interval, or selections from a finite population.

Understanding discrete distributions is fundamental to probability theory and problem-solving. Each distribution arises from a specific probabilistic mechanism—whether sampling with or without replacement, counting trials until an event occurs, or modeling rare occurrences. Recognizing these underlying structures allows you to match problems to their appropriate models.

The distinctions matter mathematically. The simplest case—the discrete uniform distribution—assigns equal probability to each outcome in a finite set, serving as the foundation for understanding more complex models. A binomial distribution assumes a fixed number of independent trials, while a geometric distribution counts trials until the first success—superficially similar setups that yield entirely different probability mass functions, moments, and analytical properties. The negative binomial distribution generalizes the geometric case by counting trials until a specified number of successes rather than just the first. The Poisson distribution, meanwhile, models the occurrence of rare events over a continuous interval—time, space, or volume—making it distinct from the trial-based counting distributions. Misidentifying the mechanism leads to incorrect calculations and invalid conclusions.

This page systematically presents six fundamental discrete distributions, detailing their support, parameters, probability functions, and key statistical properties. Mastering these models equips you to tackle a wide range of probabilistic questions with precision and confidence.



Uniform Discrete Distribution

Description: Models experiments where each outcome is equally likely (e.g., rolling a fair die, random selection from a finite set)
Support (Domain): X ∈ {a, a+1, a+2, ..., b}
Finite or Infinite?: Finite
Bounds/Range: [a, b], where a and b are integers and a ≤ b
Parameters: a (minimum value), b (maximum value)
Number of trials known/fixed beforehand?: Not applicable (single selection)
Selection Property/Mechanism: All outcomes are equally likely; no outcome has special meaning
PMF (Probability Mass Function): P(X = k) = 1/(b - a + 1) for k ∈ {a, a+1, ..., b}
CDF (Cumulative Distribution Function): P(X ≤ k) = (k - a + 1)/(b - a + 1) for k ∈ {a, a+1, ..., b}
Mean: E[X] = (a + b)/2
Variance: Var(X) = ((b - a + 1)² - 1)/12
Standard Deviation: σ = √(((b - a + 1)² - 1)/12)
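
As a quick numerical check of the formulas above, here is a minimal Python sketch using only the standard library; the values a = 1 and b = 6 (a fair six-sided die) are illustrative choices, not part of the definition:

# Discrete uniform on {a, ..., b}: PMF, mean, and variance from the formulas above
a, b = 1, 6                          # fair six-sided die (illustrative values)
n_outcomes = b - a + 1

pmf = 1 / n_outcomes                 # P(X = k) for any k in {a, ..., b}
mean = (a + b) / 2                   # E[X]
variance = (n_outcomes**2 - 1) / 12  # Var(X)

print(pmf, mean, variance)           # 0.1666..., 3.5, 2.9166...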

Binomial Distribution

Description: Models the number of successes in a fixed number of independent trials, each with the same probability of success (e.g., number of heads in 10 coin flips)
Support (Domain): X ∈ {0, 1, 2, ..., n}
Finite or Infinite?: Finite
Bounds/Range: [0, n], where n is a positive integer
Parameters: n (number of trials), p (probability of success on each trial), where 0 ≤ p ≤ 1
Number of trials known/fixed beforehand?: Yes, n is fixed before the experiment
Selection Property/Mechanism: Fixed number of independent trials; counting total number of successes; each trial has binary outcome (success/failure)
PMF (Probability Mass Function): P(X = k) = C(n,k) × p^k × (1-p)^(n-k) for k ∈ {0, 1, ..., n}
CDF (Cumulative Distribution Function): P(X ≤ k) = Σ(i=0 to k) C(n,i) × p^i × (1-p)^(n-i)
Mean: E[X] = np
Variance: Var(X) = np(1-p)
Standard Deviation: σ = √(np(1-p))
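
The PMF, CDF, mean, and variance above can be evaluated directly with Python's standard library; the sketch below uses the illustrative values n = 10 and p = 0.5 (counting heads in 10 fair coin flips):

from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    # P(X = k) = C(n, k) * p^k * (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.5                                       # 10 fair coin flips (illustrative)
print(binomial_pmf(5, n, p))                         # P(exactly 5 heads) ≈ 0.2461
print(sum(binomial_pmf(i, n, p) for i in range(6)))  # CDF: P(X ≤ 5) ≈ 0.6230
print(n * p, n * p * (1 - p))                        # mean 5.0, variance 2.5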

Geometric Distribution

Description: Models the number of trials until the first success in a sequence of independent trials, each with the same probability of success (e.g., number of coin flips until first heads)
Support (Domain): X ∈ {1, 2, 3, ...}
Finite or Infinite?: Infinite
Bounds/Range: [1, ∞)
Parameters: p (probability of success on each trial), where 0 < p ≤ 1
Number of trials known/fixed beforehand?: No, trials continue until the first success occurs
Selection Property/Mechanism: Variable number of independent trials; counting trials until first success; each trial has binary outcome (success/failure); memoryless property
PMF (Probability Mass Function): P(X = k) = (1-p)^(k-1) × p for k ∈ {1, 2, 3, ...}
CDF (Cumulative Distribution Function): P(X ≤ k) = 1 - (1-p)^k for k ∈ {1, 2, 3, ...}
Mean: E[X] = 1/p
Variance: Var(X) = (1-p)/p²
Standard Deviation: σ = √((1-p)/p²) = √(1-p)/p
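
A minimal Python sketch of the geometric PMF and CDF as defined above; the success probability p = 0.25 is an illustrative value:

def geometric_pmf(k: int, p: float) -> float:
    # P(X = k) = (1-p)^(k-1) * p for k = 1, 2, 3, ...
    return (1 - p)**(k - 1) * p

def geometric_cdf(k: int, p: float) -> float:
    # P(X ≤ k) = 1 - (1-p)^k
    return 1 - (1 - p)**k

p = 0.25                          # illustrative success probability
print(geometric_pmf(3, p))        # P(first success on trial 3) ≈ 0.1406
print(geometric_cdf(3, p))        # P(first success within 3 trials) ≈ 0.5781
print(1 / p, (1 - p) / p**2)      # mean 4.0, variance 12.0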

Negative Binomial Distribution

Description: Models the number of trials until r successes occur in a sequence of independent trials, each with the same probability of success (e.g., number of coin flips until 5th heads)
Support (Domain): X ∈ {r, r+1, r+2, ...}
Finite or Infinite?: Infinite
Bounds/Range: [r, ∞), where r is a positive integer
Parameters: r (number of successes desired), p (probability of success on each trial), where 0 < p ≤ 1 and r is a positive integer
Number of trials known/fixed beforehand?: No, trials continue until r successes occur
Selection Property/Mechanism: Variable number of independent trials; counting trials until rth success; each trial has binary outcome (success/failure); generalization of geometric distribution
PMF (Probability Mass Function): P(X = k) = C(k-1, r-1) × p^r × (1-p)^(k-r) for k ∈ {r, r+1, r+2, ...}
CDF (Cumulative Distribution Function): P(X ≤ k) = Σ(i=r to k) C(i-1, r-1) × p^r × (1-p)^(i-r)
Mean: E[X] = r/p
Variance: Var(X) = r(1-p)/p²
Standard Deviation: σ = √(r(1-p))/p
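
A minimal Python sketch of the PMF above, where X counts total trials (note that some libraries instead parameterize this distribution by the number of failures); r = 5 and p = 0.5 (flips until the 5th head) are illustrative values:

from math import comb

def neg_binomial_pmf(k: int, r: int, p: float) -> float:
    # P(X = k) = C(k-1, r-1) * p^r * (1-p)^(k-r), X = trial on which the r-th success occurs
    return comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)

r, p = 5, 0.5                                                # flips until the 5th head (illustrative)
print(neg_binomial_pmf(8, r, p))                             # P(5th head occurs on flip 8) ≈ 0.1367
print(sum(neg_binomial_pmf(k, r, p) for k in range(r, 13)))  # CDF: P(X ≤ 12) ≈ 0.8062
print(r / p, r * (1 - p) / p**2)                             # mean 10.0, variance 10.0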

Hypergeometric Distribution

Description: Models the number of successes in a sample drawn without replacement from a finite population containing both successes and failures (e.g., drawing red balls from an urn without replacing them)
Support (Domain): X ∈ {max(0, n-N+K), ..., min(n, K)}
Finite or Infinite?: Finite
Bounds/Range: [max(0, n-N+K), min(n, K)]
Parameters: N (population size), K (number of success states in population), n (number of draws), where N, K, n are positive integers with K ≤ N and n ≤ N
Number of trials known/fixed beforehand?: Yes, n is fixed before the experiment
Selection Property/Mechanism: Sampling without replacement from finite population; fixed number of draws; counting successes in sample; each item can only be selected once
PMF (Probability Mass Function): P(X = k) = [C(K, k) × C(N-K, n-k)] / C(N, n)
CDF (Cumulative Distribution Function): P(X ≤ k) = Σ(i=0 to k) [C(K, i) × C(N-K, n-i)] / C(N, n)
Mean: E[X] = n × (K/N)
Variance: Var(X) = n × (K/N) × (1 - K/N) × [(N-n)/(N-1)]
Standard Deviation: σ = √[n × (K/N) × (1 - K/N) × (N-n)/(N-1)]
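
A minimal Python sketch of the hypergeometric PMF, mean, and variance above; the values N = 20, K = 7, n = 5 (an urn with 7 red balls out of 20, drawing 5 without replacement) are illustrative:

from math import comb

def hypergeometric_pmf(k: int, N: int, K: int, n: int) -> float:
    # P(X = k) = C(K, k) * C(N-K, n-k) / C(N, n)
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, K, n = 20, 7, 5                      # 7 red balls in an urn of 20, draw 5 (illustrative)
print(hypergeometric_pmf(2, N, K, n))   # P(exactly 2 red in the sample) ≈ 0.3874
print(n * K / N)                        # mean = 1.75
print(n * (K / N) * (1 - K / N) * (N - n) / (N - 1))  # variance ≈ 0.8980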

Poisson Distribution

Description: Models the number of events occurring in a fixed interval of time or space when events occur independently at a constant average rate (e.g., number of phone calls received per hour)
Support (Domain): X ∈ {0, 1, 2, 3, ...}
Finite or Infinite?: Infinite
Bounds/Range: [0, ∞)
Parameters: λ (lambda, the average rate of events), where λ > 0
Number of trials known/fixed beforehand?: No fixed number of trials; counts events in a fixed interval
Selection Property/Mechanism: Events occur independently; constant average rate; events in non-overlapping intervals are independent; useful for rare events
PMF (Probability Mass Function): P(X = k) = (λ^k × e^(-λ)) / k! for k ∈ {0, 1, 2, ...}
CDF (Cumulative Distribution Function): P(X ≤ k) = Σ(i=0 to k) (λ^i × e^(-λ)) / i! = e^(-λ) × Σ(i=0 to k) λ^i / i!
Mean: E[X] = λ
Variance: Var(X) = λ
Standard Deviation: σ = √λ
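
A minimal Python sketch of the Poisson PMF and CDF above; λ = 3 (e.g., an average of 3 calls per hour) is an illustrative rate:

from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    # P(X = k) = lam^k * e^(-lam) / k!
    return lam**k * exp(-lam) / factorial(k)

lam = 3.0                                          # average of 3 events per interval (illustrative)
print(poisson_pmf(0, lam))                         # P(no events) ≈ 0.0498
print(poisson_pmf(3, lam))                         # P(exactly 3 events) ≈ 0.2240
print(sum(poisson_pmf(k, lam) for k in range(5)))  # CDF: P(X ≤ 4) ≈ 0.8153
print(lam, lam**0.5)                               # mean 3.0, standard deviation ≈ 1.732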