Discrete Distributions Calculator


Selecting Probability Query Types

Each distribution calculator offers six query modes for flexible probability calculations. All values (full distribution) displays the complete probability mass function across all possible outcomes, showing the PMF chart and a detailed probability table. This mode reveals the distribution's shape and identifies the most likely values.

P(X = k) calculates the exact probability of a specific outcome. For a binomial distribution with n=10 and p=0.5, P(X=5) gives the probability of exactly 5 successes. This mode uses the distribution's PMF formula directly.

P(X < k) computes cumulative probability for values strictly less than k, summing probabilities from the minimum up to k-1. P(X ≤ k) includes k itself in the sum. These queries answer questions like "What's the probability of fewer than 3 defects?" versus "At most 3 defects?"

P(X > k) and P(X ≥ k) calculate right-tail probabilities, useful for questions like "What's the probability of exceeding 10 events?" The greater-than query sums from k+1 onward, while greater-or-equal starts at k. All calculators display results prominently with the query notation and six-decimal precision.
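
As a concrete sketch of how these six queries relate, the snippet below computes each of them in Python with scipy.stats (an illustrative library choice, not necessarily what the calculators use internally), using the binomial example from above:

```python
from scipy import stats

# Binomial example from above: n=10 trials, p=0.5 success probability
dist = stats.binom(10, 0.5)
k = 5

print(f"P(X = {k})  = {dist.pmf(k):.6f}")      # exact outcome
print(f"P(X < {k})  = {dist.cdf(k - 1):.6f}")  # strictly less than k
print(f"P(X <= {k}) = {dist.cdf(k):.6f}")      # at most k
print(f"P(X > {k})  = {dist.sf(k):.6f}")       # survival function: 1 - CDF(k)
print(f"P(X >= {k}) = {dist.sf(k - 1):.6f}")   # 1 - CDF(k-1)

# "All values" mode: the full PMF across the support
for x in range(11):
    print(f"P(X = {x}) = {dist.pmf(x):.6f}")
```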

Using the Discrete Uniform Calculator

The discrete uniform distribution models scenarios where all outcomes have equal probability, like rolling a fair die or drawing a random integer. Enter a (minimum) as the smallest possible value and b (maximum) as the largest. For a standard die, set a=1 and b=6.

The calculator computes n = b - a + 1 (the count of possible values) and the probability 1/n for each outcome. The PMF chart displays uniform-height bars showing equal probabilities. Mean equals (a+b)/2, the midpoint of the range. Variance follows the formula (n² - 1)/12.

Query specific values to see how uniform probability works. For a=1, b=6, P(X=3) = 1/6 ≈ 0.1667, identical to any other face. P(X ≤ 2) = 2/6 ≈ 0.3333, since two values (1, 2) fall in that range. The uniform distribution serves as a baseline for comparing more complex distributions and models perfectly random selection.
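
A minimal check of these numbers, assuming Python with scipy.stats (whose randint takes an exclusive upper bound, hence b + 1):

```python
from scipy import stats

a, b = 1, 6                      # a fair six-sided die
n = b - a + 1                    # 6 possible values
die = stats.randint(a, b + 1)    # scipy's upper bound is exclusive

print(die.pmf(3))    # 0.166667 == 1/n, same for every face
print(die.cdf(2))    # 0.333333 == P(X <= 2)
print(die.mean())    # 3.5      == (a + b)/2
print(die.var())     # 2.916667 == (n**2 - 1)/12
```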

Calculating Binomial Distribution Probabilities

The binomial distribution models the number of successes in a fixed number of independent trials, each with the same success probability. Enter n (number of trials) as the total number of independent repetitions and p (success probability) as the per-trial probability of success, between 0 and 1.

For coin flipping, n=10 flips with p=0.5 (a fair coin) models the total number of heads. For manufacturing quality control with a 2% defect rate, use p=0.02 with n equal to the sample size. The calculator displays the PMF chart showing the distribution's shape—symmetric when p=0.5, right-skewed when p<0.5, left-skewed when p>0.5.

Mean equals np (expected successes), variance equals np(1-p). For n=20, p=0.3, expect 6 successes on average with variance 4.2. The PMF uses the formula C(n,k) × p^k × (1-p)^(n-k) where C(n,k) is the binomial coefficient "n choose k."
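
To make the formula concrete, here is a sketch that evaluates the PMF by hand and checks it against scipy.stats (used here purely for verification):

```python
from math import comb
from scipy import stats

n, p, k = 20, 0.3, 6
manual = comb(n, k) * p**k * (1 - p)**(n - k)  # C(n,k) * p^k * (1-p)^(n-k)

print(manual)                    # ~0.191639
print(stats.binom.pmf(k, n, p))  # same value from the library
print(stats.binom.mean(n, p))    # 6.0 == n*p
print(stats.binom.var(n, p))     # 4.2 == n*p*(1-p)
```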

Query P(X=k) for exact probabilities, P(X≤k) for cumulative "at most k successes." The binomial models coin flips, quality control sampling, survey responses, medical trials—any scenario with fixed trials and constant success probability.

Working with Geometric Distribution

The geometric distribution models the number of trials needed until the first success occurs. Unlike the binomial, which fixes the trial count, the geometric asks "How many attempts until success?" Enter p (success probability) as the constant probability of success on each trial.

For p=0.2 (20% success rate), the calculator shows probabilities for needing 1, 2, 3... trials. P(X=1) = p (immediate success), P(X=2) = (1-p)×p (fail then succeed), and so on. The PMF chart displays exponentially decreasing bars—early success is most likely, long sequences less probable.

Mean equals 1/p, the expected number of trials. For p=0.2, expect 5 trials on average until the first success. Variance equals (1-p)/p². Standard deviation measures spread around this expectation. The distribution has infinite support (a success could theoretically take arbitrarily long), but the calculator displays values only up to a reasonable maximum, cutting off where probabilities become negligible.
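
The geometric facts above can be reproduced in a few lines of Python; scipy.stats.geom counts trials (support 1, 2, 3, ...), matching the convention used here:

```python
from scipy import stats

p = 0.2
dist = stats.geom(p)     # number of trials until the first success

print(dist.pmf(1))       # 0.20  == p, success on the first try
print(dist.pmf(2))       # 0.16  == (1-p)*p, fail once then succeed
print(dist.mean())       # 5.0   == 1/p expected trials
print(dist.var())        # 20.0  == (1-p)/p**2

# One way to pick a display cutoff where probabilities become negligible:
print(dist.ppf(0.9999))  # smallest k with P(X <= k) >= 0.9999
```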

Use geometric for questions like "How many job applications until an offer?" or "Attempts needed to roll a six?" The memoryless property means past failures don't affect future trial probabilities—each attempt has the same success chance regardless of history.

Negative Binomial Distribution Calculator

The negative binomial distribution generalizes the geometric distribution, modeling failures before achieving a target number of successes. Enter r (number of successes) as the target success count and p (success probability) as the probability per trial.

For r=5, p=0.3, the calculator shows the distribution of failures before the 5th success. The random variable X represents failure count, not total trials. Mean equals r(1-p)/p, expected failures before r successes. For r=5, p=0.3, expect about 11.67 failures before accumulating 5 successes.

Variance equals r(1-p)/p², measuring variability in failure count. The PMF uses binomial coefficients: C(k+r-1, k) × p^r × (1-p)^k where k is the number of failures. The distribution resembles binomial but extends indefinitely since failure count is unbounded.
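
A short verification of the formula and moments, assuming Python with scipy.stats (whose nbinom also counts failures before the r-th success):

```python
from math import comb
from scipy import stats

r, p = 5, 0.3
dist = stats.nbinom(r, p)   # X = failures before the 5th success

k = 10
manual = comb(k + r - 1, k) * p**r * (1 - p)**k  # C(k+r-1,k) * p^r * (1-p)^k

print(manual)       # matches dist.pmf(k)
print(dist.pmf(k))
print(dist.mean())  # 11.666... == r*(1-p)/p
print(dist.var())   # 38.888... == r*(1-p)/p**2
```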

When r=1, negative binomial reduces to geometric distribution. Use this for scenarios like "How many defective items before finding 10 acceptable ones?" or "Customer rejections before closing 3 sales?" The calculator handles both specific failure counts P(X=k) and cumulative queries P(X≤k) for "at most k failures."

Hypergeometric Distribution for Sampling Without Replacement

The hypergeometric distribution models sampling without replacement from a finite population containing two types of items. Enter N (population size) as total items, K (success states) as items of the target type, and n (sample size) as the number drawn.

For drawing cards from a 52-card deck, N=52 total cards, K=13 hearts, n=5 cards drawn. The calculator computes probabilities for k hearts in the 5-card hand. Unlike binomial (which assumes replacement), hypergeometric accounts for changing probabilities as items are removed.

Mean equals nK/N, expected successes in the sample. For N=100 items with K=30 defective, drawing n=10 gives mean 3 defective items. Variance equals n×K×(N-K)×(N-n)/(N²×(N-1)), incorporating the finite population correction factor.

The PMF uses the formula: [C(K,k) × C(N-K, n-k)] / C(N,n) where C denotes combinations. Valid k values range from max(0, n-(N-K)) to min(n,K)—you can't draw more hearts than exist or more cards than sampled.
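
The card-deck example can be checked against this formula directly; the sketch below uses scipy.stats.hypergeom (note its parameter order: population size, success states, sample size):

```python
from math import comb
from scipy import stats

N, K, n = 52, 13, 5              # deck size, hearts, cards drawn
dist = stats.hypergeom(N, K, n)

k = 2                            # exactly 2 hearts in the hand
manual = comb(K, k) * comb(N - K, n - k) / comb(N, n)

print(manual)       # ~0.274280, matches dist.pmf(k)
print(dist.pmf(k))
print(dist.mean())  # 1.25 == n*K/N
```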

Use hypergeometric for quality control (sampling from production batch), lottery probabilities (drawing winning numbers), or ecological sampling (capturing tagged animals). The distribution approaches binomial as N→∞ with K/N=p constant.

Poisson Distribution for Event Counts

The Poisson distribution models the number of events occurring in a fixed interval when events happen independently at a constant average rate. Enter λ (lambda, the average rate) as the expected event count per interval.

For λ=3.5 events per hour, the calculator shows probabilities for 0, 1, 2, 3... events in an hour. P(X=0) = e^(-λ) gives the probability of zero events. The PMF peak occurs near λ—the most likely outcome approximates the average rate.

Mean and variance both equal λ, a unique Poisson property. For λ=5, expect 5 events with variance 5 and standard deviation √5 ≈ 2.236. The distribution is right-skewed for small λ, approaches symmetry as λ increases. When λ>10, normal approximation becomes accurate.

The PMF formula: P(X=k) = (λ^k × e^(-λ)) / k! where k is the event count. The calculator sums probabilities for cumulative queries. P(X≤3) for λ=5 gives probability of 3 or fewer events, useful for capacity planning.
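
For instance, the λ=5 capacity-planning query looks like this in Python (scipy.stats used as an illustrative stand-in for the calculator):

```python
from scipy import stats

lam = 5.0
dist = stats.poisson(lam)

print(dist.pmf(0))   # 0.006738 == e**(-5), probability of zero events
print(dist.cdf(3))   # 0.265026 == P(X <= 3), at most 3 events
print(dist.mean())   # 5.0, mean equals lambda
print(dist.var())    # 5.0, variance equals lambda too
```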

Apply Poisson to phone calls per hour, website visits per day, radioactive decay counts, rare disease cases, customer arrivals, or any scenario with independent events at a constant rate. The distribution arises when the number of potential events in the interval is effectively unlimited but each individual event has a very small probability.

What Are Discrete Probability Distributions?

A discrete probability distribution describes the probabilities of countable outcomes for a random variable. Unlike continuous distributions, which assign probability to intervals, discrete distributions assign probability to specific values: 0 heads, 1 head, 2 heads, and so on.

The probability mass function (PMF) gives P(X=k) for each possible value k. PMF values must be non-negative and sum to 1 across all possible outcomes. The PMF completely characterizes the distribution—knowing all individual probabilities determines every property.
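
These two axioms are easy to verify numerically; a minimal sketch, assuming Python with scipy.stats and an arbitrary binomial as the test case:

```python
from scipy import stats

dist = stats.binom(10, 0.35)              # any discrete distribution works
probs = [dist.pmf(k) for k in range(11)]  # PMF over the full support

assert all(p >= 0 for p in probs)         # non-negativity
print(sum(probs))                         # 1.0, up to floating-point error
```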

Discrete distributions model counted phenomena: number of successes, defects, events, trials, or occurrences. The random variable takes integer values (sometimes all non-negative integers, sometimes a finite range). Examples include coin flips (0 to n heads), website visits (0 to infinity), cards drawn (0 to min(n,K) target cards).

Key properties include mean μ (expected value, center of distribution), variance σ² (spread around mean), and standard deviation σ (variance square root). The cumulative distribution function (CDF) gives P(X≤k), accumulating probabilities from minimum to k.

For comprehensive theory on discrete probability distributions including derivations, properties, and applications, see discrete probability distributions theory.

Choosing the Right Discrete Distribution

Selecting the appropriate distribution requires matching the scenario's structure to distribution assumptions. Discrete Uniform applies when all outcomes are equally likely with known minimum and maximum: dice rolls, random integer selection, lottery numbers.

Binomial suits fixed trial counts with constant success probability and independence: quality control samples, coin flipping, survey yes/no responses, multiple choice guessing. Each trial must be independent with identical success chance.

Geometric models waiting time until first success when trials are independent with constant probability: attempts until sale, rolls until six, iterations until convergence. Use when counting trials, not successes.

Negative Binomial generalizes geometric to multiple successes: failures before r-th success, defects before finding r acceptable items. Reduces to geometric when r=1. Appropriate when counting failures while waiting for fixed success count.

Hypergeometric handles sampling without replacement from finite populations with two categories: card hands, quality sampling from batches, capture-recapture methods. Critical when sample size is substantial relative to population—replacement assumption breaks down.

Poisson applies to rare events in fixed intervals when occurrences are independent with constant rate: phone calls per hour, website hits, accidents per month, rare disease cases. Works when individual event probability is tiny but opportunities are numerous. Approximates binomial when n is large, p is small, and np is moderate.

Understanding PMF vs CDF

The probability mass function (PMF) gives the probability of each exact value: P(X=k). For a die, PMF(3) = 1/6, the probability of rolling exactly 3. PMF bars in calculator charts show these individual probabilities. PMF values must be non-negative and sum to 1.

The cumulative distribution function (CDF) gives P(X≤k), the probability of at most k: all values from minimum through k. For a die, CDF(3) = 3/6 = 0.5, the probability of rolling 1, 2, or 3. CDF is the sum of PMF values up to k.

CDF increases from 0 to 1 as k increases. For discrete distributions, CDF is a step function—flat between integer values, jumping upward at each possible outcome by the PMF value. CDF(k) - CDF(k-1) equals PMF(k), relating the functions.

PMF answers "exactly k" questions while CDF answers "at most k" questions. Calculate P(X>k) as 1 - CDF(k). Calculate P(a < X ≤ b) as CDF(b) - CDF(a). The calculator query options implement these relationships: P(X<k) = CDF(k-1), P(X≤k) = CDF(k), P(X>k) = 1 - CDF(k), P(X≥k) = 1 - CDF(k-1).
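
These identities can be checked mechanically; here is a sketch in Python with scipy.stats (an assumed tool, chosen for illustration) using a Poisson as the test distribution:

```python
from scipy import stats

dist = stats.poisson(4.0)
k, a, b = 3, 2, 6

print(dist.cdf(k) - dist.cdf(k - 1))  # equals PMF(k)
print(dist.pmf(k))
print(1 - dist.cdf(k))                # P(X > k)
print(dist.cdf(b) - dist.cdf(a))      # P(a < X <= b)
print(dist.cdf(k - 1))                # P(X < k)
print(1 - dist.cdf(k - 1))            # P(X >= k)
```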

The CDF visualizer tools display step functions showing cumulative probability accumulation, complementing the PMF bar charts in these calculators.

Related Probability Tools and Calculators

Continuous Distributions Calculator - Compute probabilities for normal, exponential, and uniform continuous distributions with PDF and CDF.

Probability Distribution Explorer - Interactive visualization comparing multiple distributions side-by-side with parameter adjustments.

Expected Value Calculator - Calculate expected values, variance, and higher moments for discrete and continuous distributions.

CDF Visualizer - Generate cumulative distribution function plots for discrete and continuous distributions.

Normal Distribution Calculator - Dedicated normal distribution tool with z-score calculations and percentile lookups.

Binomial Probability Calculator - Standalone binomial calculator with additional features for hypothesis testing.

Poisson Process Simulator - Simulate Poisson processes over time with arrival visualization and statistics.

Probability Mass Function Plotter - Create custom PMF plots for user-defined discrete distributions.

Statistical Distribution Reference - Comprehensive guide to all probability distributions with formulas, properties, and applications.