Fixed trials with success/failure outcomes and constant probability
Calculate binomial probabilities for n independent trials with constant success probability p. Enter the number of trials and the success probability, then choose a probability type: P(X=k) for exactly k successes, P(X≤k) for at most k, or P(X≥k) for at least k. Displays the complete probability mass function, mean μ=np, variance σ²=np(1-p), and a distribution chart. Perfect for coin flips, quality control with fixed sample sizes, yes/no surveys, or any scenario with repeated independent trials and binary outcomes. Shows all probabilities from 0 to n successes when viewing the full distribution.
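The formulas above can be sketched in plain Python with no external libraries; the coin-flip numbers below are illustrative, not from the calculator itself:

```python
from math import comb

def binomial_pmf(n: int, p: float, k: int) -> float:
    """P(X = k): probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binomial_cdf(n: int, p: float, k: int) -> float:
    """P(X <= k): probability of at most k successes."""
    return sum(binomial_pmf(n, p, i) for i in range(k + 1))

# Example: 10 fair coin flips.
n, p = 10, 0.5
print(binomial_pmf(n, p, 5))   # P(X=5) = 252/1024 ≈ 0.2461
print(binomial_cdf(n, p, 5))   # P(X≤5) ≈ 0.6230
print(n * p, n * p * (1 - p))  # mean 5.0, variance 2.5
```

P(X≥k) follows as 1 − P(X≤k−1), which is how "at least k" is usually computed.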
Model event counts occurring in fixed time or space intervals
Compute Poisson probabilities given average rate λ (lambda). Ideal for modeling rare events: customer arrivals per hour, defects per unit, emails per day, or radioactive decay counts. Input lambda (average rate) and target number of events k. Calculator provides P(X=k) and cumulative probabilities, plus mean μ=λ, variance σ²=λ (equal to mean), and standard deviation. Displays probability mass function chart showing distribution shape. Use when events occur independently at constant average rate with no upper limit on count.
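A minimal sketch of the Poisson PMF and cumulative probability described above, using only the standard library (the arrival-rate example is hypothetical):

```python
from math import exp, factorial

def poisson_pmf(lam: float, k: int) -> float:
    """P(X = k) for a Poisson distribution with rate lambda."""
    return exp(-lam) * lam**k / factorial(k)

def poisson_cdf(lam: float, k: int) -> float:
    """P(X <= k): cumulative probability of at most k events."""
    return sum(poisson_pmf(lam, i) for i in range(k + 1))

# Example: an average of 3 customer arrivals per hour.
lam = 3.0
print(poisson_pmf(lam, 2))  # P(exactly 2 arrivals) ≈ 0.2240
print(poisson_cdf(lam, 2))  # P(at most 2 arrivals) ≈ 0.4232
print(lam, lam**0.5)        # mean 3.0, standard deviation ≈ 1.732
```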
Number of trials until first success in repeated Bernoulli trials
Calculate geometric probabilities for trials-until-success scenarios. Input success probability p to find the probability of first success occurring on trial k. Shows PMF P(X=k)=(1-p)^(k-1)×p, mean μ=1/p (expected trials until success), variance σ²=(1-p)/p², and distribution visualization. Classic applications include: quality control (items inspected until finding defect), sales (calls until first sale), or games (attempts until winning). Memoryless property means past failures don't affect future probability. Perfect for "how long until" questions with constant success probability.
Trials needed to achieve r successes with failures counted
Generalization of geometric distribution for r successes instead of just one. Input r (target successes), p (success probability per trial), and k (number of failures before achieving r successes). Calculates probability of exactly k failures before r-th success using PMF with binomial coefficient. Shows mean μ=r(1-p)/p, variance σ²=r(1-p)/p², complete probability distribution, and chart. Applications include reliability testing (failures before r components work), customer acquisition (trials to get r conversions), or manufacturing (defects before producing r good units). More flexible than geometric for real-world scenarios requiring multiple successes.
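The failures-before-r-th-success PMF described above can be sketched with a single binomial coefficient (the conversion-rate numbers are hypothetical):

```python
from math import comb

def negbinom_pmf(r: int, p: float, k: int) -> float:
    """P(X = k): exactly k failures before the r-th success,
    with success probability p per trial."""
    return comb(k + r - 1, k) * p**r * (1 - p) ** k

# Example: aiming for r = 3 conversions at p = 0.5 per trial.
r, p = 3, 0.5
print(negbinom_pmf(r, p, 2))  # P(2 failures first) = C(4,2)*0.125*0.25 = 0.1875
print(r * (1 - p) / p)        # mean failures: 3.0
print(r * (1 - p) / p**2)     # variance: 6.0
```

Setting r = 1 recovers the geometric case counted in failures: P(X=k) = (1−p)^k·p.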
Sampling without replacement from finite population with two types
Calculate probabilities when sampling without replacement—unlike the binomial, which assumes sampling with replacement so the success probability stays constant. Input N (total population), K (success items in population), n (sample size drawn), and k (successes in sample). Uses hypergeometric PMF with three binomial coefficients. Shows mean μ=nK/N, variance (accounting for finite population correction), and full distribution. Essential for: card games (drawing specific cards from deck), quality control (selecting from finite batch), lottery probabilities, or survey sampling from small populations. Key difference from binomial: probability changes with each draw since items aren't replaced.
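The three-coefficient PMF above translates directly to code; the poker-hand example is illustrative:

```python
from math import comb

def hypergeom_pmf(N: int, K: int, n: int, k: int) -> float:
    """P(X = k): k successes in a sample of n drawn without replacement
    from a population of N items containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Example: probability of exactly 2 aces in a 5-card hand.
N, K, n = 52, 4, 5
print(hypergeom_pmf(N, K, n, 2))  # C(4,2)*C(48,3)/C(52,5) ≈ 0.0399
print(n * K / N)                  # mean aces per hand ≈ 0.385
```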
All integer values between minimum and maximum equally likely
Simplest discrete distribution where every outcome has equal probability. Input minimum value a and maximum value b to calculate uniform probabilities. Each value k in range [a,b] has probability P(X=k)=1/(b-a+1). Shows mean μ=(a+b)/2 (midpoint), variance σ²=((b-a+1)²-1)/12, and bar chart with equal-height bars. Classic example is fair die: a=1, b=6, each outcome probability=1/6. Use for lottery numbers, random selection scenarios, or any situation where all discrete outcomes are equally probable. Foundation for understanding more complex distributions.
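The fair-die example above can be checked directly against the formulas:

```python
def discrete_uniform_pmf(a: int, b: int) -> float:
    """P(X = k) for any integer k in [a, b]: all values equally likely."""
    return 1 / (b - a + 1)

# Example: a fair six-sided die (a = 1, b = 6).
a, b = 1, 6
print(discrete_uniform_pmf(a, b))    # 1/6 ≈ 0.1667 for each face
print((a + b) / 2)                   # mean: 3.5
print(((b - a + 1) ** 2 - 1) / 12)   # variance: 35/12 ≈ 2.9167
```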
Equal probability density across a continuous interval with flat distribution
Calculate probabilities for continuous uniform distribution where every value between minimum a and maximum b has equal probability density. Input bounds a and b, then choose probability type: P(X<x), P(X>x), or P(x₁<X<x₂) for range probabilities. Shows PDF f(x)=1/(b-a) which is constant across the interval, mean μ=(a+b)/2 at the midpoint, and variance σ²=(b-a)²/12. Displays rectangular distribution chart with highlighted probability regions. Perfect for random number generation, modeling situations with no prior information where all values in range are equally likely, or when outcomes are truly random within bounds. Classic example: randomly selecting a point on a line segment.
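Since the density is flat, every probability above is just a length ratio; a minimal sketch on an assumed interval [0, 10]:

```python
def uniform_cdf(a: float, b: float, x: float) -> float:
    """P(X < x) for a continuous uniform distribution on [a, b]."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def uniform_prob_between(a: float, b: float, x1: float, x2: float) -> float:
    """P(x1 < X < x2): the range probability offered by the calculator."""
    return uniform_cdf(a, b, x2) - uniform_cdf(a, b, x1)

# Example: X uniform on [0, 10], so f(x) = 1/(b-a) = 0.1 everywhere.
print(uniform_cdf(0, 10, 3))              # P(X < 3) = 0.3
print(uniform_prob_between(0, 10, 2, 5))  # P(2 < X < 5) = 0.3
print((0 + 10) / 2, (10 - 0) ** 2 / 12)   # mean 5.0, variance ≈ 8.333
```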
Bell curve distribution with mean and standard deviation parameters
Most important continuous distribution modeling natural phenomena and measurement errors. Input mean μ (center) and standard deviation σ (spread) to calculate probabilities for any value or range. Computes P(X<x), P(X>x), or P(x₁<X<x₂) with automatic Z-score calculation showing standardized values. Displays classic bell-shaped curve with mean marked and probability regions shaded. Shows 68-95-99.7 rule: 68% within ±1σ, 95% within ±2σ, 99.7% within ±3σ. Essential for statistical inference, hypothesis testing, confidence intervals, and Central Limit Theorem applications. Use when data clusters symmetrically around mean with most values near center and fewer at extremes. Ubiquitous in: heights, test scores, measurement errors, and aggregated random variables.
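The Z-score standardization and the 68-95-99.7 rule can both be verified with the standard library's error function (the test-score parameters are an illustrative assumption):

```python
from math import erf, sqrt

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """P(X < x) for a normal distribution, via the error function."""
    z = (x - mu) / sigma  # standardized Z-score
    return 0.5 * (1 + erf(z / sqrt(2)))

# Example: test scores with mean 100 and standard deviation 15.
mu, sigma = 100, 15
print(normal_cdf(115, mu, sigma))  # P(X < μ+1σ) ≈ 0.8413

# 68-95-99.7 rule: mass within ±1σ, ±2σ, ±3σ of the mean.
for m in (1, 2, 3):
    p = normal_cdf(mu + m * sigma, mu, sigma) - normal_cdf(mu - m * sigma, mu, sigma)
    print(f"within ±{m}σ: {p:.4f}")  # ≈ 0.6827, 0.9545, 0.9973
```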
Models waiting times and time between events in Poisson processes
Calculate probabilities for time until next event occurs given constant average rate λ. Input rate parameter λ (events per time unit) and query value x to find P(X<x), P(X>x), or range probabilities. Shows PDF f(x)=λe^(-λx) with characteristic decreasing exponential curve, mean μ=1/λ (average wait time), and variance σ²=1/λ². Key feature: memoryless property where P(X>s+t|X>s)=P(X>t)—past waiting doesn't affect future probability. Perfect for: time until next customer arrival, component failure times, radioactive decay intervals, time between earthquakes, or service completion times. Complement to Poisson distribution: if events follow Poisson process with rate λ, time between events follows exponential with same λ.
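A minimal sketch of the exponential CDF and the memoryless property described above, with a hypothetical arrival rate of 2 events per hour:

```python
from math import exp

def exponential_cdf(lam: float, x: float) -> float:
    """P(X < x) for an exponential distribution with rate lambda."""
    return 1 - exp(-lam * x) if x > 0 else 0.0

# Example: arrivals at rate λ = 2 per hour.
lam = 2.0
print(exponential_cdf(lam, 0.5))  # P(wait < 30 min) = 1 - e^-1 ≈ 0.6321
print(1 / lam, 1 / lam**2)        # mean wait 0.5 h, variance 0.25

# Memoryless property: P(X > s+t | X > s) == P(X > t).
def survival(x: float) -> float:
    return exp(-lam * x)  # P(X > x)

s, t = 1.0, 0.5
print(abs(survival(s + t) / survival(s) - survival(t)) < 1e-12)  # True
```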