

Geometric Distribution Explorer


Modify Parameters and See Results

Number of trials until the first success

Parameters

  • Success Probability (p): 0.30 (the probability of success on each trial)
  • Failure Probability (q = 1 - p): 0.70 (the probability of failure on each trial)
  • Expected Trials (E[X]): 3.33 (the average number of trials needed to get the first success)

Statistics

  • Expected Value: 3.3333
  • Variance: 7.7778
  • Std Deviation: 2.7889
  • Mode: 1


Real-World Applications

  • Number of coin flips until you get heads
  • Number of sales calls until you make a sale
  • Number of rolls of a die until you get a 6
  • Number of attempts until passing a test
  • Number of products inspected until finding a defect







Adjusting Success Probability

Use the p slider to set the probability of success on each trial. Values range from 0.01 (very rare success) to 0.99 (almost certain success), controlling how quickly you expect to see the first success.

As you increase p, the distribution becomes more concentrated at k = 1, meaning the first success is more likely to occur on the first trial. As p decreases, the distribution spreads out with longer waiting times becoming more probable.

Watch how the expected value E[X] = 1/p changes with the slider. At p = 0.5, expect 2 trials on average. At p = 0.1, expect 10 trials. The relationship is perfectly reciprocal.
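
To make the reciprocal relationship concrete, here is a tiny Python sketch (plain Python, unrelated to the page's own implementation) that tabulates E[X] = 1/p for a few slider values:

```python
# E[X] = 1/p: the expected number of trials until the first success.
for p in (0.1, 0.3, 0.5, 0.9):
    print(f"p = {p:.2f}  ->  E[X] = {1 / p:.2f} trials on average")
```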

Reading the PMF Visualization

The PMF chart shows probability bars starting at k = 1 (first trial). The distribution always has its mode at k = 1, with probabilities decreasing exponentially for higher k values.

Each bar represents P(X = k), the probability that the first success occurs exactly on trial k. The height decreases by a factor of (1-p) for each subsequent trial, creating the characteristic exponentially decaying pattern.

The displayed range extends to about 30 trials by default, but the distribution theoretically continues to infinity. Probabilities become negligible beyond a few multiples of the mean 1/p.
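
A minimal sketch of how those bar heights could be computed; this is an illustration of the formula, not the page's own code:

```python
def geometric_pmf(k: int, p: float) -> float:
    """P(X = k) = p * (1 - p)**(k - 1) for k = 1, 2, 3, ..."""
    return p * (1 - p) ** (k - 1)

p = 0.3
for k in range(1, 7):
    print(f"P(X = {k}) = {geometric_pmf(k, p):.4f}")
# Each bar is (1 - p) = 0.7 times the previous one: 0.3000, 0.2100, 0.1470, 0.1029, ...
```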

Understanding the CDF Display

The CDF curve shows P(X ≤ k), the probability that success occurs within the first k trials. Unlike the smooth CDF of a continuous distribution, this one appears as a step function that jumps at each integer value.

The CDF rises quickly when p is large, reaching values near 1 within just a few trials. For small p, the CDF rises slowly, indicating that many trials might be needed before seeing success.

At any point k, the CDF value equals 1 - (1-p)^k, providing a closed-form expression for cumulative probabilities without needing to sum individual terms.
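
The closed form is cheap to evaluate and easy to cross-check against a sum of PMF terms. A short sketch, reusing the p = 0.3 example:

```python
def geometric_cdf(k: int, p: float) -> float:
    """P(X <= k) = 1 - (1 - p)**k for k = 1, 2, 3, ..."""
    return 1 - (1 - p) ** k

p = 0.3
for k in (1, 3, 5, 10):
    pmf_sum = sum(p * (1 - p) ** (i - 1) for i in range(1, k + 1))  # term-by-term sum
    print(f"P(X <= {k:2d}) = {geometric_cdf(k, p):.4f}   (sum of PMF terms: {pmf_sum:.4f})")
```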

Computing Exact Probabilities

Enter trial number k in the Point Probability calculator to find P(X = k) using the formula p(1-p)^(k-1). This gives the exact probability that the first success occurs on trial k.

The calculation accounts for k-1 failures (each with probability 1-p) followed by one success (probability p). For example, with p = 0.3, the probability of first success on trial 3 is 0.7² × 0.3 = 0.147.

Try different k values to see how probability decays. Each additional trial multiplies the previous probability by (1-p), creating the geometric decay that gives this distribution its name.
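
To reproduce the worked example above, here is a minimal check; scipy.stats.geom is used purely as an independent reference (it uses the same trial-number parameterization, with support starting at k = 1):

```python
from scipy.stats import geom

p, k = 0.3, 3
manual = p * (1 - p) ** (k - 1)   # 0.7**2 * 0.3 = 0.147
library = geom.pmf(k, p)          # scipy's geom: support k = 1, 2, ...
print(f"P(X = {k}) = {manual:.3f}   (scipy: {library:.3f})")
```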

Calculating Survival Probabilities

Use P(X > k) to find the probability of needing more than k trials. This "survival probability" equals (1-p)^k, the failure probability raised to the kth power.

The memoryless property appears here: P(X > n+k | X > n) = P(X > k). If you've already failed n times, the probability of failing k more times is the same as if you were starting fresh.

P(X ≥ k) includes k itself, computed as (1-p)^(k-1). This subtle difference matters for threshold questions like "at least k trials needed."
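
A brief sketch of both tail probabilities, again with p = 0.3 for illustration:

```python
p = 0.3

def more_than(k: int) -> float:
    """P(X > k): the first k trials all fail."""
    return (1 - p) ** k

def at_least(k: int) -> float:
    """P(X >= k): the first k - 1 trials all fail."""
    return (1 - p) ** (k - 1)

for k in (1, 2, 5):
    print(f"P(X > {k}) = {more_than(k):.4f}   P(X >= {k}) = {at_least(k):.4f}")
# Note that P(X >= k) = P(X > k - 1), which is why the exponents differ by one.
```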

Range Probability Calculations

The range calculator finds P(a ≤ X ≤ b), the probability that first success occurs between trials a and b inclusive. This equals the difference in CDF values: F(b) - F(a-1).

Four boundary options handle edge cases:
[a, b] - Both endpoints included
(a, b) - Both endpoints excluded
[a, b) - Include a, exclude b
(a, b] - Exclude a, include b

These calculations help answer practical questions like "What's the probability I'll need between 5 and 10 attempts?"
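
A sketch of how such a range probability can be computed from the closed-form CDF. The mapping of the four bracket options onto integer endpoints below is an assumption made for illustration, not a description of the calculator's internals:

```python
def geometric_cdf(k: int, p: float) -> float:
    """P(X <= k); equals 0 for k < 1."""
    return 0.0 if k < 1 else 1 - (1 - p) ** k

def range_probability(a: int, b: int, p: float, bounds: str = "[]") -> float:
    """Probability that the first success lands between trials a and b.

    bounds: "[]", "()", "[)" or "(]" selects which endpoints are included.
    """
    lo = a if bounds[0] == "[" else a + 1   # smallest included trial
    hi = b if bounds[1] == "]" else b - 1   # largest included trial
    if lo > hi:
        return 0.0
    return geometric_cdf(hi, p) - geometric_cdf(lo - 1, p)

print(f"P(5 <= X <= 10) = {range_probability(5, 10, 0.3):.4f}")
```

With p = 0.3 this evaluates to 0.7^4 - 0.7^10 ≈ 0.212, so there is roughly a 21% chance that between 5 and 10 attempts are needed.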

What is the Geometric Distribution?

The geometric distribution models the number of trials needed until the first success occurs in a sequence of independent Bernoulli trials. It's the discrete analog of the exponential distribution.

Each trial must be independent with constant success probability p. The distribution counts the trial number on which success first appears, starting from trial 1 and extending theoretically to infinity.

Applications include reliability testing (trials until component failure), sales (calls until closing a deal), and quality control (inspections until finding a defect). For detailed theory and proofs, see the geometric distribution theory page.
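
A quick way to connect the definition to the statistics above is a small simulation: run independent Bernoulli trials, record the trial on which the first success lands, and compare the sample mean with 1/p. A minimal sketch (the sample size and p are chosen arbitrarily):

```python
import random

def trials_until_first_success(p: float) -> int:
    """Run independent Bernoulli(p) trials and return the index of the first success."""
    k = 1
    while random.random() >= p:   # random.random() < p counts as a success
        k += 1
    return k

p, n = 0.3, 100_000
sample_mean = sum(trials_until_first_success(p) for _ in range(n)) / n
print(f"simulated mean: {sample_mean:.3f}   theoretical 1/p: {1 / p:.3f}")
```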

The Memoryless Property Explained

The geometric distribution is the only discrete distribution with the memoryless property: past failures don't affect future probabilities. Mathematically, P(X > n + k | X > n) = P(X > k).

This means if you've already failed n times, the probability distribution for future trials is identical to starting fresh. Each trial is a "clean slate" - previous outcomes provide no information about when success will occur.

This property arises because trials are independent with constant probability. It's what makes the geometric distribution appropriate for modeling truly random processes with no "memory" or aging effects.
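
The property follows directly from the survival function: P(X > j) = (1-p)^j, so the conditional probability is a ratio of two powers of (1-p). A short numerical check (n and k chosen arbitrarily):

```python
p, n, k = 0.3, 4, 3

def more_than(j: int) -> float:
    """P(X > j) = (1 - p)**j."""
    return (1 - p) ** j

conditional = more_than(n + k) / more_than(n)   # P(X > n + k | X > n)
print(f"P(X > {n + k} | X > {n}) = {conditional:.4f}")
print(f"P(X > {k})               = {more_than(k):.4f}")
# Both equal (1 - p)**k = 0.7**3 = 0.343: past failures carry no information.
```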

Mean, Variance, and Statistics

The mean E[X] = 1/p gives the expected number of trials until first success. With p = 0.2, expect 5 trials on average. The mean is always the reciprocal of the success probability.

The variance equals (1-p)/p², measuring spread around the mean. Higher variance means more uncertainty about when success will occur. The standard deviation √((1-p)/p²) = √(1-p)/p provides a more interpretable measure of spread.

The mode is always 1 regardless of p - the most likely outcome is success on the first trial. However, when p < 0.5, the mean exceeds 2, showing the distribution's right skew.
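
These formulas are straightforward to verify; the sketch below computes them directly and, as an optional cross-check, compares against scipy.stats.geom (which uses the same trial-number parameterization):

```python
from math import sqrt
from scipy.stats import geom

p = 0.3
mean = 1 / p                    # expected trials until first success
variance = (1 - p) / p ** 2     # spread around the mean
std_dev = sqrt(variance)        # sqrt((1 - p) / p^2) = sqrt(1 - p) / p

print(f"mean = {mean:.4f}   (scipy: {geom.mean(p):.4f})")
print(f"var  = {variance:.4f}   (scipy: {geom.var(p):.4f})")
print(f"std  = {std_dev:.4f}   (scipy: {geom.std(p):.4f})")
# At p = 0.30 these match the Statistics panel: 3.3333, 7.7778, 2.7889 (mode = 1).
```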

Related Distributions and Calculators

The negative binomial distribution generalizes the geometric to count trials until the rth success; the geometric is the special case r = 1. Of the two, only the geometric is memoryless.

The exponential distribution is the continuous version of the geometric, modeling time until an event rather than trial number. It also has the memoryless property.

Related Tools:

Negative Binomial Calculator - Trials until r successes

Binomial Distribution Calculator - Fixed trials, count successes

Exponential Distribution Calculator - Continuous waiting times

Discrete Probability Distributions - Overview and comparisons