A probability function comes in two forms, depending on whether the random variable takes distinct, countable values or ranges over a continuum. The two forms behave differently, but both serve the same purpose: they describe how probability is distributed over the possible outcomes.
1. Probability Mass Function (PMF) — Discrete Case
The PMF is used when the random variable takes distinct, countable values (like 0, 1, 2, 3… or the faces of a die).
• p(x) gives the probability that X takes the value x, that is, p(x) = P(X = x).
• Each value gets its own probability.
• Summed over all possible values, the probabilities must add up to 1.
Examples: number of heads, number of arrivals, rolling a die, drawing a card from a finite deck.
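The die example above can be sketched in a few lines of Python. This is a minimal illustration (the dictionary-based `pmf` is just one convenient representation, not a standard API): each face gets its own probability, and the values sum to 1.

```python
# A minimal sketch of a PMF: a fair six-sided die.
# Fractions keep the arithmetic exact.
from fractions import Fraction

# Each of the six faces gets probability 1/6.
pmf = {face: Fraction(1, 6) for face in range(1, 7)}

# p(x) is the probability that X equals x — a direct lookup.
p_three = pmf[3]

# All the probability values together must add up to 1.
total = sum(pmf.values())

print(p_three, total)  # 1/6 1
```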
2. Probability Density Function (PDF) — Continuous Case
The PDF is used when the random variable ranges over a continuous interval (like measurements, time, distance).
• f(x) is not itself a probability; it measures how densely probability is packed around x, and it can even exceed 1.
• Actual probabilities come from the area under the curve of f(x) over an interval: P(a ≤ X ≤ b) is the integral of f from a to b.
• The total area under the entire density curve must be 1.
Examples: waiting times, heights, lengths, measurement errors, anything that can take any real value in an interval.
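Both bullet points above can be checked numerically. The sketch below (my own illustration, using a simple midpoint Riemann sum rather than any library routine) uses the uniform density on [0, 0.5], where f(x) = 2 inside the interval: the density value exceeds 1, yet every area it produces is a valid probability and the total area is 1.

```python
# Uniform density on [0, 0.5]: f(x) = 2 there, 0 elsewhere.
# Note f(x) = 2 > 1, so f(x) itself is clearly not a probability.
def f(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

# Probabilities come from the area under the curve, approximated
# here with a midpoint Riemann sum.
def area(a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

p_first_quarter = area(0.0, 0.25)  # P(0 <= X <= 0.25), close to 0.5
total = area(-1.0, 1.0)            # total area, close to 1.0
print(p_first_quarter, total)
```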
3. How They Relate
• Both PMF and PDF describe where probability is located.
• A PMF assigns probability to individual points; a PDF yields probability only over intervals (under a continuous distribution, any single point has probability 0).
• PMF assigns probabilities directly; PDF requires integration to get probabilities.
• They both reflect the same idea: how likely different outcomes are, given the type of random variable we’re working with.
These two forms are simply two versions of the same concept — the probability function — adapted to the nature of the random variable.