Markov's inequality provides an upper bound on the probability that a non-negative random variable exceeds a certain threshold, using only the expected value. For any non-negative random variable X and positive threshold a: P(X ≥ a) ≤ E[X] / a.
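The statement above can be checked empirically. A minimal sketch, assuming X follows an exponential distribution with mean 1 (so E[X] = 1), comparing the simulated tail probability against the Markov bound E[X] / a:

```python
import random

random.seed(0)
n = 100_000
a = 3.0

# X ~ Exponential(rate 1), so E[X] = 1 and X is non-negative
samples = [random.expovariate(1.0) for _ in range(n)]

mean = sum(samples) / n
empirical = sum(x >= a for x in samples) / n  # estimate of P(X >= a)
markov_bound = mean / a                        # E[X] / a

print(f"empirical P(X >= {a}): {empirical:.4f}")   # true value is e^-3, about 0.0498
print(f"Markov bound E[X]/a:   {markov_bound:.4f}")  # roughly 1/3
```

The empirical tail probability stays below the bound, as the inequality guarantees, though here the bound is several times larger than the true probability.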
How Markov's Inequality Works
Markov's inequality states that for a non-negative random variable X, the probability that X is at least a cannot exceed the expected value divided by a. The bound becomes tighter as a grows relative to E[X].
The inequality requires X to be non-negative and provides useful information only when a > E[X]; when a ≤ E[X], the bound is at least 1 and tells us nothing, since every probability is at most 1. Even when it applies, the bound can be quite loose for many distributions.
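To see how loose the bound can be, consider again X ~ Exponential(1), where the exact tail probability P(X ≥ a) = e^(-a) is known in closed form. A small sketch comparing it to the Markov bound 1/a (since E[X] = 1):

```python
import math

# Exact tail vs Markov bound for X ~ Exponential(1), where E[X] = 1
for a in [1, 2, 5, 10]:
    exact = math.exp(-a)   # P(X >= a) for Exponential(1)
    bound = 1.0 / a        # Markov bound E[X] / a
    print(f"a={a:2d}  exact={exact:.5f}  bound={bound:.3f}  ratio={bound / exact:.1f}x")
```

At a = 1 = E[X] the bound is exactly 1 and carries no information; for larger a the bound does shrink, but only like 1/a, while the true tail decays exponentially, so the gap between them grows rapidly.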
Markov's inequality is used in algorithm analysis for average-case complexity, in queueing theory for bounding service times, and as a building block for more sophisticated concentration inequalities in probability theory, such as Chebyshev's inequality and Chernoff bounds.