Although Markov’s inequality always holds under its assumptions — the random variable must be nonnegative and have a finite mean — the bounds it provides are often very loose.
Because it uses only the expected value, the inequality ignores how values are distributed around that average. As a result, the bound may be far larger than the true probability, especially when the random variable has light tails or is tightly concentrated.
Markov’s inequality is also uninformative when the threshold is close to the expected value, since the bound may approach or exceed 1. In such cases, it provides little practical insight.
For these reasons, Markov’s inequality is best viewed as a guarantee of what cannot happen too often, rather than a precise estimate of what does happen.
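A small numeric sketch makes the looseness concrete. Assuming (for illustration) that X follows an Exponential(1) distribution, the mean is E[X] = 1 and the exact tail probability is P(X ≥ a) = e^(−a), so Markov's bound E[X]/a can be compared directly against the truth:

```python
import math

# Markov's inequality: for nonnegative X with mean mu, P(X >= a) <= mu / a.
# Illustration with X ~ Exponential(1): mu = 1 and the exact tail is
# P(X >= a) = exp(-a), so the bound can be checked against the truth.
mu = 1.0
for a in [1.0, 2.0, 5.0, 10.0]:
    markov_bound = min(mu / a, 1.0)  # bound is vacuous (>= 1) once a <= mu
    true_tail = math.exp(-a)         # exact tail probability for Exponential(1)
    print(f"a={a:4.1f}  Markov bound={markov_bound:.4f}  true P(X>=a)={true_tail:.6f}")
```

At a = 1 (the mean itself) the bound is 1 and says nothing; at a = 10 the bound is 0.1 while the true probability is about 0.000045, more than two thousand times smaller. Both failure modes from the text — vacuity near the mean and severe looseness for light-tailed distributions — appear in a single table.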