Normal Distribution






Normal Distribution: Natural Variability Around a Typical Value


Many real-world measurements arise from the accumulation of many small, independent influences acting together. The probabilistic experiment behind the normal distribution describes observing a quantity that fluctuates continuously around a typical or central value, with deviations in either direction becoming less likely as they grow larger. The random variable represents a measurement, not a count, and extreme values are possible but increasingly rare.



The Probabilistic Experiment Behind the Normal Distribution


The probabilistic experiment underlying the normal distribution arises when an outcome is shaped by many small, independent influences, none of which dominates the result. Each influence nudges the outcome slightly upward or downward, and the final value reflects the combined effect of all these contributions.

This experiment is not defined by repetition of identical trials, but by aggregation. The core assumption is that deviations occur naturally in both directions, are roughly symmetric, and tend to cancel out when added together. Extreme outcomes are possible, but increasingly unlikely because they require many influences to align in the same direction.

The normal distribution emerges as a consequence of stability: when numerous independent factors interact, the resulting variability concentrates around a central value. The spread reflects how strong those individual influences are, while the center reflects their average balance point.

This experiment explains why many natural and measurement-based quantities cluster around a typical value with gradual falloff on both sides.

Example:

Human height results from genetics, nutrition, environment, and random biological variation. No single factor determines the outcome, but together they produce values concentrated around an average, with fewer extremely short or tall individuals.
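To make the aggregation idea concrete, here is a minimal simulation sketch in Python. It uses only the standard library; the choice of ±1 nudges, 200 influences per outcome, and 10,000 outcomes are arbitrary illustrative assumptions, not values from the text above.

```python
import random
import statistics

# Each outcome is the sum of many small, independent nudges.
# Every nudge is +1 or -1 with equal probability; the counts below
# are arbitrary choices for the illustration.
NUM_NUDGES = 200
NUM_OUTCOMES = 10_000

outcomes = [
    sum(random.choice((-1, 1)) for _ in range(NUM_NUDGES))
    for _ in range(NUM_OUTCOMES)
]

# The totals concentrate around 0 (the influences cancel on average),
# with a spread of roughly sqrt(NUM_NUDGES) ≈ 14.1.
print("mean  ≈", round(statistics.mean(outcomes), 2))
print("stdev ≈", round(statistics.stdev(outcomes), 2))

# Extreme totals require many nudges to align in one direction, so they
# are rare: count outcomes more than 3 spreads from the center.
spread = statistics.stdev(outcomes)
extreme = sum(abs(x) > 3 * spread for x in outcomes)
print("fraction beyond 3 spreads ≈", extreme / NUM_OUTCOMES)
```

Running it shows the totals clustering around a central value with gradual falloff on both sides, exactly the pattern described above.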

Notation


X \sim N(\mu, \sigma^2) — distribution of the random variable (variance notation).

X \sim \text{Normal}(\mu, \sigma^2) — alternative explicit form.

N(\mu, \sigma^2) — used to denote the distribution itself (not the random variable).

N(0, 1) — the standard normal distribution (\mu = 0, \sigma = 1).

Z \sim N(0, 1) — conventional notation for a standard normal random variable.

Note: Some texts use N(\mu, \sigma) with the standard deviation as the second parameter instead of the variance. Always check which convention is being used; many statistical libraries parameterize the distribution by the standard deviation rather than the variance.
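As a small illustration of the convention issue, the sketch below uses scipy.stats (this assumes SciPy is installed). scipy.stats.norm takes loc (the mean) and scale (the standard deviation, not the variance), so a distribution written as N(\mu, \sigma^2) must be given the square root of the variance.

```python
from scipy.stats import norm  # assumes SciPy is available

mu = 10.0                  # mean μ
variance = 4.0             # σ² as written in N(μ, σ²)
sigma = variance ** 0.5    # scipy's `scale` is the standard deviation σ

X = norm(loc=mu, scale=sigma)  # this is N(10, 4) in variance notation

print(X.mean())  # 10.0
print(X.var())   # 4.0
print(X.std())   # 2.0
```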

See All Probability Symbols and Notations

Parameters


μ (mu): mean or center of the distribution, where \mu \in \mathbb{R}

σ (sigma): standard deviation, measuring spread around the mean, where \sigma > 0

The normal distribution is fully characterized by these two parameters.

μ determines the location (where the peak sits on the number line), while σ controls the spread (how wide or narrow the bell curve is).

Variance is \sigma^2, but we typically use \sigma as the primary parameter since it's in the same units as the data.
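A quick numeric sketch of location versus spread (again assuming SciPy is available; the parameter values are arbitrary): shifting \mu moves the whole curve, while changing \sigma stretches distances from it proportionally.

```python
from scipy.stats import norm  # assumes SciPy is available

# mu sets the location: the peak (and median) sits at x = mu.
print(norm(loc=0, scale=1).ppf(0.5))   # 0.0
print(norm(loc=5, scale=1).ppf(0.5))   # 5.0  (same shape, shifted right)

# sigma sets the spread: quantiles move proportionally farther from mu.
print(norm(loc=0, scale=1).ppf(0.975))  # ≈ 1.96
print(norm(loc=0, scale=3).ppf(0.975))  # ≈ 5.88  (= 3 × 1.96)
```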

Probability Density Function (PDF) and Support (Range)

[Figure: Normal (Gaussian) distribution PDF, a bell-shaped curve symmetric around the mean.]

Explanation

The normal distribution, also known as the Gaussian distribution, is the most important probability distribution in statistics. Its probability density function is f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}, where \mu is the mean and \sigma is the standard deviation. The support is the entire real line, x \in \mathbb{R}. The expected value is E[X] = \mu and the variance is \text{Var}(X) = \sigma^2. The normal distribution appears naturally in many phenomena due to the Central Limit Theorem. Applications include measurement errors, heights and weights in populations, IQ scores, financial returns, and any process that results from many small independent random effects.
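The density formula above can be checked numerically. The sketch below (standard-library Python only; \mu = 3 and \sigma = 1.5 are arbitrary illustrative values) implements f(x) directly and integrates it with a midpoint rule over a wide interval to confirm that it is a valid density with mean \mu and variance \sigma^2.

```python
import math

def pdf(x, mu, sigma):
    """f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2*sigma^2))"""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 3.0, 1.5  # arbitrary illustrative parameters

# The support is all of R, but essentially all of the mass lies within a
# few standard deviations of mu, so a wide finite interval suffices here.
lo, hi, n = mu - 10 * sigma, mu + 10 * sigma, 20_000
dx = (hi - lo) / n

total = mean = var = 0.0
for i in range(n):
    x = lo + (i + 0.5) * dx        # midpoint of the i-th slice
    w = pdf(x, mu, sigma) * dx     # probability mass of the slice
    total += w                     # should come out ≈ 1 (valid density)
    mean += x * w                  # should come out ≈ mu
    var += (x - mu) ** 2 * w       # should come out ≈ sigma**2

print(round(total, 4), round(mean, 4), round(var, 4))  # 1.0 3.0 2.25
```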


Cumulative Distribution Function (CDF)

[Figure: Normal (Gaussian) CDF, an S-shaped curve steepest at the mean, visualizing how probability accumulates.]

CDF Explanation

The cumulative distribution function (CDF) of the normal distribution is F(x) = \frac{1}{2}\left[1 + \text{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right], where erf is the error function. The CDF gives the probability P(X \leq x) that a normally distributed random variable is less than or equal to x. The S-shaped curve is symmetric around the mean \mu, where F(\mu) = 0.5. The curve is steepest at the mean and flattens out in the tails. About 68% of values fall within one standard deviation of the mean (F(\mu + \sigma) - F(\mu - \sigma) \approx 0.68), 95% within two standard deviations, and 99.7% within three standard deviations.
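This CDF can be evaluated directly with the standard library's math.erf. The sketch below (the choice of \mu = 100 and \sigma = 15 is an arbitrary, IQ-style illustration) confirms F(\mu) = 0.5 and the 68–95–99.7 rule.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """F(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))"""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 100.0, 15.0  # illustrative values only

print(normal_cdf(mu, mu, sigma))  # 0.5, half the mass lies below the mean

# Empirical (68–95–99.7) rule: probability within k standard deviations.
for k in (1, 2, 3):
    p = normal_cdf(mu + k * sigma, mu, sigma) - normal_cdf(mu - k * sigma, mu, sigma)
    print(f"within {k} standard deviations: {p:.4f}")  # ≈ 0.6827, 0.9545, 0.9973
```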

Expected Value (mean)


Variance and Standard Deviation


Mode and Median


Quantiles/Percentiles


Moment Generating Function


Real-World Examples and Common Applications



Special Cases


Properties