The spectral decomposition factors a real symmetric matrix as QDQᵀ — an orthogonal matrix of eigenvectors times a diagonal matrix of eigenvalues times the transpose. In outer product form, this becomes a sum of rank-one projections weighted by eigenvalues. The decomposition is the factorization form of the spectral theorem and the foundation of principal component analysis, quadratic form classification, and positive definiteness testing.
A = QDQᵀ

where Q is orthogonal (QᵀQ = QQᵀ = I) and D = diag(λ₁, …, λₙ) is the diagonal matrix of eigenvalues. The columns of Q are orthonormal eigenvectors: Q = [q₁ q₂ ⋯ qₙ] with Aqᵢ = λᵢqᵢ and qᵢ · qⱼ = δᵢⱼ.
This is the diagonalization A = PDP⁻¹ specialized to symmetric matrices, where the crucial bonus is that P can be chosen orthogonal, so P⁻¹ = Pᵀ. The orthogonality of Q is not a convenience; it is a structural guarantee that holds for every real symmetric matrix, regardless of eigenvalue multiplicities.
The Spectral Theorem
The spectral theorem for real symmetric matrices states three facts.
All eigenvalues of a real symmetric matrix are real. No complex eigenvalues can appear.
Eigenvectors corresponding to distinct eigenvalues are orthogonal. If Aqᵢ = λᵢqᵢ and Aqⱼ = λⱼqⱼ with λᵢ ≠ λⱼ, then qᵢ · qⱼ = 0 automatically; no Gram-Schmidt is needed between different eigenspaces.
Every real symmetric matrix is orthogonally diagonalizable. There always exist n orthonormal eigenvectors forming a basis for ℝⁿ, even when eigenvalues are repeated. For a repeated eigenvalue with geometric multiplicity k, the eigenspace is k-dimensional, and Gram-Schmidt within that eigenspace produces k orthonormal eigenvectors.
Together these three facts guarantee that A = QDQᵀ exists for every real symmetric A.
The Outer Product Form
Expanding A = QDQᵀ column by column produces the spectral decomposition in outer product form:

A = λ₁q₁q₁ᵀ + λ₂q₂q₂ᵀ + ⋯ + λₙqₙqₙᵀ
Each term λᵢqᵢqᵢᵀ is a rank-one matrix. The matrix qᵢqᵢᵀ is the projection matrix onto the line spanned by qᵢ: it sends any vector x to (qᵢ · x)qᵢ. Multiplying by λᵢ scales the projection by the eigenvalue.
The spectral decomposition says that A acts by projecting onto each eigenvector direction independently, scaling each projection by the corresponding eigenvalue, and summing the results. There is no interaction between different eigenvector directions; the orthogonality of the qᵢ ensures complete decoupling.
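The outer product form can be checked numerically. The sketch below (using a small symmetric matrix chosen purely for illustration) rebuilds A as the sum of its rank-one projections λᵢqᵢqᵢᵀ:

```python
import numpy as np

# Illustrative 3x3 symmetric matrix (an assumed example, not from the text).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.linalg.eigh is the routine for symmetric matrices: it returns real
# eigenvalues in ascending order and orthonormal eigenvectors as columns of Q.
eigvals, Q = np.linalg.eigh(A)

# Sum of rank-one projections  λ_i q_i q_iᵀ  rebuilds A exactly.
A_rebuilt = sum(lam * np.outer(q, q) for lam, q in zip(eigvals, Q.T))

print(np.allclose(A, A_rebuilt))        # the outer product form reproduces A
print(np.allclose(Q.T @ Q, np.eye(3)))  # Q is orthogonal
```

Each `np.outer(q, q)` term is the orthogonal projection onto one eigenvector line, so the sum is the decoupled action described above.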
Computing the Spectral Decomposition
The computation follows the standard eigenvalue workflow, with one additional step for repeated eigenvalues.
First find the eigenvalues by solving the characteristic equation det(A − λI) = 0. Then, for each eigenvalue λᵢ, find the eigenspace by solving (A − λᵢI)v = 0 via row reduction.
If an eigenvalue has multiplicity greater than 1, apply Gram-Schmidt within its eigenspace to produce an orthonormal basis. Eigenvectors from different eigenspaces are already orthogonal — no cross-eigenspace orthogonalization is needed.
Assemble Q (orthonormal eigenvectors as columns) and D (eigenvalues on the diagonal in matching order).
Worked Example
For A = (3 1; 1 3): eigenvalues are λ₁ = 4, λ₂ = 2, with eigenvectors q₁ = (1/√2)(1, 1)ᵀ and q₂ = (1/√2)(1, −1)ᵀ. Then

A = 4 · (1/2)(1 1; 1 1) + 2 · (1/2)(1 −1; −1 1) = (2 2; 2 2) + (1 −1; −1 1) = (3 1; 1 3).
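The worked example can be verified in a few lines. Note that `np.linalg.eigh` returns eigenvalues in ascending order and eigenvectors only up to sign, so the factors may differ from the hand computation by ordering and sign without changing the product:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigvals, Q = np.linalg.eigh(A)   # ascending order: [2., 4.]
print(eigvals)

# Columns of Q match ±(1,−1)/√2 and ±(1,1)/√2; the product recovers A.
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))   # True
```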
Properties of the Factors
The orthogonality of Q makes every operation on the spectral decomposition cheap and clean.
The inverse is immediate: A⁻¹ = QD⁻¹Qᵀ = Q diag(1/λ₁, …, 1/λₙ) Qᵀ, valid when all eigenvalues are nonzero.
Powers are diagonal: Aᵏ = QDᵏQᵀ = Q diag(λ₁ᵏ, …, λₙᵏ) Qᵀ.
The matrix exponential is: e^(At) = Q e^(Dt) Qᵀ = Q diag(e^(λ₁t), …, e^(λₙt)) Qᵀ.
The trace is tr(A) = λ₁ + ⋯ + λₙ. The determinant is det(A) = λ₁⋯λₙ. The rank is the number of nonzero eigenvalues.
Every property of A reduces to a property of the diagonal D, mediated by the orthogonal rotation Q. This is the computational power of diagonalization — and for symmetric matrices, the orthogonality of Q ensures numerical stability as well.
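These identities translate directly into code: any function of A is applied to the eigenvalues alone and then rotated back by Q. A minimal sketch, reusing the 2×2 example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigvals, Q = np.linalg.eigh(A)

# Apply each function to the eigenvalues, then conjugate by Q.
A_inv  = Q @ np.diag(1.0 / eigvals) @ Q.T      # A⁻¹  (all λᵢ nonzero here)
A_cube = Q @ np.diag(eigvals ** 3)   @ Q.T     # A³
exp_A  = Q @ np.diag(np.exp(eigvals)) @ Q.T    # e^(At) with t = 1

print(np.allclose(A_inv,  np.linalg.inv(A)))              # True
print(np.allclose(A_cube, np.linalg.matrix_power(A, 3)))  # True
print(np.isclose(np.trace(A),      eigvals.sum()))        # True
print(np.isclose(np.linalg.det(A), eigvals.prod()))       # True
```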
Quadratic Forms
A quadratic form f(x) = xᵀAx with A symmetric can be diagonalized by the change of variables x = Qy:

f = yᵀQᵀAQy = yᵀDy = λ₁y₁² + λ₂y₂² + ⋯ + λₙyₙ²
In the eigenvector coordinate system, the quadratic form decouples into a sum of independent squared terms. The eigenvectors define the principal axes of the quadratic surface (ellipsoid, hyperboloid, etc.), and the eigenvalues determine the curvature along each axis.
Positive definite (f > 0 for all x ≠ 0) means all λᵢ > 0: the surface is an ellipsoid. Positive semi-definite (f ≥ 0) means all λᵢ ≥ 0. Indefinite (eigenvalues of both signs) means the surface is a hyperboloid; f takes both positive and negative values.
Classification of quadratic forms reduces entirely to checking the signs of the eigenvalues.
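Since classification reduces to eigenvalue signs, it can be sketched as a small helper. The function name and tolerance are assumptions for illustration, not a standard API:

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-12):
    """Classify xᵀAx by the signs of the eigenvalues of symmetric A.
    (Hypothetical helper; tol guards against floating-point zeros.)"""
    lam = np.linalg.eigvalsh(A)          # real eigenvalues, ascending
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semi-definite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semi-definite"
    return "indefinite"

# Eigenvalues 2, 4 → ellipsoid; eigenvalues 3, −1 → hyperboloid.
print(classify_quadratic_form(np.array([[3.0, 1.0], [1.0, 3.0]])))  # positive definite
print(classify_quadratic_form(np.array([[1.0, 2.0], [2.0, 1.0]])))  # indefinite
```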
Principal Component Analysis
The spectral decomposition of a covariance matrix is the mathematical core of principal component analysis (PCA).
The covariance matrix Σ of a dataset is symmetric positive semi-definite. Its spectral decomposition Σ=QDQT identifies the eigenvectors qi as the principal component directions — the orthogonal axes along which the data varies most — and the eigenvalues λi as the variance captured by each direction.
The first principal component q1 (corresponding to the largest eigenvalue λ1) is the direction of maximum variance. The second q2 is the direction of maximum variance orthogonal to q1, and so on.
Dimensionality reduction follows: projecting the data onto the top k eigenvectors (those with the k largest eigenvalues) captures as much variance as possible in k dimensions. The discarded directions have small eigenvalues and contribute little information. This is PCA — the spectral decomposition applied to the covariance matrix.
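The PCA pipeline described above fits in a few lines of NumPy. The dataset below is synthetic, generated only to illustrate the steps (center, form the covariance matrix, decompose, project onto the top-k eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic dataset: 200 points in 3-D, varying mostly along the first axis.
X = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.2])

Xc = X - X.mean(axis=0)              # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)      # sample covariance matrix (symmetric PSD)

eigvals, Q = np.linalg.eigh(cov)     # ascending order; reverse for largest-first
order = np.argsort(eigvals)[::-1]
eigvals, Q = eigvals[order], Q[:, order]

k = 2
X_reduced = Xc @ Q[:, :k]            # project onto the top-k principal components
print(X_reduced.shape)               # (200, 2)
```

The columns of Q are the principal component directions and eigvals[i] is the variance captured along qᵢ, exactly as in the decomposition Σ = QDQᵀ.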
Spectral Decomposition vs. General Eigendecomposition
The general eigendecomposition writes a diagonalizable matrix as A = PDP⁻¹ with P invertible but not necessarily orthogonal. The spectral decomposition writes a symmetric matrix as A = QDQᵀ with Q orthogonal.
The orthogonality of Q provides three advantages. Inversion is free: Q⁻¹ = Qᵀ, no computation needed. Multiplication preserves norms: ‖Qx‖ = ‖x‖, so numerical errors are not amplified. Projections are orthogonal: the rank-one terms qᵢqᵢᵀ are orthogonal projection matrices, making the outer product form geometrically transparent.
The spectral decomposition exists only for symmetric matrices (real case) or Hermitian matrices (complex case). For non-symmetric matrices, the eigendecomposition PDP⁻¹ may exist (when the matrix is diagonalizable), but P is not orthogonal, and the computational and geometric advantages are lost.
Spectral Decomposition and SVD
For a symmetric positive semi-definite matrix A (all eigenvalues ≥ 0), the spectral decomposition and the singular value decomposition coincide. The singular values are the eigenvalues, and U = V = Q: A = QDQᵀ = QΣQᵀ.
For a general symmetric matrix with negative eigenvalues, the relationship requires a sign adjustment. The singular values are the absolute values |λᵢ|, and the signs are absorbed into U or V: if λᵢ < 0, the corresponding column of U is negated relative to the corresponding column of V.
The SVD generalizes the spectral decomposition to non-symmetric and non-square matrices. Every property that the spectral decomposition provides for symmetric matrices — rank, pseudoinverse, best low-rank approximation, condition number — the SVD provides for arbitrary matrices. The spectral decomposition is the special case where symmetry allows U and V to coincide.
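The sign-adjustment relationship is easy to observe numerically. A sketch using a symmetric matrix with one negative eigenvalue (3 and −1, an assumed example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])           # symmetric, eigenvalues 3 and −1

eigvals, Q = np.linalg.eigh(A)       # [−1., 3.]
U, S, Vt = np.linalg.svd(A)          # singular values descending: [3., 1.]

# Singular values are |λᵢ|, sorted in descending order.
print(np.allclose(np.sort(np.abs(eigvals))[::-1], S))   # True

# Both factorizations reproduce A; the sign of λ = −1 is carried
# by a negated column of U relative to V.
print(np.allclose(U @ np.diag(S) @ Vt, A))              # True
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))       # True
```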