
A Symmetric Matrix as a Sum of Projections

The spectral decomposition factors a real symmetric matrix as QDQᵀ — an orthogonal matrix of eigenvectors times a diagonal matrix of eigenvalues times the transpose. In outer product form, this becomes a sum of rank-one projections weighted by eigenvalues. The decomposition is the factorization form of the spectral theorem and the foundation of principal component analysis, quadratic form classification, and positive definiteness testing.



What the Spectral Decomposition Is

Every real symmetric matrix $A$ factors as

$$A = QDQ^T$$

where $Q$ is orthogonal ($Q^TQ = QQ^T = I$) and $D = \text{diag}(\lambda_1, \dots, \lambda_n)$ is the diagonal matrix of eigenvalues. The columns of $Q$ are orthonormal eigenvectors: $Q = [\mathbf{q}_1 \; \mathbf{q}_2 \; \cdots \; \mathbf{q}_n]$ with $A\mathbf{q}_i = \lambda_i\mathbf{q}_i$ and $\mathbf{q}_i \cdot \mathbf{q}_j = \delta_{ij}$.

This is the diagonalization $A = PDP^{-1}$ specialized to symmetric matrices, where the crucial bonus is that $P$ can be chosen orthogonal, so $P^{-1} = P^T$. The orthogonality of $Q$ is not a convenience; it is a structural guarantee that holds for every real symmetric matrix, regardless of eigenvalue multiplicities.
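The factorization can be checked numerically. A minimal sketch using NumPy's `eigh`, which is specialized for symmetric matrices (the 2×2 matrix is an illustrative choice):

```python
import numpy as np

# A real symmetric matrix
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# eigh returns real eigenvalues in ascending order and
# orthonormal eigenvectors as the columns of Q.
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)

# Q is orthogonal: Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(2))

# Reconstruct A = Q D Q^T
assert np.allclose(Q @ D @ Q.T, A)
```

Using `eigh` rather than the general `eig` both exploits symmetry for speed and guarantees an orthonormal eigenvector basis.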

The Spectral Theorem

The spectral theorem for real symmetric matrices states three facts.

All eigenvalues of a real symmetric matrix are real. No complex eigenvalues can appear.

Eigenvectors corresponding to distinct eigenvalues are orthogonal. If $\lambda_i \neq \lambda_j$, then $\mathbf{q}_i \cdot \mathbf{q}_j = 0$ automatically; no Gram-Schmidt is needed between different eigenspaces.

Every real symmetric matrix is orthogonally diagonalizable. There always exist $n$ orthonormal eigenvectors forming a basis for $\mathbb{R}^n$, even when eigenvalues are repeated. For a repeated eigenvalue with geometric multiplicity $k$, the eigenspace is $k$-dimensional, and Gram-Schmidt within that eigenspace produces $k$ orthonormal eigenvectors.

Together these three facts guarantee that $A = QDQ^T$ exists for every real symmetric $A$.

The Outer Product Form

Expanding $A = QDQ^T$ column by column produces the spectral decomposition in outer product form:

$$A = \lambda_1 \mathbf{q}_1\mathbf{q}_1^T + \lambda_2 \mathbf{q}_2\mathbf{q}_2^T + \cdots + \lambda_n \mathbf{q}_n\mathbf{q}_n^T$$

Each term $\lambda_i \mathbf{q}_i\mathbf{q}_i^T$ is a rank-one matrix. The matrix $\mathbf{q}_i\mathbf{q}_i^T$ is the projection matrix onto the line spanned by $\mathbf{q}_i$: it sends any vector $\mathbf{x}$ to $(\mathbf{q}_i \cdot \mathbf{x})\mathbf{q}_i$. Multiplying by $\lambda_i$ scales the projection by the eigenvalue.

The spectral decomposition says that $A$ acts by projecting onto each eigenvector direction independently, scaling each projection by the corresponding eigenvalue, and summing the results. There is no interaction between different eigenvector directions; the orthogonality of the $\mathbf{q}_i$'s ensures complete decoupling.
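The outer product form can be verified directly: rebuild $A$ as a sum of eigenvalue-weighted rank-one projections. A NumPy sketch with an illustrative 2×2 symmetric matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigvals, Q = np.linalg.eigh(A)

# A = sum_i lambda_i * q_i q_i^T, one rank-one projection per eigenvector
reconstruction = sum(lam * np.outer(q, q) for lam, q in zip(eigvals, Q.T))
assert np.allclose(reconstruction, A)

# Each q_i q_i^T is an orthogonal projection: symmetric and idempotent
P0 = np.outer(Q[:, 0], Q[:, 0])
assert np.allclose(P0, P0.T)
assert np.allclose(P0 @ P0, P0)
```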

Computing the Spectral Decomposition

The computation follows the standard eigenvalue workflow, with one additional step for repeated eigenvalues.

Find the eigenvalues by solving the characteristic equation $\det(A - \lambda I) = 0$. All roots are real.

For each eigenvalue $\lambda_i$, find the eigenspace by solving $(A - \lambda_i I)\mathbf{v} = \mathbf{0}$ via row reduction.

If an eigenvalue has multiplicity greater than $1$, apply Gram-Schmidt within its eigenspace to produce an orthonormal basis. Eigenvectors from different eigenspaces are already orthogonal; no cross-eigenspace orthogonalization is needed.

Assemble $Q$ (orthonormal eigenvectors as columns) and $D$ (eigenvalues on the diagonal in matching order).
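The workflow, including the Gram-Schmidt step, can be traced on a 3×3 matrix with a repeated eigenvalue. The matrix and the raw eigenspace basis below are illustrative choices; NumPy is used only to verify the assembled factorization:

```python
import numpy as np

# Symmetric matrix with eigenvalues 1, 1, 4 (lambda = 1 is repeated)
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# Raw (non-orthogonal) basis for the lambda = 1 eigenspace, read off from
# row-reducing A - I: any vector orthogonal to (1, 1, 1) works.
v1 = np.array([1.0, -1.0, 0.0])
v2 = np.array([1.0, 0.0, -1.0])

# Gram-Schmidt within the repeated eigenspace
q1 = v1 / np.linalg.norm(v1)
u2 = v2 - (v2 @ q1) * q1
q2 = u2 / np.linalg.norm(u2)

# The lambda = 4 eigenvector is orthogonal to the others automatically
v3 = np.array([1.0, 1.0, 1.0])
q3 = v3 / np.linalg.norm(v3)

Q = np.column_stack([q1, q2, q3])
D = np.diag([1.0, 1.0, 4.0])
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.allclose(Q @ D @ Q.T, A)
```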

Worked Example


For $A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$: eigenvalues are $\lambda_1 = 4$, $\lambda_2 = 2$. Eigenvectors: $\mathbf{q}_1 = \frac{1}{\sqrt{2}}(1, 1)^T$, $\mathbf{q}_2 = \frac{1}{\sqrt{2}}(1, -1)^T$. Then

$$A = 4 \cdot \tfrac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} + 2 \cdot \tfrac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix} + \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}.$$

Properties of the Factors

The orthogonality of $Q$ makes every operation on the spectral decomposition cheap and clean.

The inverse is immediate: $A^{-1} = QD^{-1}Q^T = Q\,\text{diag}(1/\lambda_1, \dots, 1/\lambda_n)\,Q^T$, valid when all eigenvalues are nonzero.

Powers are diagonal: $A^k = QD^kQ^T = Q\,\text{diag}(\lambda_1^k, \dots, \lambda_n^k)\,Q^T$.

The matrix exponential is $e^{At} = Qe^{Dt}Q^T = Q\,\text{diag}(e^{\lambda_1 t}, \dots, e^{\lambda_n t})\,Q^T$.

The trace is $\text{tr}(A) = \lambda_1 + \cdots + \lambda_n$. The determinant is $\det(A) = \lambda_1 \cdots \lambda_n$. The rank is the number of nonzero eigenvalues.

Every property of $A$ reduces to a property of the diagonal $D$, mediated by the orthogonal rotation $Q$. This is the computational power of diagonalization, and for symmetric matrices the orthogonality of $Q$ ensures numerical stability as well.
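These identities can be exercised in a few lines of NumPy (the 2×2 matrix and the value of $t$ are illustrative choices):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
w, Q = np.linalg.eigh(A)

# Inverse: A^{-1} = Q diag(1/lambda_i) Q^T (all eigenvalues are nonzero here)
A_inv = Q @ np.diag(1.0 / w) @ Q.T
assert np.allclose(A @ A_inv, np.eye(2))

# Powers: A^k = Q diag(lambda_i^k) Q^T
A_cubed = Q @ np.diag(w**3) @ Q.T
assert np.allclose(A_cubed, np.linalg.matrix_power(A, 3))

# Matrix exponential: e^{At} = Q diag(e^{lambda_i t}) Q^T
t = 0.5
expAt = Q @ np.diag(np.exp(w * t)) @ Q.T

# Trace, determinant, and rank from the eigenvalues alone
assert np.isclose(w.sum(), np.trace(A))
assert np.isclose(w.prod(), np.linalg.det(A))
assert np.count_nonzero(~np.isclose(w, 0.0)) == np.linalg.matrix_rank(A)
```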

Quadratic Forms

A quadratic form $f(\mathbf{x}) = \mathbf{x}^TA\mathbf{x}$ with $A$ symmetric can be diagonalized by the change of variables $\mathbf{x} = Q\mathbf{y}$:

$$f = \mathbf{y}^TQ^TAQ\mathbf{y} = \mathbf{y}^TD\mathbf{y} = \lambda_1 y_1^2 + \lambda_2 y_2^2 + \cdots + \lambda_n y_n^2$$


In the eigenvector coordinate system, the quadratic form decouples into a sum of independent squared terms. The eigenvectors define the principal axes of the quadratic surface (ellipsoid, hyperboloid, etc.), and the eigenvalues determine the curvature along each axis.

Positive definite ($f > 0$ for $\mathbf{x} \neq \mathbf{0}$) means all $\lambda_i > 0$; the surface is an ellipsoid. Positive semi-definite ($f \geq 0$) means all $\lambda_i \geq 0$. Indefinite (eigenvalues of both signs) means the surface is a hyperboloid: $f$ takes both positive and negative values.

Classification of quadratic forms reduces entirely to checking the signs of the eigenvalues.
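This sign check is easy to automate. A sketch built on NumPy's `eigvalsh`; the function name and tolerance are my own choices:

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-10):
    """Classify x^T A x by the signs of the eigenvalues of symmetric A."""
    w = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

# Eigenvalues 4 and 2: positive definite
assert classify_quadratic_form(np.array([[3.0, 1.0], [1.0, 3.0]])) == "positive definite"
# Eigenvalues 3 and -1: indefinite
assert classify_quadratic_form(np.array([[1.0, 2.0], [2.0, 1.0]])) == "indefinite"
```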

Principal Component Analysis

The spectral decomposition of a covariance matrix is the mathematical core of principal component analysis (PCA).

The covariance matrix $\Sigma$ of a dataset is symmetric positive semi-definite. Its spectral decomposition $\Sigma = QDQ^T$ identifies the eigenvectors $\mathbf{q}_i$ as the principal component directions (the orthogonal axes along which the data varies most) and the eigenvalues $\lambda_i$ as the variance captured by each direction.

The first principal component $\mathbf{q}_1$ (corresponding to the largest eigenvalue $\lambda_1$) is the direction of maximum variance. The second, $\mathbf{q}_2$, is the direction of maximum variance orthogonal to $\mathbf{q}_1$, and so on.

Dimensionality reduction follows: projecting the data onto the top $k$ eigenvectors (those with the $k$ largest eigenvalues) captures as much variance as possible in $k$ dimensions. The discarded directions have small eigenvalues and contribute little information. This is PCA: the spectral decomposition applied to the covariance matrix.
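A minimal PCA along these lines, on synthetic data (the dataset, seed, and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data stretched along the direction (1, 1)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.5],
                                          [1.5, 2.0]])

# Covariance matrix of the centered data
Xc = X - X.mean(axis=0)
Sigma = Xc.T @ Xc / (len(Xc) - 1)

# Spectral decomposition; eigh returns eigenvalues ascending,
# so reorder to put the largest-variance directions first.
w, Q = np.linalg.eigh(Sigma)
order = np.argsort(w)[::-1]
variances, components = w[order], Q[:, order]

# The first principal component should be close to (1, 1)/sqrt(2)
assert abs(components[:, 0] @ np.array([1.0, 1.0]) / np.sqrt(2)) > 0.9

# Project onto the top k = 1 component to reduce dimensionality
k = 1
X_reduced = Xc @ components[:, :k]
assert X_reduced.shape == (500, 1)
```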

Spectral Decomposition vs. General Eigendecomposition

The general eigendecomposition writes a diagonalizable matrix as $A = PDP^{-1}$ with $P$ invertible but not necessarily orthogonal. The spectral decomposition writes a symmetric matrix as $A = QDQ^T$ with $Q$ orthogonal.

The orthogonality of $Q$ provides three advantages. Inversion is free: $Q^{-1} = Q^T$, no computation needed. Multiplication preserves norms: $\|Q\mathbf{x}\| = \|\mathbf{x}\|$, so numerical errors are not amplified. Projections are orthogonal: the rank-one terms $\mathbf{q}_i\mathbf{q}_i^T$ are orthogonal projection matrices, making the outer product form geometrically transparent.

The spectral decomposition exists only for symmetric matrices (real case) or Hermitian matrices (complex case). For non-symmetric matrices, the eigendecomposition $PDP^{-1}$ may exist (when the matrix is diagonalizable), but $P$ is not orthogonal, and the computational and geometric advantages are lost.

Spectral Decomposition and SVD

For a symmetric positive semi-definite matrix $A$ (all eigenvalues $\geq 0$), the spectral decomposition and the singular value decomposition coincide. The singular values are the eigenvalues, and $U = V = Q$: $A = QDQ^T = Q\Sigma Q^T$.

For a general symmetric matrix with negative eigenvalues, the relationship requires a sign adjustment. The singular values are the absolute values $|\lambda_i|$, and the signs are absorbed into $U$ or $V$. If $\lambda_i < 0$, the corresponding column of $U$ is negated relative to the corresponding column of $V$.
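The sign adjustment can be observed by comparing `eigh` and `svd` on a symmetric matrix with a negative eigenvalue (the 2×2 example is an illustrative choice):

```python
import numpy as np

# Symmetric matrix with eigenvalues 3 and -1
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

w, Q = np.linalg.eigh(A)       # eigenvalues ascending: [-1, 3]
U, s, Vt = np.linalg.svd(A)    # singular values descending: [3, 1]

# Singular values are the absolute eigenvalues, sorted descending
assert np.allclose(s, np.sort(np.abs(w))[::-1])

# The sign of the negative eigenvalue is absorbed into U (or V),
# so the SVD still reconstructs A exactly
assert np.allclose(U @ np.diag(s) @ Vt, A)
```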

The SVD generalizes the spectral decomposition to non-symmetric and non-square matrices. Every property that the spectral decomposition provides for symmetric matrices (rank, pseudoinverse, best low-rank approximation, condition number) the SVD provides for arbitrary matrices. The spectral decomposition is the special case where symmetry allows $U$ and $V$ to coincide.