Decompositions






Factoring Matrices into Simpler Pieces

A matrix decomposition writes a matrix as a product of simpler matrices — triangular, diagonal, orthogonal, or some combination — whose structure makes subsequent computation cheap. The upfront cost of factoring is repaid every time the factors are used to solve a system, compute eigenvalues, approximate data, or analyze stability. Decompositions are the computational backbone of applied linear algebra.



What a Decomposition Is

A matrix decomposition (or factorization) expresses a matrix A as a product of two or more matrices with known, exploitable structure. The factors are typically triangular, diagonal, or orthogonal — matrices for which solving systems, computing determinants, and extracting eigenvalues are either trivial or dramatically simplified.

A decomposition does not change the matrix. It reveals the internal structure of A by splitting it into interpretable components. The product of the factors always reproduces A exactly — the information is rearranged, not altered.

Every decomposition represents a trade: an upfront cost to compute the factors, followed by cheap operations that use them. The factorization is done once; the payoff is reaped every time the factors are applied.

Why Decompositions Matter

Decompositions transform hard problems into sequences of easy ones.

Solving linear systems: after a single LU factorization, each right-hand side costs only two cheap triangular solves. The Cholesky decomposition does the same at half the cost when the matrix is symmetric positive definite.

Least squares: the QR decomposition avoids squaring the condition number, providing a numerically stable solution to overdetermined systems.

Eigenvalue computation: the QR algorithm — an iterative process built on repeated QR factorizations — is the standard method for finding eigenvalues of general matrices, entirely avoiding the characteristic polynomial.

Dimensionality reduction: the singular value decomposition identifies the most important directions in the data and discards the rest, enabling compression, noise removal, and low-rank approximation.

Numerical stability: orthogonal factors (Q in QR, U and V in SVD) preserve lengths and condition numbers, keeping rounding errors under control throughout the computation.

LU Decomposition

The LU decomposition factors a square matrix as A = LU (or PA = LU with row pivoting), where L is lower triangular and U is upper triangular. It captures the entire process of Gaussian elimination in matrix form: U is the echelon form, and L stores the multipliers used during elimination.

Solving Ax = b reduces to two triangular substitutions — forward (Ly = b) then backward (Ux = y) — each costing O(n^2). The factorization itself costs about (2/3)n^3 operations, but each additional right-hand side costs only O(n^2). For k systems sharing the same coefficient matrix, LU amortizes the dominant cost over all k solves.
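The factor-once, solve-many pattern can be sketched in NumPy. This is the textbook Doolittle algorithm without the row pivoting a production solver would use, so it assumes no zero pivots arise; the matrix and right-hand sides are illustrative:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting; adequate for this well-conditioned example."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # multiplier stored in L
            U[i, :] -= L[i, k] * U[k, :]    # elimination step recorded in U
    return L, U

def solve_lu(L, U, b):
    """Forward substitution (L y = b), then back substitution (U x = y)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]      # L has unit diagonal, no division
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)

# Factor once, then reuse the same L and U for several right-hand sides.
for b in (np.array([1.0, 0.0, 0.0]), np.array([2.0, -1.0, 3.0])):
    assert np.allclose(A @ solve_lu(L, U, b), b)
```

Each call to solve_lu touches only the triangles, which is the O(n^2) step; the O(n^3) elimination happens exactly once.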

QR Decomposition

The QR decomposition factors an m × n matrix as A = QR, where Q has orthonormal columns and R is upper triangular. It is produced by the Gram-Schmidt process, by Householder reflections, or by Givens rotations.

QR is the standard method for least-squares problems: the normal equations A^T A x̂ = A^T b simplify to the triangular system R x̂ = Q^T b, avoiding the condition-number squaring that comes from forming A^T A directly.
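A short NumPy illustration of the triangular-system route (the overdetermined matrix here is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))   # overdetermined: 20 equations, 3 unknowns
b = rng.standard_normal(20)

# Reduced QR: Q is 20x3 with orthonormal columns, R is 3x3 upper triangular.
Q, R = np.linalg.qr(A)

# Solve the small triangular system R x = Q^T b instead of forming A^T A.
x_qr = np.linalg.solve(R, Q.T @ b)

# Agrees with NumPy's built-in least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_qr, x_ref)
```

Note that A^T A is never formed, so the conditioning of the solve is governed by A itself rather than by its squared condition number.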

The QR algorithm for eigenvalues — an iterative scheme that repeatedly factors and recombines — converges to the eigenvalues of a general matrix without computing the characteristic polynomial. It is the dominant eigenvalue algorithm in numerical software.

Cholesky Decomposition

The Cholesky decomposition factors a symmetric positive definite matrix as A = LL^T, where L is lower triangular with positive diagonal entries. It exists if and only if A is symmetric positive definite — all eigenvalues strictly positive and A = A^T.

Cholesky exploits symmetry to achieve half the cost of LU: (1/3)n^3 versus (2/3)n^3 operations. Only the lower triangle of A is read, and only L is computed — the upper factor L^T is just the transpose.

Cholesky requires no pivoting. Positive definiteness guarantees that every diagonal entry encountered during the algorithm is strictly positive, so no zero pivots can occur. If the algorithm breaks down (a non-positive value under the square root), the matrix is not positive definite — the decomposition doubles as a definiteness test.
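The factorization-as-definiteness-test idea is easy to demonstrate with NumPy, whose cholesky routine raises an exception exactly when the algorithm breaks down (the two small matrices below are illustrative):

```python
import numpy as np

def is_positive_definite(A):
    """Attempt a Cholesky factorization; breakdown means A is not SPD."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

spd = np.array([[4.0, 1.0],
                [1.0, 3.0]])          # symmetric, both eigenvalues positive
indefinite = np.array([[1.0, 2.0],
                       [2.0, 1.0]])   # symmetric, eigenvalues 3 and -1

L = np.linalg.cholesky(spd)
assert np.allclose(L @ L.T, spd)      # A = L L^T with L lower triangular
assert is_positive_definite(spd)
assert not is_positive_definite(indefinite)
```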

Spectral Decomposition

The spectral decomposition factors a real symmetric matrix as A = QDQ^T, where Q is orthogonal and D is diagonal. This is the diagonalization of a symmetric matrix, with the additional guarantee that the diagonalizing matrix Q is orthogonal — not merely invertible.

In outer product form, A = λ_1 q_1 q_1^T + λ_2 q_2 q_2^T + ⋯ + λ_n q_n q_n^T. Each term is the eigenvalue times the projection matrix onto the corresponding eigenvector direction. The matrix is decomposed into a sum of independent rank-one projections, weighted by eigenvalues.
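The rank-one sum can be verified numerically with NumPy's symmetric eigensolver (the 2×2 matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric, eigenvalues 3 and 1

# eigh is specialized for symmetric matrices: real eigenvalues, orthogonal Q.
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)
assert np.allclose(Q @ D @ Q.T, A)       # A = Q D Q^T
assert np.allclose(Q.T @ Q, np.eye(2))   # Q is orthogonal

# Rebuild A as a sum of eigenvalue-weighted rank-one projections q q^T.
A_sum = sum(lam * np.outer(q, q) for lam, q in zip(eigvals, Q.T))
assert np.allclose(A_sum, A)
```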

The spectral decomposition is the foundation of quadratic form analysis, principal component analysis, and the classification of symmetric matrices as positive definite, positive semi-definite, or indefinite.

Singular Value Decomposition

The singular value decomposition factors any m × n matrix as A = UΣV^T, where U and V are orthogonal and Σ is diagonal with non-negative entries (the singular values). It exists for every matrix of every shape and every rank.

The SVD is the most informative factorization in linear algebra. The number of nonzero singular values equals the rank. The columns of U and V provide orthonormal bases for all four fundamental subspaces. The pseudoinverse is A^+ = VΣ^+ U^T. The best rank-k approximation to A is obtained by truncating the SVD at k terms. The condition number is σ_1/σ_r.
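Two of these facts — rank from the singular values, and rank-k truncation — can be checked directly in NumPy. The matrix below is built as a sum of two outer products, so its rank is 2 by construction:

```python
import numpy as np

# A 5x4 matrix of rank 2, built deliberately from two outer products.
rng = np.random.default_rng(2)
A = (np.outer(rng.standard_normal(5), rng.standard_normal(4))
     + np.outer(rng.standard_normal(5), rng.standard_normal(4)))

U, s, Vt = np.linalg.svd(A)        # A = U Σ V^T; s holds the singular values

# Rank = number of nonzero singular values (up to floating-point tolerance).
assert np.sum(s > 1e-10) == 2

# Truncating the SVD at k terms gives the best rank-k approximation;
# here k equals the true rank, so the truncation recovers A exactly.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.allclose(A_k, A)
```

With k below the true rank, A_k would instead be the closest rank-k matrix to A in both the spectral and Frobenius norms (the Eckart–Young theorem).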

Geometrically, every linear transformation is a rotation or reflection (V^T), followed by a coordinate-axis scaling (Σ), followed by another rotation or reflection (U). The SVD makes this decomposition explicit.

Choosing the Right Decomposition

The choice of decomposition depends on the matrix structure and the question being asked.

For a square invertible system with one or many right-hand sides, LU is the standard choice — factor once, solve cheaply. For a symmetric positive definite system, Cholesky is faster and more stable. For a least-squares problem or an overdetermined system, QR avoids condition-number issues. For eigenvalues of a symmetric matrix, the spectral decomposition gives real eigenvalues and orthogonal eigenvectors. For rank determination, the pseudoinverse, low-rank approximation, or any problem where the matrix may be rectangular or rank-deficient, the SVD is the universal tool.

More specialized decompositions exist for more specialized structures. Banded matrices have banded LU factors. Sparse matrices benefit from fill-reducing permutations. Iterative methods (Krylov subspace methods) avoid explicit factorization entirely for very large systems. But the five decompositions on this page cover the vast majority of problems encountered in a standard linear algebra course and in applied work.

Relationships Between Decompositions

The five decompositions are not independent — each can be viewed as a refinement or generalization of another.

LU is Gaussian elimination recorded as a matrix product. QR is Gram-Schmidt recorded as a matrix product. Cholesky is the symmetric-positive-definite specialization of LU: when A = A^T with positive eigenvalues, the factorization A = LDL^T compresses into A = LL^T by absorbing √D into L.

The spectral decomposition is eigendecomposition restricted to symmetric matrices, where the eigenvector matrix is guaranteed to be orthogonal. The SVD generalizes the spectral decomposition to non-symmetric and non-square matrices: for a symmetric positive semi-definite matrix, the SVD and spectral decomposition coincide.
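The coincidence of the SVD and the spectral decomposition on a positive semi-definite matrix can be confirmed numerically. B^T B is always symmetric PSD, so it makes a convenient test case (the random B is illustrative; eigenvectors may differ by sign, so only the values are compared):

```python
import numpy as np

# B^T B is symmetric positive semi-definite for any real B.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 3))
A = B.T @ B

eigvals, Q = np.linalg.eigh(A)   # spectral: A = Q D Q^T
U, s, Vt = np.linalg.svd(A)      # SVD:      A = U Σ V^T

# Same numbers: the singular values equal the eigenvalues
# (eigh sorts ascending, svd descending, hence the sort).
assert np.allclose(np.sort(s), np.sort(eigvals))
assert np.allclose(U @ np.diag(s) @ Vt, A)
```

For an indefinite symmetric matrix the two would differ: the singular values are the absolute values of the eigenvalues.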

Each decomposition occupies a specific niche — defined by the matrix structure it requires and the information it provides — but they are all connected through the common themes of triangularity, orthogonality, and diagonality.