A diagonalizable matrix can be factored as PDP⁻¹, where D is the diagonal matrix of eigenvalues and P is the matrix of eigenvectors. This factorization strips away the complexity of the original matrix, reducing powers, exponentials, and differential equations to operations on individual eigenvalues. Diagonalization is possible exactly when the eigenvectors form a basis — and for symmetric matrices, this is always the case.
What Diagonalization Means
An n×n matrix A is diagonalizable if there exist an invertible matrix P and a diagonal matrix D such that

A = PDP⁻¹
The columns of P are eigenvectors of A. The diagonal entries of D are the corresponding eigenvalues, in the same order. The factorization says that in the basis of eigenvectors, the transformation acts by pure scaling along each axis — the most transparent possible description.
Equivalently, A is diagonalizable if and only if Rⁿ has a basis consisting entirely of eigenvectors of A. The matrix P converts between the standard basis and this eigenvector basis, and D is the matrix of the transformation in the eigenvector basis.
The definitive condition is: A is diagonalizable if and only if, for every eigenvalue, the geometric multiplicity equals the algebraic multiplicity (m_g(λ) = m_a(λ)).
A sufficient condition that is easier to check: if A has n distinct eigenvalues, it is automatically diagonalizable. Eigenvectors for distinct eigenvalues are linearly independent, so n distinct eigenvalues produce n independent eigenvectors — exactly enough for a basis.
When eigenvalues repeat, diagonalizability depends on the eigenspaces. A repeated eigenvalue λ with algebraic multiplicity k must have a k-dimensional eigenspace. If the eigenspace falls short — dimension less than k — there are not enough eigenvectors, and the matrix cannot be diagonalized.
Example of Failure
A =
[ 2 1 ]
[ 0 2 ]

has eigenvalue λ = 2 with m_a = 2, but

A − 2I =
[ 0 1 ]
[ 0 0 ]

has null space of dimension 1. Only one independent eigenvector exists, so P cannot be built. The matrix is defective.
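This kind of failure can be checked mechanically. A short sketch assuming sympy is available; its is_diagonalizable method performs exactly the multiplicity comparison described above:

```python
from sympy import Matrix

# The defective matrix: eigenvalue 2 with algebraic multiplicity 2
# but only a one-dimensional eigenspace.
A = Matrix([[2, 1],
            [0, 2]])
print(A.is_diagonalizable())   # False: only one independent eigenvector

# A matrix with two distinct eigenvalues is automatically diagonalizable.
B = Matrix([[2, 1],
            [0, 3]])
print(B.is_diagonalizable())   # True
```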
Matrix Powers
The primary computational payoff of diagonalization is the simplification of matrix powers:
Aᵏ = PDᵏP⁻¹ = P diag(λ₁ᵏ, λ₂ᵏ, …, λₙᵏ) P⁻¹
Raising a diagonal matrix to a power means raising each diagonal entry independently. The entire cost of Aᵏ, for any k, is one matrix inversion, n scalar powers, and two matrix multiplications — the same cost whether k is 2 or 2 million.
Worked Example
Using the diagonalization from section 2:

P⁻¹ = (1/3) ×
[  1  1 ]
[ −2  1 ]
Without diagonalization, computing A⁴ requires three sequential matrix multiplications.
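The payoff can be checked numerically. A minimal numpy sketch, using a stand-in matrix (the worked example's matrix is not reproduced here, so A below is an assumed illustration with eigenvalues 1 and 3):

```python
import numpy as np

# Stand-in example: a diagonalizable matrix with eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: w holds the eigenvalues, columns of P the eigenvectors.
w, P = np.linalg.eig(A)

# A^4 via the diagonalization: P D^4 P^-1, i.e. two scalar powers.
A4_diag = P @ np.diag(w**4) @ np.linalg.inv(P)

# A^4 the direct way: three sequential matrix multiplications.
A4_direct = A @ A @ A @ A

print(np.allclose(A4_diag, A4_direct))  # True
```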
Systems of Differential Equations
The linear system x′ = Ax has a clean solution when A is diagonalizable. In the eigenvector basis, the system decouples into n independent scalar equations yᵢ′ = λᵢyᵢ, each with solution yᵢ(t) = cᵢe^(λᵢt).
Converting back to the original basis, the general solution is
x(t) = c₁e^(λ₁t)v₁ + c₂e^(λ₂t)v₂ + ⋯ + cₙe^(λₙt)vₙ
Each eigenvalue determines the behavior along its eigenvector direction. Positive eigenvalues produce exponential growth, negative eigenvalues produce decay, and zero eigenvalues produce constant components. Complex eigenvalues produce oscillatory terms involving sines and cosines modulated by exponential envelopes.
The constants c₁, …, cₙ are determined by the initial condition x(0): express x(0) as a linear combination of the eigenvectors and read off the coefficients.
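This recipe can be sketched in numpy; the matrix A and initial condition below are illustrative assumptions, not taken from the text:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # eigenvalues +1 and -1
x0 = np.array([2.0, 0.0])

w, V = np.linalg.eig(A)       # columns of V are the eigenvectors v_i

# Coefficients c_i from x(0) = c_1 v_1 + ... + c_n v_n.
c = np.linalg.solve(V, x0)

def x(t):
    """General solution: sum of c_i * e^(lambda_i t) * v_i."""
    return V @ (c * np.exp(w * t))

# Check the ODE x' = A x with a centered finite difference at t = 0.7.
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
print(np.allclose(deriv, A @ x(t)))   # True (up to finite-difference error)
```

For this particular A and x0, the solution works out to x(t) = (2 cosh t, 2 sinh t), which the eigenvector expansion reproduces.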
Recurrence Relations
The discrete system xₙ₊₁ = Axₙ has solution xₙ = Aⁿx₀. When A is diagonalizable, this becomes

xₙ = PDⁿP⁻¹x₀ = c₁λ₁ⁿv₁ + c₂λ₂ⁿv₂ + ⋯ + cₙλₙⁿvₙ
The dominant eigenvalue — the eigenvalue with the largest absolute value — determines the long-term growth rate. As n → ∞, the term cᵢλᵢⁿvᵢ with the largest |λᵢ| dominates all others.
The Fibonacci sequence provides a classic application. The recurrence Fₙ₊₁ = Fₙ + Fₙ₋₁ translates to

[ Fₙ₊₁ ]   [ 1 1 ]ⁿ [ 1 ]
[ Fₙ   ] = [ 1 0 ]  [ 0 ]

The matrix has eigenvalues φ = (1 + √5)/2 and φ̂ = (1 − √5)/2. Diagonalization gives the Binet formula Fₙ = (φⁿ − φ̂ⁿ)/√5, a closed-form expression for the n-th Fibonacci number.
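The formula can be checked against the recurrence itself; a small pure-Python sketch:

```python
from math import sqrt

def fib_binet(n):
    """n-th Fibonacci number from the eigenvalues of the Fibonacci matrix."""
    phi = (1 + sqrt(5)) / 2      # dominant eigenvalue
    psi = (1 - sqrt(5)) / 2      # the other eigenvalue, |psi| < 1
    return round((phi**n - psi**n) / sqrt(5))

def fib_iter(n):
    """Reference: iterate the recurrence F_{n+1} = F_n + F_{n-1} directly."""
    a, b = 0, 1                  # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_binet(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(all(fib_binet(n) == fib_iter(n) for n in range(60)))  # True
```

The rounding absorbs floating-point error; since |φ̂| < 1, the φⁿ term alone already determines Fₙ to the nearest integer, which is the dominant-eigenvalue effect described above.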
The Spectral Theorem for Symmetric Matrices
Every real symmetric matrix is diagonalizable. This is guaranteed — no conditions need to be checked.
The result is stronger than ordinary diagonalizability. The diagonalizing matrix P can be chosen orthogonal (P⁻¹ = Pᵀ), giving

A = QDQᵀ
where Q is orthogonal with columns forming an orthonormal basis of eigenvectors, and D is diagonal with real eigenvalues.
Expanding the product column by column gives A = λ₁q₁q₁ᵀ + λ₂q₂q₂ᵀ + ⋯ + λₙqₙqₙᵀ. Each term λᵢqᵢqᵢᵀ is the eigenvalue times the projection matrix onto the corresponding eigenvector direction: A is decomposed into a sum of rank-one projections, weighted by eigenvalues.
The spectral theorem is the most powerful diagonalization result in real linear algebra. It guarantees real eigenvalues, orthogonal eigenvectors, and a decomposition that simultaneously diagonalizes and orthogonalizes.
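These guarantees are easy to verify numerically. A numpy sketch (the symmetric test matrix is an arbitrary illustration) using eigh, numpy's eigensolver for symmetric matrices:

```python
import numpy as np

# An arbitrary real symmetric matrix.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

w, Q = np.linalg.eigh(A)      # eigh: real eigenvalues w, orthonormal columns Q

# Q is orthogonal: Q^T Q = I.
print(np.allclose(Q.T @ Q, np.eye(3)))       # True

# The factorization A = Q D Q^T holds.
print(np.allclose(Q @ np.diag(w) @ Q.T, A))  # True

# Equivalently, A is a weighted sum of rank-one projections q_i q_i^T.
A_sum = sum(w[i] * np.outer(Q[:, i], Q[:, i]) for i in range(3))
print(np.allclose(A_sum, A))                 # True
```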
Matrix Exponential
For a diagonalizable matrix, the matrix exponential e^(At) — central to solving x′ = Ax — has an explicit form:

e^(At) = P e^(Dt) P⁻¹ = P diag(e^(λ₁t), e^(λ₂t), …, e^(λₙt)) P⁻¹
The exponential of a diagonal matrix is the diagonal matrix of exponentials. The full matrix exponential is computed from n scalar exponentials, one per eigenvalue.
The solution to x′ = Ax with initial condition x(0) = x₀ is then x(t) = e^(At)x₀. This is the matrix-level analogue of the scalar solution x(t) = e^(at)x₀ to x′ = ax.
When A has complex eigenvalues a ± bi, the exponentials e^((a±bi)t) = e^(at)(cos bt ± i sin bt) combine in conjugate pairs to produce real oscillatory terms e^(at)cos bt and e^(at)sin bt in the final solution.
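This cancellation can be seen concretely. A numpy sketch using the rotation generator as an illustrative matrix (its eigenvalues are the conjugate pair ±i): the intermediate arithmetic is complex, but the conjugate terms combine into a real result.

```python
import numpy as np

# Rotation generator: eigenvalues are +i and -i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w, P = np.linalg.eig(A)       # complex eigenvalues and eigenvectors

def expAt(t):
    """e^(At) = P e^(Dt) P^-1; the conjugate pair yields a real matrix."""
    E = P @ np.diag(np.exp(w * t)) @ np.linalg.inv(P)
    return E.real             # imaginary parts cancel up to roundoff

t = 0.9
# For this A, e^(At) is rotation by angle t: the complex exponentials
# e^(+-it) have combined into cos t and sin t.
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(expAt(t), R))   # True
```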
When Diagonalization Fails
When a matrix is not diagonalizable — when some eigenvalue has geometric multiplicity strictly less than its algebraic multiplicity — the best achievable form under similarity is the Jordan normal form.
The Jordan form is block diagonal, with each block a Jordan block:
Jₖ(λ) =
[ λ 1 0 ⋯ 0 ]
[ 0 λ 1 ⋯ 0 ]
[ ⋮   ⋱ ⋱ ⋮ ]
[ 0 ⋯ 0 λ 1 ]
[ 0 ⋯ 0 0 λ ]
A k×k Jordan block has the eigenvalue λ on the diagonal and ones on the superdiagonal. A diagonalizable eigenvalue contributes 1×1 Jordan blocks. A defective eigenvalue contributes blocks larger than 1×1.
The Jordan form is unique up to the ordering of blocks and is the canonical representative of the similarity class. Powers and exponentials of Jordan blocks can still be computed explicitly, but the formulas involve polynomial correction terms (tᵏe^(λt) instead of just e^(λt)) reflecting the defective structure. The full Jordan theory belongs to advanced linear algebra.
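As a brief illustration (assuming sympy is available), jordan_form recovers the Jordan form of the defective 2×2 matrix from the earlier failure example:

```python
from sympy import Matrix

# The defective matrix: eigenvalue 2 with a one-dimensional eigenspace.
A = Matrix([[2, 1],
            [0, 2]])

P, J = A.jordan_form()        # similarity transform P and Jordan form J
print(J)                      # a single 2x2 Jordan block for eigenvalue 2
print(P * J * P.inv() == A)   # True: A = P J P^-1
```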
Diagonalizability at a Glance
Several quick tests determine or suggest diagonalizability.
A matrix with n distinct eigenvalues is always diagonalizable — distinctness forces independence of eigenvectors.
A real symmetric matrix is always diagonalizable, and orthogonally so. This is the spectral theorem.
A matrix satisfying m_g(λ) = m_a(λ) for every eigenvalue is diagonalizable. This is the definitive necessary and sufficient condition.
A matrix with any eigenvalue where m_g < m_a is not diagonalizable. The shortfall means there are not enough eigenvectors to form a basis.
Matrices that are already diagonal are trivially diagonalizable (P=I). The identity matrix, all scalar matrices cI, and all diagonal matrices fall here.
The zero matrix is diagonalizable (it is already diagonal with all eigenvalues zero). A nilpotent matrix is diagonalizable if and only if it is the zero matrix — any other nilpotent matrix is defective.
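A quick check of the nilpotent claim, again assuming sympy is available:

```python
from sympy import Matrix, zeros

N = Matrix([[0, 1],
            [0, 0]])          # nilpotent: N^2 = 0
print(N**2 == zeros(2))       # True
print(N.is_diagonalizable())  # False: a nonzero nilpotent matrix is defective

Z = zeros(2)                  # the zero matrix is already diagonal
print(Z.is_diagonalizable())  # True
```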