Most vectors change direction when multiplied by a matrix. The rare exceptions — vectors that emerge scaled but not rotated — are eigenvectors, and their scaling factors are eigenvalues. These special directions and values encode the deepest structural information about a linear transformation: how it stretches, compresses, and orients space. They are the key to diagonalization, stability analysis, and the spectral theory that unifies much of applied mathematics.
The Core Idea
When a matrix A multiplies a vector x, the result Ax is usually a vector pointing in a completely different direction. But for certain special vectors, the output Ax points in the same direction as x — it is simply a scaled copy.
These are the vectors that the linear transformation x↦Ax stretches, compresses, or reverses without deflecting. They are the "natural axes" of the transformation, and the scaling factors measure exactly how the transformation acts along each such axis. Finding these directions and factors is the eigenvalue problem.
Definition
For an n×n matrix A, a nonzero vector v is an eigenvector of A if
Av=λv
for some scalar λ. The scalar λ is the corresponding eigenvalue.
The requirement that v≠0 is essential. The zero vector trivially satisfies A0=λ0 for every λ, so it carries no information and is excluded by convention.
The eigenvalue λ can be any scalar, including zero. When λ=0, the equation Av=0 says that v is in the null space of A — the transformation annihilates that direction entirely.
Only square matrices have eigenvalues. The equation Av=λv requires Av and λv to live in the same space, which demands that A map ℝⁿ to ℝⁿ.
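The definition can be checked directly with a few lines of NumPy. This is an illustrative sketch (the matrix and vector here are not from the text):

```python
import numpy as np

# Illustrative matrix: v = (1, 1) is an eigenvector of A with eigenvalue 3,
# since A maps it to (3, 3), a scaled copy of v rather than a deflected vector.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

Av = A @ v              # Av = (3, 3)
lam = Av[0] / v[0]      # the scaling factor λ along v

print(np.allclose(Av, lam * v))  # True: Av = λv with λ = 3
```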
Rewriting as a Homogeneous System
The eigenvalue equation Av=λv rearranges to
(A−λI)v=0
This is a homogeneous linear system with coefficient matrix A−λI. Eigenvectors are its nontrivial solutions. Such solutions exist if and only if A−λI is singular — that is, if and only if
det(A−λI)=0
This determinant condition is the characteristic equation. It converts the eigenvalue problem from a geometric question ("which directions survive?") into an algebraic one ("which values of λ make this determinant vanish?"). The characteristic equation is a polynomial of degree n in λ, and its roots are the eigenvalues.
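The algebraic route can be sketched numerically. NumPy's `np.poly` returns the coefficients of the characteristic polynomial of a matrix, and `np.roots` finds its roots; the 2×2 matrix below is illustrative:

```python
import numpy as np

# Illustrative symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)      # coefficients of det(λI − A), highest degree first
eigs = np.roots(coeffs)  # eigenvalues = roots of the characteristic polynomial

print(coeffs)            # ≈ [1, -4, 3], i.e. λ² − 4λ + 3
print(np.sort(eigs))     # ≈ [1, 3]
```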
Eigenspaces
For a given eigenvalue λ, the eigenspace is the set of all eigenvectors for λ together with the zero vector — equivalently, the full solution set of (A−λI)v=0:
Eλ=Null(A−λI)
The eigenspace is a subspace of ℝⁿ. It contains the zero vector and is closed under addition and scalar multiplication — any linear combination of eigenvectors for the same eigenvalue is again an eigenvector for that eigenvalue (or the zero vector).
The dimension of the eigenspace is called the geometric multiplicity of λ. Finding a basis for the eigenspace is a standard null-space computation: row reduce A−λI and extract the general solution in parametric form. Each free variable contributes one basis vector to the eigenspace.
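A sketch of that null-space computation, using the SVD instead of hand row reduction (the matrix is illustrative; the right singular vectors with zero singular value span the null space):

```python
import numpy as np

# Eigenspace of λ = 3 for an illustrative matrix: E = Null(A − 3I).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0

M = A - lam * np.eye(2)
_, s, Vt = np.linalg.svd(M)
E = Vt[s < 1e-10].T                  # one column per basis vector of the eigenspace

print(E.shape[1])                    # 1, the geometric multiplicity of λ = 3
print(np.allclose(A @ E, lam * E))   # True: each basis vector is an eigenvector
```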
Geometric Meaning
An eigenvector v defines a direction that the transformation x↦Ax maps to itself. The eigenvalue λ determines what happens along that direction.
When λ>1, the direction is stretched — the transformation pushes vectors outward along v. When 0<λ<1, the direction is compressed. When λ<0, the direction is reversed and scaled by |λ| — the vector flips through the origin. When λ=1, the vector is completely fixed. When λ=0, the vector is annihilated — that direction is collapsed to the origin.
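Each case can be seen directly with diagonal matrices acting along e₁ (a small illustrative sketch):

```python
import numpy as np

v = np.array([1.0, 0.0])  # the direction e1

# One diagonal matrix per case: λ acts along e1, the other axis is untouched.
cases = {2.0: "stretched", 0.5: "compressed", -1.0: "reversed",
         1.0: "fixed", 0.0: "annihilated"}

for lam, label in cases.items():
    A = np.diag([lam, 1.0])
    print(label, A @ v)   # the image is (λ, 0), i.e. λ times v
```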
Quick Examples
For the matrix diag(3, −2), the standard basis vectors are eigenvectors: e₁ has eigenvalue 3 (stretched) and e₂ has eigenvalue −2 (reversed and doubled). A reflection across a line has eigenvalue +1 for vectors on the line and −1 for vectors perpendicular to it. A projection has eigenvalue 1 for vectors in the target subspace and 0 for vectors in its orthogonal complement.
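The reflection and projection claims are easy to verify numerically; here the reflection is across the line y = x and the projection is onto the x-axis, as illustrative instances:

```python
import numpy as np

# Reflection across y = x: eigenvalue +1 along the line, -1 perpendicular to it.
F = np.array([[0.0, 1.0],
              [1.0, 0.0]])
on_line = np.array([1.0, 1.0])
perp = np.array([1.0, -1.0])
print(np.allclose(F @ on_line, on_line))   # True: λ = +1
print(np.allclose(F @ perp, -perp))        # True: λ = -1

# Projection onto the x-axis: eigenvalue 1 on the axis, 0 perpendicular.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.sort(np.linalg.eigvals(P)))       # [0. 1.]
```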
Examples
For a diagonal matrix D=diag(d₁,…,dₙ), the eigenvalues are the diagonal entries d₁,…,dₙ and the eigenvectors are the standard basis vectors e₁,…,eₙ. The eigenvector–eigenvalue structure is visible by inspection.
For a triangular matrix, the eigenvalues are still the diagonal entries (since det(A−λI) is the product of the diagonal entries of A−λI), but the eigenvectors generally require computation.
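A quick numerical check of both statements (the triangular matrix below is illustrative):

```python
import numpy as np

# Diagonal: eigenvalues are the diagonal entries.
D = np.diag([3.0, -2.0])
print(np.sort(np.linalg.eigvals(D)))      # [-2.  3.]

# Triangular: same eigenvalues as its diagonal...
T = np.array([[3.0, 5.0],
              [0.0, -2.0]])
print(np.sort(np.linalg.eigvals(T)))      # [-2.  3.]

# ...but its eigenvectors are generally not the standard basis vectors.
_, V = np.linalg.eig(T)
print(np.allclose(np.abs(V), np.eye(2)))  # False
```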
Worked Example
For A=\begin{pmatrix}4 & 2\\ 1 & 3\end{pmatrix}, the characteristic equation is det(A−λI)=(4−λ)(3−λ)−2=λ²−7λ+10=(λ−2)(λ−5)=0. The eigenvalues are λ₁=2 and λ₂=5.
For λ₁=2: (A−2I)v=\begin{pmatrix}2 & 2\\ 1 & 1\end{pmatrix}v=0 gives v₁=(−1, 1)ᵀ.
For λ₂=5: (A−5I)v=\begin{pmatrix}-1 & 2\\ 1 & -2\end{pmatrix}v=0 gives v₂=(2, 1)ᵀ.
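The same answer falls out of `np.linalg.eig`, as a numerical cross-check of the worked example:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

w, V = np.linalg.eig(A)
order = np.argsort(w)                # sort eigenvalues ascending
w, V = w[order], V[:, order]

print(w)                             # [2. 5.]

# Eigenvectors are determined only up to scale, so compare directions:
# column 0 should be parallel to (-1, 1), column 1 to (2, 1).
print(np.allclose(V[:, 0] / V[0, 0], [1.0, -1.0]))  # True: (-1, 1)/(-1)
print(np.allclose(V[:, 1] / V[0, 1], [1.0, 0.5]))   # True: (2, 1)/2
```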
Rotation by 90° in ℝ² has matrix \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix} and characteristic equation λ²+1=0. No real direction survives a quarter-turn — the eigenvalues are ±i, which are complex.
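NumPy reports exactly this complex pair, even though the matrix is real:

```python
import numpy as np

# 90° rotation matrix: real entries, but no real eigenvalues.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w = np.linalg.eigvals(R)
print(np.allclose(np.sort_complex(w), [-1j, 1j]))  # True: λ = ±i
```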
Eigenvalues and Matrix Properties
Two of the most basic matrix invariants are direct functions of the eigenvalues.
The trace equals the sum of the eigenvalues: tr(A)=λ₁+λ₂+⋯+λₙ, counted with algebraic multiplicity. This follows because the coefficient of λⁿ⁻¹ in the characteristic polynomial det(λI−A) is −tr(A) and, by Vieta's formulas, is also minus the sum of the roots.
The determinant equals the product of the eigenvalues: det(A)=λ₁λ₂⋯λₙ. This follows from evaluating the characteristic polynomial at λ=0.
Together these two identities connect two simple invariants — the sum of the diagonal entries and the overall volume-scaling factor — to the eigenvalue spectrum. They immediately imply that A is invertible if and only if no eigenvalue is zero, and singular exactly when some eigenvalue vanishes.
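Both identities are easy to test on a random matrix (an illustrative check; complex eigenvalues of a real matrix come in conjugate pairs, so the sum and product are real up to rounding):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # random real 4x4 matrix

w = np.linalg.eigvals(A)             # possibly complex eigenvalues

print(np.isclose(np.trace(A), w.sum().real))        # True: tr(A) = sum of λ
print(np.isclose(np.linalg.det(A), w.prod().real))  # True: det(A) = product of λ
```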
The full set of properties — including eigenvalues of powers, inverses, transposes, and special matrix types — is developed on its own page.
Why Eigenvalues Matter
Diagonalization is the most immediate application. If A has n linearly independent eigenvectors, it can be written as A=PDP⁻¹, where P has the eigenvectors as its columns and D is the diagonal matrix of eigenvalues. This factorization reduces matrix powers to Aᵏ=PDᵏP⁻¹ — raising a diagonal matrix to a power means raising each diagonal entry independently.
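A sketch with the worked-example matrix from above:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

w, P = np.linalg.eig(A)              # eigenvalues w, eigenvectors as columns of P
D = np.diag(w)
P_inv = np.linalg.inv(P)

# A = P D P^{-1}
print(np.allclose(A, P @ D @ P_inv))                  # True

# A^k = P D^k P^{-1}: power D entrywise instead of multiplying k matrices
k = 10
Ak = P @ np.diag(w ** k) @ P_inv
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```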
In dynamical systems, eigenvalues determine long-term behavior. The discrete system xₙ₊₁=Axₙ grows, decays, or oscillates depending on whether the eigenvalues have absolute value greater than, less than, or equal to 1. The continuous system x′=Ax has solutions involving e^{λt}, so the real parts of the eigenvalues determine exponential growth or decay and the imaginary parts determine oscillation frequency.
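For the discrete case, the largest-magnitude eigenvalue sets the asymptotic growth rate, which is also the idea behind the power method. A sketch using the worked-example matrix, whose dominant eigenvalue is 5:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])           # eigenvalues 2 and 5
x = np.array([1.0, 0.0])

# Iterate x_{n+1} = A x_n: the component along the dominant eigenvector
# (eigenvalue 5) quickly swamps the component along eigenvalue 2.
for _ in range(30):
    x = A @ x

growth = np.linalg.norm(A @ x) / np.linalg.norm(x)
print(np.isclose(growth, 5.0))       # True: per-step growth approaches |λ_max|
```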
In statistics, the eigenvectors of a covariance matrix point in the directions of maximum variance — this is the foundation of principal component analysis. In physics, the eigenvectors of a Hamiltonian or stiffness matrix correspond to natural modes of vibration. In graph theory, eigenvalues of adjacency and Laplacian matrices encode connectivity and clustering structure.
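The statistics claim can be sketched with synthetic data stretched along the (1, 1) direction (illustrative; `np.linalg.eigh` is the symmetric-matrix eigensolver and returns eigenvalues in ascending order):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D data: isotropic noise pushed through a map whose long axis is (1, 1).
z = rng.standard_normal((1000, 2))
X = z @ np.array([[3.0, 3.0],
                  [0.5, -0.5]])

C = np.cov(X, rowvar=False)          # 2x2 sample covariance matrix
w, V = np.linalg.eigh(C)             # ascending eigenvalues, orthonormal eigenvectors

top = V[:, -1]                       # direction of maximum variance
target = np.array([1.0, 1.0]) / np.sqrt(2)
# Up to sign, the top eigenvector recovers the (1, 1) axis of the data.
print(np.allclose(np.abs(top), np.abs(target), atol=0.05))  # True
```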
The eigenvalue decomposition is arguably the single most important factorization in applied mathematics. Virtually every iterative algorithm, stability criterion, and spectral method in scientific computing traces back to it.