

Eigenvalues and Eigenvectors






The Directions a Matrix Preserves

Most vectors change direction when multiplied by a matrix. The rare exceptions — vectors that emerge scaled but not rotated — are eigenvectors, and their scaling factors are eigenvalues. These special directions and values encode the deepest structural information about a linear transformation: how it stretches, compresses, and orients space. They are the key to diagonalization, stability analysis, and the spectral theory that unifies much of applied mathematics.



The Core Idea

When a matrix $A$ multiplies a vector $\mathbf{x}$, the result $A\mathbf{x}$ is usually a vector pointing in a completely different direction. But for certain special vectors, the output $A\mathbf{x}$ points in the same direction as $\mathbf{x}$ — it is simply a scaled copy.

These are the vectors that the linear transformation $\mathbf{x} \mapsto A\mathbf{x}$ stretches, compresses, or reverses without deflecting. They are the "natural axes" of the transformation, and the scaling factors measure exactly how the transformation acts along each such axis. Finding these directions and factors is the eigenvalue problem.

Definition

For an $n \times n$ matrix $A$, a nonzero vector $\mathbf{v}$ is an eigenvector of $A$ if

$$A\mathbf{v} = \lambda\mathbf{v}$$


for some scalar $\lambda$. The scalar $\lambda$ is the corresponding eigenvalue.

The requirement that $\mathbf{v} \neq \mathbf{0}$ is essential. The zero vector trivially satisfies $A\mathbf{0} = \lambda\mathbf{0}$ for every $\lambda$, so it carries no information and is excluded by convention.

The eigenvalue $\lambda$ can be any real number, including zero. When $\lambda = 0$, the equation $A\mathbf{v} = \mathbf{0}$ says that $\mathbf{v}$ is in the null space of $A$ — the transformation annihilates that direction entirely.

Only square matrices have eigenvalues. The equation $A\mathbf{v} = \lambda\mathbf{v}$ requires $A\mathbf{v}$ and $\lambda\mathbf{v}$ to live in the same space, which demands that $A$ maps $\mathbb{R}^n$ to $\mathbb{R}^n$.
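The definition can be checked numerically. The following is a minimal sketch using NumPy (not part of the text above), with an illustrative matrix and candidate vector chosen so the check succeeds:

```python
import numpy as np

# Illustrative 2x2 matrix and a candidate eigenvector (chosen for this example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

Av = A @ v            # [3., 3.] -- a scaled copy of v, no change of direction
lam = 3.0             # the scaling factor

print(np.allclose(Av, lam * v))   # True: A v = 3 v, so v is an eigenvector with eigenvalue 3
```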

Rewriting as a Homogeneous System

The eigenvalue equation $A\mathbf{v} = \lambda\mathbf{v}$ rearranges to

$$(A - \lambda I)\mathbf{v} = \mathbf{0}$$


This is a homogeneous linear system with coefficient matrix $A - \lambda I$. Eigenvectors are its nontrivial solutions. Such solutions exist if and only if $A - \lambda I$ is singular — that is, if and only if

$$\det(A - \lambda I) = 0$$


This determinant condition is the characteristic equation. It converts the eigenvalue problem from a geometric question ("which directions survive?") into an algebraic one ("which values of $\lambda$ make this determinant vanish?"). The characteristic equation is a polynomial of degree $n$ in $\lambda$, and its roots are the eigenvalues.
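The polynomial-then-roots route can be sketched in NumPy (illustrative matrix, not one used elsewhere on this page); `np.poly` returns the coefficients of the characteristic polynomial of a square matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(A - lambda*I), highest degree first: lambda^2 - 4*lambda + 3
coeffs = np.poly(A)
print(coeffs)                 # [ 1. -4.  3.]

# The eigenvalues are the roots of the characteristic polynomial.
print(np.roots(coeffs))       # [3. 1.]

# They agree with the direct eigenvalue routine (order may differ).
print(np.linalg.eigvals(A))   # [3. 1.]
```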

Eigenspaces

For a given eigenvalue $\lambda$, the eigenspace is the set of all solutions of $(A - \lambda I)\mathbf{v} = \mathbf{0}$ — that is, all eigenvectors for $\lambda$ together with the zero vector:

$$E_\lambda = \text{Null}(A - \lambda I)$$


The eigenspace is a subspace of $\mathbb{R}^n$. It contains the zero vector and is closed under addition and scalar multiplication — any linear combination of eigenvectors for the same eigenvalue is again an eigenvector for that eigenvalue (or the zero vector).

The dimension of the eigenspace is called the geometric multiplicity of $\lambda$. Finding a basis for the eigenspace is a standard null-space computation: row reduce $A - \lambda I$ and extract the general solution in parametric form. Each free variable contributes one basis vector to the eigenspace.
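Numerically, the same null-space computation can be sketched with SciPy's `null_space` (an assumed dependency, used here purely for illustration), which returns an orthonormal basis via the SVD rather than row reduction:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative matrix with a repeated eigenvalue lambda = 2.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 2.0

# Eigenspace = null space of (A - lambda*I); columns form an orthonormal basis.
E = null_space(A - lam * np.eye(3))
print(E.shape[1])   # 2  -> geometric multiplicity of lambda = 2
```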

Geometric Meaning

An eigenvector $\mathbf{v}$ defines a direction that the transformation $\mathbf{x} \mapsto A\mathbf{x}$ maps to itself. The eigenvalue $\lambda$ determines what happens along that direction.

When $\lambda > 1$, the direction is stretched — the transformation pushes vectors outward along $\mathbf{v}$. When $0 < \lambda < 1$, the direction is compressed. When $\lambda < 0$, the direction is reversed and scaled by $|\lambda|$ — the vector flips through the origin. When $\lambda = 1$, the vector is completely fixed. When $\lambda = 0$, the vector is annihilated — that direction is collapsed to the origin.

Quick Examples


For the matrix $\begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}$, the standard basis vectors are eigenvectors: $\mathbf{e}_1$ has eigenvalue $3$ (stretched) and $\mathbf{e}_2$ has eigenvalue $-2$ (reversed and doubled). A reflection across a line has eigenvalue $+1$ for vectors on the line and $-1$ for vectors perpendicular to it. A projection has eigenvalue $1$ for vectors in the target subspace and $0$ for vectors in its orthogonal complement.
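The reflection and projection claims are easy to confirm numerically. A short sketch, using a reflection across the line $y = x$ and a projection onto the $x$-axis as concrete stand-ins:

```python
import numpy as np

# Reflection across the line y = x: eigenvalues +1 (on the line) and -1 (perpendicular to it).
reflection = np.array([[0.0, 1.0],
                       [1.0, 0.0]])
print(np.linalg.eigvals(reflection))   # [ 1. -1.]  (order may vary)

# Projection onto the x-axis: eigenvalue 1 (kept direction) and 0 (collapsed direction).
projection = np.array([[1.0, 0.0],
                       [0.0, 0.0]])
print(np.linalg.eigvals(projection))   # [1. 0.]
```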

Examples

For a diagonal matrix $D = \text{diag}(d_1, \dots, d_n)$, the eigenvalues are the diagonal entries $d_1, \dots, d_n$ and the eigenvectors are the standard basis vectors $\mathbf{e}_1, \dots, \mathbf{e}_n$. The eigenvector-eigenvalue structure is visible by inspection.

For a triangular matrix, the eigenvalues are still the diagonal entries (since $\det(A - \lambda I)$ is the product of the diagonal entries of $A - \lambda I$), but the eigenvectors generally require computation.
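A quick check of the triangular case, with an upper-triangular matrix made up for this sketch:

```python
import numpy as np

# Upper-triangular example: the eigenvalues are exactly the diagonal entries.
T = np.array([[5.0, 2.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, -1.0]])
print(np.linalg.eigvals(T))   # [ 5.  3. -1.]
print(np.diag(T))             # [ 5.  3. -1.]
```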

Worked Example


For $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$, the characteristic equation is $\det(A - \lambda I) = (4 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5) = 0$. The eigenvalues are $\lambda_1 = 2$ and $\lambda_2 = 5$.

For $\lambda_1 = 2$: $(A - 2I)\mathbf{v} = \begin{pmatrix} 2 & 2 \\ 1 & 1 \end{pmatrix}\mathbf{v} = \mathbf{0}$ gives $\mathbf{v}_1 = (-1, 1)^T$.

For $\lambda_2 = 5$: $(A - 5I)\mathbf{v} = \begin{pmatrix} -1 & 2 \\ 1 & -2 \end{pmatrix}\mathbf{v} = \mathbf{0}$ gives $\mathbf{v}_2 = (2, 1)^T$.
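The worked example can be verified with NumPy's built-in eigenvalue routine (a sketch; NumPy normalizes the eigenvectors, so they appear as unit-length multiples of the vectors found above):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)        # [5. 2.]  (order may differ)

# Columns of `eigenvectors` are unit eigenvectors: scalar multiples of
# (2, 1) for lambda = 5 and (-1, 1) for lambda = 2.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True, True
```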

Rotation by $90°$ in $\mathbb{R}^2$ has matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ and characteristic equation $\lambda^2 + 1 = 0$. No real direction survives a quarter-turn — the eigenvalues are $\pm i$, which are complex.
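NumPy reports the same complex pair; a one-line check:

```python
import numpy as np

# Rotation by 90 degrees: no real eigenvectors, eigenvalues are +/- i.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(R))   # [0.+1.j 0.-1.j]
```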

Eigenvalues and Matrix Properties

Two of the most basic matrix invariants are direct functions of the eigenvalues.

The trace equals the sum of the eigenvalues: $\text{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$, counted with algebraic multiplicity. This follows from matching the coefficient of $\lambda^{n-1}$ in the characteristic polynomial, which is determined by the trace on one side and by the sum of the roots on the other.

The determinant equals the product of the eigenvalues: $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$. This follows from evaluating the characteristic polynomial at $\lambda = 0$.

Together these two identities connect the sum of the diagonal entries and the global scaling factor to the eigenvalue spectrum. They immediately imply that $A$ is invertible if and only if no eigenvalue is zero, and singular if and only if at least one eigenvalue vanishes.
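Both identities can be confirmed on the worked example from above; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
lam = np.linalg.eigvals(A)                         # eigenvalues 5 and 2

print(np.isclose(lam.sum(),  np.trace(A)))         # True: 5 + 2 = 7  = tr(A)
print(np.isclose(lam.prod(), np.linalg.det(A)))    # True: 5 * 2 = 10 = det(A)
```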

The full set of properties — including eigenvalues of powers, inverses, transposes, and special matrix types — is developed on its own page.

Why Eigenvalues Matter

Diagonalization is the most immediate application. If $A$ has $n$ linearly independent eigenvectors, it can be written as $A = PDP^{-1}$, where $D$ is the diagonal matrix of eigenvalues and the columns of $P$ are the corresponding eigenvectors. This factorization reduces matrix powers to $A^k = PD^kP^{-1}$ — raising a diagonal matrix to a power means raising each diagonal entry independently.
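A sketch of this factorization in NumPy, again using the matrix from the worked example:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
eigvals, P = np.linalg.eig(A)     # columns of P are eigenvectors
D = np.diag(eigvals)

# A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))                  # True

# A^5 = P D^5 P^{-1}: powering D just powers each diagonal entry.
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(eigvals**5) @ np.linalg.inv(P)))   # True
```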

In dynamical systems, eigenvalues determine long-term behavior. The discrete system $\mathbf{x}_{n+1} = A\mathbf{x}_n$ grows, decays, or oscillates depending on whether the eigenvalues have absolute value greater than, less than, or equal to $1$. The continuous system $\mathbf{x}' = A\mathbf{x}$ has solutions involving $e^{\lambda t}$, so the real parts of the eigenvalues determine exponential growth or decay and the imaginary parts determine oscillation frequency.
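The discrete case is easy to see by iterating a small example; here is a sketch with an illustrative matrix whose eigenvalues (0.5 and 0.8) lie inside the unit circle, so every trajectory decays:

```python
import numpy as np

# Discrete system x_{n+1} = A x_n: behavior is governed by the largest |eigenvalue|.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])        # triangular, so eigenvalues 0.5 and 0.8
x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x

print(np.linalg.norm(x))          # tiny (~0.8**50): the state decays toward the origin
```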

In statistics, the eigenvectors of a covariance matrix point in the directions of maximum variance — this is the foundation of principal component analysis. In physics, the eigenvectors of a Hamiltonian or stiffness matrix correspond to natural modes of vibration. In graph theory, eigenvalues of adjacency and Laplacian matrices encode connectivity and clustering structure.
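The principal-component idea fits in a few lines; this sketch builds synthetic data stretched along one axis and recovers that direction from the covariance matrix (all data and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, stretched mostly along the x-axis.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0],
                                          [0.0, 0.5]])

C = np.cov(X, rowvar=False)             # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)    # eigh: symmetric matrices, eigenvalues ascending

# The eigenvector with the largest eigenvalue is the first principal component:
# the direction of maximum variance, here close to (1, 0).
print(eigvecs[:, np.argmax(eigvals)])
```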

The eigenvalue decomposition is arguably the single most important factorization in applied mathematics. Virtually every iterative algorithm, stability criterion, and spectral method in scientific computing traces back to it.