Properties of Eigenvalues and Eigenvectors






How Eigenvalues Behave Under Matrix Operations

Eigenvalues interact in predictable ways with the trace, determinant, transpose, inverse, powers, and special matrix structures. These relationships provide shortcuts for computing eigenvalues, constraints on what eigenvalues are possible for a given matrix type, and structural connections between the eigenvalue spectrum and the algebraic properties of the matrix.



Trace and Eigenvalues

The trace of $A$ equals the sum of its eigenvalues, counted with algebraic multiplicity:

$$\text{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$


This follows from the characteristic polynomial. The coefficient of $\lambda^{n-1}$ in $p(\lambda) = \det(A - \lambda I)$ is $(-1)^{n-1}\,\text{tr}(A)$, and by Vieta's formulas, the sum of the roots of a degree-$n$ polynomial equals (up to sign) the coefficient of the $(n-1)$-th power term.

The trace provides a quick consistency check. A $3 \times 3$ matrix with diagonal entries $7, -2, 4$ has trace $9$. If the eigenvalues are computed as $5, 3, 1$, the sum is $9$, consistent with the trace. If the sum does not match the trace, a computation error has occurred.
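This check is easy to automate. Here is a minimal NumPy sketch; the matrix is an arbitrary example chosen to have diagonal entries $7, -2, 4$:

```python
import numpy as np

# Arbitrary matrix with diagonal entries 7, -2, 4, so its trace is 9.
A = np.array([[7.0,  1.0, 0.0],
              [2.0, -2.0, 3.0],
              [0.0,  1.0, 4.0]])

eigenvalues = np.linalg.eigvals(A)

# The sum of the eigenvalues equals the trace; imaginary parts, if any,
# cancel for a real matrix, so compare the real part up to rounding.
print(np.trace(A))             # 9.0
print(eigenvalues.sum().real)  # 9.0 (up to floating-point rounding)
```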

Determinant and Eigenvalues

The determinant of $A$ equals the product of its eigenvalues:

$$\det(A) = \lambda_1 \cdot \lambda_2 \cdots \lambda_n$$


This follows from evaluating the characteristic polynomial at $\lambda = 0$: $p(0) = \det(A - 0 \cdot I) = \det(A)$, and since the roots of $p$ are $\lambda_1, \dots, \lambda_n$, the constant term is (up to sign) their product.

The most immediate consequence is that $A$ is invertible if and only if no eigenvalue is zero. A single vanishing eigenvalue makes the product zero, collapsing the determinant and rendering $A$ singular. Conversely, if all eigenvalues are nonzero, then $\det(A) \neq 0$ and $A$ is invertible.
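A small numerical illustration, using an arbitrary rank-deficient matrix:

```python
import numpy as np

# Rank-deficient matrix: the second row is twice the first,
# so one eigenvalue must be zero and the determinant vanishes.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.eigvals(A))  # [0. 5.]: a zero eigenvalue appears
print(np.linalg.det(A))      # 0.0 (up to rounding): A is singular
```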

Algebraic and Geometric Multiplicity

Every eigenvalue $\lambda$ has two multiplicity measures. The algebraic multiplicity $m_a(\lambda)$ is the number of times $\lambda$ appears as a root of the characteristic polynomial. The geometric multiplicity $m_g(\lambda)$ is the dimension of the eigenspace $E_\lambda = \text{Null}(A - \lambda I)$.

These two numbers always satisfy

$$1 \leq m_g(\lambda) \leq m_a(\lambda)$$


The geometric multiplicity is at least $1$ because the eigenspace must contain at least one nonzero eigenvector. It cannot exceed the algebraic multiplicity, a fact whose proof requires the Jordan normal form or the theory of invariant subspaces.

When $m_g = m_a$ for every eigenvalue, the matrix is diagonalizable: there are enough independent eigenvectors to form a basis. When $m_g < m_a$ for any eigenvalue, the matrix is defective and cannot be diagonalized.

For $A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$, the eigenvalue $\lambda = 2$ has $m_a = 2$ but $m_g = 1$ (the eigenspace is one-dimensional, spanned by $(1, 0)$). This matrix is not diagonalizable.
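A minimal NumPy check of this example, computing the geometric multiplicity as the nullity of $A - \lambda I$:

```python
import numpy as np

# The defective matrix from the text: algebraic multiplicity 2 at lambda = 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0

# Geometric multiplicity = dim Null(A - lam*I) = n - rank(A - lam*I).
n = A.shape[0]
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geo_mult)  # 1, strictly smaller than the algebraic multiplicity 2
```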

Eigenvalues of the Inverse

If $\lambda$ is an eigenvalue of an invertible matrix $A$ with eigenvector $\mathbf{v}$, then $1/\lambda$ is an eigenvalue of $A^{-1}$ with the same eigenvector.

The proof is one line: $A\mathbf{v} = \lambda\mathbf{v}$ implies $\mathbf{v} = \lambda A^{-1}\mathbf{v}$, so $A^{-1}\mathbf{v} = (1/\lambda)\mathbf{v}$.

The eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of $A$, and the eigenvectors are unchanged. This requires $\lambda \neq 0$, which is guaranteed by the invertibility of $A$.

If $A$ has eigenvalues $2, -3, 5$, then $A^{-1}$ has eigenvalues $1/2, -1/3, 1/5$. The trace of $A^{-1}$ is $1/2 - 1/3 + 1/5 = 11/30$, and $\det(A^{-1}) = 1/(2 \cdot (-3) \cdot 5) = -1/30$.
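To reproduce this numerically, one can build a matrix with exactly these eigenvalues by conjugating a diagonal matrix; in this sketch $P$ is an arbitrary invertible matrix:

```python
import numpy as np

# D holds the target eigenvalues 2, -3, 5; A = P D P^{-1} has the same spectrum.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])  # arbitrary invertible matrix (det = 2)
D = np.diag([2.0, -3.0, 5.0])
A = P @ D @ np.linalg.inv(P)

print(np.sort(np.linalg.eigvals(A).real))
# [-3.  2.  5.]
print(np.sort(np.linalg.eigvals(np.linalg.inv(A)).real))
# [-0.333  0.2  0.5], i.e. -1/3, 1/5, 1/2: the reciprocals
```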

Eigenvalues of Powers and Polynomials

If $A\mathbf{v} = \lambda\mathbf{v}$, then $A^k\mathbf{v} = \lambda^k\mathbf{v}$ for every positive integer $k$. The proof is induction: $A^{k+1}\mathbf{v} = A(A^k\mathbf{v}) = A(\lambda^k\mathbf{v}) = \lambda^k A\mathbf{v} = \lambda^{k+1}\mathbf{v}$.

The eigenvectors are preserved; only the eigenvalues change, each raised to the $k$-th power.

More generally, if $q(\lambda) = c_0 + c_1\lambda + \cdots + c_m\lambda^m$ is any polynomial, then $q(A)$ has eigenvalues $q(\lambda_i)$ with the same eigenvectors:

$$q(A)\mathbf{v} = (c_0 I + c_1 A + \cdots + c_m A^m)\mathbf{v} = (c_0 + c_1\lambda + \cdots + c_m\lambda^m)\mathbf{v} = q(\lambda)\mathbf{v}$$


If $A$ has eigenvalue $3$, then $2A^2 - A + 4I$ has eigenvalue $2(9) - 3 + 4 = 19$ for the same eigenvector.
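A quick check of this arithmetic; the matrix is an arbitrary upper-triangular example with eigenvalues $3$ and $1$:

```python
import numpy as np

# Upper triangular, so the eigenvalues 3 and 1 sit on the diagonal.
A = np.array([[3.0, 2.0],
              [0.0, 1.0]])

# q(A) = 2A^2 - A + 4I should have eigenvalues q(3) = 19 and q(1) = 5.
qA = 2 * np.linalg.matrix_power(A, 2) - A + 4 * np.eye(2)
print(np.linalg.eigvals(qA))  # 19 and 5
```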

Eigenvalue Shifting

Adding a scalar multiple of the identity to $A$ shifts every eigenvalue by that scalar while leaving the eigenvectors unchanged.

If $A\mathbf{v} = \lambda\mathbf{v}$, then $(A + cI)\mathbf{v} = A\mathbf{v} + c\mathbf{v} = (\lambda + c)\mathbf{v}$.

The eigenvalues of $A + cI$ are $\lambda_1 + c, \lambda_2 + c, \dots, \lambda_n + c$. Scaling works similarly: $cA$ has eigenvalues $c\lambda_1, c\lambda_2, \dots, c\lambda_n$ with the same eigenvectors.

These operations are useful in practice. Adding $cI$ can shift all eigenvalues to be positive (making a matrix positive definite for numerical purposes), or shift a known eigenvalue to zero (making $A - \lambda_0 I$ singular, which is exactly how the eigenvalue equation is set up).
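A small illustration of the shift, using an arbitrary symmetric example so that `eigvalsh` returns the eigenvalues in sorted order:

```python
import numpy as np

# Symmetric matrix with eigenvalues -2 and 2.
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])
c = 3.0

print(np.linalg.eigvalsh(A))                  # [-2.  2.]
print(np.linalg.eigvalsh(A + c * np.eye(2)))  # [1. 5.]: each shifted by c
```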

Eigenvalues of the Transpose

A matrix $A$ and its transpose $A^T$ have the same eigenvalues. The characteristic polynomials are identical:

$$\det(A^T - \lambda I) = \det((A - \lambda I)^T) = \det(A - \lambda I)$$


The second equality uses transpose invariance of the determinant.

The eigenvectors are generally different. If $\mathbf{v}$ is a right eigenvector of $A$ ($A\mathbf{v} = \lambda\mathbf{v}$), the corresponding left eigenvector $\mathbf{w}$ satisfies $\mathbf{w}^T A = \lambda \mathbf{w}^T$, which is the same as $A^T\mathbf{w} = \lambda\mathbf{w}$. So the left eigenvectors of $A$ are the (right) eigenvectors of $A^T$. The eigenvalues match, but the directions generally differ.
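A sketch comparing the two, using an arbitrary non-symmetric matrix; `np.linalg.eig` returns right eigenvectors as columns:

```python
import numpy as np

# Non-symmetric example: same spectrum, different eigenvectors.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

vals_A, vecs_A = np.linalg.eig(A)
vals_AT, vecs_AT = np.linalg.eig(A.T)

print(np.sort(vals_A), np.sort(vals_AT))  # [2. 3.] [2. 3.]: identical
print(vecs_A)   # right eigenvectors of A
print(vecs_AT)  # right eigenvectors of A^T = left eigenvectors of A
```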

Eigenvalues of Special Matrix Types

The structure of a matrix constrains which eigenvalues are possible. A few standard cases follow; the sketch after the list spot-checks some of them numerically.

Diagonal and triangular matrices have their eigenvalues on the diagonal — immediately visible.

Real symmetric matrices have all real eigenvalues, and eigenvectors for distinct eigenvalues are orthogonal. These are the core conclusions of the spectral theorem.

Real skew-symmetric matrices have eigenvalues that are zero or purely imaginary; they come in conjugate pairs $\pm bi$.

Orthogonal matrices have eigenvalues on the unit circle: $|\lambda| = 1$. For real orthogonal matrices, real eigenvalues are restricted to $\pm 1$, and complex eigenvalues come in conjugate pairs of modulus $1$.

Idempotent matrices ($A^2 = A$) have eigenvalues satisfying $\lambda^2 = \lambda$, so $\lambda = 0$ or $\lambda = 1$.

Nilpotent matrices ($A^k = 0$) have all eigenvalues equal to zero.

Involutory matrices ($A^2 = I$) have eigenvalues satisfying $\lambda^2 = 1$, so $\lambda = \pm 1$.

Positive definite symmetric matrices have all eigenvalues strictly positive.
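As promised above, a minimal NumPy spot-check of a few of these claims; all matrices are arbitrary small examples:

```python
import numpy as np

# Real symmetric: eigenvalues are real.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.linalg.eigvals(S))   # real values only

# Idempotent (P @ P == P, a projection): eigenvalues are 0 or 1.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.allclose(P @ P, P))  # True
print(np.linalg.eigvals(P))   # [1. 0.]

# Real skew-symmetric: eigenvalues are zero or purely imaginary.
K = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.linalg.eigvals(K))   # [0.+1.j 0.-1.j]
```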

Independence of Eigenvectors

Eigenvectors corresponding to distinct eigenvalues are always linearly independent.

The proof proceeds by induction. For a single eigenvector, independence is trivial (one nonzero vector is independent). Suppose $\{\mathbf{v}_1, \dots, \mathbf{v}_{k-1}\}$ are independent eigenvectors with distinct eigenvalues. If $c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$, multiply both sides by $A$ to get $c_1\lambda_1\mathbf{v}_1 + \cdots + c_k\lambda_k\mathbf{v}_k = \mathbf{0}$. Subtract $\lambda_k$ times the original equation: $c_1(\lambda_1 - \lambda_k)\mathbf{v}_1 + \cdots + c_{k-1}(\lambda_{k-1} - \lambda_k)\mathbf{v}_{k-1} = \mathbf{0}$. By the induction hypothesis, all coefficients $c_i(\lambda_i - \lambda_k) = 0$. Since the eigenvalues are distinct, $\lambda_i - \lambda_k \neq 0$, forcing $c_i = 0$ for all $i < k$. Then the original equation gives $c_k\mathbf{v}_k = \mathbf{0}$, so $c_k = 0$.

The immediate consequence: a matrix with $n$ distinct eigenvalues has $n$ independent eigenvectors and is automatically diagonalizable. Distinctness of eigenvalues is a sufficient condition for diagonalizability, though not a necessary one.
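One way to see this numerically is to check that the eigenvector matrix has full rank; the triangular matrix below is an arbitrary example with three distinct eigenvalues:

```python
import numpy as np

# Triangular, so the distinct eigenvalues 1, 3, -2 are on the diagonal.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, -2.0]])

vals, V = np.linalg.eig(A)      # columns of V are eigenvectors
print(vals)                     # [ 1.  3. -2.]
print(np.linalg.matrix_rank(V)) # 3: full rank, so A is diagonalizable
```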

Similar Matrices and Spectral Invariants

Similar matrices share every spectral property: eigenvalues, algebraic multiplicities, geometric multiplicities, and the characteristic polynomial are all identical.

If $B = P^{-1}AP$ and $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $P^{-1}\mathbf{v}$ is an eigenvector of $B$ with the same eigenvalue: $B(P^{-1}\mathbf{v}) = P^{-1}AP(P^{-1}\mathbf{v}) = P^{-1}A\mathbf{v} = P^{-1}\lambda\mathbf{v} = \lambda(P^{-1}\mathbf{v})$.

The eigenvalues stay the same; the eigenvectors transform by $P^{-1}$. This is consistent with the interpretation that eigenvalues are properties of the transformation, not of the matrix. Changing the basis changes the matrix and the eigenvector coordinates, but the eigenvalues, the intrinsic scaling factors, are invariant.

The trace and determinant are derivable from the eigenvalues, so their invariance under similarity is a corollary of eigenvalue invariance. Rank is also a similarity invariant, though it is not determined by the eigenvalues alone: the $2 \times 2$ zero matrix and $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ share the eigenvalues $0, 0$ but have ranks $0$ and $1$.
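A closing sketch of the invariance itself; $A$ and $P$ are arbitrary, with $P$ chosen invertible:

```python
import numpy as np

# Similarity transform: B = P^{-1} A P must share A's spectrum.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 2 and 5
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # any invertible matrix works

B = np.linalg.inv(P) @ A @ P
print(np.sort(np.linalg.eigvals(A)))  # [2. 5.]
print(np.sort(np.linalg.eigvals(B)))  # [2. 5.]: identical spectrum
```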