Eigenvalues interact in predictable ways with the trace, determinant, transpose, inverse, powers, and special matrix structures. These relationships provide shortcuts for computing eigenvalues, constraints on what eigenvalues are possible for a given matrix type, and structural connections between the eigenvalue spectrum and the algebraic properties of the matrix.
Trace and Eigenvalues
The trace of A equals the sum of its eigenvalues, counted with algebraic multiplicity:
tr(A)=λ1+λ2+⋯+λn
This follows from the characteristic polynomial. The coefficient of λn−1 in p(λ)=det(A−λI) is (−1)n−1tr(A), the leading coefficient is (−1)n, and by Vieta's formulas the sum of the roots equals the negative of the (n−1)-th coefficient divided by the leading coefficient, which works out to exactly tr(A).
The trace provides a quick consistency check. A 3×3 matrix with diagonal entries 7,−2,4 has trace 9. If the eigenvalues are computed as 5,3,1, the sum is 9 — consistent. If the sum does not match the trace, a computation error has occurred.
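As a quick numerical sanity check, the sketch below (assuming NumPy; the off-diagonal entries are arbitrary illustrative values) compares the trace with the sum of the computed eigenvalues:

    import numpy as np

    # Illustrative 3x3 matrix with diagonal entries 7, -2, 4, so tr(A) = 9.
    # (The off-diagonal entries are arbitrary.)
    A = np.array([[7.0, 2.0, 1.0],
                  [0.0, -2.0, 3.0],
                  [1.0, 1.0, 4.0]])

    eigenvalues = np.linalg.eigvals(A)

    # The sum of the eigenvalues matches the trace (up to rounding).
    print(np.trace(A))                                      # 9.0
    print(np.isclose(np.trace(A), eigenvalues.sum().real))  # True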
Determinant and Eigenvalues
The determinant of A equals the product of its eigenvalues:
det(A)=λ1⋅λ2⋯λn
This follows from evaluating the characteristic polynomial at λ=0: p(0)=det(A−0⋅I)=det(A). Since the roots of p are λ1,…,λn, the polynomial factors as p(λ)=(λ1−λ)(λ2−λ)⋯(λn−λ), and setting λ=0 gives p(0)=λ1λ2⋯λn.
The most immediate consequence is that A is invertible if and only if no eigenvalue is zero. A single vanishing eigenvalue makes the product zero, collapsing the determinant and rendering A singular. Conversely, if every eigenvalue is nonzero, then det(A)≠0 and A is invertible.
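A small NumPy sketch, with an arbitrary illustrative matrix, checks both the product identity and the invertibility criterion:

    import numpy as np

    # Arbitrary illustrative 2x2 matrix.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    eigenvalues = np.linalg.eigvals(A)

    # det(A) equals the product of the eigenvalues.
    print(np.isclose(np.linalg.det(A), np.prod(eigenvalues).real))  # True

    # Invertibility: A is singular exactly when some eigenvalue is zero.
    print(np.all(np.abs(eigenvalues) > 1e-12))  # True, so A is invertible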
Algebraic and Geometric Multiplicity
Every eigenvalue λ has two multiplicity measures. The algebraic multiplicity ma(λ) is the number of times λ appears as a root of the characteristic polynomial. The geometric multiplicity mg(λ) is the dimension of the eigenspace Eλ=Null(A−λI).
These two numbers always satisfy
1≤mg(λ)≤ma(λ)
The geometric multiplicity is at least 1 because the eigenspace must contain at least one nonzero eigenvector. The upper bound mg(λ)≤ma(λ) takes more work to prove; a standard argument extends a basis of the eigenspace Eλ to a basis of the whole space and reads the multiplicity off the resulting block-triangular form of A.
When mg=ma for every eigenvalue, the matrix is diagonalizable — there are enough independent eigenvectors to form a basis. When mg<ma for any eigenvalue, the matrix is defective and cannot be diagonalized.
For the upper triangular matrix A = [2 1; 0 2], the eigenvalue λ=2 has ma=2 but mg=1 (the eigenspace is one-dimensional, spanned by (1,0)). This matrix is not diagonalizable.
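The sketch below, assuming NumPy, reproduces this example numerically: the algebraic multiplicity shows up as a repeated eigenvalue, and the geometric multiplicity is computed as n minus the rank of A−2I:

    import numpy as np

    # The defective matrix from the text: a single eigenvalue 2 with ma = 2.
    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])

    print(np.linalg.eigvals(A))  # [2. 2.] (algebraic multiplicity 2)

    # Geometric multiplicity: dim Null(A - 2I) = n - rank(A - 2I).
    mg = A.shape[0] - np.linalg.matrix_rank(A - 2 * np.eye(2))
    print(mg)  # 1, so mg < ma and A is not diagonalizable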
Eigenvalues of the Inverse
If λ is an eigenvalue of an invertible matrix A with eigenvector v, then 1/λ is an eigenvalue of A−1 with the same eigenvector.
The proof is one line: Av=λv implies v=λA−1v, so A−1v=(1/λ)v.
The eigenvalues of A−1 are the reciprocals of the eigenvalues of A, and the eigenvectors are unchanged. Dividing by λ requires λ≠0, which is guaranteed by the invertibility of A.
If A has eigenvalues 2,−3,5, then A−1 has eigenvalues 1/2,−1/3,1/5. The trace of A−1 is 1/2−1/3+1/5=11/30, and det(A−1)=1/(2⋅(−3)⋅5)=−1/30.
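The following sketch (assuming NumPy; the similarity matrix P is an arbitrary random choice) builds a matrix with eigenvalues 2, −3, 5 and checks the reciprocal spectrum of its inverse:

    import numpy as np

    rng = np.random.default_rng(0)

    # Build a matrix with known eigenvalues 2, -3, 5 via an (arbitrary)
    # similarity transform; a random P is invertible with probability 1.
    P = rng.standard_normal((3, 3))
    A = P @ np.diag([2.0, -3.0, 5.0]) @ np.linalg.inv(P)
    A_inv = np.linalg.inv(A)

    # The inverse has the reciprocal eigenvalues 1/2, -1/3, 1/5.
    print(np.sort(np.linalg.eigvals(A_inv).real))  # [-0.3333  0.2  0.5]
    print(np.trace(A_inv))                         # 0.3666... = 11/30
    print(np.linalg.det(A_inv))                    # -0.0333... = -1/30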
Eigenvalues of Powers and Polynomials
If Av=λv, then Akv=λkv for every positive integer k. The proof is induction: Ak+1v=A(Akv)=A(λkv)=λkAv=λk+1v.
The eigenvectors are preserved; only the eigenvalues change by raising to the k-th power.
More generally, if q(λ)=c0+c1λ+⋯+cmλm is any polynomial, then q(A)=c0I+c1A+⋯+cmAm has eigenvalues q(λi) with the same eigenvectors.
If A has eigenvalue 3, then 2A2−A+4I has eigenvalue 2(9)−3+4=19 for the same eigenvector.
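A short NumPy sketch, using an illustrative triangular matrix chosen to have eigenvalue 3, checks this computation:

    import numpy as np

    # Illustrative triangular matrix, so the spectrum {3, 1} is visible
    # on the diagonal.
    A = np.array([[3.0, 1.0],
                  [0.0, 1.0]])

    # q(A) = 2A^2 - A + 4I has eigenvalues q(3) = 19 and q(1) = 5.
    qA = 2 * (A @ A) - A + 4 * np.eye(2)
    print(np.linalg.eigvals(qA))  # [19.  5.]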
Eigenvalue Shifting
Adding a scalar multiple of the identity to A shifts every eigenvalue by that scalar while leaving the eigenvectors unchanged.
If Av=λv, then (A+cI)v=Av+cv=(λ+c)v.
The eigenvalues of A+cI are λ1+c,λ2+c,…,λn+c. Scaling works similarly: cA has eigenvalues cλ1,cλ2,…,cλn with the same eigenvectors.
These operations are useful in practice. Adding cI can shift all eigenvalues to be positive (making a matrix positive definite for numerical purposes), or shift a known eigenvalue to zero (making A−λ0I singular, which is exactly how the eigenvalue equation is set up).
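A minimal sketch of the shift, assuming NumPy and an arbitrary symmetric example; adding 5I moves both eigenvalues up by 5 and, here, makes them both positive:

    import numpy as np

    # Arbitrary symmetric example with one negative eigenvalue.
    A = np.array([[0.0, 2.0],
                  [2.0, 1.0]])
    c = 5.0

    # Every eigenvalue moves by exactly c; the eigenvectors are unchanged.
    print(np.sort(np.linalg.eigvals(A)))                  # [-1.5616  2.5616]
    print(np.sort(np.linalg.eigvals(A + c * np.eye(2))))  # [ 3.4384  7.5616]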
Eigenvalues of the Transpose
A matrix A and its transpose AT have the same eigenvalues, because the characteristic polynomials are identical: det(AT−λI)=det((A−λI)T)=det(A−λI), using the fact that a matrix and its transpose have the same determinant.
The eigenvectors are generally different. If v is a right eigenvector of A (Av=λv), the corresponding left eigenvector w satisfies wTA=λwT, which is the same as ATw=λw. So the left eigenvectors of A are the (right) eigenvectors of AT. The eigenvalues match, but the directions are different.
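The sketch below (assuming NumPy, with an arbitrary non-symmetric example) confirms that the spectra of A and AT agree while the eigenvector matrices differ:

    import numpy as np

    # Arbitrary non-symmetric example.
    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])

    # Same characteristic polynomial, so the spectra agree.
    print(np.sort(np.linalg.eigvals(A)))    # [1. 3.]
    print(np.sort(np.linalg.eigvals(A.T)))  # [1. 3.]

    # The eigenvectors generally differ: the columns of W are eigenvectors
    # of A.T, i.e. left eigenvectors of A.
    _, V = np.linalg.eig(A)
    _, W = np.linalg.eig(A.T)
    print(V)  # right eigenvectors of A
    print(W)  # left eigenvectors of A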
Eigenvalues of Special Matrix Types
The structure of a matrix constrains which eigenvalues are possible.
Diagonal and triangular matrices have their eigenvalues on the diagonal — immediately visible.
Real symmetric matrices have all real eigenvalues, and eigenvectors for distinct eigenvalues are orthogonal. These facts are the starting point of the spectral theorem, which says every real symmetric matrix is orthogonally diagonalizable.
Real skew-symmetric matrices have eigenvalues that are zero or purely imaginary — they come in conjugate pairs ±bi.
Orthogonal matrices have eigenvalues on the unit circle: ∣λ∣=1. For real orthogonal matrices, real eigenvalues are restricted to ±1, and complex eigenvalues come in conjugate pairs of modulus 1.
Idempotent matrices (A2=A) have eigenvalues satisfying λ2=λ, so λ=0 or λ=1.
Nilpotent matrices (Ak=0) have all eigenvalues equal to zero.
Involutory matrices (A2=I) have eigenvalues satisfying λ2=1, so λ=±1.
Positive definite symmetric matrices have all eigenvalues strictly positive.
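Several of these constraints can be spot-checked numerically. The sketch below, assuming NumPy and building random matrices of each type from a single random seed, verifies the symmetric, skew-symmetric, orthogonal, and nilpotent cases:

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.standard_normal((3, 3))  # random raw material

    # Symmetric (S = S.T): all eigenvalues real.
    S = G + G.T
    print(np.allclose(np.linalg.eigvals(S).imag, 0))  # True

    # Skew-symmetric (K = -K.T): eigenvalues zero or purely imaginary.
    K = G - G.T
    print(np.allclose(np.linalg.eigvals(K).real, 0))  # True

    # Orthogonal (from a QR factorization): every |lambda| = 1.
    Q, _ = np.linalg.qr(G)
    print(np.allclose(np.abs(np.linalg.eigvals(Q)), 1))  # True

    # Nilpotent (strictly upper triangular): all eigenvalues zero.
    N = np.triu(G, k=1)
    print(np.allclose(np.linalg.eigvals(N), 0))  # True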
Independence of Eigenvectors
Eigenvectors corresponding to distinct eigenvalues are always linearly independent.
The proof proceeds by induction on the number of eigenvectors. A single eigenvector is nonzero and therefore independent. Suppose any k−1 eigenvectors with distinct eigenvalues are independent, and let v1,…,vk be eigenvectors with distinct eigenvalues λ1,…,λk. If c1v1+⋯+ckvk=0, multiply both sides by A to get c1λ1v1+⋯+ckλkvk=0. Subtract λk times the original equation: c1(λ1−λk)v1+⋯+ck−1(λk−1−λk)vk−1=0. By the induction hypothesis, every coefficient ci(λi−λk)=0. Since the eigenvalues are distinct, λi−λk≠0, forcing ci=0 for all i<k. The original equation then reduces to ckvk=0, so ck=0 as well.
The immediate consequence: a matrix with n distinct eigenvalues has n independent eigenvectors and is automatically diagonalizable. Distinctness of eigenvalues is a sufficient condition for diagonalizability, though not a necessary one.
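A NumPy sketch with an illustrative triangular matrix whose three eigenvalues are distinct: the eigenvector matrix has full rank, and conjugating by it diagonalizes A:

    import numpy as np

    # Illustrative triangular matrix with three distinct eigenvalues.
    A = np.array([[1.0, 5.0, 0.0],
                  [0.0, 2.0, 3.0],
                  [0.0, 0.0, 4.0]])

    eigenvalues, V = np.linalg.eig(A)
    print(eigenvalues)  # [1. 2. 4.], all distinct

    # Distinct eigenvalues force independent eigenvectors: V has full
    # rank, so V^{-1} A V diagonalizes A.
    print(np.linalg.matrix_rank(V))                # 3
    print(np.round(np.linalg.inv(V) @ A @ V, 10))  # diag(1, 2, 4)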
Similar Matrices and Spectral Invariants
Similar matrices share every spectral property: eigenvalues, algebraic multiplicities, geometric multiplicities, and the characteristic polynomial are all identical.
If B=P−1AP and v is an eigenvector of A with eigenvalue λ, then P−1v is an eigenvector of B with the same eigenvalue: B(P−1v)=P−1AP(P−1v)=P−1Av=P−1λv=λ(P−1v).
The eigenvalues stay the same; the eigenvectors transform by P−1. This is consistent with the interpretation that eigenvalues are properties of the transformation, not of the matrix. Changing the basis changes the matrix and the eigenvector coordinates, but the eigenvalues — the intrinsic scaling factors — are invariant.
The trace and determinant are derivable from the eigenvalues, so their invariance under similarity is a corollary of eigenvalue invariance. Rank is also preserved by similarity, though it is not determined by the eigenvalues alone: a nonzero nilpotent matrix and the zero matrix share the spectrum {0} but have different ranks.
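To close, a NumPy sketch (with an arbitrary illustrative A and a random P) checks that a similarity transform preserves the eigenvalues, trace, and determinant:

    import numpy as np

    rng = np.random.default_rng(2)

    # Arbitrary illustrative matrix and an arbitrary change of basis.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    P = rng.standard_normal((2, 2))  # invertible with probability 1
    B = np.linalg.inv(P) @ A @ P     # B is similar to A

    # Similar matrices share eigenvalues, trace, and determinant.
    print(np.sort(np.linalg.eigvals(A)))       # matches the line below
    print(np.sort(np.linalg.eigvals(B).real))  # same spectrum, up to rounding
    print(np.trace(A), np.trace(B))
    print(np.linalg.det(A), np.linalg.det(B))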