Beyond characterizing invertibility, determinants provide explicit closed-form tools for solving systems, computing inverses, and testing function independence. Each formula trades computational efficiency for structural transparency — the expressions are exact, symbolic, and reveal how solutions depend on the entries of the matrix.
Cramer's Rule
Given a linear system Ax = b, where A is n×n with det(A) ≠ 0, Cramer's rule expresses each component of the solution directly as a ratio of determinants:
x_i = \frac{\det(A_i)}{\det(A)}
where A_i is the matrix formed by replacing column i of A with the right-hand side vector b. Every other column stays in place.
2×2 Example
For the system
\begin{pmatrix} 3 & 2 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 8 \\ 7 \end{pmatrix}
the coefficient determinant is det(A)=3⋅5−2⋅1=13. Replacing column 1 with b:
\det(A_1) = \det\begin{pmatrix} 8 & 2 \\ 7 & 5 \end{pmatrix} = 40 - 14 = 26
Replacing column 2 with b:
\det(A_2) = \det\begin{pmatrix} 3 & 8 \\ 1 & 7 \end{pmatrix} = 21 - 8 = 13
So x_1 = 26/13 = 2 and x_2 = 13/13 = 1.
3×3 Example
\begin{pmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \\ 2 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 5 \\ 0 \\ 3 \end{pmatrix}
The coefficient determinant is det(A) = 1(0−1) − 0 + 2(−1−6) = −1 − 14 = −15. The three modified determinants, each obtained by replacing one column with b and expanding, are:
\det(A_1) = \det\begin{pmatrix} 5 & 0 & 2 \\ 0 & 3 & 1 \\ 3 & 1 & 0 \end{pmatrix} = -23, \qquad \det(A_2) = \det\begin{pmatrix} 1 & 5 & 2 \\ -1 & 0 & 1 \\ 2 & 3 & 0 \end{pmatrix} = 1, \qquad \det(A_3) = \det\begin{pmatrix} 1 & 0 & 5 \\ -1 & 3 & 0 \\ 2 & 1 & 3 \end{pmatrix} = -26
The solution is x_1 = −23/(−15) = 23/15, x_2 = 1/(−15) = −1/15, and x_3 = −26/(−15) = 26/15.
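As a sanity check on the arithmetic, the rule is easy to implement directly. Below is a minimal Python/NumPy sketch (not part of the original example; the helper name cramer_solve is just illustrative), applied to the 3×3 system above and compared against a standard solver:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[1, 0, 2], [-1, 3, 1], [2, 1, 0]]
b = [5, 0, 3]
print(cramer_solve(A, b))                 # [23/15, -1/15, 26/15] as decimals
print(np.linalg.solve(A, b))              # same result via Gaussian elimination
```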
Theoretical Significance
Cramer's rule proves that each solution component is a rational function of the matrix entries and the right-hand side entries. This has consequences in pure algebra and in sensitivity analysis, where it shows how solutions respond to perturbations in the data. As a computational method, however, it requires n+1 determinant evaluations, making it far more expensive than Gaussian elimination for large systems.
The Inverse via the Adjugate
The adjugate identity A⋅adj(A) = det(A)⋅I immediately gives an explicit formula for the inverse when det(A) ≠ 0:
A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)
Every entry of A^{-1} is a cofactor of A divided by det(A).
The 2×2 Case
For A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the cofactor matrix is \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}, and transposing gives adj(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. The inverse is
A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}
This is the familiar swap-the-diagonal, negate-the-off-diagonal formula that appears in every introductory linear algebra course.
3×3 Worked Example
For A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 2 & 0 & 1 \end{pmatrix}, first compute det(A) by expanding along the first row:
\det(A) = 1(1 \cdot 1 - 3 \cdot 0) - 2(0 \cdot 1 - 3 \cdot 2) + 0 = 1 + 12 = 13
The adjugate is the transpose of the cofactor matrix:
\operatorname{adj}(A) = \begin{pmatrix} 1 & -2 & 6 \\ 6 & 1 & -3 \\ -2 & 4 & 1 \end{pmatrix}
So A^{-1} = \frac{1}{13} \begin{pmatrix} 1 & -2 & 6 \\ 6 & 1 & -3 \\ -2 & 4 & 1 \end{pmatrix}.
Verification: A⋅A^{-1} should produce the identity. The (1,1) entry is (1/13)(1⋅1 + 2⋅6 + 0⋅(−2)) = 13/13 = 1. The (1,2) entry is (1/13)(1⋅(−2) + 2⋅1 + 0⋅4) = 0/13 = 0. The remaining entries check out similarly.
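A short SymPy sketch (not part of the original example) reproduces the computation symbolically; SymPy's Matrix.adjugate returns the transpose of the cofactor matrix, so the adjugate formula can be checked directly:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0], [0, 1, 3], [2, 0, 1]])

d = A.det()                  # 13
adjA = A.adjugate()          # transpose of the cofactor matrix
A_inv = adjA / d

print(adjA)                  # Matrix([[1, -2, 6], [6, 1, -3], [-2, 4, 1]])
print(A * A_inv)             # identity matrix
print(A_inv == A.inv())      # True
```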
Practical Assessment
The adjugate formula writes every entry of the inverse as an explicit ratio of cofactors and the determinant. This is valuable for symbolic work — it shows exactly how each entry of A−1 depends on the entries of A. For numerical computation on matrices larger than 3×3, row reduction is vastly more efficient.
The Cross Product as a Determinant
The cross product of two vectors a = (a_1, a_2, a_3) and b = (b_1, b_2, b_3) in R^3 can be computed as a symbolic 3×3 determinant:
\mathbf{a} \times \mathbf{b} = \det \begin{pmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix}
Expanding along the first row using the cofactor formula:
\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2)\,\hat{i} - (a_1 b_3 - a_3 b_1)\,\hat{j} + (a_1 b_2 - a_2 b_1)\,\hat{k}
Each component of the resulting vector is a 2×2 minor — the sub-determinant obtained by deleting the appropriate row and column from the lower two rows.
This is a formal rather than literal use of the determinant. The first row contains basis vectors, not numbers, so the "determinant" is not a scalar but a vector. The cofactor expansion still applies mechanically, and the alternating signs +,−,+ produce the correct cross product components.
For a cross product whose components square to 4, 256, and 16, the magnitude is |a×b| = √(4 + 256 + 16) = √276 = 2√69. This equals the area of the parallelogram spanned by a and b, connecting the cross product back to the geometric interpretation of the determinant as an area measure.
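The minor-by-minor expansion is straightforward to code. The sketch below (illustrative only; the example vectors are arbitrary, not taken from the text) builds the cross product from the three 2×2 minors and checks it against NumPy's built-in routine:

```python
import numpy as np

def cross_via_minors(a, b):
    """Cross product assembled from the cofactor expansion along the first row."""
    return np.array([
        a[1] * b[2] - a[2] * b[1],      # i-component: minor with row 1, column 1 deleted
        -(a[0] * b[2] - a[2] * b[0]),   # j-component: minus the minor with column 2 deleted
        a[0] * b[1] - a[1] * b[0],      # k-component: minor with column 3 deleted
    ])

a = np.array([1.0, 2.0, 3.0])            # arbitrary illustrative vectors
b = np.array([4.0, 5.0, 6.0])

print(cross_via_minors(a, b))            # [-3.  6. -3.]
print(np.cross(a, b))                    # same
print(np.linalg.norm(np.cross(a, b)))    # area of the parallelogram spanned by a and b
```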
The Characteristic Polynomial
For an n×n matrix A, the characteristic polynomial is defined as
p(λ)=det(A−λI)
This is a polynomial of degree n in the variable λ. Its roots are the eigenvalues of A — the scalars λ for which the matrix A−λI becomes singular.
2×2 Example
For A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}:
A − λI = \begin{pmatrix} 4-\lambda & 1 \\ 2 & 3-\lambda \end{pmatrix}
p(λ) = (4−λ)(3−λ) − 2 = λ² − 7λ + 10 = (λ−2)(λ−5)
The eigenvalues are λ=2 and λ=5.
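The same computation can be checked symbolically. A small SymPy sketch (not part of the original text) forms det(A − λI) and factors it:

```python
from sympy import Matrix, symbols, eye

lam = symbols('lambda')
A = Matrix([[4, 1], [2, 3]])

p = (A - lam * eye(2)).det()   # characteristic polynomial det(A - lambda*I)
print(p.expand())              # lambda**2 - 7*lambda + 10
print(p.factor())              # (lambda - 2)*(lambda - 5)
print(A.eigenvals())           # {2: 1, 5: 1}
```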
3×3 Example
For A = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 1 \end{pmatrix}:
This is upper triangular, so A−λI is also upper triangular with diagonal entries 2−λ, 3−λ, 1−λ. The determinant of a triangular matrix is the product of its diagonal entries:
p(λ)=(2−λ)(3−λ)(1−λ)
The eigenvalues are λ=1,2,3 — they sit directly on the diagonal, which is always the case for triangular matrices.
Two Identities
Setting λ = 0 in the characteristic polynomial gives p(0) = det(A), which means the constant term of the characteristic polynomial is the determinant. Since the roots of p are the eigenvalues λ_1, …, λ_n, this yields
\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n
The determinant equals the product of all eigenvalues, counted with algebraic multiplicity. A second identity connects the coefficient of λ^{n−1} to the trace:
\lambda_1 + \lambda_2 + \cdots + \lambda_n = \operatorname{tr}(A)
Together, these two identities link the determinant and trace to the eigenvalue spectrum of the matrix.
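Both identities are easy to confirm numerically. A minimal NumPy sketch (illustrative, using a random matrix) compares the product and sum of the eigenvalues with the determinant and trace:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

eigvals = np.linalg.eigvals(A)            # complex in general, conjugate pairs for real A

print(np.prod(eigvals).real, np.linalg.det(A))   # product of eigenvalues vs determinant
print(np.sum(eigvals).real, np.trace(A))         # sum of eigenvalues vs trace
```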
The Wronskian
The Wronskian extends the determinant's role as a linear independence test from vectors to functions. Given n functions f_1, f_2, …, f_n, each differentiable at least n−1 times, the Wronskian is
W(f_1, \dots, f_n)(x) = \det \begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}
Each column corresponds to one function, and each row raises the order of differentiation by one. The result is a function of x, not a constant.
The Independence Test
If W(f_1, …, f_n)(x_0) ≠ 0 at some point x_0, then the functions f_1, …, f_n are linearly independent on any interval containing x_0. The logic mirrors the matrix case: a nonzero determinant means the "columns", here the function-derivative profiles, cannot satisfy a nontrivial linear relation.
The converse requires care. A Wronskian that vanishes everywhere does not automatically imply dependence unless the functions are known to be solutions of a single linear ordinary differential equation. Without that structural assumption, counterexamples exist.
Worked Example
Take f_1 = e^x, f_2 = e^{2x}, f_3 = e^{3x}. The Wronskian matrix is
\begin{pmatrix} e^x & e^{2x} & e^{3x} \\ e^x & 2e^{2x} & 3e^{3x} \\ e^x & 4e^{2x} & 9e^{3x} \end{pmatrix}
Factoring e^x from column 1, e^{2x} from column 2, and e^{3x} from column 3:
W = e^{6x} \det \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{pmatrix}
The remaining matrix is the transpose of a Vandermonde matrix with nodes 1, 2, 3, and transposition does not change the determinant: (2−1)(3−1)(3−2) = 1⋅2⋅1 = 2. So W = 2e^{6x}, which is nonzero for all x, confirming that e^x, e^{2x}, e^{3x} are linearly independent.
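A short SymPy sketch (not from the original text) reproduces this Wronskian; building the matrix explicitly mirrors the column-per-function, row-per-derivative layout, and SymPy's wronskian helper gives the same result:

```python
import sympy as sp

x = sp.symbols('x')
fs = [sp.exp(x), sp.exp(2 * x), sp.exp(3 * x)]

# One column per function, one row per derivative order (0, 1, 2)
W = sp.Matrix([
    fs,
    [sp.diff(f, x) for f in fs],
    [sp.diff(f, x, 2) for f in fs],
])

print(sp.simplify(W.det()))              # 2*exp(6*x)
print(sp.simplify(sp.wronskian(fs, x)))  # same, via the built-in helper
```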
Context
The Wronskian arises most naturally in the theory of linear ordinary differential equations, where it determines whether a proposed set of solutions forms a fundamental system. Abel's identity gives a differential equation for the Wronskian itself, relating its evolution to the coefficient in the ODE. These developments belong to differential equations rather than linear algebra, but the underlying mechanism — testing independence via a determinant — is purely algebraic.
Vandermonde and Structured Determinants
Certain matrices with patterned entries have determinants that admit elegant closed-form expressions. The most important of these is the Vandermonde matrix.
The Vandermonde Determinant
An n×n Vandermonde matrix is built from n distinct nodes x_1, x_2, …, x_n:
V = \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix}, \qquad \det(V) = \prod_{1 \le i < j \le n} (x_j - x_i)
The product runs over all pairs with j > i, so it contains n(n−1)/2 factors. Each factor is a difference between two nodes.
3×3 Verification
For nodes x_1 = 1, x_2 = 2, x_3 = 4:
V = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 4 & 16 \end{pmatrix}
Direct expansion: det(V)=1(32−16)−1(16−4)+1(4−2)=16−12+2=6.
The product formula: (x_2 − x_1)(x_3 − x_1)(x_3 − x_2) = (2−1)(4−1)(4−2) = 1⋅3⋅2 = 6.
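The check generalizes to any node set. A minimal NumPy sketch (illustrative only) builds the Vandermonde matrix with np.vander and compares the direct determinant with the pairwise-difference product:

```python
import numpy as np
from itertools import combinations

nodes = [1.0, 2.0, 4.0]
n = len(nodes)

V = np.vander(nodes, increasing=True)     # rows (1, x_i, x_i^2, ..., x_i^(n-1))

det_direct = np.linalg.det(V)
det_product = np.prod([nodes[j] - nodes[i] for i, j in combinations(range(n), 2)])

print(det_direct, det_product)            # both 6 (up to rounding)
```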
Why It Matters
The Vandermonde determinant is nonzero precisely when all nodes are distinct. This guarantees that a polynomial of degree at most n−1 is uniquely determined by its values at n distinct points — the theoretical foundation of polynomial interpolation. It also appears in the theory of symmetric polynomials and in the derivation of various discrete orthogonality relations.
Other Structured Determinants
Several other matrix families have known determinant formulas. Circulant matrices, built from cyclic shifts of a single row, have determinants expressible through the discrete Fourier transform: if the first row is (c_0, c_1, …, c_{n−1}), then det(C) = ∏_{k=0}^{n−1} p(ω^k), where p(x) = c_0 + c_1 x + ⋯ + c_{n−1} x^{n−1} and ω = e^{2πi/n} is a primitive n-th root of unity.
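As an illustration of the circulant formula (a sketch with arbitrary entries, not taken from the text), the determinant computed directly matches the product of the polynomial evaluated at the n-th roots of unity:

```python
import numpy as np

c = np.array([2.0, 3.0, 5.0, 7.0])        # first row (c0, c1, ..., c_{n-1})
n = len(c)

# Row-shift circulant: C[i, j] = c[(j - i) mod n]
C = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

omega = np.exp(2j * np.pi / n)
p = lambda z: sum(ck * z**k for k, ck in enumerate(c))
det_via_dft = np.prod([p(omega**k) for k in range(n)])

print(np.linalg.det(C), det_via_dft.real)  # equal up to rounding
```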
Hilbert matrices, with entries H_{ij} = 1/(i + j − 1), have a closed-form determinant involving products of factorials. These matrices are notoriously ill-conditioned: their determinants shrink rapidly as n grows, reflecting extreme sensitivity to perturbation.
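The shrinkage is visible even at modest sizes. A brief sketch using SciPy's hilbert constructor (illustrative only) prints how quickly the determinant collapses toward zero:

```python
import numpy as np
from scipy.linalg import hilbert

for n in (3, 5, 8, 10):
    # Determinants fall off extremely fast as n grows
    print(n, np.linalg.det(hilbert(n)))
```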
Tridiagonal matrices, with nonzero entries only on the main diagonal and the two adjacent diagonals, have determinants satisfying a three-term recurrence: if D_n denotes the determinant of the n×n tridiagonal matrix, then D_n = a_n D_{n−1} − b_n c_n D_{n−2}, where a_n is the n-th diagonal entry and b_n, c_n are the adjacent off-diagonal entries. This recurrence allows O(n) computation, much faster than general methods.
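A direct implementation of the recurrence (a sketch; the function and variable names are illustrative) confirms the O(n) computation against a general determinant routine:

```python
import numpy as np

def tridiag_det(diag, lower, upper):
    """Determinant via D_n = a_n * D_{n-1} - b_n * c_n * D_{n-2}.

    diag has length n; lower[k] = A[k+1, k] and upper[k] = A[k, k+1] have length n-1.
    """
    d_prev, d_curr = 1.0, diag[0]              # D_0 = 1, D_1 = a_1
    for k in range(1, len(diag)):
        d_prev, d_curr = d_curr, diag[k] * d_curr - lower[k - 1] * upper[k - 1] * d_prev
    return d_curr

diag = [2.0, 2.0, 2.0, 2.0]
off = [-1.0, -1.0, -1.0]

A = np.diag(diag) + np.diag(off, -1) + np.diag(off, 1)
print(tridiag_det(diag, off, off), np.linalg.det(A))   # both 5 (up to rounding)
```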
Each of these families illustrates the same principle: when a matrix has special structure, its determinant often has a formula that exploits that structure directly, bypassing both cofactor expansion and row reduction.