

Applications of Determinants






Solving Problems with Determinants

Beyond characterizing invertibility, determinants provide explicit closed-form tools for solving systems, computing inverses, and testing function independence. Each formula trades computational efficiency for structural transparency — the expressions are exact, symbolic, and reveal how solutions depend on the entries of the matrix.



Cramer's Rule

Given a linear system $Ax = \mathbf{b}$ where $A$ is $n \times n$ with $\det(A) \neq 0$, Cramer's rule expresses each component of the solution directly as a ratio of determinants:

$$x_i = \frac{\det(A_i)}{\det(A)}$$

where $A_i$ is the matrix formed by replacing column $i$ of $A$ with the right-hand side vector $\mathbf{b}$. Every other column stays in place.

2×2 Example


For the system

$$\begin{pmatrix} 3 & 2 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 8 \\ 7 \end{pmatrix}$$

the coefficient determinant is $\det(A) = 3 \cdot 5 - 2 \cdot 1 = 13$. Replacing column $1$ with $\mathbf{b}$:

$$\det(A_1) = \det\begin{pmatrix} 8 & 2 \\ 7 & 5 \end{pmatrix} = 40 - 14 = 26$$

Replacing column $2$ with $\mathbf{b}$:

$$\det(A_2) = \det\begin{pmatrix} 3 & 8 \\ 1 & 7 \end{pmatrix} = 21 - 8 = 13$$

So $x_1 = 26/13 = 2$ and $x_2 = 13/13 = 1$.

3×3 Example


$$\begin{pmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \\ 2 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 5 \\ 0 \\ 3 \end{pmatrix}$$

The coefficient determinant is $\det(A) = 1(0 - 1) - 0 + 2(-1 - 6) = -1 - 14 = -15$. The three modified determinants are:

$$\det(A_1) = \det\begin{pmatrix} 5 & 0 & 2 \\ 0 & 3 & 1 \\ 3 & 1 & 0 \end{pmatrix} = 5(0 - 1) - 0 + 2(0 - 9) = -5 - 18 = -23$$

$$\det(A_2) = \det\begin{pmatrix} 1 & 5 & 2 \\ -1 & 0 & 1 \\ 2 & 3 & 0 \end{pmatrix} = 1(0 - 3) - 5(0 - 2) + 2(-3 - 0) = -3 + 10 - 6 = 1$$

$$\det(A_3) = \det\begin{pmatrix} 1 & 0 & 5 \\ -1 & 3 & 0 \\ 2 & 1 & 3 \end{pmatrix} = 1(9 - 0) - 0 + 5(-1 - 6) = 9 - 35 = -26$$

The solution is $x_1 = -23/(-15) = 23/15$, $x_2 = 1/(-15) = -1/15$, $x_3 = -26/(-15) = 26/15$.
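The column-replacement procedure is easy to mechanize. Below is a minimal NumPy sketch (the helper name cramer_solve and the zero-tolerance check are illustrative choices, not part of the text above) that swaps in the right-hand side one column at a time and takes the ratio of determinants, reproducing the 3×3 solution.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) is numerically zero; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[1, 0, 2], [-1, 3, 1], [2, 1, 0]]
b = [5, 0, 3]
print(cramer_solve(A, b))                 # [23/15, -1/15, 26/15] ~ [1.533, -0.067, 1.733]
```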

Theoretical Significance


Cramer's rule proves that each solution component is a rational function of the matrix entries and the right-hand side entries. This has consequences in pure algebra and in sensitivity analysis, where it shows how solutions respond to perturbations in the data. As a computational method, however, it requires $n + 1$ determinant evaluations, making it far more expensive than Gaussian elimination for large systems.

The Inverse via the Adjugate

The adjugate identity $A \cdot \operatorname{adj}(A) = \det(A) \cdot I$ immediately gives an explicit formula for the inverse when $\det(A) \neq 0$:

$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$

Every entry of $A^{-1}$ is expressed as a cofactor of $A$ divided by $\det(A)$.

The 2×2 Case


For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the cofactor matrix is $\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$, and transposing gives $\operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$. The inverse is

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$


This is the familiar swap-the-diagonal, negate-the-off-diagonal formula that appears in every introductory linear algebra course.

3×3 Worked Example


For $A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 2 & 0 & 1 \end{pmatrix}$, first compute $\det(A)$ by expanding along the first row:

$$\det(A) = 1(1 - 0) - 2(0 - 6) + 0 = 1 + 12 = 13$$

The nine cofactors are:

$$C_{11} = +(1 \cdot 1 - 3 \cdot 0) = 1, \quad C_{12} = -(0 \cdot 1 - 3 \cdot 2) = 6, \quad C_{13} = +(0 \cdot 0 - 1 \cdot 2) = -2$$

$$C_{21} = -(2 \cdot 1 - 0 \cdot 0) = -2, \quad C_{22} = +(1 \cdot 1 - 0 \cdot 2) = 1, \quad C_{23} = -(1 \cdot 0 - 2 \cdot 2) = 4$$

$$C_{31} = +(2 \cdot 3 - 0 \cdot 1) = 6, \quad C_{32} = -(1 \cdot 3 - 0 \cdot 0) = -3, \quad C_{33} = +(1 \cdot 1 - 2 \cdot 0) = 1$$

The adjugate is the transpose of the cofactor matrix:

$$\operatorname{adj}(A) = \begin{pmatrix} 1 & -2 & 6 \\ 6 & 1 & -3 \\ -2 & 4 & 1 \end{pmatrix}$$

So $A^{-1} = \frac{1}{13} \begin{pmatrix} 1 & -2 & 6 \\ 6 & 1 & -3 \\ -2 & 4 & 1 \end{pmatrix}$.

Verification: $A \cdot A^{-1}$ should produce the identity. The $(1,1)$ entry is $\frac{1}{13}(1 \cdot 1 + 2 \cdot 6 + 0 \cdot (-2)) = \frac{13}{13} = 1$. The $(1,2)$ entry is $\frac{1}{13}(1 \cdot (-2) + 2 \cdot 1 + 0 \cdot 4) = \frac{0}{13} = 0$. The remaining entries check out similarly.
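The same cofactor bookkeeping translates directly into code. Here is a small NumPy sketch (the function name adjugate_inverse is my own label) that builds the cofactor matrix entry by entry, transposes it, and divides by the determinant; on the example matrix it recovers $\operatorname{adj}(A)$ and satisfies $A \cdot A^{-1} = I$.

```python
import numpy as np

def adjugate_inverse(A):
    """Invert A via the adjugate formula A^{-1} = adj(A) / det(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty_like(A)                          # cofactor matrix
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)                 # adjugate = transpose of the cofactors

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [2.0, 0.0, 1.0]])
A_inv = adjugate_inverse(A)
print(np.round(A_inv * 13))                       # recovers adj(A) from the example
print(np.allclose(A @ A_inv, np.eye(3)))          # True
```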

Practical Assessment


The adjugate formula writes every entry of the inverse as an explicit ratio of cofactors and the determinant. This is valuable for symbolic work, since it shows exactly how each entry of $A^{-1}$ depends on the entries of $A$. For numerical computation on matrices larger than $3 \times 3$, row reduction is vastly more efficient.

The Cross Product as a Determinant

The cross product of two vectors $\mathbf{a} = (a_1, a_2, a_3)$ and $\mathbf{b} = (b_1, b_2, b_3)$ in $\mathbb{R}^3$ can be computed as a symbolic $3 \times 3$ determinant:

$$\mathbf{a} \times \mathbf{b} = \det\begin{pmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix}$$

Expanding along the first row using the cofactor formula:

$$\mathbf{a} \times \mathbf{b} = \hat{\mathbf{i}}(a_2 b_3 - a_3 b_2) - \hat{\mathbf{j}}(a_1 b_3 - a_3 b_1) + \hat{\mathbf{k}}(a_1 b_2 - a_2 b_1)$$

Each component of the resulting vector is a $2 \times 2$ minor: the sub-determinant of the lower two rows obtained by deleting the column of the corresponding unit vector.

This is a formal rather than literal use of the determinant. The first row contains basis vectors, not numbers, so the "determinant" is not a scalar but a vector. The cofactor expansion still applies mechanically, and the alternating signs $+, -, +$ produce the correct cross product components.

Worked Example


For $\mathbf{a} = (2, -1, 3)$ and $\mathbf{b} = (4, 0, -2)$:

$$\mathbf{a} \times \mathbf{b} = \det\begin{pmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ 2 & -1 & 3 \\ 4 & 0 & -2 \end{pmatrix}$$

$$= \hat{\mathbf{i}}((-1)(-2) - (3)(0)) - \hat{\mathbf{j}}((2)(-2) - (3)(4)) + \hat{\mathbf{k}}((2)(0) - (-1)(4))$$

$$= \hat{\mathbf{i}}(2) - \hat{\mathbf{j}}(-16) + \hat{\mathbf{k}}(4) = (2, 16, 4)$$

The magnitude is $|\mathbf{a} \times \mathbf{b}| = \sqrt{4 + 256 + 16} = \sqrt{276} = 2\sqrt{69}$. This equals the area of the parallelogram spanned by $\mathbf{a}$ and $\mathbf{b}$, connecting the cross product back to the geometric interpretation of the determinant as an area measure.
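As a quick check of the expansion, the sketch below (plain Python with NumPy; the helper name cross_from_minors is illustrative) assembles the three $2 \times 2$ minors directly and compares the result against np.cross and the stated magnitude.

```python
import numpy as np

def cross_from_minors(a, b):
    """Cross product assembled from the 2x2 minors of the cofactor expansion."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return np.array([
        a2 * b3 - a3 * b2,        # i-component
        -(a1 * b3 - a3 * b1),     # j-component carries the minus sign
        a1 * b2 - a2 * b1,        # k-component
    ])

a, b = (2, -1, 3), (4, 0, -2)
v = cross_from_minors(a, b)
print(v)                                  # [ 2 16  4]
print(np.allclose(v, np.cross(a, b)))     # True
print(np.linalg.norm(v))                  # 2*sqrt(69), about 16.61
```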

The Characteristic Polynomial

For an $n \times n$ matrix $A$, the characteristic polynomial is defined as

$$p(\lambda) = \det(A - \lambda I)$$

This is a polynomial of degree $n$ in the variable $\lambda$. Its roots are the eigenvalues of $A$: the scalars $\lambda$ for which the matrix $A - \lambda I$ becomes singular.

2×2 Example


For $A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$:

$$A - \lambda I = \begin{pmatrix} 4 - \lambda & 1 \\ 2 & 3 - \lambda \end{pmatrix}$$

$$p(\lambda) = (4 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5)$$

The eigenvalues are $\lambda = 2$ and $\lambda = 5$.

3×3 Example


For $A = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 1 \end{pmatrix}$:

This matrix is upper triangular, so $A - \lambda I$ is also upper triangular with diagonal entries $2 - \lambda$, $3 - \lambda$, $1 - \lambda$. The determinant of a triangular matrix is the product of its diagonal entries:

$$p(\lambda) = (2 - \lambda)(3 - \lambda)(1 - \lambda)$$

The eigenvalues are $\lambda = 1, 2, 3$; they sit directly on the diagonal, which is always the case for triangular matrices.

Two Identities


Setting $\lambda = 0$ in the characteristic polynomial gives $p(0) = \det(A)$, which means the constant term of the characteristic polynomial is the determinant. Since the roots of $p$ are the eigenvalues $\lambda_1, \dots, \lambda_n$, this yields

$$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$$

The determinant equals the product of all eigenvalues, counted with algebraic multiplicity. A second identity connects the coefficient of $\lambda^{n-1}$ to the trace:

$$\lambda_1 + \lambda_2 + \cdots + \lambda_n = \operatorname{tr}(A)$$


Together, these two identities link the determinant and trace to the eigenvalue spectrum of the matrix.
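Both identities are easy to verify numerically. A minimal NumPy check on the $2 \times 2$ example above (the values may carry tiny floating-point error):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigs = np.linalg.eigvals(A)               # eigenvalues 2 and 5, in some order

print(np.prod(eigs), np.linalg.det(A))    # both 10: det equals the product of eigenvalues
print(np.sum(eigs), np.trace(A))          # both 7: trace equals the sum of eigenvalues
```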

The Wronskian

The Wronskian extends the determinant's role as a linear independence test from vectors to functions. Given $n$ functions $f_1, f_2, \dots, f_n$, each differentiable at least $n - 1$ times, the Wronskian is

$$W(f_1, \dots, f_n)(x) = \det\begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}$$

Each column corresponds to one function, and each row raises the order of differentiation by one. The result is a function of $x$, not a constant.

The Independence Test


If $W(f_1, \dots, f_n)(x_0) \neq 0$ at some point $x_0$, then the functions $f_1, \dots, f_n$ are linearly independent on any interval containing $x_0$. The logic mirrors the matrix case: a nonzero determinant means the columns of the Wronskian matrix, here the function-and-derivative profiles, are linearly independent at $x_0$.

The converse requires care. A Wronskian that vanishes everywhere does not automatically imply dependence unless the functions are known to be solutions of a single linear ordinary differential equation. Without that structural assumption, counterexamples exist.

Worked Example


Take $f_1 = e^x$, $f_2 = e^{2x}$, $f_3 = e^{3x}$. The Wronskian matrix is

$$\begin{pmatrix} e^x & e^{2x} & e^{3x} \\ e^x & 2e^{2x} & 3e^{3x} \\ e^x & 4e^{2x} & 9e^{3x} \end{pmatrix}$$

Factoring $e^x$ from column $1$, $e^{2x}$ from column $2$, and $e^{3x}$ from column $3$:

$$W = e^{6x} \det\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{pmatrix}$$

The remaining matrix is the transpose of a Vandermonde matrix with nodes $1, 2, 3$, and transposing does not change the determinant. Its determinant is $(2 - 1)(3 - 1)(3 - 2) = 1 \cdot 2 \cdot 1 = 2$. So $W = 2e^{6x}$, which is nonzero for all $x$, confirming that $e^x, e^{2x}, e^{3x}$ are linearly independent.
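The same computation can be done symbolically. Below is a short SymPy sketch that builds the Wronskian matrix by hand (row $k$ holds the $k$-th derivatives) rather than relying on any built-in helper, and confirms $W = 2e^{6x}$ for these three exponentials.

```python
import sympy as sp

x = sp.symbols('x')
funcs = [sp.exp(x), sp.exp(2 * x), sp.exp(3 * x)]

# Row k holds the k-th derivatives of the three functions, for k = 0, 1, 2.
W = sp.Matrix([[sp.diff(f, x, k) for f in funcs] for k in range(3)])
print(sp.simplify(W.det()))               # 2*exp(6*x)
```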

Context


The Wronskian arises most naturally in the theory of linear ordinary differential equations, where it determines whether a proposed set of solutions forms a fundamental system. Abel's identity gives a differential equation for the Wronskian itself, relating its evolution to the coefficient in the ODE. These developments belong to differential equations rather than linear algebra, but the underlying mechanism — testing independence via a determinant — is purely algebraic.

Vandermonde and Structured Determinants

Certain matrices with patterned entries have determinants that admit elegant closed-form expressions. The most important of these is the Vandermonde matrix.

The Vandermonde Determinant


An $n \times n$ Vandermonde matrix is built from $n$ distinct nodes $x_1, x_2, \dots, x_n$:

$$V = \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix}$$

Its determinant has the closed form

$$\det(V) = \prod_{1 \leq i < j \leq n} (x_j - x_i)$$

The product runs over all pairs with $j > i$, so it contains $\binom{n}{2}$ factors. Each factor is a difference between two nodes.

3×3 Verification


For nodes $x_1 = 1$, $x_2 = 2$, $x_3 = 4$:

$$V = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 4 & 16 \end{pmatrix}$$

Direct expansion: $\det(V) = 1(32 - 16) - 1(16 - 4) + 1(4 - 2) = 16 - 12 + 2 = 6$.

The product formula: $(x_2 - x_1)(x_3 - x_1)(x_3 - x_2) = (2 - 1)(4 - 1)(4 - 2) = 1 \cdot 3 \cdot 2 = 6$.
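A short NumPy check confirms that the direct determinant and the pairwise product agree. The sketch uses np.vander with increasing=True so the columns run $1, x, x^2$, matching the layout above; the helper name vandermonde_product is my own.

```python
import numpy as np
from itertools import combinations

def vandermonde_product(nodes):
    """Closed form: product of (x_j - x_i) over all pairs with i < j."""
    return np.prod([xj - xi for xi, xj in combinations(nodes, 2)])

nodes = [1.0, 2.0, 4.0]
V = np.vander(nodes, increasing=True)     # rows [1, x, x^2] for each node
print(np.linalg.det(V))                   # 6.0 (up to rounding)
print(vandermonde_product(nodes))         # 6.0
```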

Why It Matters


The Vandermonde determinant is nonzero precisely when all nodes are distinct. This guarantees that a polynomial of degree at most $n - 1$ is uniquely determined by its values at $n$ distinct points, which is the theoretical foundation of polynomial interpolation. It also appears in the theory of symmetric polynomials and in the derivation of various discrete orthogonality relations.

Other Structured Determinants


Several other matrix families have known determinant formulas. Circulant matrices, built from cyclic shifts of a single row, have determinants expressible through the discrete Fourier transform: if the first row is $(c_0, c_1, \dots, c_{n-1})$, then $\det(C) = \prod_{k=0}^{n-1} p(\omega^k)$ where $p(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1}$ and $\omega = e^{2\pi i/n}$ is a primitive $n$-th root of unity.
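A small NumPy experiment illustrates the circulant formula. The constructor below is written out by hand (first row $c$, each later row a cyclic shift); the FFT of the first row evaluates $p$ at all $n$-th roots of unity at once, so the product of its entries matches the determinant. Note that np.fft.fft uses the conjugate sign convention $e^{-2\pi i/n}$, which only reorders the factors of the product.

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first row c; each later row is a cyclic right shift."""
    c = np.asarray(c)
    return np.array([np.roll(c, k) for k in range(len(c))])

c = [2.0, 3.0, 1.0]
C = circulant(c)

eigenvalues = np.fft.fft(c)               # p evaluated at the n-th roots of unity
print(np.prod(eigenvalues).real)          # 18.0
print(np.linalg.det(C))                   # 18.0 (up to rounding)
```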

Hilbert matrices, with entries $H_{ij} = \frac{1}{i + j - 1}$, have a closed-form determinant involving products of factorials. These matrices are notoriously ill-conditioned: their determinants shrink rapidly as $n$ grows, reflecting extreme sensitivity to perturbation.

Tridiagonal matrices, with nonzero entries only on the main diagonal and the two adjacent diagonals, have determinants satisfying a three-term recurrence: if $D_n$ denotes the determinant of the leading $n \times n$ block, then $D_n = a_n D_{n-1} - b_n c_n D_{n-2}$, where $a_n$ is the $n$-th diagonal entry and $b_n$, $c_n$ are the two off-diagonal entries adjacent to it (in row $n-1$, column $n$ and row $n$, column $n-1$). This recurrence allows $O(n)$ computation, much faster than general methods.
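The recurrence is only a few lines of code. This sketch (the function name tridiag_det is illustrative) takes the diagonal, superdiagonal, and subdiagonal as separate arrays, runs the recurrence in $O(n)$, and is checked against np.linalg.det on a small example.

```python
import numpy as np

def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix via D_n = a_n D_{n-1} - b_n c_n D_{n-2}.

    a: main diagonal (length n); b: superdiagonal and c: subdiagonal (length n-1).
    """
    d_prev2, d_prev1 = 1.0, a[0]              # D_0 = 1, D_1 = a_1
    for k in range(1, len(a)):
        d = a[k] * d_prev1 - b[k - 1] * c[k - 1] * d_prev2
        d_prev2, d_prev1 = d_prev1, d
    return d_prev1

a = [2.0, 2.0, 2.0, 2.0]                      # main diagonal
b = [-1.0, -1.0, -1.0]                        # superdiagonal
c = [-1.0, -1.0, -1.0]                        # subdiagonal
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
print(tridiag_det(a, b, c))                   # 5.0
print(np.linalg.det(T))                       # 5.0 (up to rounding)
```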

Each of these families illustrates the same principle: when a matrix has special structure, its determinant often has a formula that exploits that structure directly, bypassing both cofactor expansion and row reduction.