

Minors, Cofactors, and the Adjugate






Expanding Along Any Row or Column

The cofactor expansion generalizes the recursive pattern seen in the 3×3 case to matrices of arbitrary size. By systematically pairing each entry with a signed sub-determinant, it reduces an n×n determinant to n determinants of size (n−1)×(n−1), with complete freedom in choosing which row or column drives the expansion.



Minors

Given an $n \times n$ matrix $A$, the $(i,j)$ minor $M_{ij}$ is the determinant of the $(n-1) \times (n-1)$ submatrix that remains after removing row $i$ and column $j$. The minor is itself a determinant: a number, not a matrix.

For a $3 \times 3$ matrix

$$A = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & -2 \\ 4 & 1 & 6 \end{pmatrix}$$

there are nine minors, one for each entry. Deleting row $1$ and column $1$ leaves $\begin{pmatrix} 3 & -2 \\ 1 & 6 \end{pmatrix}$, so $M_{11} = 3 \cdot 6 - (-2) \cdot 1 = 20$. Deleting row $1$ and column $2$ leaves $\begin{pmatrix} 0 & -2 \\ 4 & 6 \end{pmatrix}$, so $M_{12} = 0 \cdot 6 - (-2) \cdot 4 = 8$. Deleting row $1$ and column $3$ leaves $\begin{pmatrix} 0 & 3 \\ 4 & 1 \end{pmatrix}$, so $M_{13} = 0 \cdot 1 - 3 \cdot 4 = -12$.

Continuing this way produces all nine values:

$$M_{21} = 5 \cdot 6 - 1 \cdot 1 = 29, \quad M_{22} = 2 \cdot 6 - 1 \cdot 4 = 8, \quad M_{23} = 2 \cdot 1 - 5 \cdot 4 = -18$$


$$M_{31} = 5 \cdot (-2) - 1 \cdot 3 = -13, \quad M_{32} = 2 \cdot (-2) - 1 \cdot 0 = -4, \quad M_{33} = 2 \cdot 3 - 5 \cdot 0 = 6$$


For a $4 \times 4$ matrix, each minor is a $3 \times 3$ determinant. For a $5 \times 5$ matrix, each minor is $4 \times 4$. The recursive chain continues until reaching $1 \times 1$ sub-determinants, where the minor is simply the lone entry.
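The row-and-column deletion that defines a minor is easy to mechanize. A minimal sketch (numpy is an illustrative choice, not something the text prescribes) reproduces the first three minors of the running example:

```python
import numpy as np

def minor(A, i, j):
    """M_ij: determinant of A with row i and column j removed (1-indexed)."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

A = np.array([[2, 5, 1],
              [0, 3, -2],
              [4, 1, 6]])

print(round(minor(A, 1, 1)))  # 20
print(round(minor(A, 1, 2)))  # 8
print(round(minor(A, 1, 3)))  # -12
```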

Cofactors and the Sign Pattern

The cofactor $C_{ij}$ attaches a prescribed sign to the minor:

$$C_{ij} = (-1)^{i+j} M_{ij}$$


The exponent $i + j$ determines whether the sign is positive or negative. When $i + j$ is even, the cofactor equals the minor; when $i + j$ is odd, the cofactor is the negative of the minor. This produces a checkerboard of signs across the matrix:

$$\begin{pmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{pmatrix}$$


The pattern always starts with $+$ at position $(1,1)$ and alternates from there. The sign depends entirely on the position; the actual entries of the matrix play no role in determining it.

Using the $3 \times 3$ matrix from the previous section, the cofactors are:

$$C_{11} = (+1)(20) = 20, \quad C_{12} = (-1)(8) = -8, \quad C_{13} = (+1)(-12) = -12$$


$$C_{21} = (-1)(29) = -29, \quad C_{22} = (+1)(8) = 8, \quad C_{23} = (-1)(-18) = 18$$


$$C_{31} = (+1)(-13) = -13, \quad C_{32} = (-1)(-4) = 4, \quad C_{33} = (+1)(6) = 6$$


Comparing cofactors to minors, entries at even-sum positions are unchanged while entries at odd-sum positions flip sign.
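A cofactor routine only adds the checkerboard sign to the minor computation. The sketch below (numpy again an assumed convenience) reproduces all nine cofactors:

```python
import numpy as np

def cofactor(A, i, j):
    """C_ij = (-1)^(i+j) * M_ij, with 1-indexed row and column."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

A = np.array([[2, 5, 1],
              [0, 3, -2],
              [4, 1, 6]])

# All nine cofactors, row by row:
C = [[round(cofactor(A, i, j)) for j in (1, 2, 3)] for i in (1, 2, 3)]
print(C)  # [[20, -8, -12], [-29, 8, 18], [-13, 4, 6]]
```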

Laplace Expansion Along a Row

The determinant of $A$ can be computed by selecting any row $i$ and summing the products of each entry in that row with its cofactor:

$$\det(A) = \sum_{j=1}^{n} a_{ij} \, C_{ij} = \sum_{j=1}^{n} (-1)^{i+j} \, a_{ij} \, M_{ij}$$


The remarkable fact is that every row produces the same number. Expanding along row $1$, row $2$, or row $n$ yields the same determinant. This is not obvious from the formula itself; the proof relies on the algebraic properties of the determinant or on the permutation-based definition.
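The formula translates directly into a recursive function. The sketch below uses plain Python lists with exact integer arithmetic; expanding the running example along each of its three rows returns the same value:

```python
def det_laplace(M, row=0):
    """Determinant by Laplace expansion along the given row (0-indexed)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[row][j] == 0:
            continue  # a zero entry prunes an entire recursive branch
        sub = [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != row]
        total += (-1) ** (row + j) * M[row][j] * det_laplace(sub)
    return total

A = [[2, 5, 1], [0, 3, -2], [4, 1, 6]]
print([det_laplace(A, r) for r in range(3)])  # [-12, -12, -12]
```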

Worked Example: 4×4 Matrix


$$A = \begin{pmatrix} 1 & 3 & 0 & 2 \\ -1 & 0 & 2 & 1 \\ 0 & 4 & -1 & 3 \\ 2 & 1 & 0 & -2 \end{pmatrix}$$


Expanding along row $1$:

$$\det(A) = 1 \cdot C_{11} + 3 \cdot C_{12} + 0 \cdot C_{13} + 2 \cdot C_{14}$$


The zero entry at position $(1,3)$ eliminates one $3 \times 3$ determinant entirely. The three remaining cofactors require expanding the sub-determinants:

$$M_{11} = \det\begin{pmatrix} 0 & 2 & 1 \\ 4 & -1 & 3 \\ 1 & 0 & -2 \end{pmatrix} = 0(2 - 0) - 2(-8 - 3) + 1(0 + 1) = 0 + 22 + 1 = 23$$


$$M_{12} = \det\begin{pmatrix} -1 & 2 & 1 \\ 0 & -1 & 3 \\ 2 & 0 & -2 \end{pmatrix} = -1(2 - 0) - 2(0 - 6) + 1(0 + 2) = -2 + 12 + 2 = 12$$


$$M_{14} = \det\begin{pmatrix} -1 & 0 & 2 \\ 0 & 4 & -1 \\ 2 & 1 & 0 \end{pmatrix} = -1(0 + 1) - 0(0 + 2) + 2(0 - 8) = -1 + 0 - 16 = -17$$


Applying the signs: $C_{11} = +23$, $C_{12} = -12$, and, since $(-1)^{1+4} = -1$, $C_{14} = -(-17) = 17$. The determinant is

$$\det(A) = 1(23) + 3(-12) + 0 + 2(17) = 23 - 36 + 34 = 21$$


Verification via a Different Row


Expanding the same matrix along row $3$ (which has a zero in the first position) would produce the same value $21$, confirming that the choice of row is purely a matter of computational convenience.
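A quick machine cross-check of the hand computation (numpy here is simply one convenient option):

```python
import numpy as np

A = np.array([[1, 3, 0, 2],
              [-1, 0, 2, 1],
              [0, 4, -1, 3],
              [2, 1, 0, -2]])

print(round(np.linalg.det(A)))  # 21
```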

Laplace Expansion Along a Column

The expansion formula works identically along columns. Fixing column $j$:

$$\det(A) = \sum_{i=1}^{n} a_{ij} \, C_{ij} = \sum_{i=1}^{n} (-1)^{i+j} \, a_{ij} \, M_{ij}$$


That column expansion gives the same result as row expansion follows from transpose invariance: since $\det(A^T) = \det(A)$, expanding $A$ along column $j$ is the same as expanding $A^T$ along row $j$.

The practical consequence is that before starting any cofactor expansion, the first step should be to scan the matrix for the row or column containing the most zeros. Each zero entry eliminates an entire sub-determinant from the sum.

Worked Example


$$B = \begin{pmatrix} 3 & 0 & 0 \\ 1 & -2 & 5 \\ 4 & 0 & 7 \end{pmatrix}$$


Column $2$ has two zeros. Expanding along column $2$:

$$\det(B) = 0 \cdot C_{12} + (-2) \cdot C_{22} + 0 \cdot C_{32} = (-2) \cdot C_{22}$$


The minor $M_{22}$ is the $2 \times 2$ determinant from deleting row $2$ and column $2$:

$$M_{22} = \det\begin{pmatrix} 3 & 0 \\ 4 & 7 \end{pmatrix} = 21$$


Since $C_{22} = (-1)^{2+2}(21) = 21$, we get $\det(B) = (-2)(21) = -42$.

An expansion along row $1$ or column $1$ would require more terms but produce the same result. The column $2$ expansion reduced the work to a single $2 \times 2$ determinant.
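The single surviving term of that column expansion can be checked in a few lines (again treating numpy as an assumed convenience):

```python
import numpy as np

B = np.array([[3, 0, 0],
              [1, -2, 5],
              [4, 0, 7]])

# Only the (2,2) entry of column 2 is nonzero, so one cofactor suffices.
M22 = np.linalg.det(np.delete(np.delete(B, 1, axis=0), 1, axis=1))
C22 = (-1) ** (2 + 2) * M22
print(round(-2 * C22))          # -42
print(round(np.linalg.det(B)))  # -42, the same value
```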

The Cofactor Matrix

The cofactor matrix of $A$, sometimes written $\operatorname{cof}(A)$, is the $n \times n$ matrix whose $(i,j)$ entry is the cofactor $C_{ij}$. It is not the matrix of minors: the sign factors $(-1)^{i+j}$ are already incorporated.

For the $3 \times 3$ matrix used earlier,

$$A = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & -2 \\ 4 & 1 & 6 \end{pmatrix}$$


the cofactor matrix is

$$\operatorname{cof}(A) = \begin{pmatrix} 20 & -8 & -12 \\ -29 & 8 & 18 \\ -13 & 4 & 6 \end{pmatrix}$$


where each entry was computed in the earlier sections. As a check, the Laplace expansion along row $1$ should give $\det(A) = 2(20) + 5(-8) + 1(-12) = 40 - 40 - 12 = -12$. Along row $2$: $0(-29) + 3(8) + (-2)(18) = 0 + 24 - 36 = -12$. Along row $3$: $4(-13) + 1(4) + 6(6) = -52 + 4 + 36 = -12$. All three rows agree.

The cofactor matrix encodes every possible cofactor expansion simultaneously: each row of $\operatorname{cof}(A)$ contains the cofactors needed for expansion along the corresponding row of $A$, and each column contains those needed for column expansion.
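Assembling the cofactors into one matrix is a double loop over positions. A sketch (the function name and numpy usage are illustrative assumptions):

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix whose (i, j) entry is the cofactor C_ij of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

A = np.array([[2, 5, 1], [0, 3, -2], [4, 1, 6]])
print(np.round(cofactor_matrix(A)).astype(int))
# [[ 20  -8 -12]
#  [-29   8  18]
#  [-13   4   6]]
```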

The Adjugate

The adjugate (also called the classical adjoint) of $A$ is the transpose of the cofactor matrix:

$$\operatorname{adj}(A) = \operatorname{cof}(A)^T$$


For the running example:

$$\operatorname{adj}(A) = \begin{pmatrix} 20 & -29 & -13 \\ -8 & 8 & 4 \\ -12 & 18 & 6 \end{pmatrix}$$


The Fundamental Identity


The adjugate satisfies

$$A \cdot \operatorname{adj}(A) = \det(A) \cdot I$$


To see why, consider the $(i,k)$ entry of the product $A \cdot \operatorname{adj}(A)$. This is $\sum_{j=1}^{n} a_{ij} \, [\operatorname{adj}(A)]_{jk} = \sum_{j=1}^{n} a_{ij} \, C_{kj}$. When $i = k$, this sum is exactly the Laplace expansion of $\det(A)$ along row $i$, so the diagonal entries equal $\det(A)$. When $i \neq k$, the sum pairs the entries of row $i$ with the cofactors of a different row $k$. This is equivalent to computing the determinant of a matrix with two identical rows (row $i$ appears in both its own position and row $k$'s), which is always zero. So the off-diagonal entries vanish.

Verification


With $\det(A) = -12$:

$$A \cdot \operatorname{adj}(A) = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & -2 \\ 4 & 1 & 6 \end{pmatrix} \begin{pmatrix} 20 & -29 & -13 \\ -8 & 8 & 4 \\ -12 & 18 & 6 \end{pmatrix} = \begin{pmatrix} -12 & 0 & 0 \\ 0 & -12 & 0 \\ 0 & 0 & -12 \end{pmatrix} = -12 \, I$$


This identity is the foundation of the adjugate inverse formula: dividing both sides by $\det(A)$ gives $A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$, valid whenever $\det(A) \neq 0$.
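Both the fundamental identity and the inverse formula are easy to confirm numerically. A sketch (the adjugate helper is an illustrative construction, not a library routine):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C.T

A = np.array([[2, 5, 1], [0, 3, -2], [4, 1, 6]])
adjA = adjugate(A)

print(np.round(A @ adjA).astype(int))  # -12 times the identity
print(np.allclose(np.linalg.inv(A), adjA / np.linalg.det(A)))  # True
```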

Computational Cost

Cofactor expansion is a recursive algorithm. Each $n \times n$ determinant spawns $n$ sub-problems of size $(n-1) \times (n-1)$. Without any zero entries to prune terms, the total number of multiplications satisfies the recurrence $T(n) = n \cdot T(n-1)$, which gives $T(n) = O(n!)$.

To put this in concrete terms: a $10 \times 10$ determinant via cofactor expansion requires roughly $10! \approx 3.6$ million multiplications. A $20 \times 20$ determinant would require over $2 \times 10^{18}$, well beyond the reach of any computer running a naive recursive implementation. Row reduction, by contrast, computes the same determinant in roughly $\frac{2}{3}n^3$ operations: about $670$ for $n = 10$ and about $5300$ for $n = 20$.
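The two growth rates can be tabulated directly. The counter below uses a slightly finer recurrence than the one above, charging one multiplication per term plus the work inside each sub-determinant; the operation counts are rough sketches, not exact instruction counts:

```python
def laplace_mults(n):
    """Multiplications in a full cofactor expansion with no zero entries:
    T(1) = 0, T(n) = n * (T(n-1) + 1)."""
    return 0 if n == 1 else n * (laplace_mults(n - 1) + 1)

def row_reduction_ops(n):
    """Rough operation count for elimination-based determinants, ~(2/3) n^3."""
    return round(2 * n ** 3 / 3)

for n in (4, 10, 20):
    print(n, laplace_mults(n), row_reduction_ops(n))
```

Even at $n = 10$ the cofactor count is already in the millions while elimination stays in the hundreds.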

This cost difference does not make cofactor expansion useless. For matrices up to $4 \times 4$, the expansion is fast enough to do by hand and gives the exact symbolic result. For matrices with many zero entries, the effective cost drops dramatically because each zero eliminates an entire recursive branch. In symbolic computation, where entries are polynomials or formal expressions rather than numbers, cofactor expansion preserves structure that row reduction would obscure.

The Laplace expansion is best understood as a theoretical instrument. It defines what the determinant is, establishes its algebraic properties, and produces the adjugate and the cofactor structure. For numerical computation on anything larger than a small matrix, the row-reduction approach is the practical choice.