The cofactor expansion generalizes the recursive pattern seen in the 3×3 case to matrices of arbitrary size. By systematically pairing each entry with a signed sub-determinant, it reduces an n×n determinant to n determinants of size (n−1)×(n−1), with complete freedom in choosing which row or column drives the expansion.
Minors
Given an n×n matrix A, the (i, j) minor $M_{ij}$ is the determinant of the (n−1)×(n−1) submatrix that remains after removing row i and column j. The minor is itself a determinant: a number, not a matrix.
For a 3×3 matrix
$$A = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & -2 \\ 4 & 1 & 6 \end{pmatrix}$$
there are nine minors, one for each entry. Deleting row 1 and column 1 leaves $\begin{pmatrix} 3 & -2 \\ 1 & 6 \end{pmatrix}$, so $M_{11} = 3 \cdot 6 - (-2) \cdot 1 = 20$. Deleting row 1 and column 2 leaves $\begin{pmatrix} 0 & -2 \\ 4 & 6 \end{pmatrix}$, so $M_{12} = 0 \cdot 6 - (-2) \cdot 4 = 8$. Deleting row 1 and column 3 leaves $\begin{pmatrix} 0 & 3 \\ 4 & 1 \end{pmatrix}$, so $M_{13} = 0 \cdot 1 - 3 \cdot 4 = -12$.
For a 4×4 matrix, each minor is a 3×3 determinant. For a 5×5 matrix, each minor is 4×4. The recursive chain continues until reaching 1×1 sub-determinants, where the minor is simply the lone entry.
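The delete-a-row-and-a-column operation can be sketched in a few lines of Python; the helper names `minor_matrix` and `det2` are illustrative, not from the text:

```python
def minor_matrix(A, i, j):
    """Submatrix left after deleting row i and column j (0-indexed)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 5, 1],
     [0, 3, -2],
     [4, 1, 6]]

# The (1,1) minor (1-indexed) comes from deleting row 1 and column 1
M11 = det2(minor_matrix(A, 0, 0))  # 3*6 - (-2)*1 = 20
```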
Cofactors and the Sign Pattern
The cofactor $C_{ij}$ attaches a prescribed sign to the minor:
$$C_{ij} = (-1)^{i+j} M_{ij}$$
The exponent i+j determines whether the sign is positive or negative. When i+j is even, the cofactor equals the minor; when i+j is odd, the cofactor is the negative of the minor. This produces a checkerboard of signs across the matrix:
$$\begin{pmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{pmatrix}$$
The pattern always starts with + at position (1,1) and alternates from there. The sign depends entirely on the position — the actual entries of the matrix play no role in determining it.
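Since the sign depends only on position, it can be generated without looking at the matrix at all; a sketch with an illustrative `cofactor_sign` helper:

```python
def cofactor_sign(i, j):
    """Checkerboard sign (-1)**(i+j) for a 1-indexed position (i, j)."""
    return -1 if (i + j) % 2 else 1

# Reproduce the 4x4 checkerboard; the matrix entries play no role.
pattern = [['+' if cofactor_sign(i, j) > 0 else '-' for j in range(1, 5)]
           for i in range(1, 5)]
for row in pattern:
    print(' '.join(row))
```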
Using the 3×3 matrix from the previous section, the cofactors along row 1 are $C_{11} = +M_{11} = 20$, $C_{12} = -M_{12} = -8$, and $C_{13} = +M_{13} = -12$.
Comparing cofactors to minors, entries at even-sum positions are unchanged while entries at odd-sum positions flip sign.
Laplace Expansion Along a Row
The determinant of A can be computed by selecting any row i and summing the products of each entry in that row with its cofactor:
$$\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$
The remarkable fact is that every row produces the same number. Expanding along row 1, row 2, or row n all yield the same determinant. This is not obvious from the formula itself — the proof relies on the algebraic properties of the determinant or on the permutation-based definition.
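A minimal recursive implementation of the row-1 expansion, as a sketch (production code would use row reduction instead, for reasons discussed at the end of this section):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # a zero entry eliminates its whole sub-determinant
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        sign = -1 if j % 2 else 1  # (-1)**(i+j) with i = j_index = 0-based
        total += sign * A[0][j] * det(sub)
    return total

A = [[2, 5, 1],
     [0, 3, -2],
     [4, 1, 6]]
print(det(A))  # -12
```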
Worked Example: 4×4 Matrix
$$A = \begin{pmatrix} 1 & 3 & 0 & 2 \\ -1 & 0 & 2 & 1 \\ 0 & 4 & -1 & 3 \\ 2 & 1 & 0 & -2 \end{pmatrix}$$
Expanding along row 1:
$$\det(A) = 1 \cdot C_{11} + 3 \cdot C_{12} + 0 \cdot C_{13} + 2 \cdot C_{14}$$
The zero entry at position (1,3) eliminates one 3×3 determinant entirely. The three remaining cofactors require expanding the sub-determinants; the minors come out to $M_{11} = 23$, $M_{12} = 12$, and $M_{14} = -17$.
Applying the signs: $C_{11} = +23$, $C_{12} = -12$, and, since $(-1)^{1+4} = -1$, $C_{14} = -(-17) = 17$. The determinant is
$$\det(A) = 1(23) + 3(-12) + 0 + 2(17) = 23 - 36 + 34 = 21$$
Verification via a Different Row
Expanding the same matrix along row 3 (which has a zero in the first position) would produce the same value 21, confirming that the choice of row is purely a matter of computational convenience.
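The row-independence is easy to check numerically; a sketch with an illustrative `expand_row` helper that expands along an arbitrary (0-indexed) row:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def expand_row(A, i):
    """Laplace expansion of det(A) along row i (0-indexed)."""
    sub = [row for k, row in enumerate(A) if k != i]
    return sum((-1) ** (i + j) * A[i][j] *
               det([r[:j] + r[j + 1:] for r in sub])
               for j in range(len(A)) if A[i][j] != 0)

A = [[1, 3, 0, 2],
     [-1, 0, 2, 1],
     [0, 4, -1, 3],
     [2, 1, 0, -2]]

# Every row yields the same determinant
print([expand_row(A, i) for i in range(4)])  # [21, 21, 21, 21]
```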
Laplace Expansion Along a Column
The expansion formula works identically along columns. Fixing column j:
$$\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij} = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$
That column expansion gives the same result as row expansion follows from transpose invariance: since $\det(A^T) = \det(A)$, expanding $A$ along column $j$ is the same as expanding $A^T$ along row $j$.
The practical consequence is that before starting any cofactor expansion, the first step should be to scan the matrix for the row or column containing the most zeros. Each zero entry eliminates an entire sub-determinant from the sum.
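The scan itself is mechanical; a sketch (the helper name `best_line` is mine):

```python
def best_line(A):
    """Find the row or column with the most zeros.

    Returns (kind, index, zero_count) with a 0-indexed index.
    """
    n = len(A)
    rows = [('row', i, sum(1 for x in A[i] if x == 0)) for i in range(n)]
    cols = [('col', j, sum(1 for i in range(n) if A[i][j] == 0))
            for j in range(n)]
    return max(rows + cols, key=lambda t: t[2])

A = [[1, 3, 0, 2],
     [-1, 0, 2, 1],
     [0, 4, -1, 3],
     [2, 1, 0, -2]]
print(best_line(A))  # ('col', 2, 2) -- column 3 (1-indexed) has two zeros
```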
Worked Example
$$B = \begin{pmatrix} 3 & 0 & 0 \\ 1 & -2 & 5 \\ 4 & 0 & 7 \end{pmatrix}$$
Column 2 has two zeros. Expanding along column 2:
$$\det(B) = 0 \cdot C_{12} + (-2) \cdot C_{22} + 0 \cdot C_{32} = (-2) \cdot C_{22}$$
The minor $M_{22}$ is the 2×2 determinant from deleting row 2 and column 2:
$$M_{22} = \det\begin{pmatrix} 3 & 0 \\ 4 & 7 \end{pmatrix} = 3 \cdot 7 - 0 \cdot 4 = 21$$
Since $C_{22} = (-1)^{2+2}(21) = 21$, we get $\det(B) = (-2)(21) = -42$.
An expansion along row 2 or column 1, neither of which contains a zero, would require three terms but produce the same result. The column 2 expansion reduced the work to a single 2×2 determinant.
The Cofactor Matrix
The cofactor matrix of A, sometimes written cof(A), is the n×n matrix whose (i, j) entry is the cofactor $C_{ij}$. It is not the matrix of minors: the sign factors $(-1)^{i+j}$ are already incorporated.
For the 3×3 matrix used earlier,
$$A = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & -2 \\ 4 & 1 & 6 \end{pmatrix}$$
the cofactor matrix is
$$\mathrm{cof}(A) = \begin{pmatrix} 20 & -8 & -12 \\ -29 & 8 & 18 \\ -13 & 4 & 6 \end{pmatrix}$$
where each entry was computed in the earlier sections. As a check, the Laplace expansion along row 1 gives $\det(A) = 2(20) + 5(-8) + 1(-12) = 40 - 40 - 12 = -12$. Along row 2: $0(-29) + 3(8) + (-2)(18) = 0 + 24 - 36 = -12$. Along row 3: $4(-13) + 1(4) + 6(6) = -52 + 4 + 36 = -12$. All three rows agree.
The cofactor matrix encodes every possible cofactor expansion simultaneously — each row of cof(A) contains the cofactors needed for expansion along the corresponding row of A, and each column contains those needed for column expansion.
The Adjugate
The adjugate (also called the classical adjoint) of A is the transpose of the cofactor matrix:
$$\mathrm{adj}(A) = \mathrm{cof}(A)^T$$
For the running example:
$$\mathrm{adj}(A) = \begin{pmatrix} 20 & -29 & -13 \\ -8 & 8 & 4 \\ -12 & 18 & 6 \end{pmatrix}$$
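Both matrices can be generated mechanically; a sketch that builds the cofactor matrix entry by entry and then transposes it (helper names are mine):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cofactor_matrix(A):
    """Matrix whose (i, j) entry is (-1)**(i+j) times the (i, j) minor."""
    n = len(A)
    return [[(-1) ** (i + j) *
             det([r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i])
             for j in range(n)] for i in range(n)]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[2, 5, 1],
     [0, 3, -2],
     [4, 1, 6]]
cof = cofactor_matrix(A)  # [[20, -8, -12], [-29, 8, 18], [-13, 4, 6]]
adj = transpose(cof)      # [[20, -29, -13], [-8, 8, 4], [-12, 18, 6]]
```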
The Fundamental Identity
The adjugate satisfies
$$A \cdot \mathrm{adj}(A) = \det(A) \cdot I$$
To see why, consider the (i, k) entry of the product $A \cdot \mathrm{adj}(A)$. This is $\sum_{j=1}^{n} a_{ij} [\mathrm{adj}(A)]_{jk} = \sum_{j=1}^{n} a_{ij} C_{kj}$. When $i = k$, this sum is exactly the Laplace expansion of $\det(A)$ along row $i$, so the diagonal entries equal $\det(A)$. When $i \neq k$, the sum pairs the entries of row $i$ with the cofactors of a different row $k$. This is equivalent to computing the determinant of a matrix with two identical rows (row $i$ appears in both its own position and row $k$'s), which is always zero. So the off-diagonal entries vanish.
This identity is the foundation of the adjugate inverse formula: dividing both sides by $\det(A)$ gives $A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A)$, valid whenever $\det(A) \neq 0$.
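Both the identity and the inverse formula can be verified with exact rational arithmetic (`fractions.Fraction` from the standard library avoids floating-point error; the helper names are illustrative):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix."""
    n = len(A)
    cof = [[(-1) ** (i + j) *
            det([r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    return [list(col) for col in zip(*cof)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 5, 1],
     [0, 3, -2],
     [4, 1, 6]]
d = det(A)            # -12
adj = adjugate(A)

# A * adj(A) = det(A) * I: off-diagonal entries vanish
assert matmul(A, adj) == [[d, 0, 0], [0, d, 0], [0, 0, d]]

# Inverse: divide the adjugate by the determinant (nonzero here)
A_inv = [[Fraction(x, d) for x in row] for row in adj]
identity = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(A, A_inv) == identity
```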
Computational Cost
Cofactor expansion is a recursive algorithm. Each n×n determinant spawns n sub-problems of size (n−1)×(n−1). Without any zero entries to prune terms, the total number of multiplications satisfies the recurrence T(n)=n⋅T(n−1), which gives T(n)=O(n!).
To put this in concrete terms: a 10×10 determinant via cofactor expansion requires roughly $10! \approx 3.6$ million multiplications. A 20×20 determinant would require over $2 \times 10^{18}$, well beyond the reach of any computer running a naive recursive implementation. Row reduction, by contrast, computes the same determinant in roughly $\tfrac{2}{3} n^3$ operations: about 670 for n = 10 and about 5300 for n = 20.
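The contrast can be made concrete with a sketch of the O(n³) alternative: Gaussian elimination tracking row swaps, kept exact with rational arithmetic (helper name `det_row_reduction` is mine):

```python
import math
from fractions import Fraction

def det_row_reduction(A):
    """Determinant via Gaussian elimination: O(n^3) instead of O(n!)."""
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot; if the column is all zeros, det = 0.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign  # a row swap flips the determinant's sign
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]  # det = sign * product of pivots
    return result

A = [[1, 3, 0, 2],
     [-1, 0, 2, 1],
     [0, 4, -1, 3],
     [2, 1, 0, -2]]
print(det_row_reduction(A))   # 21
print(math.factorial(10))     # cofactor cost at n = 10: 3628800
print(round(2 / 3 * 10**3))   # row-reduction cost at n = 10: about 667
```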
This cost difference does not make cofactor expansion useless. For matrices up to 4×4, the expansion is fast enough to do by hand and gives the exact symbolic result. For matrices with many zero entries, the effective cost drops dramatically because each zero eliminates an entire recursive branch. In symbolic computation — where entries are polynomials or formal expressions rather than numbers — cofactor expansion preserves structure that row reduction would obscure.
The Laplace expansion is best understood as a theoretical instrument. It defines what the determinant is, establishes its algebraic properties, and produces the adjugate and the cofactor structure. For numerical computation on anything larger than a small matrix, the row-reduction approach is the practical choice.