How Row Operations and Algebra Shape the Determinant
The determinant obeys a small set of algebraic rules that govern how it responds to matrix operations. These rules make the determinant computable without cofactor expansion, connect it to Gaussian elimination, and establish the multiplicative structure that links determinants to matrix products, inverses, and transposes.
Effect of Row Swaps
Swapping two rows of a matrix multiplies its determinant by −1. If B is obtained from A by exchanging rows i and k, then
det(B)=−det(A)
An immediate consequence is that any matrix with two identical rows has determinant zero. Swapping those two rows changes the sign of the determinant, yet the matrix itself is unchanged — the only number equal to its own negative is zero.
The same rule holds for columns: swapping two columns also flips the sign. This follows from transpose invariance, since swapping columns of A is the same as swapping rows of Aᵀ, and det(Aᵀ)=det(A).
Each row swap during Gaussian elimination must be tracked. If the reduction to triangular form uses s row swaps, the sign correction is (−1)ˢ.
Effect of Row Scaling
Multiplying a single row of A by a nonzero scalar k multiplies the determinant by k. If B is obtained from A by replacing row i with k times row i, then
det(B)=k⋅det(A)
This is a single-row rule, not a whole-matrix rule. Scaling the entire matrix A by k means scaling every row, so
det(kA) = kⁿ·det(A)
for an n×n matrix. A common error is to write det(kA) = k·det(A), forgetting that the scalar passes through each of the n rows independently.
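The scaling rule is easy to check numerically. The sketch below uses a hypothetical 2×2 example and the ad − bc formula; the matrix and scalar are illustrative, not from the text.

```python
def det2(m):
    # 2x2 determinant via the ad - bc formula.
    (a, b), (c, d) = m
    return a * d - b * c

A = [[3, 1], [2, 5]]   # hypothetical example matrix, det = 13
k = 4
kA = [[k * x for x in row] for row in A]

# Scaling the whole matrix scales the determinant by k^n (n = 2 here),
# not by k: the factor passes through each row once.
assert det2(kA) == k ** 2 * det2(A)
assert det2(kA) != k * det2(A)
```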
Factoring works in reverse as well: if every entry in some row shares a common factor, that factor can be pulled out in front of the determinant. For instance, if row 2 of a 3×3 matrix is (6,12,18), then 6 can be extracted to give a row of (1,2,3) and a factor of 6 multiplying the determinant. This often simplifies hand computations before beginning a cofactor or elimination approach.
A row of all zeros makes the determinant zero, since scaling that row by 0 gives det(A)=0⋅det(A′)=0 regardless of what A′ looks like.
Effect of Row Addition
Adding a scalar multiple of one row to a different row leaves the determinant completely unchanged. If B is obtained from A by replacing row i with row i plus c times row k (where i ≠ k), then
det(B)=det(A)
This is the operation that does all the heavy lifting in Gaussian elimination, and it costs nothing in terms of the determinant. The reason traces back to the cofactor structure: the added row's contribution to the Laplace expansion along row i amounts to pairing entries from row k with cofactors from row i, which is a "wrong-row" expansion and always sums to zero.
Together, the three row-operation rules form a complete toolkit. Row addition is free, row scaling costs a known multiplicative factor, and row swapping costs a sign flip. Any sequence of these operations can be fully accounted for, which is what makes determinant computation via elimination both possible and efficient.
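The three rules can be verified directly on a small example. The sketch below uses a hypothetical 3×3 matrix and a cofactor-expansion helper (fine for tiny matrices); all names and values are illustrative.

```python
def det(a):
    # Cofactor expansion along the first row (adequate for tiny matrices).
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

A = [[2, 1, 3], [0, 4, 1], [5, 2, 2]]   # hypothetical example, det = -43

swapped = [A[1], A[0], A[2]]            # swap rows 1 and 2
assert det(swapped) == -det(A)          # sign flip

scaled = [A[0], [5 * x for x in A[1]], A[2]]   # scale row 2 by 5
assert det(scaled) == 5 * det(A)               # known multiplicative factor

added = [A[0], [x + 7 * y for x, y in zip(A[1], A[0])], A[2]]  # row2 += 7*row1
assert det(added) == det(A)                    # free
```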
Computing Determinants via Row Reduction
The three row-operation rules convert Gaussian elimination into a determinant algorithm. The procedure is: reduce A to upper triangular form, record every row swap and every row scaling performed along the way, then compute the determinant of the triangular result as the product of its diagonal entries. Adjust by the accumulated sign flips and scale factors.
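A minimal sketch of this procedure in Python, using exact rational arithmetic (fractions.Fraction) so no floating-point error accumulates. This version uses only swaps and row additions, so the running adjustment is just a sign; the function name is an illustrative choice.

```python
from fractions import Fraction

def det_by_elimination(rows):
    """Determinant via reduction to upper triangular form.

    Row additions leave the determinant unchanged; each row swap
    flips its sign. No row scalings are performed, so det(A) is the
    accumulated sign times the product of the final diagonal.
    """
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for j in range(n):
        # Find a nonzero pivot in column j, at or below the diagonal.
        p = next((i for i in range(j, n) if a[i][j] != 0), None)
        if p is None:
            return Fraction(0)            # no pivot: singular matrix
        if p != j:
            a[j], a[p] = a[p], a[j]       # row swap costs a sign flip
            sign = -sign
        for i in range(j + 1, n):
            c = a[i][j] / a[j][j]
            # Row addition: subtract c times row j from row i (free).
            a[i] = [x - c * y for x, y in zip(a[i], a[j])]
    result = Fraction(sign)
    for j in range(n):
        result *= a[j][j]
    return result
```

For instance, det_by_elimination([[0, 1], [1, 0]]) returns −1: one swap, then an identity diagonal.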
Worked Example
A = [  2   1  −1   3 ]
    [  4   0   2   1 ]
    [ −2   3   1   5 ]
    [  6   2   0  −1 ]
Subtract 2 times row 1 from row 2, add row 1 to row 3, and subtract 3 times row 1 from row 4. All three are row-addition operations, so the determinant is unchanged:
[ 2   1  −1    3 ]
[ 0  −2   4   −5 ]
[ 0   4   0    8 ]
[ 0  −1   3  −10 ]
Add 2 times row 2 to row 3, and subtract 1/2 times row 2 from row 4:
[ 2   1  −1      3 ]
[ 0  −2   4     −5 ]
[ 0   0   8     −2 ]
[ 0   0   1  −15/2 ]
Subtract 1/8 times row 3 from row 4:
[ 2   1  −1      3 ]
[ 0  −2   4     −5 ]
[ 0   0   8     −2 ]
[ 0   0   0  −29/4 ]
No row swaps and no row scalings were used — only row additions. The determinant is the product of the diagonal:
det(A) = 2·(−2)·8·(−29/4) = 232
Complexity
The reduction to triangular form requires roughly (2/3)n³ arithmetic operations. For a 10×10 matrix this is about 670 operations; cofactor expansion on the same matrix would require on the order of 3.6 million. For anything beyond 4×4, row reduction is the only practical hand-computation method, and it is the standard numerical algorithm used by software.
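A quick back-of-the-envelope check of these counts, assuming the usual (2/3)n³ leading term for elimination and n! terms for a full cofactor (Leibniz) expansion:

```python
import math

n = 10
elimination_ops = round(2 * n ** 3 / 3)  # leading term of the elimination cost
cofactor_terms = math.factorial(n)       # n! signed products in a full expansion

print(elimination_ops)  # 667, i.e. "about 670"
print(cofactor_terms)   # 3628800, i.e. "on the order of 3.6 million"
```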
The Multiplicative Property
For any two n×n matrices A and B,
det(AB)=det(A)⋅det(B)
This is one of the most powerful structural facts about determinants. The proof splits into two cases. If A is singular, then AB is also singular (it cannot map onto all of ℝⁿ if A already fails to), so both sides are zero. If A is invertible, it can be written as a product of elementary matrices, each corresponding to a single row operation. Since the determinant of each elementary matrix equals the factor by which that row operation multiplies the determinant, the result follows by chaining these factors together.
Corollaries
The multiplicative property generates several important identities at once. Since A·A⁻¹ = I and det(I) = 1:
det(A)·det(A⁻¹) = 1  ⟹  det(A⁻¹) = 1/det(A)
For any positive integer k:
det(Aᵏ) = (det(A))ᵏ
And since multiplication of determinants is commutative even when matrix multiplication is not:
det(AB)=det(A)det(B)=det(B)det(A)=det(BA)
Note that AB and BA generally differ as matrices, yet their determinants always agree.
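These identities can be spot-checked on small matrices. The 2×2 examples below are hypothetical, chosen with det(A) = det(B) = 1 so the inverse has integer entries.

```python
def det2(m):
    # 2x2 determinant via the ad - bc formula.
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    # Plain 2x2 matrix product.
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]    # hypothetical example matrices
B = [[1, 4], [2, 9]]

AB, BA = matmul2(A, B), matmul2(B, A)
assert AB != BA                                   # the products differ...
assert det2(AB) == det2(A) * det2(B) == det2(BA)  # ...but the determinants agree

# det(A^-1) = 1 / det(A); here det(A) = 1, so the inverse also has det 1.
A_inv = [[3, -1], [-5, 2]]
assert matmul2(A, A_inv) == [[1, 0], [0, 1]]
assert det2(A_inv) == 1
```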
A Non-Property
The determinant is not additive. In general, det(A+B) ≠ det(A)+det(B). A quick counterexample: take A = B = I₂, so det(A) = det(B) = 1, but det(A+B) = det(2I₂) = 2²·det(I₂) = 4.
Transpose Invariance
The determinant of a matrix equals the determinant of its transpose:
det(Aᵀ) = det(A)
This single identity doubles the reach of every row-based property. Anything true about rows is automatically true about columns: swapping two columns flips the sign, scaling a column scales the determinant, and adding a multiple of one column to another leaves the determinant unchanged. Column expansion in the Laplace formula works precisely because the transpose identity converts it to a row expansion on AT.
One way to see why the identity holds is through the permutation definition of the determinant. Transposing A replaces the permutation σ with its inverse σ⁻¹ in each term of the sum. Since a permutation and its inverse have the same sign (both are even or both are odd), every term in the expansion is unchanged, and the total is the same.
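This argument can be checked mechanically: compare the sign of every permutation with the sign of its inverse, then confirm that the permutation (Leibniz) formula gives the same value on a matrix and its transpose. The 3×3 matrix below is a hypothetical example.

```python
from itertools import permutations
from math import prod

def sign(p):
    # Parity via inversion count: sign = (-1)^(number of inversions).
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

# A permutation and its inverse always share the same sign.
for p in permutations(range(4)):
    p_inv = [0] * 4
    for i, v in enumerate(p):
        p_inv[v] = i
    assert sign(p) == sign(p_inv)

def det_leibniz(a):
    # Permutation (Leibniz) definition of the determinant.
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, 1, 3], [0, 4, 1], [5, 2, 2]]    # hypothetical example matrix
AT = [list(col) for col in zip(*A)]
assert det_leibniz(A) == det_leibniz(AT)
```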
Triangular and Diagonal Matrices
For an upper triangular, lower triangular, or diagonal matrix, the determinant is simply the product of the diagonal entries:
det(A) = a₁₁·a₂₂⋯aₙₙ
This follows directly from cofactor expansion. For a lower triangular matrix, expanding along the first row leaves only the (1,1) entry (all others in the first row are zero), paired with the minor obtained by deleting row 1 and column 1 — which is again lower triangular. Repeating this reduction peels off one diagonal entry at a time until only the last entry remains.
As a special case, det(I)=1, since the identity matrix is diagonal with every diagonal entry equal to 1.
This property is what completes the row-reduction algorithm for computing determinants. Gaussian elimination produces an upper triangular matrix, and the determinant of that matrix is the product of its diagonal. Combined with the sign and scaling adjustments from the elimination steps, this gives det(A).
Block Triangular Matrices
A block triangular matrix is one that can be partitioned into square diagonal blocks with zero blocks either above or below:
A = [ A11  A12 ]
    [  0   A22 ]        (block upper triangular)
For such matrices, the determinant factors as
det(A)=det(A11)⋅det(A22)
and this extends to any number of diagonal blocks:
det(A)=det(A11)⋅det(A22)⋯det(Akk)
The off-diagonal blocks A12, etc., can contain anything — only the triangular placement of the zero blocks matters.
Example
A = [ 3  1  0  0   0 ]
    [ 2  5  0  0   0 ]
    [ 0  0  1  4  −2 ]
    [ 0  0  0  3   1 ]
    [ 0  0  0  0   7 ]
This is block upper triangular with a 2×2 block A11 = [ 3 1 ; 2 5 ] and a 3×3 upper triangular block A22 = [ 1 4 −2 ; 0 3 1 ; 0 0 7 ]. The determinant is det(A11)·det(A22) = (15 − 2)(1·3·7) = 13·21 = 273.
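The factorization can be confirmed against a full determinant computation. The sketch below re-enters the example matrix and its blocks and compares both routes, using a small cofactor-expansion helper.

```python
def det(a):
    # Cofactor expansion along the first row (adequate for small matrices).
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

A = [[3, 1, 0, 0,  0],
     [2, 5, 0, 0,  0],
     [0, 0, 1, 4, -2],
     [0, 0, 0, 3,  1],
     [0, 0, 0, 0,  7]]
A11 = [[3, 1], [2, 5]]
A22 = [[1, 4, -2], [0, 3, 1], [0, 0, 7]]

# Full 5x5 determinant equals the product of the block determinants.
assert det(A) == det(A11) * det(A22) == 273
```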
This rule does not hold for general block matrices where the off-diagonal blocks are nonzero on both sides of the diagonal. The triangular structure is essential.
The Invertibility Equivalence
The determinant condenses the most fundamental structural question about a square matrix into a single test:
A is invertible ⟺ det(A) ≠ 0
This equivalence sits at the center of a web of conditions that are all mutually equivalent for an n×n matrix A. The following statements are either all true or all false:
1. det(A) ≠ 0.
2. The matrix A is invertible.
3. The rank of A equals n.
4. The columns of A are linearly independent.
5. The rows of A are linearly independent.
6. The columns of A span ℝⁿ.
7. The columns of A form a basis for ℝⁿ.
8. The system Ax = b has exactly one solution for every b ∈ ℝⁿ.
9. The homogeneous system Ax = 0 has only the trivial solution.
10. The null space of A is {0}.
11. The matrix A can be written as a product of elementary matrices.
12. The reduced row echelon form of A is the identity matrix Iₙ.
13. All eigenvalues of A are nonzero.
Each of these conditions approaches invertibility from a different angle — rank, dimension, solvability, spectral theory — yet they all collapse to the same yes-or-no answer. The determinant is one entry in this list, but it is often the most efficient single computation for settling the question.
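As a tiny numeric illustration of the test, the sketch below applies a cofactor-expansion helper to two hypothetical 3×3 matrices: one with dependent rows, one invertible.

```python
def det(a):
    # Cofactor expansion along the first row.
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

# Singular: row 3 = row 1 + row 2, so the rows are linearly dependent.
S = [[1, 2, 0], [0, 1, 3], [1, 3, 3]]
assert det(S) == 0

# Nonzero determinant certifies invertibility, full rank, and unique
# solvability of Mx = b, all at once.
M = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
assert det(M) != 0
```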