

Properties of Determinants






How Row Operations and Algebra Shape the Determinant

The determinant obeys a small set of algebraic rules that govern how it responds to matrix operations. These rules make the determinant computable without cofactor expansion, connect it to Gaussian elimination, and establish the multiplicative structure that links determinants to matrix products, inverses, and transposes.



Effect of Row Swaps

Swapping two rows of a matrix multiplies its determinant by -1. If B is obtained from A by exchanging rows i and k, then

\det(B) = -\det(A)


An immediate consequence is that any matrix with two identical rows has determinant zero. Swapping those two rows changes the sign of the determinant, yet the matrix itself is unchanged — the only number equal to its own negative is zero.

The same rule holds for columns: swapping two columns also flips the sign. This follows from transpose invariance, since swapping columns of A is the same as swapping rows of A^T, and det(A^T) = det(A).

Each row swap during Gaussian elimination must be tracked. If the reduction to triangular form uses s row swaps, the sign correction is (-1)^s.
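Both consequences are easy to check numerically. The sketch below uses NumPy's `np.linalg.det`; the matrix values are arbitrary illustrations, not taken from the text:

```python
import numpy as np

# Arbitrary 3x3 test matrix with nonzero determinant
A = np.array([[2.0, 1.0, 3.0],
              [4.0, 0.0, 1.0],
              [5.0, 2.0, 6.0]])

# Swapping rows 0 and 1 flips the sign of the determinant
B = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))

# A matrix with two identical rows has determinant zero
C = A.copy()
C[2] = C[0]
assert np.isclose(np.linalg.det(C), 0.0)

# Swapping two columns also flips the sign
D = A[:, [0, 2, 1]]
assert np.isclose(np.linalg.det(D), -np.linalg.det(A))
```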

Effect of Row Scaling

Multiplying a single row of A by a nonzero scalar k multiplies the determinant by k. If B is obtained from A by replacing row i with k times row i, then

\det(B) = k \cdot \det(A)


This is a single-row rule, not a whole-matrix rule. Scaling the entire matrix A by k means scaling every row, so

\det(kA) = k^n \det(A)


for an n × n matrix. A common error is to write det(kA) = k det(A), forgetting that the scalar passes through each of the n rows independently.

Factoring works in reverse as well: if every entry in some row shares a common factor, that factor can be pulled out in front of the determinant. For instance, if row 2 of a 3 × 3 matrix is (6, 12, 18), then 6 can be extracted to give a row of (1, 2, 3) and a factor of 6 multiplying the determinant. This often simplifies hand computations before beginning a cofactor or elimination approach.

A row of all zeros makes the determinant zero, since scaling that row by 0 gives det(A) = 0 · det(A') = 0 regardless of what A' looks like.
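A short NumPy check of the single-row rule, the whole-matrix rule, and the zero-row case (matrix values are arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])      # det(A) = 2*3 - 1*4 = 2
k, n = 5.0, 2

# Scaling one row multiplies the determinant by k
B = A.copy()
B[0] *= k
assert np.isclose(np.linalg.det(B), k * np.linalg.det(A))

# Scaling the whole matrix multiplies the determinant by k**n, not k
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))

# A row of zeros forces the determinant to zero
Z = A.copy()
Z[1] = 0.0
assert np.isclose(np.linalg.det(Z), 0.0)
```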

Effect of Row Addition

Adding a scalar multiple of one row to a different row leaves the determinant completely unchanged. If B is obtained from A by replacing row i with row i plus c times row k (where i ≠ k), then

\det(B) = \det(A)


This is the operation that does all the heavy lifting in Gaussian elimination, and it costs nothing in terms of the determinant. The reason traces back to the cofactor structure: the added row's contribution to the Laplace expansion along row i amounts to pairing entries from row k with cofactors from row i, which is a "wrong-row" expansion and always sums to zero.

Together, the three row-operation rules form a complete toolkit. Row addition is free, row scaling costs a known multiplicative factor, and row swapping costs a sign flip. Any sequence of these operations can be fully accounted for, which is what makes determinant computation via elimination both possible and efficient.
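The invariance under row addition can be checked the same way; this sketch (NumPy, arbitrary values) performs one replacement row operation and compares determinants:

```python
import numpy as np

A = np.array([[ 2.0, 1.0, -1.0],
              [ 4.0, 0.0,  2.0],
              [-2.0, 3.0,  1.0]])

# Row 2 <- row 2 + c * row 1 (here c = -2): a pure row-addition operation
B = A.copy()
B[1] += -2.0 * B[0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```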

Computing Determinants via Row Reduction

The three row-operation rules convert Gaussian elimination into a determinant algorithm. The procedure is: reduce A to upper triangular form, record every row swap and every row scaling performed along the way, then compute the determinant of the triangular result as the product of its diagonal entries. Adjust by the accumulated sign flips and scale factors.

Worked Example


A = \begin{pmatrix} 2 & 1 & -1 & 3 \\ 4 & 0 & 2 & 1 \\ -2 & 3 & 1 & 5 \\ 6 & 2 & 0 & -1 \end{pmatrix}


Subtract 2 times row 1 from row 2, add row 1 to row 3, and subtract 3 times row 1 from row 4. All three are row-addition operations, so the determinant is unchanged:

\begin{pmatrix} 2 & 1 & -1 & 3 \\ 0 & -2 & 4 & -5 \\ 0 & 4 & 0 & 8 \\ 0 & -1 & 3 & -10 \end{pmatrix}


Add 2 times row 2 to row 3, and subtract 1/2 times row 2 from row 4:

\begin{pmatrix} 2 & 1 & -1 & 3 \\ 0 & -2 & 4 & -5 \\ 0 & 0 & 8 & -2 \\ 0 & 0 & 1 & -\frac{15}{2} \end{pmatrix}


Subtract 1/8 times row 3 from row 4:

\begin{pmatrix} 2 & 1 & -1 & 3 \\ 0 & -2 & 4 & -5 \\ 0 & 0 & 8 & -2 \\ 0 & 0 & 0 & -\frac{29}{4} \end{pmatrix}


No row swaps and no row scalings were used — only row additions. The determinant is the product of the diagonal:

\det(A) = 2 \cdot (-2) \cdot 8 \cdot \left(-\frac{29}{4}\right) = 232
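As a check, the product of the diagonal entries of the triangular form agrees with NumPy's `np.linalg.det` applied to the original matrix:

```python
import numpy as np

A = np.array([[ 2, 1, -1,  3],
              [ 4, 0,  2,  1],
              [-2, 3,  1,  5],
              [ 6, 2,  0, -1]], dtype=float)

# Diagonal of the triangular form reached by row additions only
diagonal_product = 2 * (-2) * 8 * (-29 / 4)
assert np.isclose(np.linalg.det(A), diagonal_product)
```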


Complexity


The reduction to triangular form requires roughly (2/3)n^3 arithmetic operations. For a 10 × 10 matrix this is about 670 operations; cofactor expansion on the same matrix would require on the order of 3.6 million. For anything beyond 4 × 4, row reduction is the only practical hand-computation method, and it is the standard numerical algorithm used by software.
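A minimal sketch of the full algorithm, assuming NumPy: eliminate with partial pivoting, track the sign of each swap, and multiply the diagonal at the end. The function name `det_by_elimination` is chosen here for illustration.

```python
import numpy as np

def det_by_elimination(M):
    """Determinant via Gaussian elimination.

    Row additions leave the determinant unchanged, each row swap
    contributes a factor of -1, and no row scalings are used, so the
    result is the signed product of the final diagonal.
    """
    U = np.array(M, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        # Partial pivoting: largest entry in column j, at or below row j
        p = j + int(np.argmax(np.abs(U[j:, j])))
        if np.isclose(U[p, j], 0.0):
            return 0.0                       # no usable pivot: singular
        if p != j:
            U[[j, p]] = U[[p, j]]            # row swap flips the sign
            sign = -sign
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]   # row addition: free
    return sign * float(np.prod(np.diag(U)))

A = [[2, 1, -1, 3], [4, 0, 2, 1], [-2, 3, 1, 5], [6, 2, 0, -1]]
assert np.isclose(det_by_elimination(A), np.linalg.det(np.array(A, dtype=float)))
```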

The Multiplicative Property

For any two n × n matrices A and B,

\det(AB) = \det(A) \cdot \det(B)


This is one of the most powerful structural facts about determinants. The proof splits into two cases. If A is singular, then AB is also singular (it cannot map onto all of R^n if A already fails to), so both sides are zero. If A is invertible, it can be written as a product of elementary matrices, each corresponding to a single row operation. Since the determinant of each elementary matrix equals the factor by which that row operation multiplies the determinant, the result follows by chaining these factors together.
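A numerical spot check with NumPy and a fixed random seed (the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B), even though AB != BA in general
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```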

Corollaries


The multiplicative property generates several important identities at once. Since AA^{-1} = I and det(I) = 1:

\det(A) \cdot \det(A^{-1}) = 1 \quad \Longrightarrow \quad \det(A^{-1}) = \frac{1}{\det(A)}


For any positive integer k:

\det(A^k) = (\det(A))^k


And since multiplication of determinants is commutative even when matrix multiplication is not:

\det(AB) = \det(A)\det(B) = \det(B)\det(A) = \det(BA)


Note that AB and BA generally differ as matrices, yet their determinants always agree.
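All three corollaries can be spot-checked numerically (NumPy, arbitrary invertible matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det(A) = 1
B = np.array([[0.0, 3.0], [1.0, 2.0]])   # det(B) = -3

# det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))

# det(A^k) = det(A)^k
assert np.isclose(np.linalg.det(np.linalg.matrix_power(A, 3)),
                  np.linalg.det(A) ** 3)

# det(AB) = det(BA), even though AB != BA as matrices
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))
```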

A Non-Property


The determinant is not additive. In general, det(A + B) ≠ det(A) + det(B). A quick counterexample: take A = B = I_2, so det(A) = det(B) = 1 but det(A + B) = det(2I_2) = 4.

Transpose Invariance

The determinant of a matrix equals the determinant of its transpose:

\det(A^T) = \det(A)


This single identity doubles the reach of every row-based property. Anything true about rows is automatically true about columns: swapping two columns flips the sign, scaling a column scales the determinant, and adding a multiple of one column to another leaves the determinant unchanged. Column expansion in the Laplace formula works precisely because the transpose identity converts it to a row expansion on A^T.

One way to see why the identity holds is through the permutation definition of the determinant. Transposing A replaces the permutation σ with its inverse σ^{-1} in each term of the sum. Since a permutation and its inverse have the same sign (both are even or both are odd), every term in the expansion is unchanged, and the total is the same.
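A direct check of the identity and of one column-based consequence (NumPy, arbitrary matrix):

```python
import numpy as np

A = np.array([[ 2.0, 1.0, -1.0],
              [ 4.0, 0.0,  2.0],
              [-2.0, 3.0,  1.0]])

# Transpose invariance
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# Column version of a row rule: adding a multiple of one column
# to another leaves the determinant unchanged
B = A.copy()
B[:, 1] += 3.0 * B[:, 0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```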

Triangular and Diagonal Matrices

For an upper triangular, lower triangular, or diagonal matrix, the determinant is simply the product of the diagonal entries:

\det(A) = a_{11} \cdot a_{22} \cdots a_{nn}


This follows directly from cofactor expansion. For a lower triangular matrix, expanding along the first row leaves only the (1,1) entry (all others in the first row are zero), paired with the minor obtained by deleting row 1 and column 1 — which is again lower triangular. Repeating this reduction peels off one diagonal entry at a time until only the last entry remains.

As a special case, det(I) = 1, since the identity matrix is diagonal with every diagonal entry equal to 1.

This property is what completes the row-reduction algorithm for computing determinants. Gaussian elimination produces an upper triangular matrix, and the determinant of that matrix is the product of its diagonal. Combined with the sign and scaling adjustments from the elimination steps, this gives det(A).
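The triangular rule in code (NumPy; the matrix is an arbitrary upper triangular example):

```python
import numpy as np

U = np.array([[2.0,  1.0, -1.0],
              [0.0, -2.0,  4.0],
              [0.0,  0.0,  8.0]])

# Determinant of a triangular matrix = product of its diagonal
assert np.isclose(np.linalg.det(U), np.prod(np.diag(U)))   # 2 * (-2) * 8 = -32

# Special case: det(I) = 1
assert np.isclose(np.linalg.det(np.eye(4)), 1.0)
```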

Block Triangular Matrices

A block triangular matrix is one that can be partitioned into square diagonal blocks with zero blocks either above or below:

A = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} \quad \text{(block upper triangular)}


For such matrices, the determinant factors as

\det(A) = \det(A_{11}) \cdot \det(A_{22})


and this extends to any number of diagonal blocks:

\det(A) = \det(A_{11}) \cdot \det(A_{22}) \cdots \det(A_{kk})


The off-diagonal blocks A_{12}, etc., can contain anything — only the triangular placement of the zero blocks matters.

Example


A = \begin{pmatrix} 3 & 1 & 0 & 0 & 0 \\ 2 & 5 & 0 & 0 & 0 \\ 0 & 0 & 1 & 4 & -2 \\ 0 & 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 0 & 7 \end{pmatrix}


This is block upper triangular with a 2 × 2 block A_{11} = \begin{pmatrix} 3 & 1 \\ 2 & 5 \end{pmatrix} and a 3 × 3 upper triangular block A_{22} = \begin{pmatrix} 1 & 4 & -2 \\ 0 & 3 & 1 \\ 0 & 0 & 7 \end{pmatrix}. The determinant is \det(A_{11}) \cdot \det(A_{22}) = (15 - 2)(1 \cdot 3 \cdot 7) = 13 \cdot 21 = 273.

This rule does not hold for general block matrices where the off-diagonal blocks are nonzero on both sides of the diagonal. The triangular structure is essential.
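The example can be rebuilt and verified with NumPy's `np.block`; here the zero off-diagonal block of the example is replaced by arbitrary nonzero values to show that its contents do not affect the determinant:

```python
import numpy as np

A11 = np.array([[3.0, 1.0],
                [2.0, 5.0]])
A22 = np.array([[1.0, 4.0, -2.0],
                [0.0, 3.0,  1.0],
                [0.0, 0.0,  7.0]])
# Arbitrary off-diagonal block: only the zero block's placement matters
A12 = np.array([[9.0, -1.0, 4.0],
                [2.0,  6.0, 8.0]])

A = np.block([[A11, A12],
              [np.zeros((3, 2)), A22]])

assert np.isclose(np.linalg.det(A), np.linalg.det(A11) * np.linalg.det(A22))
assert np.isclose(np.linalg.det(A), 273.0)
```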

The Invertibility Equivalence

The determinant condenses the most fundamental structural question about a square matrix into a single test:

A \text{ is invertible} \quad \Longleftrightarrow \quad \det(A) \neq 0


This equivalence sits at the center of a web of conditions that are all mutually equivalent for an n × n matrix A. The following statements are either all true or all false:

1. det(A) ≠ 0.
2. The matrix A is invertible.
3. The rank of A equals n.
4. The columns of A are linearly independent.
5. The rows of A are linearly independent.
6. The columns of A span R^n.
7. The columns of A form a basis for R^n.
8. The system Ax = b has exactly one solution for every b in R^n.
9. The homogeneous system Ax = 0 has only the trivial solution.
10. The null space of A is {0}.
11. The matrix A can be written as a product of elementary matrices.
12. The reduced row echelon form of A is the identity matrix I_n.
13. All eigenvalues of A are nonzero.

Each of these conditions approaches invertibility from a different angle — rank, dimension, solvability, spectral theory — yet they all collapse to the same yes-or-no answer. The determinant is one entry in this list, but it is often the most efficient single computation for settling the question.
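In code, the determinant test and the invertibility test agree; with NumPy, inverting a singular matrix raises `LinAlgError` (matrices below are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # det = -2: invertible
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rows proportional: det = 0, singular

assert not np.isclose(np.linalg.det(A), 0.0)
A_inv = np.linalg.inv(A)     # succeeds for a nonzero determinant

assert np.isclose(np.linalg.det(S), 0.0)
raised = False
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    raised = True             # zero determinant: no inverse exists
assert raised
```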