
Determinants






A Single Number from a Square Matrix

Every square matrix maps to a single scalar called its determinant. This value captures whether the matrix is invertible, how it scales geometric regions, and whether it preserves or reverses orientation. The determinant appears throughout linear algebra — in eigenvalue equations, system-solving formulas, and volume computations — making it one of the most information-dense quantities attached to a matrix.



The Determinant as a Scalar Assignment

Every square matrix $A$ of size $n \times n$ has a single real number (or complex number, if the entries are complex) associated with it, written $\det(A)$ or $|A|$. This number is called the determinant. It is defined only for square matrices — a rectangular matrix has no determinant.

The notation $|A|$ is widespread but potentially confusing, since the same vertical bars denote absolute value for real numbers and modulus for complex numbers. When there is any risk of ambiguity, $\det(A)$ is preferred.

The determinant encodes a remarkable amount of structural information. Its most fundamental role is as an invertibility test: $A$ is invertible if and only if $\det(A) \neq 0$. But it also measures how the linear map $x \mapsto Ax$ distorts volume, determines whether that map preserves or reverses orientation, and appears in explicit formulas for eigenvalues, matrix inverses, and solutions to linear systems. The next several sections develop the determinant starting from the smallest cases and building toward the general definition.

The 2×2 Formula

For a $2 \times 2$ matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

the determinant is

$$\det(A) = ad - bc$$

One way to see where this formula comes from is to solve the system $Ax = \mathbf{v}$ for a general right-hand side. Applying elimination, the solution involves dividing by $ad - bc$ at every step. When $ad - bc = 0$, the system either has no solution or infinitely many, and the matrix cannot be inverted. When $ad - bc \neq 0$, there is a unique solution for every $\mathbf{v}$, and the inverse exists.
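One way to package the outcome of that elimination is the explicit $2 \times 2$ inverse formula, stated here for reference since it follows directly from solving the general system:

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad ad - bc \neq 0$$

Multiplying this matrix against $A$ on either side gives the identity, which confirms both the formula and why $ad - bc \neq 0$ is required.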

The expression $ad - bc$ also carries geometric meaning. The columns of $A$ are the vectors $(a, c)$ and $(b, d)$ in $\mathbb{R}^2$. The signed area of the parallelogram these two vectors span equals exactly $ad - bc$. A positive value means the columns are arranged counterclockwise; a negative value means clockwise; zero means the columns are parallel and the parallelogram collapses to a line segment.

Worked Examples


For $A = \begin{pmatrix} 3 & 1 \\ 2 & 5 \end{pmatrix}$, the determinant is $3 \cdot 5 - 1 \cdot 2 = 13$. The matrix is invertible and its column vectors span a parallelogram of area $13$.

For $A = \begin{pmatrix} 4 & 6 \\ 2 & 3 \end{pmatrix}$, the determinant is $4 \cdot 3 - 6 \cdot 2 = 0$. The second column is $\frac{3}{2}$ times the first, so the columns are parallel and the matrix is singular.

For $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, the determinant is $0 \cdot 0 - (-1) \cdot 1 = 1$. This is a $90^\circ$ rotation matrix — it preserves areas and orientation.
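As a quick sanity check, the $2 \times 2$ formula is a one-line computation. The sketch below (the function name `det2` is just an illustrative choice) evaluates $ad - bc$ for the three matrices above:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

print(det2([[3, 1], [2, 5]]))    # 13: invertible
print(det2([[4, 6], [2, 3]]))    # 0: columns are parallel, singular
print(det2([[0, -1], [1, 0]]))   # 1: rotation, area and orientation preserved
```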

The 3×3 Formula

For a $3 \times 3$ matrix

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

the determinant is computed by expanding along the first row:

$$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$

Each first-row entry multiplies the $2 \times 2$ determinant of the submatrix that remains after deleting that entry's row and column. The signs alternate: $+, -, +$.

The Sarrus Mnemonic


A shortcut for the $3 \times 3$ case is the rule of Sarrus. Write the matrix and copy its first two columns to the right:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{matrix}$$

Sum the three products along the downward diagonals, then subtract the three products along the upward diagonals. This gives the same result as the first-row expansion. The Sarrus rule works only for $3 \times 3$ matrices — there is no analogous diagonal trick for $4 \times 4$ or larger.

Worked Example


For $A = \begin{pmatrix} 2 & 3 & 1 \\ 0 & -1 & 4 \\ 5 & 2 & 3 \end{pmatrix}$, expanding along the first row:

$$\det(A) = 2[(-1)(3) - (4)(2)] - 3[(0)(3) - (4)(5)] + 1[(0)(2) - (-1)(5)]$$

$$= 2(-3 - 8) - 3(0 - 20) + 1(0 + 5) = 2(-11) - 3(-20) + 5 = -22 + 60 + 5 = 43$$

The same result follows from the Sarrus rule: downward diagonals give $2(-1)(3) + 3(4)(5) + 1(0)(2) = -6 + 60 + 0 = 54$; upward diagonals give $1(-1)(5) + 2(4)(2) + 3(0)(3) = -5 + 16 + 0 = 11$; the determinant is $54 - 11 = 43$.
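The Sarrus computation is mechanical enough to write down directly: sum the downward-diagonal products, subtract the upward ones. A small sketch (the helper name `sarrus` is illustrative):

```python
def sarrus(m):
    """Determinant of a 3x3 matrix via the rule of Sarrus."""
    down = (m[0][0] * m[1][1] * m[2][2]      # three downward diagonals
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1])
    up = (m[0][2] * m[1][1] * m[2][0]        # three upward diagonals
          + m[0][0] * m[1][2] * m[2][1]
          + m[0][1] * m[1][0] * m[2][2])
    return down - up

A = [[2, 3, 1], [0, -1, 4], [5, 2, 3]]
print(sarrus(A))  # 43, matching the first-row expansion
```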

The General n×n Determinant

The pattern from the $3 \times 3$ case extends to any dimension. For an $n \times n$ matrix, expand along the first row:

$$\det(A) = \sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \, M_{1j}$$

where $M_{1j}$ is the determinant of the $(n-1) \times (n-1)$ submatrix obtained by deleting row $1$ and column $j$. This is a recursive definition: each $n \times n$ determinant reduces to $n$ determinants of size $(n-1) \times (n-1)$, each of which reduces further, until reaching the base case of $1 \times 1$ matrices, where $\det(a) = a$.

A crucial fact is that the expansion need not use the first row. Expanding along any row or any column gives the same result. The signed sub-determinants $(-1)^{i+j} M_{ij}$ are called cofactors, and the freedom to choose the expansion axis is what makes the formula practical: a row or column with many zeros dramatically reduces the number of terms.

The recursive nature of this definition means the computational cost grows factorially. An $n \times n$ determinant via cofactor expansion requires on the order of $n!$ arithmetic operations. For $n = 5$ that is $120$ operations; for $n = 10$ it is over $3.6$ million. This explosion is what motivates the row-reduction approach, which accomplishes the same task in $O(n^3)$ operations by exploiting how row operations affect the determinant.
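The recursive definition translates almost verbatim into code. The sketch below expands along the first row; it is fine for small matrices, but the $n!$ growth makes it impractical much beyond $n = 10$:

```python
def det(m):
    """Determinant by recursive first-row cofactor expansion."""
    n = len(m)
    if n == 1:              # base case: det of a 1x1 matrix is its entry
        return m[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # (-1)**j matches (-1)**(1+j) once indices are 0-based
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[3, 1], [2, 5]]))                     # 13
print(det([[2, 3, 1], [0, -1, 4], [5, 2, 3]]))   # 43
```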

Singular and Nonsingular Matrices

The value $\det(A) = 0$ marks a sharp dividing line. A square matrix with zero determinant is called singular; a square matrix with nonzero determinant is called nonsingular or invertible.

When $\det(A) = 0$, the columns of $A$ are linearly dependent — at least one column can be written as a linear combination of the others. The homogeneous system $Ax = \mathbf{0}$ has nontrivial solutions, the rank of $A$ is strictly less than $n$, and the matrix maps $\mathbb{R}^n$ onto a lower-dimensional subspace. Geometrically, the transformation collapses at least one dimension, flattening regions of positive volume down to zero volume.

When $\det(A) \neq 0$, everything works. The columns of $A$ form a basis for $\mathbb{R}^n$. The system $Ax = \mathbf{b}$ has exactly one solution for every right-hand side $\mathbf{b}$. The matrix has an inverse $A^{-1}$, and the linear map $x \mapsto Ax$ is a bijection from $\mathbb{R}^n$ to itself that stretches or compresses volumes by the factor $|\det(A)|$ without collapsing any dimension.

The determinant thus answers the most important structural question about a square matrix, invertibility, with a single number.

Computing Small Determinants

The most direct way to build intuition is to compute by hand. The $2 \times 2$ formula $ad - bc$ is immediate. The $3 \times 3$ case requires either first-row expansion or the Sarrus shortcut. For $4 \times 4$ matrices, no shortcut exists — the computation goes through cofactor expansion, but choosing a good row or column makes a significant difference.

A 4×4 Example


$$A = \begin{pmatrix} 1 & 0 & 2 & -1 \\ 3 & 0 & 0 & 5 \\ 2 & 1 & 4 & -3 \\ 1 & 0 & 5 & 0 \end{pmatrix}$$

Column $2$ has three zeros and a single $1$ in position $(3,2)$, making it the best expansion axis. Expanding along column $2$:

$$\det(A) = (-1)^{3+2} \cdot 1 \cdot M_{32}$$

where $M_{32}$ is the $3 \times 3$ determinant obtained by deleting row $3$ and column $2$:

$$M_{32} = \det\begin{pmatrix} 1 & 2 & -1 \\ 3 & 0 & 5 \\ 1 & 5 & 0 \end{pmatrix}$$

Expanding this along its first row:

$$M_{32} = 1(0 \cdot 0 - 5 \cdot 5) - 2(3 \cdot 0 - 5 \cdot 1) + (-1)(3 \cdot 5 - 0 \cdot 1)$$

$$= 1(-25) - 2(-5) + (-1)(15) = -25 + 10 - 15 = -30$$

So $\det(A) = (-1)^{5} \cdot 1 \cdot (-30) = (-1)(-30) = 30$.

The choice of column $2$ reduced the problem from four $3 \times 3$ determinants down to one. This illustrates why scanning for zeros before expanding is the first step in any hand computation.

Expanding by Minors and Cofactors

The sub-determinants appearing in cofactor expansion have their own terminology. The $(i,j)$ minor $M_{ij}$ is the determinant of the submatrix formed by deleting row $i$ and column $j$ from $A$. The $(i,j)$ cofactor is the signed version:

$$C_{ij} = (-1)^{i+j} M_{ij}$$

The sign factor $(-1)^{i+j}$ follows a checkerboard pattern starting with $+$ at position $(1,1)$:

$$\begin{pmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ \vdots & & & & \ddots \end{pmatrix}$$

Using this notation, the Laplace expansion along row $i$ takes the compact form

$$\det(A) = \sum_{j=1}^{n} a_{ij} \, C_{ij}$$

and the expansion along column $j$ is

$$\det(A) = \sum_{i=1}^{n} a_{ij} \, C_{ij}$$

Both give the same result regardless of which row or column is chosen. Collecting all cofactors into a matrix and transposing produces the adjugate, which leads to an explicit formula for the matrix inverse. The full development of minors, cofactors, the adjugate, and the structural results they enable is on the cofactors page.
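The claim that every row gives the same answer is easy to test numerically. The sketch below computes $\sum_j a_{ij} C_{ij}$ for each row $i$ of the earlier $3 \times 3$ example and checks that every choice agrees (`cofactor` is an illustrative helper name):

```python
def det(m):
    """Determinant by recursive first-row cofactor expansion."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cofactor(m, i, j):
    """Signed minor C_ij: delete row i and column j, apply (-1)**(i+j)."""
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

A = [[2, 3, 1], [0, -1, 4], [5, 2, 3]]
for i in range(3):
    expansion = sum(A[i][j] * cofactor(A, i, j) for j in range(3))
    print(i, expansion)  # every row yields 43
```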

How Row Operations Affect the Determinant

The three elementary row operations interact with the determinant in simple, predictable ways. Swapping two rows multiplies the determinant by $-1$. Multiplying a row by a nonzero scalar $k$ multiplies the determinant by $k$. Adding a scalar multiple of one row to a different row leaves the determinant unchanged.

These three rules turn Gaussian elimination into a determinant-computing algorithm. Reduce $A$ to upper triangular form using row operations, tracking every swap and every scaling. The determinant of the resulting triangular matrix is the product of its diagonal entries. Adjusting for the tracked sign flips and scale factors gives $\det(A)$.

This approach requires on the order of $n^3$ arithmetic operations — a dramatic improvement over the $n!$ cost of cofactor expansion. For any matrix larger than $4 \times 4$, row reduction is the practical method.

A Quick Illustration


Starting from $A = \begin{pmatrix} 2 & 4 \\ 3 & 7 \end{pmatrix}$, subtract $\frac{3}{2}$ times row $1$ from row $2$ (this does not change the determinant) to get $\begin{pmatrix} 2 & 4 \\ 0 & 1 \end{pmatrix}$. The product of the diagonal is $2 \cdot 1 = 2$, which matches $\det(A) = 2 \cdot 7 - 4 \cdot 3 = 2$.
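The same elimination procedure works at any size. A minimal sketch, using partial pivoting and tracking the sign flips from row swaps:

```python
def det_by_elimination(m):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    a = [row[:] for row in m]  # work on a copy
    n, sign = len(a), 1
    for col in range(n):
        # choose the largest available pivot for numerical stability
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0:
            return 0.0  # no pivot in this column: the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign  # each row swap flips the determinant's sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            # adding a multiple of one row to another leaves det unchanged
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = sign
    for i in range(n):
        result *= a[i][i]  # product of the diagonal of the triangular form
    return result

print(round(det_by_elimination([[2, 4], [3, 7]]), 9))  # 2.0
```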

The complete set of algebraic properties — including the multiplicative rule $\det(AB) = \det(A)\det(B)$, transpose invariance, and the full invertibility equivalence — is developed on the properties page.

Area, Volume, and Orientation

The determinant has a direct geometric meaning: it measures how a matrix, viewed as a linear transformation, distorts size and orientation.

In two dimensions, $|\det(A)|$ equals the area of the parallelogram spanned by the columns of $A$. In three dimensions, $|\det(A)|$ equals the volume of the parallelepiped spanned by the three column vectors, which also equals the scalar triple product $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$. In $n$ dimensions, $|\det(A)|$ is the factor by which the map $x \mapsto Ax$ scales $n$-dimensional volumes.

The sign carries its own meaning. A positive determinant means the transformation preserves orientation — counterclockwise stays counterclockwise in $\mathbb{R}^2$, right-handed stays right-handed in $\mathbb{R}^3$. A negative determinant means orientation is reversed. A zero determinant means the image is lower-dimensional: a $3 \times 3$ transformation with $\det = 0$ maps all of $\mathbb{R}^3$ onto a plane, a line, or a point.

Rotation matrices always have determinant $+1$. Reflection matrices always have determinant $-1$. These are the cleanest examples of orientation-preserving and orientation-reversing maps. The full geometric treatment, including the change-of-variables formula from multivariable calculus and explicit area and volume formulas, is on the geometry page.
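Both claims can be checked numerically with the $2 \times 2$ formula: a rotation by any angle has determinant $\cos^2\theta + \sin^2\theta = 1$, while a reflection across the $x$-axis has determinant $-1$. A small sketch using the standard-library `math` module:

```python
import math

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

theta = 0.7  # any angle works: rotations preserve area and orientation
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]
reflection = [[1, 0], [0, -1]]  # flip across the x-axis

print(round(det2(rotation), 12))  # 1.0 (cos^2 + sin^2)
print(det2(reflection))           # -1: orientation reversed
```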

Determinant-Based Formulas

The determinant is not only a diagnostic tool — it provides closed-form expressions for quantities that might otherwise require iterative procedures.

Cramer's rule solves a linear system $Ax = \mathbf{b}$ by expressing each component of the solution as a ratio of two determinants: $x_i = \det(A_i)/\det(A)$, where $A_i$ is $A$ with its $i$-th column replaced by $\mathbf{b}$. The adjugate formula gives the inverse of $A$ as $A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$, writing every entry of the inverse explicitly in terms of cofactors. In the $2 \times 2$ case, this reduces to the familiar swap-and-negate formula.
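Cramer's rule is straightforward to sketch for small systems. Assuming a determinant routine like the recursive cofactor expansion from earlier, each unknown is a ratio of determinants (the helper names here are illustrative):

```python
def det(m):
    """Determinant by recursive first-row cofactor expansion."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    d = det(A)
    solution = []
    for i in range(len(A)):
        # A_i: replace column i of A with the right-hand side b
        A_i = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        solution.append(det(A_i) / d)
    return solution

print(cramer([[2, 4], [3, 7]], [2, 5]))  # [-3.0, 2.0]
```

Cramer's rule is mainly of theoretical interest for large systems, since each extra determinant costs as much as solving the whole system by elimination, but for $2 \times 2$ and $3 \times 3$ systems it is a convenient closed form.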

The cross product $\mathbf{a} \times \mathbf{b}$ in $\mathbb{R}^3$ can be computed as a symbolic $3 \times 3$ determinant with the unit vectors $\hat{i}, \hat{j}, \hat{k}$ in the first row. The characteristic polynomial $\det(A - \lambda I)$ defines the eigenvalues of $A$ — its roots are precisely the scalars $\lambda$ for which $A - \lambda I$ is singular. The Wronskian, a determinant built from functions and their derivatives, tests linear independence in the setting of differential equations.
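The cross-product recipe amounts to reading off the three first-row cofactors of the symbolic determinant: each component is a $2 \times 2$ minor of the rows $\mathbf{a}$ and $\mathbf{b}$, with the middle sign flipped by the checkerboard pattern. A sketch:

```python
def cross(a, b):
    """Cross product in R^3 via the 2x2 minors of [[i, j, k], a, b]."""
    return [
        a[1] * b[2] - a[2] * b[1],     # +i component
        -(a[0] * b[2] - a[2] * b[0]),  # -j component (checkerboard sign)
        a[0] * b[1] - a[1] * b[0],     # +k component
    ]

print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]: i x j = k
```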

Each of these formulas is developed with full worked examples on the applications page.