Every square matrix maps to a single scalar called its determinant. This value captures whether the matrix is invertible, how it scales geometric regions, and whether it preserves or reverses orientation. The determinant appears throughout linear algebra — in eigenvalue equations, system-solving formulas, and volume computations — making it one of the most information-dense quantities attached to a matrix.
The Determinant as a Scalar Assignment
Every square matrix A of size n×n has a single real number (or complex number, if the entries are complex) associated with it, written det(A) or ∣A∣. This number is called the determinant. It is defined only for square matrices — a rectangular matrix has no determinant.
The notation ∣A∣ is widespread but potentially confusing, since the same vertical bars denote absolute value for real numbers and modulus for complex numbers. When there is any risk of ambiguity, det(A) is preferred.
The determinant encodes a remarkable amount of structural information. Its most fundamental role is as an invertibility test: A is invertible if and only if det(A)≠0. But it also measures how the linear map x↦Ax distorts volume, determines whether that map preserves or reverses orientation, and appears in explicit formulas for eigenvalues, matrix inverses, and solutions to linear systems. The next several sections develop the determinant starting from the smallest cases and building toward the general definition.
The 2×2 Formula
For a 2×2 matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
the determinant is
det(A)=ad−bc
One way to see where this formula comes from is to solve the system Ax=v for a general right-hand side. Applying elimination, the solution involves dividing by ad−bc at every step. When ad−bc=0, the system either has no solution or infinitely many, and the matrix cannot be inverted. When ad−bc≠0, there is a unique solution for every v, and the inverse exists.
The expression ad−bc also carries geometric meaning. The columns of A are the vectors (a,c) and (b,d) in R2. The signed area of the parallelogram these two vectors span equals exactly ad−bc. A positive value means the columns are arranged counterclockwise; a negative value means clockwise; zero means the columns are parallel and the parallelogram collapses to a line segment.
Worked Examples
For $A = \begin{pmatrix} 3 & 1 \\ 2 & 5 \end{pmatrix}$, the determinant is 3⋅5−1⋅2=13. The matrix is invertible and its column vectors span a parallelogram of area 13.
For $A = \begin{pmatrix} 4 & 6 \\ 2 & 3 \end{pmatrix}$, the determinant is 4⋅3−6⋅2=0. The second column is 3/2 times the first, so the columns are parallel and the matrix is singular.
For $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, the determinant is 0⋅0−(−1)⋅1=1. This is a 90° rotation matrix — it preserves areas and orientation.
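The three examples above can be checked with a minimal sketch (the function name det2 is illustrative, not from any library):

```python
# Minimal sketch: det2 computes ad - bc for a 2x2 matrix given as a list of rows.
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

print(det2([[3, 1], [2, 5]]))    # 13: invertible, parallelogram area 13
print(det2([[4, 6], [2, 3]]))    # 0: singular, columns parallel
print(det2([[0, -1], [1, 0]]))   # 1: 90-degree rotation, orientation preserved
```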
The 3×3 Formula
For a 3×3 matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
the determinant is computed by expanding along the first row:
$$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$
Each first-row entry multiplies the 2×2 determinant of the submatrix that remains after deleting that entry's row and column. The signs alternate: +, −, +.
The Sarrus Mnemonic
A shortcut for the 3×3 case is the rule of Sarrus. Write the matrix and copy its first two columns to the right:
$$\begin{array}{ccc|cc} a_{11} & a_{12} & a_{13} & a_{11} & a_{12} \\ a_{21} & a_{22} & a_{23} & a_{21} & a_{22} \\ a_{31} & a_{32} & a_{33} & a_{31} & a_{32} \end{array}$$
Sum the three products along the downward diagonals, then subtract the three products along the upward diagonals:
$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$
This gives the same result as the first-row expansion. The Sarrus rule works only for 3×3 matrices — there is no analogous diagonal trick for 4×4 or larger.
Worked Example
For $A = \begin{pmatrix} 2 & 3 & 1 \\ 0 & -1 & 4 \\ 5 & 2 & 3 \end{pmatrix}$, expanding along the first row:
$$\det(A) = 2\bigl((-1)(3) - (4)(2)\bigr) - 3\bigl((0)(3) - (4)(5)\bigr) + 1\bigl((0)(2) - (-1)(5)\bigr) = 2(-11) - 3(-20) + 1(5) = -22 + 60 + 5 = 43$$
The same result follows from the Sarrus rule: downward diagonals give 2(−1)(3)+3(4)(5)+1(0)(2)=−6+60+0=54; upward diagonals give 1(−1)(5)+2(4)(2)+3(0)(3)=−5+16+0=11; the determinant is 54−11=43.
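The Sarrus computation can be sketched in a few lines of Python and checked against the worked example (the function name det3_sarrus is illustrative):

```python
def det3_sarrus(m):
    # Rule of Sarrus: sum of downward diagonals minus sum of upward diagonals.
    (a, b, c), (d, e, f), (g, h, i) = m
    down = a * e * i + b * f * g + c * d * h
    up   = c * e * g + a * f * h + b * d * i
    return down - up

A = [[2, 3, 1], [0, -1, 4], [5, 2, 3]]
print(det3_sarrus(A))  # 43, matching the first-row expansion
```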
The General n×n Determinant
The pattern from the 3×3 case extends to any dimension. For an n×n matrix, expand along the first row:
$$\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} M_{1j}$$
where M1j is the determinant of the (n−1)×(n−1) submatrix obtained by deleting row 1 and column j. This is a recursive definition: each n×n determinant reduces to n determinants of size (n−1)×(n−1), each of which reduces further, until reaching the base case of 1×1 matrices where det(a)=a.
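The recursive definition translates almost literally into code. A minimal sketch (the function name det is illustrative), expanding along the first row with the 1×1 base case:

```python
def det(m):
    # Recursive Laplace expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]  # base case: det of a 1x1 matrix is its entry
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # (-1)**j is the 0-indexed form of the sign (-1)**(1+j)
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[2, 3, 1], [0, -1, 4], [5, 2, 3]]))  # 43
```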
A crucial fact is that the expansion need not use the first row. Expanding along any row or any column gives the same result. The signed sub-determinants (−1)i+jMij are called cofactors, and the freedom to choose the expansion axis is what makes the formula practical: a row or column with many zeros dramatically reduces the number of terms.
The recursive nature of this definition means the computational cost grows factorially. An n×n determinant via cofactor expansion requires on the order of n! arithmetic operations. For n=5 that is 120 operations; for n=10 it is over 3.6 million. This explosion is what motivates the row-reduction approach, which accomplishes the same task in O(n3) operations by exploiting how row operations affect the determinant.
Singular and Nonsingular Matrices
The value det(A)=0 marks a sharp dividing line. A square matrix with zero determinant is called singular; a square matrix with nonzero determinant is called nonsingular or invertible.
When det(A)=0, the columns of A are linearly dependent — at least one column can be written as a linear combination of the others. The homogeneous system Ax=0 has nontrivial solutions, the rank of A is strictly less than n, and the matrix maps Rn onto a lower-dimensional subspace. Geometrically, the transformation collapses at least one dimension, flattening regions of positive volume down to zero volume.
When det(A)≠0, everything works. The columns of A form a basis for Rn. The system Ax=b has exactly one solution for every right-hand side b. The matrix has an inverse A−1, and the linear map x↦Ax is a bijection from Rn to itself that stretches or compresses volumes by the factor ∣det(A)∣ without collapsing any dimension.
The determinant thus answers the single most important structural question about a square matrix in a single number.
Computing Small Determinants
The most direct way to build intuition is to compute by hand. The 2×2 formula ad−bc is immediate. The 3×3 case requires either first-row expansion or the Sarrus shortcut. For 4×4 matrices, no shortcut exists — the computation goes through cofactor expansion, but choosing a good row or column makes a significant difference.
A 4×4 Example
$$A = \begin{pmatrix} 1 & 0 & 2 & -1 \\ 3 & 0 & 0 & 5 \\ 2 & 1 & 4 & -3 \\ 1 & 0 & 5 & 0 \end{pmatrix}$$
Column 2 has three zeros and a single 1 in position (3,2), making it the best expansion axis. Expanding along column 2:
$$\det(A) = (-1)^{3+2} \cdot 1 \cdot M_{32}$$
where M32 is the 3×3 determinant obtained by deleting row 3 and column 2:
$$M_{32} = \det \begin{pmatrix} 1 & 2 & -1 \\ 3 & 0 & 5 \\ 1 & 5 & 0 \end{pmatrix}$$
Expanding this along its first row:
$$M_{32} = 1(0 \cdot 0 - 5 \cdot 5) - 2(3 \cdot 0 - 5 \cdot 1) + (-1)(3 \cdot 5 - 0 \cdot 1) = 1(-25) - 2(-5) + (-1)(15) = -25 + 10 - 15 = -30$$
So $\det(A) = (-1)^{5} \cdot 1 \cdot (-30) = (-1)(-30) = 30$.
The choice of column 2 reduced the problem from four 3×3 determinants down to one. This illustrates why scanning for zeros before expanding is the first step in any hand computation.
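The whole computation can be replayed in code. A sketch (helper names minor and det_first_row are illustrative) that extracts the minor M32 and applies the cofactor sign:

```python
def minor(m, i, j):
    # Submatrix with row i and column j deleted (0-indexed).
    return [r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i]

def det_first_row(m):
    # Recursive Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det_first_row(minor(m, 0, j))
               for j in range(len(m)))

A = [[1, 0, 2, -1],
     [3, 0, 0, 5],
     [2, 1, 4, -3],
     [1, 0, 5, 0]]

# Expansion along column 2 (0-indexed column 1): only the (3,2) entry survives.
M32 = minor(A, 2, 1)                          # delete row 3, column 2
print(det_first_row(M32))                     # -30
print((-1) ** (3 + 2) * 1 * det_first_row(M32))  # 30
```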
Expanding by Minors and Cofactors
The sub-determinants appearing in cofactor expansion have their own terminology. The (i,j) minor Mij is the determinant of the submatrix formed by deleting row i and column j from A. The (i,j) cofactor is the signed version:
$$C_{ij} = (-1)^{i+j} M_{ij}$$
The sign factor (−1)i+j follows a checkerboard pattern starting with + at position (1,1):
$$\begin{pmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Using this notation, the Laplace expansion along row i takes the compact form
$$\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij}$$
and the expansion along column j is
$$\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}$$
Both give the same result regardless of which row or column is chosen. Collecting all cofactors into a matrix and transposing produces the adjugate, which leads to an explicit formula for the matrix inverse. The full development of minors, cofactors, the adjugate, and the structural results they enable is on the cofactors page.
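The claim that every row and every column gives the same value can be checked directly. A sketch (function names laplace, expand_row, expand_col are illustrative) that expands the earlier 3×3 example along all six possible axes:

```python
def laplace(m):
    # First-row expansion, used to evaluate the minors below.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               laplace([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def cofactor(m, i, j):
    # C_ij = (-1)**(i+j) * M_ij, with 0-indexed i, j.
    sub = [r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i]
    return (-1) ** (i + j) * laplace(sub)

def expand_row(m, i):
    return sum(m[i][j] * cofactor(m, i, j) for j in range(len(m)))

def expand_col(m, j):
    return sum(m[i][j] * cofactor(m, i, j) for i in range(len(m)))

A = [[2, 3, 1], [0, -1, 4], [5, 2, 3]]
values = {expand_row(A, i) for i in range(3)} | {expand_col(A, j) for j in range(3)}
print(values)  # {43}: all six expansions agree
```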
How Row Operations Affect the Determinant
The three elementary row operations interact with the determinant in simple, predictable ways. Swapping two rows multiplies the determinant by −1. Multiplying a row by a nonzero scalar k multiplies the determinant by k. Adding a scalar multiple of one row to a different row leaves the determinant unchanged.
These three rules turn Gaussian elimination into a determinant-computing algorithm. Reduce A to upper triangular form using row operations, tracking every swap and every scaling. The determinant of the resulting triangular matrix is the product of its diagonal entries. Adjusting for the tracked sign flips and scale factors gives det(A).
This approach requires on the order of n3 arithmetic operations — a dramatic improvement over the n! cost of cofactor expansion. For any matrix larger than 4×4, row reduction is the practical method.
A Quick Illustration
Starting from $A = \begin{pmatrix} 2 & 4 \\ 3 & 7 \end{pmatrix}$, subtract 3/2 times row 1 from row 2 (this does not change the determinant) to get $\begin{pmatrix} 2 & 4 \\ 0 & 1 \end{pmatrix}$. The product of the diagonal is 2⋅1=2, which matches det(A)=2⋅7−4⋅3=2.
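The elimination procedure can be sketched as follows (the function name det_by_elimination is illustrative); it tracks the sign flips from row swaps and reads the answer off the diagonal:

```python
def det_by_elimination(m):
    # Gaussian elimination to upper triangular form, O(n^3).
    # Row swaps flip the sign; adding a multiple of one row to another changes nothing.
    a = [row[:] for row in m]  # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0.0  # no pivot available: the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = float(sign)
    for i in range(n):
        result *= a[i][i]  # product of diagonal entries of the triangular form
    return result

print(det_by_elimination([[2.0, 4.0], [3.0, 7.0]]))  # 2.0, matching the illustration
```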
The complete set of algebraic properties — including the multiplicative rule det(AB)=det(A)det(B), transpose invariance, and the full invertibility equivalence — is developed on the properties page.
Area, Volume, and Orientation
The determinant has a direct geometric meaning: it measures how a matrix, viewed as a linear transformation, distorts size and orientation.
In two dimensions, ∣det(A)∣ equals the area of the parallelogram spanned by the columns of A. In three dimensions, ∣det(A)∣ equals the volume of the parallelepiped spanned by the three column vectors, which also equals the scalar triple product a⋅(b×c). In n dimensions, ∣det(A)∣ is the factor by which the map x↦Ax scales n-dimensional volumes.
The sign carries its own meaning. A positive determinant means the transformation preserves orientation — counterclockwise stays counterclockwise in R2, right-handed stays right-handed in R3. A negative determinant means orientation is reversed. A zero determinant means the image is lower-dimensional: a 3×3 transformation with det=0 maps all of R3 onto a plane, a line, or a point.
Rotation matrices always have determinant +1. Reflection matrices always have determinant −1. These are the cleanest examples of orientation-preserving and orientation-reversing maps. The full geometric treatment, including the change-of-variables formula from multivariable calculus and explicit area and volume formulas, is on the geometry page.
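A quick numerical check of these two facts, assuming the standard 2×2 rotation matrix and a reflection across the x-axis (the helper det2 is illustrative):

```python
import math

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

theta = math.pi / 3
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]   # det = cos^2 + sin^2 = 1
reflection = [[1, 0], [0, -1]]                     # reflection across the x-axis

print(round(det2(rotation), 10))  # 1.0: area and orientation preserved
print(det2(reflection))           # -1: area preserved, orientation flipped
```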
Determinant-Based Formulas
The determinant is not only a diagnostic tool — it provides closed-form expressions for quantities that might otherwise require iterative procedures.
Cramer's rule solves a linear system Ax=b by expressing each component of the solution as a ratio of two determinants: xi=det(Ai)/det(A), where Ai is A with its i-th column replaced by b. The adjugate formula gives the inverse of A as $A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$, writing every entry of the inverse explicitly in terms of cofactors. In the 2×2 case, this reduces to the familiar swap-and-negate formula.
The cross product a×b in R3 can be computed as a symbolic 3×3 determinant with the unit vectors i^,j^,k^ in the first row. The characteristic polynomial det(A−λI) defines the eigenvalues of A — its roots are precisely the scalars λ for which A−λI is singular. The Wronskian, a determinant built from functions and their derivatives, tests linear independence in the setting of differential equations.
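Cramer's rule is short enough to sketch directly. This illustration (names det, cramer, and the 2×2 system are all hypothetical examples, not from the text) uses exact rational arithmetic:

```python
from fractions import Fraction

def det(m):
    # Recursive first-row Laplace expansion, exact on integer entries.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def cramer(A, b):
    # x_i = det(A_i) / det(A), where A_i has column i replaced by b.
    d = det(A)
    n = len(A)
    solution = []
    for i in range(n):
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        solution.append(Fraction(det(Ai), d))
    return solution

# Example system: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(cramer([[2, 1], [1, 3]], [5, 10]))
```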
Each of these formulas is developed with full worked examples on the applications page.