

Inverse of a Matrix






Undoing a Matrix

For real numbers, dividing by $a$ means multiplying by $1/a$. Matrices have no division operation, but invertible matrices have an inverse that plays the same role — multiplying by $A^{-1}$ reverses the effect of multiplying by $A$. Not every matrix has an inverse, and understanding when one exists, how to compute it, and what properties it carries is central to linear algebra.



Definition of the Inverse

For a square matrix $A$ of order $n$, the inverse — if it exists — is the unique matrix $A^{-1}$ satisfying

$$AA^{-1} = A^{-1}A = I$$


Both products must equal the identity. A matrix possessing an inverse is called invertible or nonsingular. A matrix with no inverse is called singular.

Uniqueness follows from a short argument. Suppose both $B$ and $C$ satisfy $AB = I$ and $CA = I$. Then $B = IB = (CA)B = C(AB) = CI = C$, so $B$ and $C$ must be the same matrix. This means a matrix either has no inverse or has exactly one.

The inverse is defined only for square matrices. A rectangular matrix cannot satisfy $AA^{-1} = A^{-1}A = I$ because the products would require incompatible dimensions. One-sided inverses (left or right) can exist for rectangular matrices with full column or full row rank, but the two-sided inverse is a strictly square concept.

The 2×2 Inverse Formula

For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $ad - bc \neq 0$, the inverse is

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$


The recipe is: swap the diagonal entries, negate the off-diagonal entries, and divide everything by the determinant $ad - bc$.
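The recipe translates directly into code. The helper below is a minimal sketch; the function name `inverse_2x2` and the use of plain Python lists are illustrative choices, not part of any standard library.

```python
def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] by the swap-and-negate recipe."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    # Swap the diagonal, negate the off-diagonal, divide by the determinant.
    return [[d / det, -b / det],
            [-c / det, a / det]]
```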

Worked Examples


For $A = \begin{pmatrix} 3 & 1 \\ 5 & 2 \end{pmatrix}$, the determinant is $3 \cdot 2 - 1 \cdot 5 = 1$. The inverse is $A^{-1} = \begin{pmatrix} 2 & -1 \\ -5 & 3 \end{pmatrix}$. Since $\det(A) = 1$, every entry of $A^{-1}$ is an integer.

Verification: $AA^{-1} = \begin{pmatrix} 3 & 1 \\ 5 & 2 \end{pmatrix} \begin{pmatrix} 2 & -1 \\ -5 & 3 \end{pmatrix} = \begin{pmatrix} 6 - 5 & -3 + 3 \\ 10 - 10 & -5 + 6 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.

For $A = \begin{pmatrix} 2 & 4 \\ 3 & 6 \end{pmatrix}$, the determinant is $2 \cdot 6 - 4 \cdot 3 = 0$. The second column is twice the first, the columns are linearly dependent, and no inverse exists.

For $A = \begin{pmatrix} 1 & 3 \\ 2 & 7 \end{pmatrix}$, the determinant is $7 - 6 = 1$. The inverse is $A^{-1} = \begin{pmatrix} 7 & -3 \\ -2 & 1 \end{pmatrix}$.
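These examples are easy to check numerically. A minimal sketch with NumPy (assuming the standard `numpy` package) confirms the first inverse and flags the singular case:

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
A_inv = np.linalg.inv(A)
print(A_inv)                              # approximately [[ 2. -1.] [-5.  3.]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: product is the identity

B = np.array([[2.0, 4.0], [3.0, 6.0]])
print(np.linalg.det(B))                   # ~0 up to rounding: B is singular
print(np.linalg.matrix_rank(B))           # 1, not 2 -> no inverse exists
```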

When Does the Inverse Exist?

The invertible matrix theorem collects a list of conditions that are all equivalent for an $n \times n$ matrix $A$. Each approaches invertibility from a different angle — algebraic, geometric, computational, spectral — but they are all either simultaneously true or simultaneously false.

- $A$ is invertible.
- The determinant $\det(A)$ is nonzero.
- The rank of $A$ equals $n$.
- The columns of $A$ are linearly independent.
- The rows of $A$ are linearly independent.
- The columns of $A$ span $\mathbb{R}^n$.
- The columns of $A$ form a basis for $\mathbb{R}^n$.
- The homogeneous system $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
- The system $A\mathbf{x} = \mathbf{b}$ has a unique solution for every $\mathbf{b} \in \mathbb{R}^n$.
- The null space of $A$ is $\{\mathbf{0}\}$.
- The reduced row echelon form of $A$ is $I$.
- The matrix $A$ is a product of elementary matrices.
- Zero is not an eigenvalue of $A$.

The power of this theorem is that proving any one condition automatically establishes all the others. Checking the determinant is often the fastest single test, but in large-scale computation, rank determination via row reduction is more practical.
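As a rough illustration, several of these conditions can be tested numerically with NumPy. The helper name `invertibility_checks` is illustrative, and the floating-point tolerances are a practical compromise rather than part of the theorem:

```python
import numpy as np

def invertibility_checks(A):
    """Test several equivalent conditions from the invertible matrix theorem."""
    n = A.shape[0]
    return {
        "nonzero determinant": not np.isclose(np.linalg.det(A), 0.0),
        "full rank": np.linalg.matrix_rank(A) == n,
        "zero not an eigenvalue": not np.any(np.isclose(np.linalg.eigvals(A), 0.0)),
    }

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(invertibility_checks(A))  # all True: det(A) = -2
```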

Computing the Inverse by Row Reduction

The standard algorithm for computing the inverse of an $n \times n$ matrix $A$ is to form the $n \times 2n$ augmented matrix $[A \mid I]$ and apply row operations to reduce the left half to the identity. If the reduction succeeds, the right half becomes $A^{-1}$:

$$[A \mid I] \xrightarrow{\text{row ops}} [I \mid A^{-1}]$$


Each row operation is left-multiplication by an elementary matrix. If the sequence of operations is $E_1, E_2, \dots, E_k$, then $E_k \cdots E_2 E_1 A = I$, which means $A^{-1} = E_k \cdots E_2 E_1$. Applying the same operations to $I$ produces exactly this product.

Worked Example


$$A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 5 & 3 \\ 1 & 3 & 3 \end{pmatrix}$$


Form $[A \mid I]$ and reduce. Subtract $2$ times row $1$ from row $2$, and subtract row $1$ from row $3$:

$$\begin{pmatrix} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & -2 & 1 & 0 \\ 0 & 1 & 2 & -1 & 0 & 1 \end{pmatrix}$$


Subtract row $2$ from row $3$:

$$\begin{pmatrix} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & -2 & 1 & 0 \\ 0 & 0 & 1 & 1 & -1 & 1 \end{pmatrix}$$


Subtract row $3$ from row $2$, and subtract row $3$ from row $1$:

$$\begin{pmatrix} 1 & 2 & 0 & 0 & 1 & -1 \\ 0 & 1 & 0 & -3 & 2 & -1 \\ 0 & 0 & 1 & 1 & -1 & 1 \end{pmatrix}$$


Subtract $2$ times row $2$ from row $1$:

$$\begin{pmatrix} 1 & 0 & 0 & 6 & -3 & 1 \\ 0 & 1 & 0 & -3 & 2 & -1 \\ 0 & 0 & 1 & 1 & -1 & 1 \end{pmatrix}$$


So $A^{-1} = \begin{pmatrix} 6 & -3 & 1 \\ -3 & 2 & -1 \\ 1 & -1 & 1 \end{pmatrix}$.

If at any point during reduction the left half develops a row of all zeros, $A$ is singular and no inverse exists.
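The algorithm translates directly into code. The sketch below (a hypothetical `inverse_by_row_reduction`, with partial pivoting added for numerical stability beyond what the hand computation needs) reproduces the worked example:

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Reduce [A | I] to [I | A^{-1}] by Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Pivot on the largest entry in the column for numerical stability.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]          # scale the pivot row
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # clear the rest of the column
    return M[:, n:]

A = np.array([[1, 2, 1], [2, 5, 3], [1, 3, 3]])
print(inverse_by_row_reduction(A))
# [[ 6. -3.  1.]
#  [-3.  2. -1.]
#  [ 1. -1.  1.]]
```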

Computing the Inverse via the Adjugate

The adjugate identity $A \cdot \operatorname{adj}(A) = \det(A) \cdot I$ gives an explicit formula when $\det(A) \neq 0$:

$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$


The adjugate is the transpose of the cofactor matrix, so each entry of $A^{-1}$ is a cofactor of $A$ divided by $\det(A)$.

For the $2 \times 2$ case, the adjugate formula reduces to the swap-and-negate formula given earlier. The cofactor matrix of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$, and transposing gives $\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$, which is exactly the numerator matrix in the $2 \times 2$ inverse formula.

For $3 \times 3$ and larger matrices, the adjugate formula remains exact and fully symbolic — it shows explicitly how each entry of $A^{-1}$ depends on the entries of $A$. This makes it valuable for theoretical work and for deriving sensitivity formulas. For numerical computation, however, it is vastly more expensive than row reduction: computing the adjugate requires $n^2$ cofactors, each of which is an $(n-1) \times (n-1)$ determinant.
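For comparison, here is a minimal sketch of the adjugate route in NumPy. The function name `inverse_by_adjugate` is illustrative; each cofactor is computed by deleting a row and a column and taking a determinant, exactly as the formula prescribes:

```python
import numpy as np

def inverse_by_adjugate(A):
    """Compute A^{-1} = adj(A) / det(A); each cofactor is a signed minor."""
    n = A.shape[0]
    det = np.linalg.det(A)
    if np.isclose(det, 0.0):
        raise ValueError("matrix is singular")
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / det  # the adjugate is the transpose of the cofactor matrix

A = np.array([[1.0, 2.0, 1.0], [2.0, 5.0, 3.0], [1.0, 3.0, 3.0]])
print(inverse_by_adjugate(A))  # same result as row reduction
```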

Properties of the Inverse

The inverse satisfies a collection of identities that mirror and extend the familiar rules for reciprocals of real numbers.

Applying the inverse twice recovers the original: $(A^{-1})^{-1} = A$. The inverse of a product reverses the order: $(AB)^{-1} = B^{-1}A^{-1}$. This generalizes to any number of factors: $(A_1 A_2 \cdots A_k)^{-1} = A_k^{-1} \cdots A_2^{-1} A_1^{-1}$.

Transpose and inverse commute: $(A^T)^{-1} = (A^{-1})^T$. It does not matter whether you transpose first and then invert, or invert first and then transpose.

Scalars pass through as expected: $(cA)^{-1} = \frac{1}{c} A^{-1}$ for any nonzero scalar $c$. Powers behave cleanly: $(A^k)^{-1} = (A^{-1})^k = A^{-k}$.

The determinant of the inverse is the reciprocal of the determinant: $\det(A^{-1}) = 1/\det(A)$. This follows immediately from the multiplicative property of the determinant: $\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det(I) = 1$.
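These identities are easy to spot-check numerically. The sketch below draws random matrices, which are invertible with probability one (an explicit check is omitted for brevity), and verifies each property with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
inv = np.linalg.inv

print(np.allclose(inv(inv(A)), A))                # (A^{-1})^{-1} = A
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # product inverse reverses order
print(np.allclose(inv(A.T), inv(A).T))            # transpose and inverse commute
print(np.isclose(np.linalg.det(inv(A)), 1 / np.linalg.det(A)))  # reciprocal det
```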

Solving Systems with the Inverse

When $A$ is invertible, the system $A\mathbf{x} = \mathbf{b}$ has the unique solution

$$\mathbf{x} = A^{-1}\mathbf{b}$$


This is the matrix analogue of dividing both sides by $A$. Multiplying both sides on the left by $A^{-1}$ gives $A^{-1}A\mathbf{x} = A^{-1}\mathbf{b}$, which simplifies to $\mathbf{x} = A^{-1}\mathbf{b}$.

In principle, this solves the system in one matrix-vector multiplication — but only if $A^{-1}$ is already known. Computing $A^{-1}$ from scratch requires roughly as much work as solving the system by Gaussian elimination, and the elimination approach is more numerically stable. Even when multiple systems share the same coefficient matrix $A$ with different right-hand sides, the LU decomposition is preferred: factor $A = LU$ once, then solve each system with two cheap triangular substitutions.
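A minimal sketch of this factor-once, solve-many pattern, using SciPy's `lu_factor` and `lu_solve` (assuming the standard `scipy` package; the matrix and right-hand sides are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)          # factor once

for b in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 5.0])):
    x = lu_solve((lu, piv), b)  # each solve is two cheap triangular substitutions
    print(x, np.allclose(A @ x, b))
```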

The formula $\mathbf{x} = A^{-1}\mathbf{b}$ is most valuable as a theoretical tool. It proves that an invertible system always has a unique solution, and it makes the dependence of $\mathbf{x}$ on $\mathbf{b}$ explicit and linear.

Inverses of Special Matrix Types

Several matrix types have inverses with guaranteed structure, and some are trivially cheap to compute.

A diagonal matrix $D = \operatorname{diag}(d_1, \dots, d_n)$ is invertible if and only if every diagonal entry is nonzero, and its inverse is $D^{-1} = \operatorname{diag}(1/d_1, \dots, 1/d_n)$ — simply reciprocate each entry.

An orthogonal matrix $Q$ satisfies $Q^{-1} = Q^T$. The inverse is the transpose, which costs nothing to compute — just reinterpret the matrix with rows and columns swapped.

The inverse of an upper triangular matrix is upper triangular, and the inverse of a lower triangular matrix is lower triangular. The computation can be done by back-substitution without forming the full augmented matrix.

If $A$ is symmetric and invertible, then $A^{-1}$ is also symmetric: $(A^{-1})^T = (A^T)^{-1} = A^{-1}$.

A block diagonal matrix $\operatorname{diag}(A_1, \dots, A_k)$ is invertible if and only if each block is invertible, and the inverse is $\operatorname{diag}(A_1^{-1}, \dots, A_k^{-1})$ — each block is inverted independently.
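Each of these structural facts can be confirmed in a few lines of NumPy; the matrices below are illustrative choices:

```python
import numpy as np

# Diagonal: the inverse reciprocates each entry.
d = np.array([2.0, 5.0, 0.5])
print(np.allclose(np.linalg.inv(np.diag(d)), np.diag(1 / d)))

# Orthogonal: the inverse is the transpose (here, rotation by 30 degrees).
t = np.pi / 6
Q = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(np.allclose(np.linalg.inv(Q), Q.T))

# Upper triangular: the inverse stays upper triangular.
U = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [0.0, 0.0, 6.0]])
print(np.allclose(np.tril(np.linalg.inv(U), -1), 0.0))  # below-diagonal part is zero
```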

Common Errors

Several incorrect analogies from scalar arithmetic cause persistent mistakes when working with matrix inverses.

The inverse does not distribute over addition. The expression $(A + B)^{-1}$ is not equal to $A^{-1} + B^{-1}$, and there is no simple formula relating the two. If $A = B = I$, then $(A + B)^{-1} = (2I)^{-1} = \frac{1}{2}I$, while $A^{-1} + B^{-1} = I + I = 2I$ — these are clearly different.

The inverse of a product reverses order. Writing $(AB)^{-1} = A^{-1}B^{-1}$ is wrong; the correct identity is $(AB)^{-1} = B^{-1}A^{-1}$. The reversal is a consequence of non-commutativity: to undo the operation "first apply $B$, then apply $A$," you must first undo $A$, then undo $B$.

Not every matrix is invertible. Assuming an inverse exists without checking the determinant or rank leads to division-by-zero errors in the $2 \times 2$ formula and to contradictions in row reduction.

The inverse is not the entry-by-entry reciprocal. The matrix $A^{-1}$ is not obtained by replacing each $a_{ij}$ with $1/a_{ij}$. The inverse is a global operation that depends on all entries simultaneously.
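A short numerical sketch makes the first two errors concrete; the matrices are illustrative choices:

```python
import numpy as np

inv = np.linalg.inv

A = np.eye(2)              # A = B = I, the example above
B = A.copy()
print(inv(A + B))          # (2I)^{-1} = 0.5 I
print(inv(A) + inv(B))     # 2I -- clearly different

C = np.array([[1.0, 2.0], [3.0, 4.0]])
D = np.array([[0.0, 1.0], [1.0, 1.0]])
print(np.allclose(inv(C @ D), inv(D) @ inv(C)))  # True: reversed order
print(np.allclose(inv(C @ D), inv(C) @ inv(D)))  # False: same order fails
```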

When Not to Compute the Inverse

The formula $\mathbf{x} = A^{-1}\mathbf{b}$ is clean on paper but misleading as a computational strategy. In almost every practical setting, solving $A\mathbf{x} = \mathbf{b}$ by row reduction or LU decomposition is faster and more numerically stable than computing $A^{-1}$ first and then multiplying.

Row reduction requires roughly $\frac{2}{3}n^3$ operations to factor a system. Computing the full inverse requires roughly $2n^3$ operations — three times the cost — and introduces additional rounding error in floating-point arithmetic. When multiple systems with the same $A$ need to be solved, the LU factorization should be computed once and reused, not replaced by an explicit inverse.
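A minimal sketch of the comparison: both routes solve the same system, but `np.linalg.solve` factors once and back-substitutes, while the inverse route does strictly more work. On a typical run the solve residual is at least as small, though exact numbers vary by machine:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)          # LU factor + triangular solves
x_inv = np.linalg.inv(A) @ b             # full inverse first, then multiply

print(np.linalg.norm(A @ x_solve - b))   # residual of the direct solve
print(np.linalg.norm(A @ x_inv - b))     # residual of the inverse route
```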

The inverse is the right object when $A^{-1}$ itself — the entire matrix, not just its action on a specific $\mathbf{b}$ — is what matters. This happens in theoretical derivations, in symbolic formulas where the dependence on parameters must be made explicit, in sensitivity analysis, and when working with small matrices by hand. For a $2 \times 2$ or $3 \times 3$ matrix, computing the inverse directly is perfectly reasonable. For an $n \times n$ matrix with $n$ in the hundreds or thousands, it is almost never the right approach.