

Rank of a Matrix






Measuring the Effective Size of a Matrix

A matrix may have many rows and columns, but some of them may carry redundant information — expressible as combinations of others. The rank strips away this redundancy and counts the number of truly independent directions the matrix uses, revealing its effective dimensionality and governing the solvability of every linear system it defines.



What Rank Measures

The rank of an $m \times n$ matrix $A$ is a single non-negative integer $r$ that captures how much of the matrix's potential dimensionality is actually used. It satisfies

$$0 \leq \text{rank}(A) \leq \min(m, n)$$

When $\text{rank}(A) = \min(m, n)$, the matrix has full rank — every row and every column contributes something that no combination of the others can reproduce. When $\text{rank}(A) < \min(m, n)$, the matrix is rank-deficient, meaning at least one row or column is a linear combination of the others.

A $5 \times 3$ matrix with rank $3$ uses all three of its column directions. A $5 \times 3$ matrix with rank $2$ has one column that is redundant — it lies in the span of the other two. The rank does not say which column is redundant (often more than one subset works), only that the effective column count is $2$.
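In practice the rank is computed numerically. A minimal sketch using NumPy's `matrix_rank` (which counts singular values above a tolerance); the $5 \times 3$ matrix below is an assumed example whose third column is the sum of the first two, so the effective column count is 2:

```python
import numpy as np
from numpy.linalg import matrix_rank

# Assumed example: the third column equals the sum of the first two,
# so only two column directions are independent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 1.0, 2.0],
              [0.0, 2.0, 2.0]])

print(matrix_rank(A))  # prints 2
```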

Column Rank and Row Rank

The column rank of $A$ is the dimension of its column space — the subspace of $\mathbb{R}^m$ spanned by the columns of $A$. It counts the maximum number of linearly independent columns.

The row rank is the dimension of the row space — the subspace of $\mathbb{R}^n$ spanned by the rows. It counts the maximum number of linearly independent rows.

A fundamental theorem states that these two numbers are always equal:

$$\text{column rank of } A = \text{row rank of } A$$

This common value is called the rank of $A$, written $\text{rank}(A)$ or $\text{rk}(A)$.

The equality is not obvious. The columns live in $\mathbb{R}^m$ and the rows live in $\mathbb{R}^n$ — two different spaces, potentially of different dimensions. The proof goes through row reduction: elementary row operations do not change the row space, and in reduced row echelon form, the number of nonzero rows (the row rank) equals the number of pivot columns (the column rank). Since row operations preserve both counts, the equality holds for the original matrix.
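Since the row rank of $A$ is the column rank of $A^T$, the theorem can be spot-checked numerically. A quick sketch with NumPy on an assumed random rectangular example:

```python
import numpy as np
from numpy.linalg import matrix_rank

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 7))  # assumed arbitrary 4x7 example

# Row rank of A is the column rank of A^T; the two counts must agree.
print(matrix_rank(A), matrix_rank(A.T))
```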

Computing Rank via Row Reduction

The standard method for finding the rank of a matrix is to reduce it to row echelon form and count the pivot positions.

Worked Example


$$A = \begin{pmatrix} 1 & 2 & 0 & -1 & 3 \\ 2 & 4 & 1 & 0 & 5 \\ -1 & -2 & 3 & 4 & 1 \\ 0 & 0 & 2 & 3 & -1 \end{pmatrix}$$

Subtract 2 times row 1 from row 2, and add row 1 to row 3:

$$\begin{pmatrix} 1 & 2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -1 \\ 0 & 0 & 3 & 3 & 4 \\ 0 & 0 & 2 & 3 & -1 \end{pmatrix}$$

Subtract 3 times row 2 from row 3, and subtract 2 times row 2 from row 4:

$$\begin{pmatrix} 1 & 2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & -3 & 7 \\ 0 & 0 & 0 & -1 & 1 \end{pmatrix}$$

Subtract 3 times row 4 from row 3:

$$\begin{pmatrix} 1 & 2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & -1 & 1 \end{pmatrix}$$

Swap rows 3 and 4 to place the pivot:

$$\begin{pmatrix} 1 & 2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & 0 & 4 \end{pmatrix}$$

There are four pivots, in columns 1, 3, 4, and 5. So $\text{rank}(A) = 4$. Column 2 is the only non-pivot column, corresponding to the single free variable if this matrix were the coefficient matrix of a system.
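The pivot count can be verified symbolically. A sketch using SymPy's `rref`, which returns the reduced row echelon form together with the pivot column indices (0-based):

```python
import sympy as sp

# The matrix from the worked example above.
A = sp.Matrix([
    [ 1,  2, 0, -1,  3],
    [ 2,  4, 1,  0,  5],
    [-1, -2, 3,  4,  1],
    [ 0,  0, 2,  3, -1],
])

R, pivots = A.rref()  # reduced row echelon form, pivot column indices
print(A.rank())       # prints 4
print(pivots)         # prints (0, 2, 3, 4), i.e. columns 1, 3, 4, 5 one-based
```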

Rank and Dimension

For an $m \times n$ matrix $A$, the rank can be at most $\min(m, n)$. Whether it reaches this maximum depends on the matrix's entries, not just its shape.

Full column rank means $\text{rank}(A) = n$ — all $n$ columns are independent. When $A$ has full column rank, the system $A\mathbf{x} = \mathbf{b}$ has at most one solution for any $\mathbf{b}$, because no free variables exist. The null space is $\{\mathbf{0}\}$.

Full row rank means $\text{rank}(A) = m$ — all $m$ rows are independent. When $A$ has full row rank, the system $A\mathbf{x} = \mathbf{b}$ has at least one solution for every $\mathbf{b}$, because the column space is all of $\mathbb{R}^m$.

When $A$ is square ($m = n$) and has full rank $n$, both conditions hold simultaneously: the system has exactly one solution for every right-hand side, and $A$ is invertible.

Rank and Linear Systems

The solvability of a linear system $A\mathbf{x} = \mathbf{b}$ is determined entirely by comparing the rank of the coefficient matrix $A$ with the rank of the augmented matrix $[A \mid \mathbf{b}]$.

A solution exists if and only if $\text{rank}(A) = \text{rank}([A \mid \mathbf{b}])$. This condition means that $\mathbf{b}$ lies in the column space of $A$ — it can be expressed as a linear combination of the columns.

When solutions exist, uniqueness depends on whether the rank equals the number of unknowns $n$. If $\text{rank}(A) = n$, there are no free variables and the solution is unique. If $\text{rank}(A) < n$, there are $n - \text{rank}(A)$ free variables, and the solution set is an infinite family parametrized by those free variables.

The three possible outcomes are: if $\text{rank}(A) < \text{rank}([A \mid \mathbf{b}])$, the system is inconsistent and has no solution; if $\text{rank}(A) = \text{rank}([A \mid \mathbf{b}]) = n$, there is exactly one solution; if $\text{rank}(A) = \text{rank}([A \mid \mathbf{b}]) < n$, there are infinitely many solutions.

There is no scenario with a finite number of solutions greater than one. A linear system either has zero, one, or infinitely many solutions.
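The three-way classification translates directly into a rank comparison. A sketch in SymPy; the $3 \times 3$ matrix and right-hand sides are assumed examples, chosen so the third row of $A$ is the sum of the first two:

```python
import sympy as sp

# Assumed example: row 3 = row 1 + row 2, so rank(A) = 2 < n = 3.
A = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [1, 2, 1]])
b_consistent = sp.Matrix([1, 2, 3])    # lies in the column space of A
b_inconsistent = sp.Matrix([1, 2, 4])  # does not

def classify(A, b):
    """Classify Ax = b by comparing rank(A), rank([A | b]), and n."""
    r_A = A.rank()
    r_Ab = A.row_join(b).rank()  # rank of the augmented matrix
    n = A.cols
    if r_A < r_Ab:
        return "no solution"
    return "unique solution" if r_A == n else "infinitely many solutions"

print(classify(A, b_consistent))    # prints: infinitely many solutions
print(classify(A, b_inconsistent))  # prints: no solution
```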

The Rank-Nullity Theorem

For an $m \times n$ matrix $A$, the rank and the nullity — the dimension of the null space $\{\mathbf{x} : A\mathbf{x} = \mathbf{0}\}$ — satisfy

$$\text{rank}(A) + \text{nullity}(A) = n$$

The $n$ columns of $A$ partition into two groups: the pivot columns, which contribute to the column space and drive the rank, and the free columns, which contribute to the null space and drive the nullity. Every column does exactly one of these things.

For a $3 \times 5$ matrix with rank $2$, the nullity is $3$. The column space is a two-dimensional subspace of $\mathbb{R}^3$ (a plane through the origin), and the null space is a three-dimensional subspace of $\mathbb{R}^5$.

For a square $n \times n$ matrix, the theorem says $\text{rank}(A) + \text{nullity}(A) = n$. If the rank is $n$ (full rank), the nullity is $0$ — the null space contains only $\mathbf{0}$, and $A$ is invertible. If the rank is less than $n$, the null space is nontrivial, the determinant is zero, and $A$ is singular.

The rank-nullity theorem is sometimes called the dimension theorem for linear maps. If $A$ defines a linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$, then the rank is the dimension of the image (range) of $T$, and the nullity is the dimension of the kernel. Their sum equals the dimension of the domain.
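The theorem can be checked by counting a null-space basis. A sketch in SymPy on an assumed $3 \times 5$ example with rank $2$ (the third row is the sum of the first two), matching the dimensions discussed above:

```python
import sympy as sp

# Assumed 3x5 example with rank 2: row 3 = row 1 + row 2.
A = sp.Matrix([[1, 0, 1, 0, 1],
               [0, 1, 0, 1, 0],
               [1, 1, 1, 1, 1]])

rank = A.rank()
nullity = len(A.nullspace())  # number of basis vectors of the null space

print(rank, nullity, A.cols)  # prints: 2 3 5
assert rank + nullity == A.cols
```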

Properties of Rank

The rank function obeys several inequalities and identities that constrain how matrix operations affect it.

The rank of the zero matrix is $0$, and it is the only matrix with rank zero. For any nonzero scalar $c$, $\text{rank}(cA) = \text{rank}(A)$ — scaling does not create or destroy independence.

Transposition preserves rank: $\text{rank}(A^T) = \text{rank}(A)$. This is a restatement of the equality of row rank and column rank.

The rank of a product can only decrease:

$$\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))$$


Multiplying by a matrix can collapse dimensions but cannot create new independent directions. There is also a lower bound, Sylvester's inequality:

$$\text{rank}(A) + \text{rank}(B) - n \leq \text{rank}(AB)$$

for $A$ of size $m \times n$ and $B$ of size $n \times p$. This says the rank of the product cannot drop too far below the ranks of the factors.
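Both product bounds can be sandwiched numerically. A sketch with NumPy on assumed random factors (generic random matrices attain the maximal possible ranks):

```python
import numpy as np
from numpy.linalg import matrix_rank

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # generically rank 3
B = rng.standard_normal((3, 5))  # generically rank 3

r_A, r_B, r_AB = matrix_rank(A), matrix_rank(B), matrix_rank(A @ B)
n = A.shape[1]  # inner dimension shared by A and B

# Upper bound: rank(AB) <= min(rank(A), rank(B)).
assert r_AB <= min(r_A, r_B)
# Sylvester's lower bound: rank(A) + rank(B) - n <= rank(AB).
assert r_A + r_B - n <= r_AB
print(r_A, r_B, r_AB)
```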

The rank of a sum satisfies $\text{rank}(A + B) \leq \text{rank}(A) + \text{rank}(B)$. Equality holds precisely when the column spaces of $A$ and $B$ intersect only at $\mathbf{0}$ and the row spaces of $A$ and $B$ do as well.

Multiplying by an invertible matrix preserves rank exactly: if $P$ and $Q$ are invertible, then $\text{rank}(PAQ) = \text{rank}(A)$. This is because invertible matrices neither collapse nor create dimensions.

Rank of Special Matrices

Several matrix types have rank that can be read off directly from their structure.

The identity matrix $I_n$ has rank $n$ — all columns are standard basis vectors, which are linearly independent. Every invertible matrix has full rank by definition.

A diagonal matrix has rank equal to the number of nonzero diagonal entries. The zero entries correspond to collapsed coordinate directions.

A rank-$1$ matrix has the form $A = \mathbf{u}\mathbf{v}^T$, an outer product of two nonzero vectors. Every column of $A$ is a scalar multiple of $\mathbf{u}$, so the column space is the one-dimensional line through $\mathbf{u}$. Equivalently, every row is a scalar multiple of $\mathbf{v}^T$. Rank-$1$ matrices are the building blocks of the outer product decomposition of matrix multiplication.
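A quick sketch of the outer-product construction with NumPy; the two vectors are assumed examples:

```python
import numpy as np
from numpy.linalg import matrix_rank

# Assumed nonzero vectors; np.outer builds the 3x4 matrix u v^T.
u = np.array([1.0, 2.0, -1.0])
v = np.array([3.0, 0.0, 1.0, 2.0])
A = np.outer(u, v)

# Every column of A is a scalar multiple of u, so the rank is 1.
print(matrix_rank(A))  # prints 1
```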

A symmetric positive definite matrix always has full rank — all its eigenvalues are strictly positive, so no dimension is collapsed. A nonzero nilpotent $n \times n$ matrix always has rank strictly less than $n$: its only eigenvalue is $0$, so its determinant is zero.

The rank of $A^T A$ equals the rank of $A$. This follows from the fact that the null spaces of $A$ and $A^T A$ are identical: $A\mathbf{x} = \mathbf{0}$ implies $A^T A \mathbf{x} = \mathbf{0}$, and conversely $A^T A \mathbf{x} = \mathbf{0}$ implies $\mathbf{x}^T A^T A \mathbf{x} = \|A\mathbf{x}\|^2 = 0$, so $A\mathbf{x} = \mathbf{0}$. By the rank-nullity theorem, equal nullities with the same $n$ give equal ranks.
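This identity holds even for rank-deficient matrices. A sketch with NumPy, using an assumed construction that forces the rank down to $2$:

```python
import numpy as np
from numpy.linalg import matrix_rank

rng = np.random.default_rng(1)
# Assumed construction: a product of random 6x2 and 2x4 factors is a
# 6x4 matrix of rank 2 (generically), so A is deliberately rank-deficient.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

# rank(A^T A) equals rank(A) because the two null spaces coincide.
print(matrix_rank(A), matrix_rank(A.T @ A))
```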

Rank and the Four Fundamental Subspaces

Every $m \times n$ matrix $A$ gives rise to four subspaces, and the rank governs all of their dimensions.

The column space of $A$ is the span of the columns, a subspace of $\mathbb{R}^m$ with dimension equal to $\text{rank}(A)$. The row space of $A$ is the span of the rows, a subspace of $\mathbb{R}^n$, also with dimension $\text{rank}(A)$. The null space of $A$ consists of all solutions to $A\mathbf{x} = \mathbf{0}$, a subspace of $\mathbb{R}^n$ with dimension $n - \text{rank}(A)$. The left null space consists of all solutions to $A^T\mathbf{y} = \mathbf{0}$, a subspace of $\mathbb{R}^m$ with dimension $m - \text{rank}(A)$.

These four subspaces split into two pairs of orthogonal complements. In $\mathbb{R}^n$, the row space and the null space are orthogonal complements: every vector in $\mathbb{R}^n$ can be uniquely decomposed into a row-space component and a null-space component, and the two are perpendicular. In $\mathbb{R}^m$, the column space and the left null space form the analogous pair.

The four dimensions add up correctly on both sides: $\text{rank}(A) + (n - \text{rank}(A)) = n$ in $\mathbb{R}^n$, and $\text{rank}(A) + (m - \text{rank}(A)) = m$ in $\mathbb{R}^m$. The rank is the single number that controls the entire structural decomposition.
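All four dimensions can be computed directly. A sketch in SymPy on an assumed $3 \times 5$ example whose third row is the sum of the first two, so $\text{rank}(A) = 2$:

```python
import sympy as sp

# Assumed 3x5 example with rank 2: row 3 = row 1 + row 2.
A = sp.Matrix([[1, 2, 0, -1, 3],
               [2, 4, 1,  0, 5],
               [3, 6, 1, -1, 8]])

m, n = A.shape
r = A.rank()
dims = {
    "column space":    len(A.columnspace()),   # subspace of R^m, dim r
    "row space":       len(A.rowspace()),      # subspace of R^n, dim r
    "null space":      len(A.nullspace()),     # subspace of R^n, dim n - r
    "left null space": len(A.T.nullspace()),   # subspace of R^m, dim m - r
}
print(dims)
```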