A matrix may have many rows and columns, but some of them may carry redundant information — expressible as combinations of others. The rank strips away this redundancy and counts the number of truly independent directions the matrix uses, revealing its effective dimensionality and governing the solvability of every linear system it defines.
What Rank Measures
The rank of an m×n matrix A is a single non-negative integer r that captures how much of the matrix's potential dimensionality is actually used. It satisfies
0≤rank(A)≤min(m,n)
When rank(A)=min(m,n), the matrix has full rank — every row and every column contributes something that no combination of the others can reproduce. When rank(A)<min(m,n), the matrix is rank-deficient, meaning at least one row or column is a linear combination of the others.
A 5×3 matrix with rank 3 uses all three of its column directions. A 5×3 matrix with rank 2 has one column that is redundant — it lies in the span of the other two. The rank does not say which column is redundant (often more than one subset works), only that the effective column count is 2.
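This is easy to check numerically. The sketch below (assuming NumPy is available; the matrix is constructed here purely for illustration) builds a 5×3 matrix whose third column is the sum of the first two, so the effective column count is 2:

```python
import numpy as np

# Illustrative 5x3 matrix: the third column is the sum of the first two,
# so only two columns are genuinely independent.
rng = np.random.default_rng(0)
c1 = rng.standard_normal(5)
c2 = rng.standard_normal(5)
A = np.column_stack([c1, c2, c1 + c2])

r = np.linalg.matrix_rank(A)   # 2, not 3: one column is redundant
```

Note that `matrix_rank` cannot tell you *which* column is redundant, matching the point above: it reports only the effective count.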
Column Rank and Row Rank
The column rank of A is the dimension of its column space — the subspace of Rm spanned by the columns of A. It counts the maximum number of linearly independent columns.
The row rank is the dimension of the row space — the subspace of Rn spanned by the rows. It counts the maximum number of linearly independent rows.
A fundamental theorem states that these two numbers are always equal:
column rank of A=row rank of A
This common value is called the rank of A, written rank(A) or rk(A).
The equality is not obvious. The columns live in Rm and the rows live in Rn — two different spaces, potentially of different dimensions. The proof goes through row reduction: elementary row operations do not change the row space, and in reduced row echelon form, the number of nonzero rows (row rank) equals the number of pivot columns (column rank). Since row operations preserve both counts, the equality holds for the original matrix.
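Numerically, the equality shows up as rank(A) = rank(AT). A small sketch (NumPy assumed; the dependent row is planted for illustration):

```python
import numpy as np

# A 4x7 matrix with one planted dependent row (row 4 = row 1 + 2 * row 2),
# so both the row rank and the column rank are 3.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 7))
A[3] = A[0] + 2 * A[1]

r = np.linalg.matrix_rank(A)     # rank of A
rt = np.linalg.matrix_rank(A.T)  # rank of the transpose: always the same
```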
Computing Rank via Row Reduction
The standard method for finding the rank of a matrix is to reduce it to row echelon form and count the pivot positions.
Worked Example
A =
[  1   2   0  −1   3 ]
[  2   4   1   0   5 ]
[ −1  −2   3   4   1 ]
[  0   0   2   3  −1 ]
Subtract 2 times row 1 from row 2, and add row 1 to row 3:
[ 1  2  0  −1   3 ]
[ 0  0  1   2  −1 ]
[ 0  0  3   3   4 ]
[ 0  0  2   3  −1 ]
Subtract 3 times row 2 from row 3, and subtract 2 times row 2 from row 4:
[ 1  2  0  −1   3 ]
[ 0  0  1   2  −1 ]
[ 0  0  0  −3   7 ]
[ 0  0  0  −1   1 ]
Subtract 3 times row 4 from row 3:
[ 1  2  0  −1   3 ]
[ 0  0  1   2  −1 ]
[ 0  0  0   0   4 ]
[ 0  0  0  −1   1 ]
Swap rows 3 and 4 to place the pivot:
[ 1  2  0  −1   3 ]
[ 0  0  1   2  −1 ]
[ 0  0  0  −1   1 ]
[ 0  0  0   0   4 ]
There are four pivots, in columns 1, 3, 4, and 5. So rank(A)=4. Column 2 is the only non-pivot column, corresponding to the single free variable if this matrix were the coefficient matrix of a system.
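The reduction can be replayed numerically. A sketch (NumPy assumed) that applies the same four row operations and confirms the rank:

```python
import numpy as np

# The 4x5 matrix from the worked example.
A = np.array([[ 1,  2, 0, -1,  3],
              [ 2,  4, 1,  0,  5],
              [-1, -2, 3,  4,  1],
              [ 0,  0, 2,  3, -1]], dtype=float)

R = A.copy()
R[1] -= 2 * R[0]; R[2] += R[0]       # step 1: clear column 1
R[2] -= 3 * R[1]; R[3] -= 2 * R[1]   # step 2: clear column 3
R[2] -= 3 * R[3]                     # step 3
R[[2, 3]] = R[[3, 2]]                # step 4: swap rows 3 and 4

rank = np.linalg.matrix_rank(A)      # 4, matching the pivot count
```

The final `R` reproduces the echelon form above, with pivots in columns 1, 3, 4, and 5.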
Rank and Dimension
For an m×n matrix A, the rank can be at most min(m,n). Whether it reaches this maximum depends on the matrix's entries, not just its shape.
Full column rank means rank(A)=n — all n columns are independent. When A has full column rank, the system Ax=b has at most one solution for any b, because no free variables exist. The null space is {0}.
Full row rank means rank(A)=m — all m rows are independent. When A has full row rank, the system Ax=b has at least one solution for every b, because the column space is all of Rm.
When A is square (m=n) and has full rank n, both conditions hold simultaneously: the system has exactly one solution for every right-hand side, and A is invertible.
Rank and Linear Systems
The solvability of a linear system Ax=b is determined entirely by comparing the rank of the coefficient matrix A with the rank of the augmented matrix [A∣b].
A solution exists if and only if rank(A)=rank([A∣b]). This condition means that b lies in the column space of A — it can be expressed as a linear combination of the columns.
When solutions exist, uniqueness depends on whether the rank equals the number of unknowns n. If rank(A)=n, there are no free variables and the solution is unique. If rank(A)<n, there are n−rank(A) free variables, and the solution set is an infinite family parametrized by those free variables.
The three possible outcomes are: rank(A)<rank([A∣b]) means the system is inconsistent and has no solution. rank(A)=rank([A∣b])=n means there is exactly one solution. rank(A)=rank([A∣b])<n means there are infinitely many solutions.
There is no scenario with a finite number of solutions greater than one. A linear system either has zero, one, or infinitely many solutions.
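The rank comparison can be wrapped in a short helper. A sketch (NumPy assumed; `classify_system` is a name invented here for illustration):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing rank(A) with rank of the augmented matrix."""
    n = A.shape[1]
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r < r_aug:
        return "no solution"              # b lies outside the column space
    if r == n:
        return "unique solution"          # no free variables
    return "infinitely many solutions"    # n - r free variables

A = np.array([[1., 2.],
              [2., 4.]])                  # rank 1: row 2 = 2 * row 1
print(classify_system(A, np.array([1., 2.])))  # b in the column space
print(classify_system(A, np.array([1., 3.])))  # b not in the column space
```

The two calls land in the "infinitely many" and "no solution" cases respectively; a full-rank square A would give the unique-solution case.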
The Rank-Nullity Theorem
For an m×n matrix A, the rank and the nullity — the dimension of the null space {x:Ax=0} — satisfy
rank(A)+nullity(A)=n
The n columns of A partition into two groups: the pivot columns, which contribute to the column space and drive the rank, and the free columns, which contribute to the null space and drive the nullity. Every column does exactly one of these things.
For a 3×5 matrix with rank 2, the nullity is 3. The column space is a two-dimensional subspace of R3 (a plane through the origin), and the null space is a three-dimensional subspace of R5.
For a square n×n matrix, the theorem says rank(A)+nullity(A)=n. If the rank is n (full rank), the nullity is 0 — the null space contains only 0, and A is invertible. If the rank is less than n, the null space is nontrivial, the determinant is zero, and A is singular.
The rank-nullity theorem is sometimes called the dimension theorem for linear maps. If A defines a linear transformation T:Rn→Rm, then the rank is the dimension of the image (range) of T, and the nullity is the dimension of the kernel. Their sum equals the dimension of the domain.
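A numerical check of rank plus nullity (NumPy assumed; the 3×5 matrix is built for illustration as a sum of two outer products with independent factors, so its rank is 2 by construction):

```python
import numpy as np

# 3x5 matrix of rank 2: a sum of two outer products.
a = np.array([1., 0., 1.]);  x = np.array([1., 2., 0., 0., 1.])
b = np.array([0., 1., 2.]);  y = np.array([0., 1., 1., 3., 0.])
A = np.outer(a, x) + np.outer(b, y)

s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s > 1e-10))          # number of nonzero singular values
nullity = A.shape[1] - rank            # rank-nullity: nullity = n - rank
```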
Properties of Rank
The rank function obeys several inequalities and identities that constrain how matrix operations affect it.
The rank of the zero matrix is 0, and this is the only matrix with rank zero. For any nonzero scalar c, rank(cA)=rank(A) — scaling does not create or destroy independence.
Transposition preserves rank: rank(AT)=rank(A). This is a restatement of the equality of row rank and column rank.
The rank of a product can only decrease:
rank(AB)≤min(rank(A),rank(B))
Multiplying by a matrix can collapse dimensions but cannot create new independent directions. There is also a lower bound due to Sylvester's inequality:
rank(A)+rank(B)−n≤rank(AB)
for A of size m×n and B of size n×p. This says the rank of the product cannot drop too far below the ranks of the factors.
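Both bounds can be checked numerically (NumPy assumed; the matrices are chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))        # generically rank 3
B = rng.standard_normal((3, 5))        # generically rank 3
n = 3                                  # inner dimension

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

ok_upper = rAB <= min(rA, rB)          # product rank cannot exceed either factor
ok_lower = rA + rB - n <= rAB          # Sylvester's lower bound

# Multiplying by a rank-1 factor collapses the product to rank at most 1.
C = np.outer(np.ones(3), np.ones(5))   # rank 1, shape 3x5
r_collapsed = np.linalg.matrix_rank(A @ C)
```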
The rank of a sum satisfies rank(A+B)≤rank(A)+rank(B). Equality holds exactly when the column spaces of A and B intersect only at 0 and the row spaces of A and B likewise intersect only at 0; the column-space condition alone is not enough.
Multiplying by an invertible matrix preserves rank exactly: if P and Q are invertible, then rank(PAQ)=rank(A). This is because invertible matrices neither collapse nor create dimensions.
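A sketch checking rank(PAQ) = rank(A) (NumPy assumed; random Gaussian matrices are invertible with probability 1, so P and Q below serve as illustrative invertible factors):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))
A[2] = A[0] + A[1]                     # plant a dependent row: rank 2

P = rng.standard_normal((3, 3))        # invertible with probability 1
Q = rng.standard_normal((4, 4))

r_before = np.linalg.matrix_rank(A)
r_after = np.linalg.matrix_rank(P @ A @ Q)   # unchanged by invertible factors
```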
Rank of Special Matrices
Several matrix types have rank that can be read off directly from their structure.
The identity matrix In has rank n — all columns are standard basis vectors, which are linearly independent. Every invertible matrix has full rank by definition.
A diagonal matrix has rank equal to the number of nonzero diagonal entries. The zero entries correspond to collapsed coordinate directions.
A rank-1 matrix has the form A=uvT, an outer product of two nonzero vectors. Every column of A is a scalar multiple of u, so the column space is the one-dimensional line through u. Equivalently, every row is a scalar multiple of vT. Rank-1 matrices are the building blocks of the outer product decomposition of matrix multiplication.
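An outer product can be formed directly (NumPy assumed; u and v are arbitrary illustrative vectors):

```python
import numpy as np

u = np.array([1., 2., 3.])
v = np.array([4., 0., -1., 2.])
A = np.outer(u, v)                 # 3x4 rank-1 matrix u v^T

r = np.linalg.matrix_rank(A)       # 1
# Each column of A is a scalar multiple of u, scaled by an entry of v.
first_col_check = np.allclose(A[:, 0], v[0] * u)
```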
A symmetric positive definite matrix always has full rank — all its eigenvalues are strictly positive, so no dimension is collapsed. A nilpotent n×n matrix (one whose repeated powers eventually give the zero matrix) always has rank strictly less than n, since its only eigenvalue is 0 and its determinant is therefore zero.
The rank of ATA equals the rank of A. This follows from the fact that the null spaces of A and ATA are identical: Ax=0 implies ATAx=0, and conversely ATAx=0 implies xTATAx=∥Ax∥2=0, so Ax=0. By the rank-nullity theorem, equal nullities with the same n give equal ranks.
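A quick numerical check of this identity (NumPy assumed; the dependent column is planted for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] - A[:, 1]            # dependent column: rank drops to 3

r = np.linalg.matrix_rank(A)
r_gram = np.linalg.matrix_rank(A.T @ A)   # the Gram matrix has the same rank
```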
Rank and the Four Fundamental Subspaces
Every m×n matrix A gives rise to four subspaces, and the rank governs all of their dimensions.
The column space of A is the span of the columns, a subspace of Rm with dimension equal to rank(A). The row space of A is the span of the rows, a subspace of Rn also with dimension rank(A). The null space of A consists of all solutions to Ax=0, a subspace of Rn with dimension n−rank(A). The left null space consists of all solutions to ATy=0, a subspace of Rm with dimension m−rank(A).
These four subspaces split into two pairs of orthogonal complements. In Rn, the row space and the null space are orthogonal complements: every vector in Rn can be uniquely decomposed into a row-space component and a null-space component, and the two are perpendicular. In Rm, the column space and the left null space form the analogous pair.
The four dimensions add up correctly on both sides: rank(A)+(n−rank(A))=n in Rn, and rank(A)+(m−rank(A))=m in Rm. The rank is the single number that controls the entire structural decomposition.
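The orthogonality in Rn can be seen directly in the SVD, whose right singular vectors split into a row-space basis and a null-space basis. A sketch (NumPy assumed; the 3×4 matrix is constructed for illustration with rank 2):

```python
import numpy as np

A = np.array([[1., 2., 0., -1.],
              [2., 4., 1.,  0.],
              [3., 6., 1., -1.]])   # rank 2: row 3 = row 1 + row 2

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

row_basis = Vt[:r]      # orthonormal basis of the row space (subspace of R^4)
null_basis = Vt[r:]     # orthonormal basis of the null space (4 - r vectors)

ortho = np.allclose(row_basis @ null_basis.T, 0)   # the two spaces are perpendicular
in_null = np.allclose(A @ null_basis.T, 0)         # null-space vectors satisfy Ax = 0
```

The same construction applied to AT yields bases for the column space and the left null space in Rm.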