Every matrix defines four subspaces — two in the domain and two in the codomain — whose dimensions and orthogonality relationships form a complete picture of the linear map. The rank governs all four dimensions, and the four subspaces together account for every vector in both spaces.
The column space Col(A) and the left null space Null(Aᵀ) live in Rm.
The row space Row(A) and the null space Null(A) live in Rn.
A single integer — the rank r — controls the dimension of all four. The column space and row space each have dimension r. The null space has dimension n−r. The left null space has dimension m−r. These four numbers add up correctly: r+(n−r)=n in the domain, and r+(m−r)=m in the codomain.
The four subspaces are not independent of each other. They pair off into orthogonal complements — the row space and null space are perpendicular in Rn, while the column space and left null space are perpendicular in Rm. This structure is the definitive description of what the map x↦Ax does.
The Column Space
The column space of A is the span of the columns of A:
Col(A)={Ax:x∈Rn}=Span{a1,a2,…,an}
It is the set of all possible outputs of the linear transformation x↦Ax, and it lives in Rm. Its dimension is r=rank(A).
The column space answers the solvability question: Ax=b has a solution if and only if b∈Col(A). Vectors outside the column space are unreachable — no input x can produce them.
To find a basis for the column space, row reduce A and identify the pivot columns. The corresponding columns of the original matrix A form the basis. The echelon form identifies which columns are independent, but the original columns are the actual vectors in Rm that span the column space.
Worked Example
A = ⎡  1   3   2  −1 ⎤
    ⎢  2   6   5  −1 ⎥
    ⎣ −1  −3   0   3 ⎦
Row reduction gives pivots in columns 1 and 3. The column space basis consists of the first and third columns of the original A:
{ (1, 2, −1)ᵀ, (2, 5, 0)ᵀ }
The column space is a two-dimensional subspace (a plane through the origin) in R3. The rank is 2.
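The pivot-column recipe is easy to check computationally. The sketch below reimplements row reduction in pure Python with exact Fraction arithmetic (the rref helper is ours, not a library routine, and the matrix entries follow our reading of the running example):

```python
from fractions import Fraction

def rref(M):
    """Reduce M to reduced row echelon form; return (R, pivot_columns).

    Exact rational arithmetic keeps rank decisions free of rounding error."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue                    # no pivot here: a free column
        A[r], A[p] = A[p], A[r]         # move a nonzero entry into pivot position
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]  # scale the pivot to 1
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]  # clear column c
        pivots.append(c)
        r += 1
    return A, pivots

# The 3x4 running example (rank 2)
A = [[1, 3, 2, -1],
     [2, 6, 5, -1],
     [-1, -3, 0, 3]]

R, pivots = rref(A)
print(pivots)            # [0, 2]: pivots in columns 1 and 3 (1-based)

# Basis vectors come from the ORIGINAL matrix, not the echelon form.
col_basis = [[row[c] for row in A] for c in pivots]
print(col_basis)         # [[1, 2, -1], [2, 5, 0]]
```

The helper returns the pivot column indices precisely so that the basis can be pulled from the original columns, mirroring the recipe above.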
The Row Space
The row space of A is the span of the rows, viewed as vectors in Rn:
Row(A)=Col(Aᵀ)
It lives in Rn and has dimension r — the same as the column space, despite the two subspaces living in different ambient spaces.
To find a basis for the row space, row reduce A and take the nonzero rows of the echelon form. Unlike the column space, the echelon form's rows are used directly — not the original rows. This is valid because elementary row operations replace rows with linear combinations of existing rows, preserving the row space. The nonzero rows of the echelon form are independent (the staircase pattern of pivots guarantees this) and span the same space as the original rows.
Continuing the Example
The echelon form of the matrix above has two nonzero rows. These rows (as vectors in R4) form a basis for the row space. The row space is a two-dimensional subspace of R4.
A key fact that distinguishes the row space from the column space: row reduction preserves the row space but changes the column space. The pivot columns of the echelon form are not a basis for the column space of the original matrix — only the corresponding columns of A are.
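The contrast is easy to see in code: the row-space basis is read directly off the echelon form, not off the original rows. A sketch (our own rref implementation with exact arithmetic, and our reading of the running example's entries):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M plus its pivot columns (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

A = [[1, 3, 2, -1],
     [2, 6, 5, -1],
     [-1, -3, 0, 3]]

R, _ = rref(A)
# Row-space basis: the nonzero rows of the echelon form, taken as-is.
row_basis = [row for row in R if any(x != 0 for x in row)]
assert len(row_basis) == 2      # dimension r = 2, matching the column space
```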
The Null Space
The null space of A is the set of all vectors that A maps to zero:
Null(A)={x∈Rn:Ax=0}
It lives in Rn and has dimension n−r, where r is the rank. This dimension is the nullity, and the identity r+(n−r)=n is the rank-nullity theorem.
The null space measures the failure of injectivity. If Null(A)={0}, the map is injective — different inputs produce different outputs. If the null space is nontrivial, the map collapses some directions to zero, and distinct inputs can produce the same output: if Ax₁=Ax₂, then x₁−x₂∈Null(A).
To find a basis, reduce A to RREF and identify the free variables. Each free variable is set to 1 (with the others at 0), and the corresponding solution is one basis vector for the null space.
Continuing the Example
The 3×4 matrix has rank 2, so the null space has dimension 4−2=2. Two free variables produce two basis vectors. The null space is a two-dimensional subspace of R4 — a plane through the origin in four-dimensional space.
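The free-variable recipe can be sketched as follows (again with a hand-rolled rref helper and our reading of the example matrix); each free column yields one "special solution":

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M plus its pivot columns (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

A = [[1, 3, 2, -1],
     [2, 6, 5, -1],
     [-1, -3, 0, 3]]

R, pivots = rref(A)
n = len(A[0])
free = [c for c in range(n) if c not in pivots]   # the free columns

null_basis = []
for f in free:
    x = [Fraction(0)] * n
    x[f] = Fraction(1)               # set this free variable to 1, the others to 0
    for i, p in enumerate(pivots):
        x[p] = -R[i][f]              # read the pivot variables off the RREF
    null_basis.append(x)

# Sanity check: every basis vector satisfies Ax = 0.
for x in null_basis:
    assert all(sum(a * b for a, b in zip(row, x)) == 0 for row in A)
assert len(null_basis) == n - len(pivots)         # nullity = n - r = 2
```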
The Left Null Space
The left null space is the null space of the transpose:
Null(Aᵀ)={y∈Rm:Aᵀy=0}
Equivalently, it consists of all vectors y satisfying yᵀA=0ᵀ — hence the name "left" null space, since y multiplies A from the left.
It lives in Rm and has dimension m−r.
The left null space measures the failure of surjectivity. If Null(Aᵀ)={0}, the column space is all of Rm and Ax=b has a solution for every b. If the left null space is nontrivial, there are directions in Rm that the column space misses.
To find a basis, solve Aᵀy=0 by row reducing Aᵀ. Alternatively, row reduce the augmented matrix [A∣Iₘ] — the identity block records every row operation, and the rows of the identity block that sit beside the zero rows of the echelon form give a basis for the left null space.
Continuing the Example
The matrix is 3×4 with rank 2, so the left null space has dimension 3−2=1. It is a line through the origin in R3 — a single vector (up to scaling) that is orthogonal to every column of A.
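The same machinery applied to Aᵀ produces the left null space. In the sketch below (hand-rolled rref helper, our reading of the example matrix), the single basis vector is checked against every column of A:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M plus its pivot columns (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

A = [[1, 3, 2, -1],
     [2, 6, 5, -1],
     [-1, -3, 0, 3]]
m, n = len(A), len(A[0])

AT = [list(col) for col in zip(*A)]   # the 4x3 transpose
R, pivots = rref(AT)

left_null = []
for f in [c for c in range(m) if c not in pivots]:
    y = [Fraction(0)] * m
    y[f] = Fraction(1)
    for i, p in enumerate(pivots):
        y[p] = -R[i][f]
    left_null.append(y)

assert len(left_null) == m - len(pivots)          # dimension m - r = 1
# yT A = 0T: y is orthogonal to every column of A.
for y in left_null:
    assert all(sum(y[i] * A[i][j] for i in range(m)) == 0 for j in range(n))
```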
Dimension Accounting
The four dimensions are not independent — they are locked together by the rank.
In the domain Rn:
dim(Row(A))+dim(Null(A))=r+(n−r)=n
In the codomain Rm:
dim(Col(A))+dim(Null(Aᵀ))=r+(m−r)=m
The first equation is the rank-nullity theorem. The second is its transpose analogue. Together they say that the four subspaces account for every dimension of both the domain and the codomain — nothing is missing and nothing is double-counted.
For the running example (3×4 matrix, rank 2): the row space and null space have dimensions 2 and 2, summing to 4=n. The column space and left null space have dimensions 2 and 1, summing to 3=m.
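A small check of this bookkeeping (hand-rolled rref helper, our reading of the example matrix); the substantive assertion is that A and Aᵀ have the same rank:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M plus its pivot columns (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

A = [[1, 3, 2, -1],
     [2, 6, 5, -1],
     [-1, -3, 0, 3]]
m, n = len(A), len(A[0])

r = len(rref(A)[1])                                  # rank via pivot count
r_T = len(rref([list(c) for c in zip(*A)])[1])       # rank of the transpose
assert r == r_T == 2          # row rank equals column rank

# The four dimensions: r, r, n - r, m - r.
assert r + (n - r) == n       # Row(A) and Null(A) fill the domain R^4
assert r + (m - r) == m       # Col(A) and Null(A^T) fill the codomain R^3
```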
Orthogonal Complements
The four subspaces pair off into orthogonal complements.
In Rn, the row space and the null space are orthogonal complements. Every vector in the null space is perpendicular to every row of A, because Ax=0 means the dot product of x with each row is zero. Every vector in Rn decomposes uniquely as the sum of a row-space component and a null-space component, and these two components are perpendicular.
In Rm, the column space and the left null space are orthogonal complements. Every vector in Null(Aᵀ) is perpendicular to every column of A (since Aᵀy=0 means y dots to zero with each column). Every vector in Rm decomposes uniquely as a column-space component plus a left-null-space component.
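Both perpendicularity claims can be verified by direct dot products. In the sketch below, the basis vectors for Null(A) and Null(Aᵀ) are the ones we computed for the running example (our own values, not given in the text):

```python
# Rows of the running-example matrix (our reading of it).
rows = [[1, 3, 2, -1], [2, 6, 5, -1], [-1, -3, 0, 3]]
cols = [list(c) for c in zip(*rows)]

# Basis vectors we computed for this example (assumed, not taken from the text).
null_basis = [[-3, 1, 0, 0], [3, 0, -1, 1]]   # spans Null(A)
left_null = [5, -2, 1]                        # spans Null(A^T)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Row(A) is perpendicular to Null(A) in R^4 ...
assert all(dot(x, row) == 0 for x in null_basis for row in rows)
# ... and Col(A) is perpendicular to Null(A^T) in R^3.
assert all(dot(left_null, col) == 0 for col in cols)
```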
These orthogonality relationships are not incidental — they are the structural backbone of projection, least squares, and the singular value decomposition. Projecting b onto the column space means splitting b into its column-space component (the best approximation Ax̂) and its left-null-space component (the residual b−Ax̂).
The Big Picture
The four fundamental subspaces can be arranged in a single diagram with the domain Rn on one side and the codomain Rm on the other.
The matrix A maps the row space onto the column space. This restriction is a bijection — every vector in the row space has a unique image in the column space, and every vector in the column space comes from exactly one row-space vector. The rank r is the dimension of both spaces, and this bijection is the "useful part" of the map.
The matrix A sends the entire null space to 0. These are the directions that the map annihilates — the information that is lost.
Combining these two facts: every vector x∈Rn decomposes as x=xᵣ+xₙ where xᵣ is in the row space and xₙ is in the null space. Then Ax=Axᵣ+Axₙ=Axᵣ. The null-space component is destroyed, and the row-space component maps bijectively to the column space.
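A small numeric illustration of this decomposition (pure Python; the null-space vector is a special solution we computed for the running example): adding a null-space component to an input leaves the output unchanged.

```python
A = [[1, 3, 2, -1], [2, 6, 5, -1], [-1, -3, 0, 3]]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

# x_r lies in the row space (it is a sum of two rows of A);
# x_n is a null-space vector we computed for this example (assumed).
x_r = [a + b for a, b in zip(A[0], A[1])]
x_n = [-3, 1, 0, 0]
assert matvec(A, x_n) == [0, 0, 0]        # the null-space part is annihilated

x = [a + b for a, b in zip(x_r, x_n)]     # x = x_r + x_n
assert matvec(A, x) == matvec(A, x_r)     # Ax = A x_r: only x_r survives
```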
On the codomain side, the column space is what the map can reach, and the left null space is what remains unreachable. Every vector b∈Rm decomposes as b=b_c+b_ℓ where b_c∈Col(A) and b_ℓ∈Null(Aᵀ). The system Ax=b is solvable if and only if b_ℓ=0.
This four-subspace decomposition summarizes the entire geometry of the linear map in one picture: what gets mapped where, what gets collapsed, and what is left unreachable.
Examples Across Matrix Types
The four-subspace structure varies dramatically with the properties of the matrix.
For a full-rank square matrix (r=n=m): the column space and row space are both all of Rn. The null space and left null space are both {0}. The map is a bijection — nothing is lost and nothing is missed. This is the case where A is invertible.
For a rank-1 matrix (r=1): the column space is a line in Rm, and the row space is a line in Rn. Every input maps to a scalar multiple of a single vector. The null space has dimension n−1 — an entire hyperplane is collapsed to zero. The left null space has dimension m−1. Almost everything on both sides belongs to the null spaces; only one direction survives the map.
For a projection matrix (A²=A, A=Aᵀ): the column space and the row space coincide. The null space is the orthogonal complement of the column space. The map fixes every vector in the column space and kills every vector in the null space — it projects Rn onto a subspace.
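A minimal sketch with a hypothetical 2×2 example (orthogonal projection of R² onto the line spanned by (1, 2)) checks all four of these properties at once:

```python
from fractions import Fraction as F

# Orthogonal projection onto the line spanned by (1, 2): P = (1/5)[[1, 2], [2, 4]]
P = [[F(1, 5), F(2, 5)],
     [F(2, 5), F(4, 5)]]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

assert matmul(P, P) == P                    # idempotent: P^2 = P
assert P == [list(c) for c in zip(*P)]      # symmetric: P = P^T
assert matvec(P, [1, 2]) == [1, 2]          # fixes its column space
assert matvec(P, [2, -1]) == [0, 0]         # kills the orthogonal complement
```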
For the zero matrix (r=0): the column space and row space are both {0}. The null space is all of Rn and the left null space is all of Rm. Every vector is sent to zero.