Two separate questions govern every linear system: does at least one solution exist, and if so, is that solution the only one? The rank of the coefficient matrix answers both. A single integer determines whether the system is inconsistent, uniquely solvable, or infinitely underdetermined — and characterizes the geometry of the solution set in each case.
The Two Questions
Given the system Ax=b with A of size m×n, two logically independent questions arise.
Existence: is there at least one vector x satisfying Ax=b? This asks whether b lies in the column space of A.
Uniqueness: if a solution exists, is it the only one? This asks whether the null space of A is trivial.
The two questions are independent — existence can hold without uniqueness, and non-existence makes uniqueness moot. Both are answered by the rank of A.
The Existence Condition
The system Ax=b has at least one solution if and only if
rank(A)=rank([A∣b])
When this condition holds, b is already a linear combination of the columns of A, so appending b as an extra column does not introduce a new independent direction — the rank stays the same.
When the condition fails — rank([A∣b])>rank(A) — the vector b is not in the column space. In the echelon form of the augmented matrix, this appears as a row [0 0 ⋯ 0 ∣ d] with d≠0: the equation 0=d has no solution, and the system is inconsistent.
Since appending one column can increase the rank by at most 1, the only possibilities are rank([A∣b])=rank(A) (consistent) or rank([A∣b])=rank(A)+1 (inconsistent).
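This rank comparison translates directly into code. A short NumPy sketch (the matrices here are made up for illustration):

```python
import numpy as np

# Consistency test: Ax = b has a solution iff appending b as a
# column does not raise the rank.
def is_consistent(A, b):
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
b_in = np.array([1.0, 2.0, 3.0])    # a combination of the columns of A
b_out = np.array([1.0, 3.0, 0.0])   # outside the column space
```

Here `is_consistent(A, b_in)` returns True while `is_consistent(A, b_out)` returns False, matching the rank criterion.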
The Uniqueness Condition
When a solution exists, it is unique if and only if
rank(A)=n
where n is the number of unknowns. Full column rank means every column of A contains a pivot, leaving no free variables. The null space is {0}, so the particular solution xp stands alone with nothing to add.
When rank(A)<n, there are n−rank(A) free variables. Each free variable parametrizes a direction along which the solution can move without violating the equations. The solution set is infinite — a translated copy of the null space, which has dimension n−rank(A).
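The free-variable count n−rank(A) is easy to compute numerically. A minimal sketch with an illustrative rank-deficient matrix:

```python
import numpy as np

# Free variables = n - rank(A), the dimension of the null space.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # row 2 = 2 * row 1, so rank is 1
n = A.shape[1]                           # 3 unknowns
free_vars = n - np.linalg.matrix_rank(A) # 2 free variables
```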
The Three Cases Combined
Putting existence and uniqueness together, every linear system falls into exactly one of three cases.
rank(A)<rank([A∣b]): no solution. The system is inconsistent.
rank(A)=rank([A∣b])=n: exactly one solution. The system is consistent and fully determined.
rank(A)=rank([A∣b])<n: infinitely many solutions. The system is consistent but underdetermined, with n−rank(A) free parameters.
There is no case with a finite number of solutions greater than one. If two distinct solutions exist, their difference lies in the null space, which is a subspace — and a nontrivial subspace contains infinitely many vectors, generating infinitely many solutions.
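The three-way case split can be packaged as a small classifier. A NumPy sketch (example matrices are hypothetical):

```python
import numpy as np

# Classify a linear system by comparing rank(A), rank([A|b]), and n.
def classify(A, b):
    n = A.shape[1]
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_A < r_aug:
        return "no solution"
    return "unique solution" if r_A == n else "infinitely many solutions"

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])  # rank 1
```

With these inputs, `classify(np.eye(2), b)` reports a unique solution for any b, while the singular matrix yields "no solution" or "infinitely many solutions" depending on whether b lies in its column space.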
The Rouché–Capelli Theorem
The existence condition rank(A)=rank([A∣b]) is known as the Rouché–Capelli theorem (or the Kronecker–Capelli theorem in some traditions). It states:
The system Ax=b is consistent if and only if the coefficient matrix and the augmented matrix have the same rank. When consistent, the solution set has dimension n−rank(A).
The theorem unifies the existence and dimension questions into a single rank comparison. It applies to every linear system regardless of the shape of A — square, tall, or wide. The proof follows directly from the column-space interpretation: b∈Col(A) if and only if adding b as a column does not increase the rank.
Square Systems
When A is n×n, the determinant provides the sharpest diagnostic.
If det(A)≠0: the matrix is invertible, the rank is n, and the system Ax=b has exactly one solution for every b. The solution is x=A⁻¹b, or equivalently, the solution obtained by Gaussian elimination. Existence and uniqueness both hold universally — the right-hand side does not matter.
If det(A)=0: the matrix is singular, the rank is less than n, and the outcome depends on b. For some b (those in the column space), infinitely many solutions exist. For other b (those outside the column space), no solution exists. The determinant test determines whether the coefficient matrix is adequate; the rank comparison with the augmented matrix determines whether the specific b is reachable.
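The determinant diagnostic for a square system, sketched in NumPy with illustrative numbers:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

det = np.linalg.det(A)
if not np.isclose(det, 0.0):   # nonsingular: unique solution for any b
    x = np.linalg.solve(A, b)  # Gaussian elimination under the hood
```

In practice `np.linalg.solve` is preferred over forming A⁻¹ explicitly; the determinant check only decides which branch applies.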
Overdetermined Systems
When m>n — more equations than unknowns — the system is overdetermined. The coefficient matrix is tall, and generically, no solution exists.
The column space of A is at most n-dimensional inside Rm. When m>n, this column space is a proper subspace — most vectors in Rm lie outside it. A randomly chosen b will almost certainly not be in the column space, making the system inconsistent.
A solution exists only when b happens to lie in the column space — when the extra equations are consistent with the first n. When a solution does exist and rank(A)=n, it is unique.
When no exact solution exists, the least-squares approach finds the x̂ that minimizes ∥Ax−b∥² — the closest approximation. The least-squares solution satisfies the normal equations AᵀAx̂ = Aᵀb, and Ax̂ is the projection of b onto the column space.
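A least-squares sketch for an overdetermined system, with made-up data that has no exact solution:

```python
import numpy as np

# Three equations, two unknowns; b is not in Col(A).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 3.0])

# Minimize ||Ax - b||^2.
x_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# x_hat satisfies the normal equations A^T A x = A^T b.
normal_eq_ok = np.allclose(A.T @ A @ x_hat, A.T @ b)
```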
Underdetermined Systems
When m<n — fewer equations than unknowns — the system is underdetermined. If a solution exists, it is never unique: the rank cannot exceed m, which is less than n, so at least n−m free variables remain.
The solution set, when nonempty, is an affine subspace of dimension at least n−m. Multiple vectors x satisfy the equations, and additional criteria beyond the linear system itself are needed to select a preferred one.
Common selection criteria include minimum norm (the x closest to 0, given by the pseudoinverse; when A has full row rank this is x = Aᵀ(AAᵀ)⁻¹b), sparsity (the x with the fewest nonzero entries, central to compressed sensing), and physical constraints (bounds or nonnegativity in engineering applications).
Underdetermined systems are not defective — they arise naturally whenever a problem has more degrees of freedom than constraints. The linear system identifies the feasible set; the selection criterion picks a point within it.
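The minimum-norm selection can be sketched with the pseudoinverse (the system below is a hypothetical example):

```python
import numpy as np

# Two equations, three unknowns: infinitely many exact solutions.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

# pinv(A) @ b picks the solution closest to the origin; for full
# row rank it equals A^T (A A^T)^{-1} b.
x_min = np.linalg.pinv(A) @ b

# Another exact solution, chosen by hand, with a larger norm.
x_other = np.array([2.0, 0.0, 3.0])
```

Both vectors satisfy Ax=b exactly; the pseudoinverse simply selects the one of smallest norm from the affine solution set.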
Geometric Interpretation
Each equation ai1x1+ai2x2+⋯+ainxn=bi defines a hyperplane in Rn — a flat set of dimension n−1. The solution set of the full system is the intersection of all m hyperplanes.
When the system is inconsistent, the hyperplanes have no common point. In R2 this means distinct parallel lines, or three or more lines with no common intersection. In R3 it can mean parallel planes, or planes arranged in a triangular prism where each pair intersects but no point lies on all three.
When the system has a unique solution, the hyperplanes meet at a single point. This requires at least n independent equations — enough to cut the solution space down from n dimensions to 0.
When the system has infinitely many solutions, the hyperplanes meet along a flat of dimension n−r, where r=rank(A). Each independent equation reduces the dimension of the intersection by one, and the rank counts how many equations cut independently. The remaining n−r dimensions are the free directions in the solution set.
Structure of the Solution Set
When Ax=b is consistent, the complete solution set has the form
{xp+xh:xh∈Null(A)}
where xp is any one particular solution. This set is a coset of the null space — the null space translated by xp. In geometric terms, it is an affine subspace: a subspace shifted away from the origin.
The dimension of the solution set equals the dimension of the null space: n−rank(A). When this is 0, the solution set is a single point {xp}. When it is 1, the solution set is a line. When it is 2, a plane. The shape is always a flat, and the null space determines its orientation.
The particular solution xp captures the effect of the right-hand side b. The homogeneous component xh captures the inherent freedom in the system — the directions along which the solution can shift without violating any equation. This decomposition into "forced" and "free" parts is one of the most fundamental structural results in linear algebra.
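The xp + null-space decomposition can be verified numerically. A sketch using the SVD to obtain a null-space basis (the system is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])   # full row rank, 3 unknowns
b = np.array([6.0, 2.0])

x_p = np.linalg.pinv(A) @ b       # one particular solution

# Null-space basis: rows of Vt beyond rank(A) span Null(A).
_, _, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)
N = Vt[r:].T                      # columns span the null space

# Shifting x_p by any null-space vector gives another solution.
c = np.array([5.0])
x_shifted = x_p + N @ c
```

The residual A(xp + xh) − b stays zero for every homogeneous component xh, which is exactly the "forced plus free" structure described above.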