Subspaces

Vector Spaces Inside Vector Spaces

A subspace is a subset of a vector space that is itself a vector space under the same operations. Lines and planes through the origin in R³, null spaces and column spaces of matrices, and solution sets of homogeneous systems are all subspaces. A simple two-condition test determines whether a given subset qualifies.



Definition

A subspace of a vector space $V$ is a nonempty subset $W \subseteq V$ that is itself a vector space when equipped with the same addition and scalar multiplication as $V$.

Most of the ten axioms — commutativity, associativity, distributivity, the identity $1\mathbf{v} = \mathbf{v}$ — hold automatically in $W$ because they hold for all vectors in $V$, and vectors in $W$ are vectors in $V$. The properties that can fail are the closure conditions: the sum of two vectors in $W$ might land outside $W$, or scaling a vector in $W$ might produce something not in $W$. These are the only things that need checking.

The Subspace Test

A nonempty subset $W \subseteq V$ is a subspace if and only if it satisfies two conditions:

Closure under addition: for all $\mathbf{u}, \mathbf{v} \in W$, the sum $\mathbf{u} + \mathbf{v}$ is in $W$.

Closure under scalar multiplication: for all $c \in \mathbb{R}$ and all $\mathbf{v} \in W$, the product $c\mathbf{v}$ is in $W$.

These two conditions can be compressed into one: $W$ is a subspace if and only if $c\mathbf{u} + d\mathbf{v} \in W$ for all $\mathbf{u}, \mathbf{v} \in W$ and all scalars $c, d$. This single condition captures both closure properties simultaneously.

The requirement that $W$ be nonempty is essential. Once at least one vector $\mathbf{v}$ is known to lie in $W$, closure under scalar multiplication with $c = 0$ guarantees $\mathbf{0} = 0\mathbf{v} \in W$. So the zero vector belongs to every subspace. Conversely, if $\mathbf{0} \notin W$, then $W$ cannot be a subspace — this is often the fastest way to disqualify a candidate.

Trivial Subspaces

Every vector space $V$ has two subspaces that require no verification. The set $\{\mathbf{0}\}$ containing only the zero vector is a subspace: adding $\mathbf{0}$ to itself gives $\mathbf{0}$, and scaling $\mathbf{0}$ by any scalar gives $\mathbf{0}$, so both closure conditions hold. This is the smallest possible subspace, with dimension zero.

The entire space $V$ is also a subspace of itself — trivially, since every vector in $V$ is in $V$ and every operation on $V$ stays in $V$. This is the largest possible subspace.

Every other subspace lies strictly between these two extremes: it contains $\mathbf{0}$ but does not contain everything. Finding and classifying these intermediate subspaces is one of the central tasks of linear algebra.

Subspaces of R² and R³

The subspaces of $\mathbb{R}^2$ are completely classified: $\{\mathbf{0}\}$, lines through the origin, and $\mathbb{R}^2$ itself. There is nothing else. Every line through the origin has the form $\{t\mathbf{v} : t \in \mathbb{R}\}$ for some nonzero vector $\mathbf{v}$, and it is straightforward to verify that this set is closed under addition and scalar multiplication.

The subspaces of $\mathbb{R}^3$ are: $\{\mathbf{0}\}$, lines through the origin (dimension $1$), planes through the origin (dimension $2$), and $\mathbb{R}^3$ itself (dimension $3$).

A line that does not pass through the origin — say the set $\{(1, 0) + t(2, 3) : t \in \mathbb{R}\}$ — is not a subspace. It does not contain $\mathbf{0}$, and adding two vectors on this line produces a vector that is generally not on the line. Similarly, a plane that does not contain the origin fails the subspace test.
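This failure is easy to see numerically. The sketch below (using NumPy, with a hypothetical `on_line` helper built from the line's point-slope condition) checks two points on the line $\{(1, 0) + t(2, 3)\}$ and shows that their sum falls off the line:

```python
import numpy as np

# Two points on the line {(1, 0) + t(2, 3)}: t = 0 and t = 1.
u = np.array([1.0, 0.0])   # t = 0
v = np.array([3.0, 3.0])   # t = 1
s = u + v                  # (4, 3): candidate sum

# A point (x, y) is on the line iff 3(x - 1) == 2y.
def on_line(p):
    return bool(np.isclose(3 * (p[0] - 1), 2 * p[1]))

print(on_line(u), on_line(v), on_line(s))  # True True False
```

The zero vector fails the same membership test, which is the quickest disqualification of all.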

The geometric intuition is that subspaces are the "flat" subsets that pass through the origin. In $\mathbb{R}^n$, every subspace is a span of some set of vectors, and its dimension equals the number of independent vectors needed to span it.

The Null Space

For an $m \times n$ matrix $A$, the null space is

$$\text{Null}(A) = \{\mathbf{x} \in \mathbb{R}^n : A\mathbf{x} = \mathbf{0}\}$$

the set of all vectors that $A$ maps to the zero vector. This is a subspace of $\mathbb{R}^n$.

Verification is direct. The zero vector satisfies $A\mathbf{0} = \mathbf{0}$, so $\mathbf{0} \in \text{Null}(A)$. If $A\mathbf{u} = \mathbf{0}$ and $A\mathbf{v} = \mathbf{0}$, then $A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = \mathbf{0} + \mathbf{0} = \mathbf{0}$, so $\mathbf{u} + \mathbf{v} \in \text{Null}(A)$. If $A\mathbf{v} = \mathbf{0}$, then $A(c\mathbf{v}) = cA\mathbf{v} = c\mathbf{0} = \mathbf{0}$, so $c\mathbf{v} \in \text{Null}(A)$. Both closure conditions hold.

The dimension of the null space is the nullity. By the rank-nullity theorem, $\text{rank}(A) + \text{nullity}(A) = n$. When $A$ has full column rank ($\text{rank} = n$), the null space is $\{\mathbf{0}\}$ and the map $\mathbf{x} \mapsto A\mathbf{x}$ is injective. When the rank is less than $n$, the null space is nontrivial and the map collapses some directions to zero.
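A null-space basis can be computed numerically from the singular value decomposition: the rows of $V^T$ paired with (near-)zero singular values span $\text{Null}(A)$. The sketch below uses NumPy on an example matrix and confirms rank-nullity:

```python
import numpy as np

# A 2x3 matrix with independent rows: rank 2, so nullity should be 3 - 2 = 1.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)

# Null-space basis from the SVD: rows of Vt beyond the nonzero singular values.
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
null_basis = Vt[int(np.sum(s > tol)):]   # remaining rows span Null(A)

nullity = null_basis.shape[0]
assert rank + nullity == A.shape[1]          # rank-nullity: 2 + 1 == 3
assert np.allclose(A @ null_basis.T, 0)      # A maps every basis vector to 0
```

(`scipy.linalg.null_space` packages the same SVD computation, if SciPy is available.)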

The Column Space

For an $m \times n$ matrix $A$ with columns $\mathbf{a}_1, \dots, \mathbf{a}_n$, the column space is

$$\text{Col}(A) = \{A\mathbf{x} : \mathbf{x} \in \mathbb{R}^n\} = \text{Span}\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n\}$$

It is the set of all possible outputs of the linear transformation $\mathbf{x} \mapsto A\mathbf{x}$, and it lives in $\mathbb{R}^m$.

The column space is a subspace because the span of any set of vectors is always a subspace. Its dimension equals the rank of $A$.

The column space answers the solvability question: the system $A\mathbf{x} = \mathbf{b}$ has a solution if and only if $\mathbf{b}$ lies in $\text{Col}(A)$. If $\mathbf{b}$ is a linear combination of the columns of $A$, the coefficients in that combination form a solution vector $\mathbf{x}$. If $\mathbf{b}$ is not in the column space, no solution exists.
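The membership test "$\mathbf{b} \in \text{Col}(A)$" can be phrased in terms of rank: appending $\mathbf{b}$ as an extra column leaves the rank unchanged exactly when $\mathbf{b}$ is already a combination of the columns. A NumPy sketch (with a hypothetical `solvable` helper):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])

b_in  = A @ np.array([1.0, 1.0])     # in Col(A) by construction
b_out = np.array([1.0, 0.0, 0.0])    # not a combination of the columns

def solvable(A, b):
    # A x = b is consistent iff appending b does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, b_in), solvable(A, b_out))  # True False
```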

To find a basis for the column space, row reduce $A$ and identify the pivot columns. The corresponding columns of the original matrix $A$ — not the echelon form — form a basis for $\text{Col}(A)$.
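A numerical stand-in for pivot hunting, assuming exact arithmetic is not needed: column $j$ is a pivot column exactly when it is independent of the columns before it, i.e. when including it raises the rank. (A computer algebra system such as SymPy, via `Matrix.rref()`, gives the same pivot list symbolically.)

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [2.0, 4.0, 1.0, 1.0],
              [3.0, 6.0, 1.0, 2.0]])

# Column j is a pivot column iff it raises the rank of the columns before it.
pivots = []
for j in range(A.shape[1]):
    if np.linalg.matrix_rank(A[:, :j + 1]) > len(pivots):
        pivots.append(j)

basis = A[:, pivots]   # columns of the ORIGINAL matrix, not the echelon form
assert basis.shape[1] == np.linalg.matrix_rank(A)
print(pivots)          # [0, 2]: column 1 repeats column 0, column 3 is dependent
```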

The Row Space

The row space of an $m \times n$ matrix $A$ is the span of the rows of $A$, viewed as vectors in $\mathbb{R}^n$. Equivalently, it is the column space of $A^T$:

$$\text{Row}(A) = \text{Col}(A^T)$$

The row space lives in $\mathbb{R}^n$ and has dimension equal to the rank of $A$ — the same dimension as the column space, despite the two spaces living in different ambient spaces.

A key property is that elementary row operations do not change the row space. Each row operation replaces rows with linear combinations of existing rows, so every row of the echelon form lies in the span of the original rows, and vice versa. The nonzero rows of the echelon form therefore provide a basis for the row space.

The row space and the null space together account for all of $\mathbb{R}^n$. They are orthogonal complements: every vector in the null space is perpendicular to every row of $A$ (since $A\mathbf{x} = \mathbf{0}$ means the dot product of $\mathbf{x}$ with each row is zero), and their dimensions add up to $n$.
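The orthogonality is immediate to verify on a small example (a NumPy sketch with a null-space vector chosen by hand):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

x = np.array([1.0, -1.0, 1.0])       # chosen so that A x = 0
assert np.allclose(A @ x, 0)         # x is in Null(A)

# Each row of A is perpendicular to x: row . x == 0.
for row in A:
    assert np.isclose(row @ x, 0)

# Dimensions are complementary: rank(A) + nullity(A) == n.
assert np.linalg.matrix_rank(A) + 1 == A.shape[1]
```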

Subspaces from Operations

New subspaces can be built from existing ones through set-theoretic operations, though not all operations preserve the subspace property.

The intersection of two subspaces $W_1$ and $W_2$ is always a subspace. If $\mathbf{u}$ and $\mathbf{v}$ both lie in $W_1 \cap W_2$, then $\mathbf{u} + \mathbf{v}$ lies in $W_1$ (since $W_1$ is a subspace) and in $W_2$ (since $W_2$ is a subspace), so it lies in $W_1 \cap W_2$. The same argument works for scalar multiples. The intersection can be anything from $\{\mathbf{0}\}$ (if the two subspaces share only the zero vector) to one of the original subspaces (if one contains the other).

The union of two subspaces is almost never a subspace. If $\mathbf{u} \in W_1 \setminus W_2$ and $\mathbf{v} \in W_2 \setminus W_1$, the sum $\mathbf{u} + \mathbf{v}$ lies in neither $W_1$ nor $W_2$: if it were in $W_1$, then $\mathbf{v} = (\mathbf{u} + \mathbf{v}) - \mathbf{u}$ would be in $W_1$ too, a contradiction, and symmetrically for $W_2$. So closure fails, and the only exception is when one subspace contains the other.

The sum $W_1 + W_2 = \{\mathbf{w}_1 + \mathbf{w}_2 : \mathbf{w}_1 \in W_1, \mathbf{w}_2 \in W_2\}$ is always a subspace — it is the smallest subspace containing both $W_1$ and $W_2$. Its dimension satisfies

$$\dim(W_1 + W_2) = \dim(W_1) + \dim(W_2) - \dim(W_1 \cap W_2)$$

When $W_1 \cap W_2 = \{\mathbf{0}\}$, the sum is called a direct sum, written $W_1 \oplus W_2$, and every vector in the sum has a unique decomposition as $\mathbf{w}_1 + \mathbf{w}_2$.
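The dimension formula can be checked numerically: stacking bases of $W_1$ and $W_2$ side by side gives a spanning set for $W_1 + W_2$, so its rank is $\dim(W_1 + W_2)$, and the formula then yields the intersection's dimension. A NumPy sketch with two coordinate planes in $\mathbb{R}^3$:

```python
import numpy as np

# W1 = span{e1, e2}, W2 = span{e2, e3} in R^3; they intersect in span{e2}.
B1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]]).T   # basis of W1, as columns
B2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]).T   # basis of W2, as columns

dim_W1  = np.linalg.matrix_rank(B1)
dim_W2  = np.linalg.matrix_rank(B2)
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))  # dim(W1 + W2)

# Rearranged dimension formula gives dim(W1 ∩ W2).
dim_int = dim_W1 + dim_W2 - dim_sum
print(dim_W1, dim_W2, dim_sum, dim_int)  # 2 2 3 1
```

Since the intersection is nontrivial here, the sum is not direct; replacing $W_2$ with $\text{span}\{e_3\}$ would give $\dim_{\cap} = 0$ and a direct sum.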

Solution Sets and Subspaces

The solution set of a linear system $A\mathbf{x} = \mathbf{b}$ is a subspace only when $\mathbf{b} = \mathbf{0}$. In that case, the solution set is the null space of $A$, which passes the subspace test as shown above.

When $\mathbf{b} \neq \mathbf{0}$, the solution set is not a subspace. It does not contain $\mathbf{0}$ (since $A\mathbf{0} = \mathbf{0} \neq \mathbf{b}$), and it is not closed under addition or scalar multiplication in general. However, the solution set has a clean geometric description in terms of subspaces.

If $\mathbf{x}_p$ is any one particular solution to $A\mathbf{x} = \mathbf{b}$, then every solution has the form

$$\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$$

where $\mathbf{x}_h \in \text{Null}(A)$ is a solution to the homogeneous system $A\mathbf{x} = \mathbf{0}$. The full solution set is a translated copy of the null space — shifted away from the origin by the vector $\mathbf{x}_p$. In geometry, this is an affine subspace (also called a coset or a flat): a subspace that has been displaced from the origin.

This decomposition separates the particular and homogeneous contributions. The particular solution $\mathbf{x}_p$ accounts for the right-hand side $\mathbf{b}$, while the null-space component $\mathbf{x}_h$ parametrizes the freedom in the solution. When the null space is trivial ($\text{Null}(A) = \{\mathbf{0}\}$), the solution is unique: $\mathbf{x} = \mathbf{x}_p$ with no freedom.
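The decomposition is easy to demonstrate numerically. In the sketch below (NumPy, with the null-space vector chosen by hand for this particular matrix), `lstsq` supplies one particular solution, and adding any multiple of a null-space vector produces another valid solution:

```python
import numpy as np

# An underdetermined consistent system: 2 equations, 3 unknowns.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 2.0])

# One particular solution x_p (the minimum-norm one, via least squares).
xp, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ xp, b)

# A homogeneous solution x_h: chosen by hand so that A x_h = 0.
xh = np.array([-1.0, -1.0, 1.0])
assert np.allclose(A @ xh, 0)

# Every x_p + t * x_h solves A x = b: the solution set is Null(A) shifted by x_p.
for t in (-2.0, 0.5, 3.0):
    assert np.allclose(A @ (xp + t * xh), b)
```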