Vector Spaces






The Abstract Structure Behind Linear Algebra

Vectors in the plane can be added and scaled. So can polynomials, matrices, and functions. A vector space is any collection of objects where these two operations make sense and obey a consistent set of rules. By isolating this common structure, results proved once in the abstract apply to every concrete setting — from coordinate geometry to differential equations to signal processing.



What a Vector Space Is

Vectors in $\mathbb{R}^n$ can be added and scaled. So can polynomials, matrices, and continuous functions. A vector space is the formal name for any collection of objects where these two operations — addition and scalar multiplication — satisfy ten algebraic axioms: closure, commutativity, associativity, the existence of a zero element, additive inverses, and the expected distributive and identity laws for scalars.

The objects in a vector space are called vectors regardless of whether they look like arrows, columns of numbers, polynomials, or functions. The power of the abstraction is that every theorem proved from the axioms alone applies to every vector space simultaneously. Linear independence, span, basis, dimension, and subspaces are all defined from the axioms, and their properties carry over to any setting where the axioms hold.

Basis: Definition

A basis for a vector space $V$ is a set of vectors $\mathcal{B} = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ that satisfies two conditions simultaneously.

The set is linearly independent: no vector in the set can be written as a linear combination of the others. Equivalently, the only way to get $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$ is the trivial combination with all scalars equal to zero.

The set spans $V$: every vector in $V$ can be expressed as some linear combination of $\mathbf{v}_1, \dots, \mathbf{v}_n$.

Independence means no vector in the basis is wasted. Spanning means no vector in $V$ is out of reach. A basis is a minimal spanning set — remove any element and the span shrinks. It is also a maximal independent set — add any vector from $V$ and independence breaks. These two characterizations are equivalent and place the basis at the exact boundary between "too few" and "too many."

Unique Representation

If $\mathcal{B} = \{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is a basis for $V$, then every vector $\mathbf{v} \in V$ can be written as

$$\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$


in exactly one way. Existence is guaranteed by the spanning condition — every vector is reachable. Uniqueness is guaranteed by independence: if two different sets of scalars both produced $\mathbf{v}$, subtracting one from the other would give a nontrivial combination equal to $\mathbf{0}$, contradicting independence.
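
Written out, the subtraction argument is: if two representations existed,

$$c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n = \mathbf{v} = d_1\mathbf{v}_1 + \cdots + d_n\mathbf{v}_n,$$

then subtracting one from the other gives

$$(c_1 - d_1)\mathbf{v}_1 + \cdots + (c_n - d_n)\mathbf{v}_n = \mathbf{0},$$

and independence forces $c_i = d_i$ for every $i$, so the two representations were the same all along.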

This uniqueness is what separates a basis from an arbitrary spanning set. A spanning set that is not independent can represent some vectors in multiple ways — the representation is ambiguous. A basis eliminates all ambiguity: every vector has exactly one set of coefficients.

Coordinates

The scalars $c_1, c_2, \dots, c_n$ in the unique representation $\mathbf{v} = c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n$ are called the coordinates of $\mathbf{v}$ relative to the basis $\mathcal{B}$. They are collected into a coordinate vector:

$$[\mathbf{v}]_\mathcal{B} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}$$


Coordinates depend entirely on the choice of basis. The same vector $\mathbf{v}$ has different coordinates in different bases — the vector does not change, but its numerical description does.

To find the coordinates of $\mathbf{v}$ relative to $\mathcal{B}$, solve the linear system $c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n = \mathbf{v}$. If the basis vectors are the columns of a matrix $B$, this is the system $B\mathbf{c} = \mathbf{v}$, and the coordinate vector is $\mathbf{c} = B^{-1}\mathbf{v}$ when $B$ is invertible. For the standard basis, $B = I$, so the coordinates are simply the components of $\mathbf{v}$ itself.
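
In the $2 \times 2$ case the system can be solved directly with Cramer's rule. Below is a minimal Python sketch using exact `fractions` arithmetic; the basis $\{(2,1),(1,3)\}$, the vector $(4,7)$, and the helper name `coords_2d` are illustrative choices, not from the text:

```python
from fractions import Fraction

def coords_2d(b1, b2, v):
    """Coordinates of v relative to the basis {b1, b2} of R^2,
    found by solving c1*b1 + c2*b2 = v with Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]          # det of B = [b1 b2]
    if det == 0:
        raise ValueError("b1, b2 are dependent -- not a basis")
    c1 = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    c2 = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return c1, c2

c1, c2 = coords_2d((2, 1), (1, 3), (4, 7))
print(c1, c2)   # 1 2, i.e. (4,7) = 1*(2,1) + 2*(1,3)
```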

The Standard Basis for Rⁿ

The most familiar basis for $\mathbb{R}^n$ is the standard basis $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$, where $\mathbf{e}_i$ has a $1$ in position $i$ and zeros elsewhere:

$$\mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \mathbf{e}_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad \mathbf{e}_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$


Independence is immediate — each vector has a $1$ in a position where all others have $0$, so no vector is a combination of the rest. Spanning follows from the fact that any vector satisfies $(v_1, \dots, v_n) = v_1\mathbf{e}_1 + \cdots + v_n\mathbf{e}_n$.

The standard basis has a special property: coordinates relative to it are just the components of the vector. For any other basis, finding coordinates requires solving a system. This is why the standard basis is the default — but it is one choice among infinitely many, and other bases are often better suited to particular problems.

Other Standard Bases

Every vector space encountered in practice comes with a natural default basis.

The polynomial space $\mathcal{P}_n$ of polynomials with degree at most $n$ has the monomial basis $\{1, x, x^2, \dots, x^n\}$, consisting of $n + 1$ elements. Every polynomial $a_0 + a_1 x + \cdots + a_n x^n$ is a linear combination of these monomials, and the coefficients are unique (a polynomial is determined by its coefficients). The coordinates of a polynomial relative to the monomial basis are its coefficients.
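
Since coordinates relative to the monomial basis are just the coefficients, a coefficient list completely determines the polynomial. A small sketch (the polynomial $p(x) = 3 + 2x - x^2$ and the helper name `eval_poly` are illustrative choices) reconstructs the polynomial's values from its coordinate vector by Horner's rule:

```python
# Coordinates of p(x) = 3 + 2x - x^2 relative to the monomial basis {1, x, x^2}:
coords = [3, 2, -1]

def eval_poly(coeffs, x):
    """Evaluate sum(c_k * x**k) from the coefficient list by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

print(eval_poly(coords, 2))   # 3 + 2*2 - 2**2 = 3
```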

The matrix space $\mathbb{R}^{m \times n}$ has the matrix unit basis $\{E_{ij}\}$, where $E_{ij}$ has a $1$ in position $(i,j)$ and zeros elsewhere. There are $mn$ such matrices, and every $m \times n$ matrix is a unique linear combination of them. The coordinates of a matrix are its entries.

These are all "standard" bases in the sense that they are the most natural first choice. But many problems benefit from non-standard bases: eigenvector bases simplify matrix powers and differential equations, orthonormal bases simplify projections and least squares, and Fourier bases decompose periodic signals into frequency components. Choosing the right basis is often the key step in solving a problem.

Finding a Basis for a Subspace

The three subspaces associated with a matrix each have a basis that can be extracted from row reduction.

For the column space of an $m \times n$ matrix $A$: row reduce $A$ and identify the pivot columns. The corresponding columns of the original matrix $A$ — not the echelon form — are a basis for $\text{Col}(A)$. The echelon form reveals which columns are independent, but the original columns are the actual vectors spanning the column space.

For the null space: solve $A\mathbf{x} = \mathbf{0}$ by reducing to RREF and expressing the general solution in terms of free variables. Each free variable contributes one basis vector.

For the row space: the nonzero rows of the echelon form are a basis. Unlike the column space, the echelon form's rows — not the original rows — are used, because row operations change individual rows but preserve their span.

Worked Example


For $A = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 1 & 3 \\ 3 & 6 & 1 & 4 \end{pmatrix}$, row reduction gives $\begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$. Pivots are in columns $1$ and $3$. The column space basis is $\{(1, 2, 3), (0, 1, 1)\}$ — the first and third columns of the original $A$. The row space basis is $\{(1, 2, 0, 1), (0, 0, 1, 1)\}$. The null space has two free variables (columns $2$ and $4$); setting each free variable to $1$ in turn gives the basis $\{(-2, 1, 0, 0), (-1, 0, -1, 1)\}$, so the null space is two-dimensional.
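
The row reduction in this example can be checked mechanically. The sketch below is a minimal Gauss–Jordan elimination in Python with exact `fractions` arithmetic (the function name `rref` is our own); it recovers the pivot columns and then takes the column space basis from the original columns of $A$:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form by Gauss-Jordan elimination (exact arithmetic).
    Returns (R, pivot_columns) with 0-based column indices."""
    R = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(R[0])):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]        # scale the pivot to 1
        for i in range(len(R)):                   # clear column c in all other rows
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

A = [[1, 2, 0, 1],
     [2, 4, 1, 3],
     [3, 6, 1, 4]]
R, pivots = rref(A)
print(pivots)                                        # [0, 2] -> columns 1 and 3 (1-based)
col_basis = [[row[j] for row in A] for j in pivots]  # columns of the ORIGINAL A
print(col_basis)                                     # [[1, 2, 3], [0, 1, 1]]
```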

Extending and Reducing to a Basis

Two fundamental operations guarantee that bases always exist in finite-dimensional spaces.

Extension: any linearly independent set can be grown into a basis by adding vectors one at a time. At each step, pick any vector not in the current span and adjoin it. Independence is preserved because the new vector is not a combination of the existing ones. The process stops when the span reaches all of $V$.

Reduction: any spanning set can be trimmed into a basis by removing redundant vectors. A vector is redundant if it lies in the span of the remaining vectors. Remove redundant vectors one at a time until what remains is independent. The span does not shrink, because each removed vector was already expressible in terms of the others.

Both processes terminate because dimension is finite — the independent set cannot grow past $n$ vectors, and the spanning set cannot shrink below $n$ vectors, where $n = \dim(V)$. This means every finite-dimensional vector space has a basis, and the choice of basis is highly flexible.
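
The reduction process can be sketched as a greedy algorithm: walk through the spanning set and keep a vector only when it enlarges the span so far, which amounts to dropping the redundant vectors. A minimal Python sketch with exact arithmetic (the spanning set `S` and the helper names are illustrative choices, not from the text):

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors (as rows), by Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def reduce_to_basis(vectors):
    """Trim a spanning list to a basis: keep a vector only if it enlarges
    the span of the vectors kept so far (redundant vectors are dropped)."""
    basis = []
    for v in vectors:
        if rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis

# Illustrative spanning set for a plane in R^3 (the last two vectors are redundant):
S = [(1, 0, 1), (0, 1, 1), (1, 1, 2), (2, 1, 3)]
print(reduce_to_basis(S))   # [(1, 0, 1), (0, 1, 1)]
```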

Change of Basis

Different bases assign different coordinates to the same vector. The change-of-basis matrix converts between them.

If $\mathcal{B}$ and $\mathcal{C}$ are two bases for $V$, the change-of-basis matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$ satisfies

$$[\mathbf{v}]_\mathcal{C} = P_{\mathcal{C} \leftarrow \mathcal{B}} \, [\mathbf{v}]_\mathcal{B}$$


for every vector $\mathbf{v} \in V$. The columns of $P_{\mathcal{C} \leftarrow \mathcal{B}}$ are the $\mathcal{C}$-coordinate vectors of each $\mathcal{B}$-basis vector. The reverse conversion uses the inverse: $P_{\mathcal{B} \leftarrow \mathcal{C}} = P_{\mathcal{C} \leftarrow \mathcal{B}}^{-1}$.

Worked Example


In $\mathbb{R}^2$, let $\mathcal{B} = \{(1, 1), (1, -1)\}$ and let $\mathcal{C}$ be the standard basis. The $\mathcal{C}$-coordinates of $(1, 1)$ are just $(1, 1)$, and the $\mathcal{C}$-coordinates of $(1, -1)$ are $(1, -1)$. So $P_{\mathcal{C} \leftarrow \mathcal{B}} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. To find the $\mathcal{B}$-coordinates of $\mathbf{v} = (3, 1)$: solve $P\mathbf{c} = (3, 1)$, giving $\mathbf{c} = P^{-1}(3, 1) = \frac{1}{-2}\begin{pmatrix} -1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 3 \\ 1 \end{pmatrix} = (2, 1)$. So $[\mathbf{v}]_\mathcal{B} = (2, 1)$, meaning $\mathbf{v} = 2(1, 1) + 1(1, -1)$.
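
The arithmetic in this example can be verified directly. A short Python sketch using exact fractions and a hand-coded $2 \times 2$ inverse:

```python
from fractions import Fraction

# Basis B = {(1,1), (1,-1)}; the columns of P are the B-basis vectors
# written in standard coordinates:
P = [[1, 1],
     [1, -1]]

det = P[0][0] * P[1][1] - P[0][1] * P[1][0]          # det = -2
P_inv = [[Fraction(P[1][1], det), Fraction(-P[0][1], det)],
         [Fraction(-P[1][0], det), Fraction(P[0][0], det)]]

v = (3, 1)
c = [P_inv[0][0] * v[0] + P_inv[0][1] * v[1],
     P_inv[1][0] * v[0] + P_inv[1][1] * v[1]]
print(c[0], c[1])    # 2 1, i.e. v = 2*(1,1) + 1*(1,-1)
```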

Change of basis connects to similarity: if a linear transformation has matrix $A$ in basis $\mathcal{B}$, its matrix in basis $\mathcal{C}$ is $P^{-1}AP$, where $P = P_{\mathcal{B} \leftarrow \mathcal{C}}$. Choosing a good basis — one that simplifies $A$ into diagonal or triangular form — is the central idea behind diagonalization.

Coordinates and Isomorphism

Choosing a basis for an $n$-dimensional vector space $V$ creates a one-to-one correspondence between $V$ and $\mathbb{R}^n$. Each vector $\mathbf{v} \in V$ maps to its coordinate vector $[\mathbf{v}]_\mathcal{B} \in \mathbb{R}^n$, and this mapping preserves addition and scalar multiplication:

$$[\mathbf{u} + \mathbf{v}]_\mathcal{B} = [\mathbf{u}]_\mathcal{B} + [\mathbf{v}]_\mathcal{B}, \qquad [c\mathbf{v}]_\mathcal{B} = c[\mathbf{v}]_\mathcal{B}$$


Such a structure-preserving bijection is called an isomorphism. Its existence means that every $n$-dimensional real vector space — $\mathbb{R}^n$, $\mathcal{P}_{n-1}$, $\mathbb{R}^{m \times k}$ with $mk = n$, solution spaces of ODEs — behaves identically to $\mathbb{R}^n$ in all algebraic respects. The objects differ, but the linear algebra is the same.
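
The linearity of the coordinate map can be spot-checked numerically. The sketch below reuses the basis $\{(1,1),(1,-1)\}$ from the change-of-basis example and verifies $[\mathbf{u} + \mathbf{w}]_\mathcal{B} = [\mathbf{u}]_\mathcal{B} + [\mathbf{w}]_\mathcal{B}$ for one illustrative pair of vectors (the helper name `coords` and the test vectors are our own choices):

```python
from fractions import Fraction

def coords(b1, b2, v):
    """Coordinates of v relative to the basis {b1, b2} of R^2 (Cramer's rule)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    return (Fraction(v[0] * b2[1] - b2[0] * v[1], det),
            Fraction(b1[0] * v[1] - v[0] * b1[1], det))

b1, b2 = (1, 1), (1, -1)                 # the basis B from the example above
u, w = (3, 1), (2, 4)                    # illustrative vectors
uw = (u[0] + w[0], u[1] + w[1])          # u + w, computed in the space itself
cu, cw, cs = coords(b1, b2, u), coords(b1, b2, w), coords(b1, b2, uw)
assert (cu[0] + cw[0], cu[1] + cw[1]) == cs   # adding coordinates matches adding vectors
print([tuple(map(int, x)) for x in (cu, cw, cs)])   # [(2, 1), (3, -1), (5, 0)]
```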

Dimension is the single invariant that classifies finite-dimensional vector spaces up to isomorphism. Two spaces over the same field are isomorphic if and only if they have the same dimension. This is why dimension occupies such a central place in the theory.

Independence and Span

The two concepts that a basis unifies — linear independence and span — each have their own rich theory.

Independence is tested by checking whether the homogeneous system $A\mathbf{c} = \mathbf{0}$ has only the trivial solution, where $A$ is the matrix whose columns are the vectors. For $n$ vectors in $\mathbb{R}^n$, this reduces to checking whether the determinant is nonzero. In $\mathbb{R}^n$, at most $n$ vectors can be independent — any set of $n + 1$ or more is automatically dependent.
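
For three vectors in $\mathbb{R}^3$, the determinant test is a one-line computation. A sketch by cofactor expansion (the vectors and the helper name `det3` are illustrative choices, not from the text):

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with columns a, b, c (cofactor expansion)."""
    m = [[a[i], b[i], c[i]] for i in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3((1, 0, 0), (1, 1, 0), (1, 1, 1)))   # 1 -> nonzero, so independent
print(det3((1, 2, 3), (2, 4, 6), (0, 1, 1)))   # 0 -> dependent (second = 2 * first)
```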

Span is tested by checking whether $A\mathbf{c} = \mathbf{b}$ is consistent for every $\mathbf{b}$, or for a specific $\mathbf{b}$ if the question is about membership. The span of a set is always a subspace, and its dimension equals the number of independent vectors in the set.

A set of exactly $n$ vectors in an $n$-dimensional space is a basis if and only if it is independent (spanning follows automatically), and if and only if it spans the space (independence follows automatically). At the magic count $n = \dim(V)$, the two conditions become equivalent.

Subspaces and the Fundamental Subspaces

A subspace is a subset of a vector space that is itself a vector space under the same operations. The subspace test requires only two checks: closure under addition and closure under scalar multiplication. Lines and planes through the origin, null spaces, column spaces, and row spaces are all subspaces.

Every $m \times n$ matrix $A$ defines four fundamental subspaces: the column space in $\mathbb{R}^m$ (dimension $r$), the row space in $\mathbb{R}^n$ (dimension $r$), the null space in $\mathbb{R}^n$ (dimension $n - r$), and the left null space in $\mathbb{R}^m$ (dimension $m - r$), where $r = \text{rank}(A)$.
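
These dimension counts can be confirmed computationally: compute $r = \text{rank}(A)$ once and all four dimensions follow. The sketch below uses the $3 \times 4$ matrix from the worked example earlier in this section, with a minimal exact-arithmetic rank function (the name `rank` is our own):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination, exact arithmetic."""
    R = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        for i in range(r + 1, len(R)):
            f = R[i][c] / R[r][c]
            R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

# The 3x4 matrix from the worked example earlier in this section:
A = [[1, 2, 0, 1],
     [2, 4, 1, 3],
     [3, 6, 1, 4]]
m, n, r = len(A), len(A[0]), rank(A)
print(r, n - r, m - r)   # 2 2 1: dim Col = dim Row = 2, dim Nul = 2, dim left-Nul = 1
```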

These four subspaces split into two pairs of orthogonal complements: the row space and null space are perpendicular in $\mathbb{R}^n$, while the column space and left null space are perpendicular in $\mathbb{R}^m$. The rank governs all four dimensions and completely determines the geometry of the linear map $\mathbf{x} \mapsto A\mathbf{x}$.