Axioms related to Vector Spaces

The Ten Rules That Define the Structure

A vector space is any set equipped with addition and scalar multiplication that satisfy ten axioms. These axioms capture the common algebraic behavior of Rⁿ, polynomial spaces, matrix spaces, and function spaces. By working from the axioms alone, every theorem applies to all of these settings at once.



The Idea of Abstraction

Vectors in $\mathbb{R}^n$ can be added entry by entry and scaled by real numbers. Polynomials can be added and scaled. Matrices can be added and scaled. Continuous functions on an interval can be added and scaled. In each case, the same algebraic patterns appear: addition is commutative and associative, scaling distributes over sums, a zero element absorbs addition, and scaling by $1$ leaves every object unchanged.

A vector space is the formal extraction of these patterns. Rather than proving results separately for columns, polynomials, matrices, and functions, the axioms identify the common thread. Anything proved from the axioms alone — and that includes the entire theory of linear independence, span, basis, and dimension — holds in every setting where the axioms are satisfied.

The Ten Axioms

A vector space over a field $\mathbb{F}$ is a set $V$ together with two operations — vector addition ($\mathbf{u} + \mathbf{v}$) and scalar multiplication ($c\mathbf{v}$) — satisfying the following ten axioms. For all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and all scalars $c, d \in \mathbb{F}$:

Addition Axioms


Closure under addition: $\mathbf{u} + \mathbf{v} \in V$.

Commutativity: $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$.

Associativity: $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$.

Zero vector: there exists an element $\mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{v}$ for every $\mathbf{v} \in V$.

Additive inverse: for every $\mathbf{v} \in V$, there exists $-\mathbf{v} \in V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$.

Scalar Multiplication Axioms


Closure under scalar multiplication: $c\mathbf{v} \in V$.

Associativity of scalars: $c(d\mathbf{v}) = (cd)\mathbf{v}$.

Distributivity over vector addition: $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$.

Distributivity over scalar addition: $(c + d)\mathbf{v} = c\mathbf{v} + d\mathbf{v}$.

Multiplicative identity: $1\mathbf{v} = \mathbf{v}$.

A set satisfying all ten is a vector space. A set violating even one is not.
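Each axiom is a checkable equation. A minimal sketch, modeling vectors as Python tuples in $\mathbb{R}^3$ with hypothetical helpers `vadd` and `smul`, spot-checks every axiom on random samples (a passing check is evidence, not a proof):

```python
import random

def vadd(u, v):
    """Entrywise vector addition in R^3."""
    return tuple(a + b for a, b in zip(u, v))

def smul(c, v):
    """Scalar multiplication: scale every entry by c."""
    return tuple(c * a for a in v)

def approx(u, v, tol=1e-9):
    """Compare entrywise up to floating-point tolerance."""
    return all(abs(a - b) < tol for a, b in zip(u, v))

random.seed(0)
zero = (0.0, 0.0, 0.0)
for _ in range(100):
    u, v, w = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(3)]
    c, d = random.uniform(-5, 5), random.uniform(-5, 5)
    assert approx(vadd(u, v), vadd(v, u))                             # commutativity
    assert approx(vadd(vadd(u, v), w), vadd(u, vadd(v, w)))           # associativity
    assert approx(vadd(v, zero), v)                                   # zero vector
    assert approx(vadd(v, smul(-1, v)), zero)                         # additive inverse
    assert approx(smul(c, smul(d, v)), smul(c * d, v))                # scalar associativity
    assert approx(smul(c, vadd(u, v)), vadd(smul(c, u), smul(c, v)))  # distributivity (vectors)
    assert approx(smul(c + d, v), vadd(smul(c, v), smul(d, v)))       # distributivity (scalars)
    assert approx(smul(1, v), v)                                      # multiplicative identity
print("all sampled axiom checks passed")
```

Closure holds by construction here: `vadd` and `smul` always return 3-tuples of reals.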

The Field of Scalars

The scalars in a vector space come from a field — a set where addition, subtraction, multiplication, and division (by nonzero elements) all work and satisfy the standard arithmetic laws. The real numbers $\mathbb{R}$ and the complex numbers $\mathbb{C}$ are the two fields that appear most often in linear algebra.

A vector space over $\mathbb{R}$ is called a real vector space. A vector space over $\mathbb{C}$ is called a complex vector space. The choice of field determines what scalars are available for multiplication, and this affects the structure of the space. For instance, every real symmetric matrix has real eigenvalues, but a general real matrix may have complex eigenvalues — a phenomenon visible only when the scalar field extends from $\mathbb{R}$ to $\mathbb{C}$.

On this site, the scalar field is $\mathbb{R}$ unless explicitly stated otherwise. The axioms and definitions carry over to $\mathbb{C}$ without modification.

The Standard Example: Rⁿ

The most concrete vector space is $\mathbb{R}^n$, the set of all ordered $n$-tuples of real numbers:

$$\mathbb{R}^n = \{(v_1, v_2, \dots, v_n) : v_i \in \mathbb{R}\}$$


Addition and scalar multiplication are defined entry by entry:

$$(u_1, \dots, u_n) + (v_1, \dots, v_n) = (u_1 + v_1, \dots, u_n + v_n)$$

$$c(v_1, \dots, v_n) = (cv_1, \dots, cv_n)$$


All ten axioms hold. Closure is immediate — sums and scalar products of $n$-tuples are $n$-tuples. Commutativity and associativity of vector addition follow from commutativity and associativity of real-number addition applied to each component. The zero vector is $(0, 0, \dots, 0)$, and the additive inverse of $(v_1, \dots, v_n)$ is $(-v_1, \dots, -v_n)$. The scalar multiplication axioms all reduce to properties of real-number arithmetic applied entry by entry.

This is the vector space that underlies coordinate geometry, matrix algebra, and nearly every computational method in linear algebra. Every finite-dimensional real vector space is isomorphic to $\mathbb{R}^n$ for some $n$.
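The entrywise definitions above are exactly the arithmetic that NumPy arrays implement, which makes $\mathbb{R}^n$ directly computable; a small illustration:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(u + v)            # entrywise sum: [5. 7. 9.]
print(2.0 * v)          # entrywise scaling: [ 8. 10. 12.]
print(u + np.zeros(3))  # adding the zero vector leaves u unchanged
```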

Polynomial Spaces

The set $\mathcal{P}_n$ of all polynomials of degree at most $n$ is a vector space under ordinary polynomial addition and scalar multiplication. A typical element is $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$ with real coefficients.

Addition combines like terms: $(a_0 + a_1 x) + (b_0 + b_1 x) = (a_0 + b_0) + (a_1 + b_1)x$. Scalar multiplication scales every coefficient: $c(a_0 + a_1 x) = ca_0 + ca_1 x$. The zero vector is the zero polynomial (all coefficients zero). The additive inverse of $p(x) = a_0 + a_1 x + \cdots + a_n x^n$ is $-p(x) = -a_0 - a_1 x - \cdots - a_n x^n$.

Closure under addition holds because adding two polynomials of degree at most $n$ produces a polynomial of degree at most $n$ — the degree cannot increase beyond $n$. All other axioms follow from the corresponding properties of real-number arithmetic applied to coefficients. The space has dimension $n + 1$, with the monomial basis $\{1, x, x^2, \dots, x^n\}$.

The set $\mathcal{P}$ of all polynomials (with no degree restriction) is also a vector space, but it is infinite-dimensional: no finite set of polynomials can span it, because any finite set has a maximum degree that limits which polynomials are reachable.
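Representing an element of $\mathcal{P}_n$ by its coefficient list $[a_0, a_1, \dots, a_n]$ makes the operations concrete; a sketch with illustrative helper names `padd` and `pscale`, showing that cancellation of leading terms stays inside the space:

```python
def padd(p, q):
    """Add two polynomials given as coefficient lists [a_0, ..., a_n]."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad the shorter list with zeros
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pscale(c, p):
    """Scale every coefficient by c."""
    return [c * a for a in p]

# (x^2 + x) + (-x^2 + 3): the leading terms cancel, but the result
# is still a coefficient list of length 3 -- still an element of P_2.
print(padd([0, 1, 1], [3, 0, -1]))   # [3, 1, 0], i.e. x + 3
print(pscale(2, [1, 2, 3]))          # [2, 4, 6]
```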

Matrix Spaces

The set $\mathbb{R}^{m \times n}$ of all $m \times n$ real matrices is a vector space with entry-by-entry operations. Addition adds corresponding entries, and scalar multiplication scales every entry by the same scalar.

The zero vector is the $m \times n$ zero matrix $O$. The additive inverse of $A = (a_{ij})$ is $-A = (-a_{ij})$. All ten axioms reduce to the corresponding properties of real-number arithmetic applied to each of the $mn$ entries independently.

This space has dimension $mn$. The standard basis consists of the $mn$ matrix units $E_{ij}$, each with a single $1$ in position $(i,j)$ and zeros elsewhere. Every matrix is a unique linear combination of these basis elements, with the matrix entries as coefficients.

The fact that matrices form a vector space means that the concepts of linear independence, span, and basis apply to sets of matrices — not just to column vectors. For example, the set $\{I, A, A^2\}$ might be independent or dependent in $\mathbb{R}^{n \times n}$, depending on the specific matrix $A$, and answering this question uses exactly the same abstract framework as for vectors in $\mathbb{R}^n$.
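Whether $\{I, A, A^2\}$ is independent can be decided by flattening each matrix into a vector of its $n^2$ entries and checking the rank of the stack — precisely the reduction to $\mathbb{R}^{mn}$ described above. A sketch with NumPy (the helper `is_independent` and the sample matrices are illustrative):

```python
import numpy as np

def is_independent(mats, tol=1e-10):
    """Treat each matrix as a vector in R^(mn); the set is independent
    exactly when the stacked rows have full rank."""
    stacked = np.array([M.flatten() for M in mats])
    return bool(np.linalg.matrix_rank(stacked, tol=tol) == len(mats))

D = np.diag([1.0, 2.0, 3.0])    # distinct diagonal entries
P = np.diag([1.0, 1.0, 0.0])    # projection: P @ P equals P

print(is_independent([np.eye(3), D, D @ D]))   # True
print(is_independent([np.eye(3), P, P @ P]))   # False, since P^2 = P
```

The first set is independent for the same reason a Vandermonde system with distinct nodes is; the second is dependent because $P^2$ repeats $P$.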

Function Spaces

The set $C[a, b]$ of all continuous real-valued functions on the interval $[a, b]$ is a vector space with pointwise operations:

$$(f + g)(x) = f(x) + g(x), \qquad (cf)(x) = c \cdot f(x)$$


The zero vector is the function that is identically zero: $z(x) = 0$ for all $x \in [a, b]$. The additive inverse of $f$ is $-f$, defined by $(-f)(x) = -f(x)$.

The axioms hold because the sum of two continuous functions is continuous (closure), real-number addition is commutative and associative (so pointwise addition inherits these properties), and the distributive laws follow from ordinary scalar arithmetic applied at each point $x$.

This space is infinite-dimensional. A more structured example is the solution space of a homogeneous linear ordinary differential equation. The set of all solutions to $y'' + py' + qy = 0$ (with continuous coefficients $p, q$) forms a vector space of dimension $2$: the superposition principle guarantees that any linear combination of solutions is again a solution, and the existence-uniqueness theorem guarantees that two independent solutions suffice to generate every solution.
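Pointwise operations can be modeled directly with closures; a sketch (the helper names `fadd` and `fscale` are illustrative) that checks several axioms at sample points of $[0, 1]$:

```python
import math

def fadd(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def fscale(c, f):
    """Pointwise scaling: (c f)(x) = c * f(x)."""
    return lambda x: c * f(x)

f = math.sin
g = math.exp
zero = lambda x: 0.0   # the identically-zero function

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert fadd(f, g)(x) == fadd(g, f)(x)                  # commutativity, pointwise
    assert fadd(f, zero)(x) == f(x)                        # zero function absorbs addition
    h = fscale(2.0, fadd(f, g))
    assert abs(h(x) - (2.0 * f(x) + 2.0 * g(x))) < 1e-12   # distributivity, pointwise
print("pointwise axiom checks passed on sample points")
```

Checking at sample points mirrors the argument in the text: every axiom for $C[a, b]$ reduces to real-number arithmetic at each individual $x$.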

Non-Examples

The axioms are genuine constraints, not automatic properties. Several natural-looking sets fail them.

The set of polynomials of degree exactly $n$ is not a vector space. Adding two polynomials of degree $n$ can cancel the leading terms — for instance, $(x^2 + x) + (-x^2 + 3) = x + 3$, which has degree $1$, not $2$. Closure under addition fails.

The set of positive real numbers with ordinary addition is not a vector space. There is no zero element: no positive real number $z$ satisfies $x + z = x$ for all positive $x$.

The set $\mathbb{R}^2$ can be equipped with non-standard operations that violate the axioms. Defining "addition" by $(u_1, u_2) \oplus (v_1, v_2) = (u_1 + v_1, 0)$ fails to produce a vector space: the second component is always destroyed, so no zero element exists ($\mathbf{v} \oplus \mathbf{z}$ always has second component $0$, which cannot match $v_2 \neq 0$), and the distributive law $(c + d)\mathbf{v} = c\mathbf{v} \oplus d\mathbf{v}$ breaks in the second component.
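A direct computation pins down which axioms the $\oplus$ operation violates; a sketch (helper names `oplus` and `smul` are illustrative), which also shows that one distributive law happens to survive:

```python
def oplus(u, v):
    """Non-standard 'addition' on R^2: the second component is discarded."""
    return (u[0] + v[0], 0.0)

def smul(c, v):
    """Ordinary scalar multiplication on R^2."""
    return (c * v[0], c * v[1])

u, v = (1.0, 2.0), (3.0, 4.0)
c, d = 2.0, 5.0

# Distributivity over vector addition happens to hold:
print(smul(c, oplus(u, v)) == oplus(smul(c, u), smul(c, v)))   # True

# But distributivity over scalar addition fails in the second component:
print(smul(c + d, v))                  # (21.0, 28.0)
print(oplus(smul(c, v), smul(d, v)))   # (21.0, 0.0)

# And no zero element exists: u (+) z always has second component 0.0,
# so it can never equal u when u[1] != 0.
```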

A line in $\mathbb{R}^2$ that does not pass through the origin is not a subspace: it does not contain $\mathbf{0}$, and adding two points on the line generally produces a point not on the line. These failures are useful — they confirm that the axioms distinguish genuine vector spaces from imposters.

Immediate Consequences of the Axioms

Several useful facts follow from the ten axioms alone. They are not additional assumptions but provable theorems.

Scaling any vector by zero gives the zero vector: $0\mathbf{v} = \mathbf{0}$. The proof uses distributivity: $0\mathbf{v} = (0 + 0)\mathbf{v} = 0\mathbf{v} + 0\mathbf{v}$, and adding $-(0\mathbf{v})$ to both sides gives $\mathbf{0} = 0\mathbf{v}$.

Scaling the zero vector by any scalar gives the zero vector: $c\mathbf{0} = \mathbf{0}$. The argument is similar: $c\mathbf{0} = c(\mathbf{0} + \mathbf{0}) = c\mathbf{0} + c\mathbf{0}$.

Scaling by $-1$ produces the additive inverse: $(-1)\mathbf{v} = -\mathbf{v}$. This follows from $\mathbf{v} + (-1)\mathbf{v} = 1\mathbf{v} + (-1)\mathbf{v} = (1 + (-1))\mathbf{v} = 0\mathbf{v} = \mathbf{0}$.

If $c\mathbf{v} = \mathbf{0}$, then $c = 0$ or $\mathbf{v} = \mathbf{0}$. Indeed, if $c \neq 0$, multiplying by $c^{-1}$ gives $\mathbf{v} = 1\mathbf{v} = (c^{-1}c)\mathbf{v} = c^{-1}(c\mathbf{v}) = c^{-1}\mathbf{0} = \mathbf{0}$. A nonzero scalar cannot annihilate a nonzero vector — there are no zero divisors in a vector space.

The zero vector is unique, and the additive inverse of each vector is unique. Both proofs are short exercises from the axioms. These facts ensure that the algebraic structure is well-defined and free of ambiguity.
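One of those short exercises can be filled in here: the uniqueness of the zero vector follows in a single line from commutativity and the zero axiom.

```latex
% Suppose \mathbf{0} and \mathbf{0}' both satisfy the zero-vector axiom. Then
\mathbf{0}' = \mathbf{0}' + \mathbf{0} = \mathbf{0} + \mathbf{0}' = \mathbf{0},
% where the first equality uses that \mathbf{0} is a zero element, the middle
% step is commutativity, and the last uses that \mathbf{0}' is a zero element.
% Hence any two candidates for the zero vector coincide.
```

Uniqueness of the additive inverse follows by a similar two-line computation from associativity and the zero axiom.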

Why Axioms Matter

Working from axioms rather than from specific examples is the mechanism that makes linear algebra so broadly applicable.

Every theorem about linear independence applies to vectors in $\mathbb{R}^n$, to polynomials in $\mathcal{P}_n$, to matrices in $\mathbb{R}^{m \times n}$, and to functions in $C[a, b]$. Every theorem about span and basis applies identically in all these settings. The rank-nullity theorem, the theory of subspaces, the classification by dimension — none of these need to be reproved when the objects change from column vectors to polynomials.

The axioms also make clear what is not a vector space. Attempting to apply basis theory, dimension counting, or rank arguments to a set that fails the axioms produces nonsense. Checking the axioms first is a prerequisite for using any of the tools of linear algebra.

The ten axioms are not arbitrary — they are the minimal set of conditions that support the concepts of independence, span, and basis. Every axiom is used in at least one proof along the way from the definition of a vector space to the classification theorem that says two spaces are isomorphic if and only if they have the same dimension.