Every operation in this section — addition, subtraction, scalar multiplication — is a special case of one unifying construction: the linear combination. Scale a collection of vectors by chosen coefficients, then add the results. The output is a single vector assembled from the pieces. This idea is deceptively simple, but it reaches into every corner of linear algebra. Asking which vectors can be built from a given set leads to the concept of span. Asking whether a set contains redundant vectors leads to linear independence. Asking whether a particular vector can be expressed as a combination of others turns out to be equivalent to solving a system of linear equations. The definitions introduced here are developed formally in the vector spaces section, but their computational foundation belongs here, grounded in the concrete algebra of Rn.
Definition
Given vectors v1,v2,…,vk in Rn and scalars c1,c2,…,ck in R, the expression
c1v1+c2v2+⋯+ckvk
is a linear combination of those vectors. The scalars ci are called the coefficients or weights of the combination. Each term civi is a scaled copy of vi, and the full expression adds all the scaled copies together.
The definition imposes no restrictions on the coefficients — they can be positive, negative, or zero. Setting all coefficients to 1 reduces the linear combination to ordinary vector addition. Using a single vector with one coefficient gives scalar multiplication. Setting all coefficients to zero produces the zero vector. These familiar operations are not separate concepts; they are boundary cases of the same construction.
A linear combination always produces a vector in the same Rn as the inputs. Both scalar multiplication and addition preserve the dimension, so no matter how many vectors are combined or what coefficients are chosen, the result remains in the original space.
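The definition translates directly into code. The NumPy sketch below (the vectors and coefficients are illustrative choices) scales three vectors in R3 and adds the results:

```python
import numpy as np

# Three vectors in R3 and chosen coefficients (illustrative values).
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
v3 = np.array([3.0, 3.0, 3.0])
c1, c2, c3 = 2.0, -1.0, 0.5

# Scale each vector by its coefficient, then add the scaled copies.
result = c1 * v1 + c2 * v2 + c3 * v3
print(result)  # [3.5 0.5 6.5] -- still a vector in R3
```

Changing any coefficient, including to zero or a negative value, leaves the result in R3, matching the closure observation above.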
Geometric Interpretation
The geometric meaning of a linear combination depends on how many vectors are involved and how they relate to one another.
A single nonzero vector v generates all its scalar multiples: cv for every real number c. These fill a line through the origin in the direction of v. Moving c from −∞ to ∞ sweeps the entire line, passing through the origin at c=0.
Two vectors u and v that are not parallel generate all combinations c1u+c2v. Varying both coefficients independently fills a plane through the origin — the plane that contains both u and v. In R2, two non-parallel vectors already span the entire space. In R3, they span a flat sheet passing through the origin, leaving one dimension unreached.
Three vectors in R3 that do not all lie in the same plane span the full space: every point in R3 is reachable as c1u+c2v+c3w for some choice of coefficients. If the three vectors are coplanar, however, the third contributes nothing new — the span remains a plane rather than expanding to fill three dimensions.
The coefficients act as continuous dials. Adjusting one coefficient while holding the others fixed slides the result along the direction of the corresponding vector. Adjusting all of them simultaneously moves the result anywhere within the span.
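The dial picture can be made concrete. Assuming two non-parallel vectors in R2 (the specific values below are illustrative), a 2x2 solve finds the coefficient settings that land the combination on any chosen target point:

```python
import numpy as np

# Two non-parallel vectors in R2 (illustrative choice).
u = np.array([1.0, 1.0])
v = np.array([1.0, -1.0])

# Any target in R2 is reachable: solve [u v] c = target for the dials c1, c2.
target = np.array([5.0, 3.0])
c = np.linalg.solve(np.column_stack([u, v]), target)
print(c)                    # the coefficient settings c1, c2
print(c[0] * u + c[1] * v)  # reconstructs the target exactly
```

If u and v were parallel, the matrix would be singular and the solve would fail, mirroring the geometric fact that parallel vectors only span a line.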
Span
The span of a set of vectors is the collection of every vector that can be formed as a linear combination of the set. For vectors v1,v2,…,vk:
span(v1,v2,…,vk) = {c1v1+c2v2+⋯+ckvk : c1,c2,…,ck in R}
The span is a subset of Rn that contains all vectors reachable through linear combinations — and no others. It always contains the zero vector, since setting every coefficient to zero produces 0.
The geometric shape of the span reflects the structure of the input vectors. A single nonzero vector spans a line. Two non-parallel vectors span a plane. In general, the span of k vectors that are "sufficiently independent" is a k-dimensional flat subspace passing through the origin. What "sufficiently independent" means precisely is the subject of linear independence, but the intuition is clear: each new vector expands the span by one dimension only if it points in a direction not already covered.
The span of the standard basis {e1,e2,…,en} is all of Rn, since any vector (v1,v2,…,vn) equals v1e1+v2e2+⋯+vnen. The span of a proper subset of the standard basis is a coordinate subspace — a lower-dimensional slice of Rn aligned with the coordinate axes.
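This decomposition is easy to verify numerically. In the sketch below, the vector v is an arbitrary illustrative choice in R4:

```python
import numpy as np

# Standard basis of R4 as the columns of the identity matrix.
n = 4
e = np.eye(n)  # e[:, i] is the basis vector e(i+1)

# Any vector equals its own components used as coefficients on the basis.
v = np.array([2.0, -3.0, 0.0, 7.0])
combo = sum(v[i] * e[:, i] for i in range(n))
print(combo)  # identical to v
```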
The formal theory of span, including its relationship to subspaces, is developed in the vector spaces section.
Spanning Sets
A set of vectors spans a space if every vector in that space belongs to the span of the set. Equivalently, the set spans Rn if for every vector b in Rn, there exist coefficients c1,c2,…,ck such that c1v1+c2v2+⋯+ckvk=b.
The standard basis {e1,e2,…,en} spans Rn with exactly n vectors and no redundancy. But spanning sets need not be this efficient. The set {(1,0),(0,1),(1,1)} also spans R2, because any vector (a,b) can be written as a(1,0)+b(0,1)+0(1,1). The third vector is unnecessary — it is already a linear combination of the first two — yet its presence does not prevent the set from spanning. A spanning set with redundant vectors is larger than it needs to be but still reaches every point in the space.
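The redundancy in {(1,0),(0,1),(1,1)} also shows up computationally: the same target is reached by more than one choice of coefficients. In this sketch, the target (a, b) and the shift t are illustrative values:

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = np.array([1.0, 1.0])
a, b = 4.0, 7.0  # an arbitrary target (a, b)

# The combination from the text: coefficients a, b, 0.
print(a * v1 + b * v2 + 0 * v3)  # [4. 7.]

# Redundancy means other coefficient choices reach the same point: shift
# weight t onto the redundant vector, (a-t) v1 + (b-t) v2 + t v3.
t = 2.5
print((a - t) * v1 + (b - t) * v2 + t * v3)  # [4. 7.] again
```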
The question of which vectors in a spanning set are genuinely needed and which are superfluous is answered by the concept of linear independence. A spanning set with no redundancy is called a basis — the smallest possible spanning set for a given space. Both ideas are formalized in the vector spaces section.
Trivial and Non-Trivial Combinations
Among all possible linear combinations of a set of vectors, one always exists regardless of the vectors involved: the trivial combination. Setting every coefficient to zero gives:
0v1+0v2+⋯+0vk=0
This produces the zero vector for any choice of v1,…,vk. It is called trivial because it carries no information about the vectors themselves — it works automatically, without engaging with the actual components.
A non-trivial combination is one in which at least one coefficient is nonzero. The distinction between trivial and non-trivial combinations is the key to defining linear independence. A set of vectors is linearly independent if the only combination that produces 0 is the trivial one — there is no way to combine the vectors with nonzero coefficients and arrive back at the origin. If a non-trivial combination does produce 0, then at least one vector in the set can be written as a linear combination of the others, revealing redundancy.
This characterization is developed formally in the linear independence section, but the computational meaning is accessible now: if the equation c1v1+⋯+ckvk=0 has only the solution c1=c2=⋯=ck=0, the vectors are independent. If it has other solutions, they are dependent.
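The criterion can be checked mechanically: k vectors are independent exactly when the matrix with those vectors as columns has rank k, so the homogeneous system has only the trivial solution. A small NumPy sketch (the helper name is ours, not a library function):

```python
import numpy as np

def is_independent(vectors):
    """True when c1 v1 + ... + ck vk = 0 forces every ci to be zero.

    Equivalent to the column matrix having full column rank k.
    """
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(A) == A.shape[1])

# Three non-coplanar vectors in R3: only the trivial combination gives 0.
print(is_independent([np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]),
                      np.array([0.0, 0.0, 1.0])]))  # True

# A coplanar triple: the third vector is the sum of the first two.
print(is_independent([np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]),
                      np.array([1.0, 1.0, 0.0])]))  # False
```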
Linear Combinations and Systems of Equations
The question "Is b a linear combination of v1,v2,…,vk?" is an algebraic question about the existence of coefficients. Writing it out component by component produces a system of linear equations:
c1v1+c2v2+⋯+ckvk=b
Arranging the vectors v1,…,vk as columns of a matrix A and the coefficients as an unknown vector x, the equation becomes:
Ax=b
The linear combination question and the system-of-equations question are identical — they are two descriptions of the same problem. The combination exists if and only if the system has a solution. Solving the system finds the coefficients; the coefficients reconstruct the combination.
This connection transforms a geometric question (does b lie in the span of the columns?) into a computational one (does the augmented matrix [A∣b] reduce to a consistent system?). The tools for answering the computational question — row reduction, echelon forms, pivot analysis — belong to the linear systems section, but the conceptual link originates here, in the definition of a linear combination.
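Though the systematic tools belong to the linear systems section, the membership question can already be answered numerically. In this sketch the candidate vectors and target are illustrative; least squares proposes coefficients, and a residual check decides whether b truly lies in the span of the columns:

```python
import numpy as np

# Columns of A are the candidate vectors; b is the target (illustrative values).
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])
b = np.array([2.0, 5.0, 1.0])

# Solve A x = b in the least-squares sense; if A x reproduces b (up to
# rounding), then b is a linear combination of the columns and x holds
# the coefficients.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
in_span = np.allclose(A @ x, b)
print(x, in_span)  # coefficients approximately [2, 1], True
```

Here b equals 2v1 + 1v2, so the system is consistent and the recovered coefficients rebuild the combination.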
Toward Independence and Basis
Linear combinations open two questions that this page has raised but not resolved. The first: given a set of vectors, can any of them be removed without shrinking the span? A vector that is a linear combination of the others in the set contributes nothing new — every point it helps reach is already reachable without it. A set where no vector is redundant in this sense is called linearly independent. Independence is a property of the set as a whole, not of any individual vector, and it is characterized by the trivial-combination criterion: the only way to combine the vectors into 0 is with all-zero coefficients.
The second question: what is the smallest set of vectors whose span equals the entire space? Such a set must span Rn (reaching every vector) and be linearly independent (containing no excess). A set with both properties is a basis for Rn, and every basis of Rn contains exactly n vectors — a fact that defines the dimension of the space.
Both concepts — linear independence and basis — are developed rigorously in the vector spaces section. The machinery of linear combinations built on this page provides the raw material from which those definitions are constructed.