

Basic Operations on Vectors






The Three Operations That Define Vector Algebra

Addition, subtraction, and scalar multiplication are the operations that give vectors their algebraic structure. Each works component by component, each preserves the dimension of the input, and each carries a geometric interpretation that reinforces the computation. Together, they satisfy a precise set of rules — commutativity, associativity, distributivity — that make vector algebra predictable and consistent. Every other operation in this section, from the dot product to linear combinations, is built on top of these three.



Vector Addition

Adding two vectors means pairing up their components and summing each pair independently. For $\mathbf{a} = (a_1, a_2, \ldots, a_n)$ and $\mathbf{b} = (b_1, b_2, \ldots, b_n)$ in the same $\mathbb{R}^n$:

$$\mathbf{a} + \mathbf{b} = (a_1 + b_1,\ a_2 + b_2,\ \ldots,\ a_n + b_n)$$


The result is again a vector in $\mathbb{R}^n$ — addition does not change the dimension. Both inputs must belong to the same space; adding a vector in $\mathbb{R}^2$ to a vector in $\mathbb{R}^3$ is undefined because there is no way to match up the components.

Geometrically, vector addition has two equivalent visualizations. In the tip-to-tail method, the tail of $\mathbf{b}$ is placed at the head of $\mathbf{a}$, and the sum $\mathbf{a} + \mathbf{b}$ is the vector running from the tail of $\mathbf{a}$ to the head of $\mathbf{b}$. In the parallelogram method, both vectors share a common tail, and the sum is the diagonal of the parallelogram they form. The two constructions always yield the same result, and each highlights a different aspect of addition: tip-to-tail emphasizes sequential displacement, while the parallelogram makes the symmetry between $\mathbf{a}$ and $\mathbf{b}$ visually explicit.
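The componentwise definition translates directly into code. A minimal sketch in plain Python (the function name is illustrative, not from any particular library):

```python
def vec_add(a, b):
    """Add two vectors componentwise; both must have the same dimension."""
    if len(a) != len(b):
        raise ValueError("vectors must belong to the same R^n")
    return [x + y for x, y in zip(a, b)]

# Tip-to-tail: a displacement of (3, 1) followed by a displacement of (1, 2)
print(vec_add([3, 1], [1, 2]))  # [4, 3]
```

The dimension check mirrors the rule above: adding vectors from different spaces is simply undefined.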

Properties of Addition

Vector addition obeys four algebraic rules that govern how sums behave. Each has a geometric counterpart that can be verified by drawing the vectors involved.

Commutativity


$$\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}$$


The order in which two vectors are added does not affect the result. Geometrically, the parallelogram has the same diagonal regardless of which vector is placed first. At the component level, this follows directly from commutativity of real-number addition: $a_i + b_i = b_i + a_i$ for every component.

Associativity


$$(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c})$$


When adding three or more vectors, grouping does not matter. The tip-to-tail construction confirms this: chaining $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ end to end produces the same resultant vector no matter which pair is summed first. This property allows sums of multiple vectors to be written without parentheses.

Additive Identity


$$\mathbf{a} + \mathbf{0} = \mathbf{a}$$


The zero vector $\mathbf{0} = (0, 0, \ldots, 0)$ leaves any vector unchanged under addition. It functions as the neutral element: adding it contributes nothing to any component. Geometrically, the zero vector is a point with no length and no direction — appending it tip-to-tail adds no displacement.

Additive Inverse


$$\mathbf{a} + (-\mathbf{a}) = \mathbf{0}$$


Every vector $\mathbf{a}$ has a corresponding vector $-\mathbf{a} = (-a_1, -a_2, \ldots, -a_n)$ that cancels it exactly. The inverse has the same magnitude as $\mathbf{a}$ but points in the opposite direction. Adding a vector to its inverse returns to the origin — the displacements undo each other completely.
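All four rules can be spot-checked numerically for concrete vectors. A short sketch in plain Python (helper names are illustrative); each assertion corresponds to one property:

```python
def add(a, b):
    """Componentwise vector addition."""
    return [x + y for x, y in zip(a, b)]

def neg(a):
    """Additive inverse: negate every component."""
    return [-x for x in a]

a, b, c = [1, 2, 3], [4, -5, 6], [0, 7, -8]
zero = [0, 0, 0]

assert add(a, b) == add(b, a)                  # commutativity
assert add(add(a, b), c) == add(a, add(b, c))  # associativity
assert add(a, zero) == a                       # additive identity
assert add(a, neg(a)) == zero                  # additive inverse
print("all four addition properties hold for these vectors")
```

A handful of numeric checks is not a proof, of course — the properties hold for all vectors because they hold for each real-number component.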

Vector Subtraction

Subtraction is not an independent operation — it is addition combined with negation. The difference $\mathbf{a} - \mathbf{b}$ is defined as:

$$\mathbf{a} - \mathbf{b} = \mathbf{a} + (-\mathbf{b}) = (a_1 - b_1,\ a_2 - b_2,\ \ldots,\ a_n - b_n)$$


Reducing subtraction to addition and negation means no separate set of rules is needed: every identity involving differences can be derived from the four properties of addition. Note that subtraction itself is not commutative — $\mathbf{a} - \mathbf{b}$ and $\mathbf{b} - \mathbf{a}$ are opposite vectors — but even this fact follows from the addition rules.

The geometric picture of subtraction is particularly useful. When $\mathbf{a}$ and $\mathbf{b}$ are drawn from a common tail, the difference $\mathbf{a} - \mathbf{b}$ is the vector pointing from the tip of $\mathbf{b}$ to the tip of $\mathbf{a}$. This interpretation connects subtraction directly to distance: the length of $\mathbf{a} - \mathbf{b}$ is the Euclidean distance between the heads of the two vectors, formalized as $d(\mathbf{a}, \mathbf{b}) = \|\mathbf{a} - \mathbf{b}\|$ on the magnitude page.
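The tip-to-tip interpretation can be verified numerically: subtract componentwise, then take the Euclidean length of the difference. A sketch in plain Python (function name is illustrative):

```python
import math

def vec_sub(a, b):
    """a - b, computed componentwise as a + (-b)."""
    return [x - y for x, y in zip(a, b)]

a, b = [5.0, 7.0], [2.0, 3.0]
diff = vec_sub(a, b)                        # vector from the tip of b to the tip of a
dist = math.sqrt(sum(d * d for d in diff))  # Euclidean distance ||a - b||
print(diff, dist)                           # [3.0, 4.0] 5.0
```

The 3-4-5 triangle makes the distance easy to confirm by hand.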

Scalar Multiplication

Scalar multiplication takes a real number $c$ and a vector $\mathbf{a}$ and scales every component by $c$:

$$c\mathbf{a} = (ca_1,\ ca_2,\ \ldots,\ ca_n)$$


The result is always a vector in the same $\mathbb{R}^n$. Geometrically, scalar multiplication changes the length of the vector by a factor of $|c|$ while preserving or reversing its direction depending on the sign of $c$.

When $c > 0$, the scaled vector $c\mathbf{a}$ points in the same direction as $\mathbf{a}$. If $c > 1$, the vector stretches; if $0 < c < 1$, it compresses. When $c < 0$, the direction flips — the scaled vector points opposite to $\mathbf{a}$, with its length multiplied by $|c|$. The boundary case $c = 0$ collapses any vector to the zero vector: $0\mathbf{a} = \mathbf{0}$.

Because scalar multiplication only ever stretches, compresses, or reverses a vector along its own line, the result $c\mathbf{a}$ is always parallel to $\mathbf{a}$ (provided $\mathbf{a} \neq \mathbf{0}$). This observation becomes important later: two vectors are parallel precisely when one is a scalar multiple of the other.
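The four sign regimes — stretch, compress, reverse, collapse — are easy to see side by side. A minimal sketch in plain Python (function name is illustrative):

```python
def scale(c, a):
    """Multiply every component of the vector a by the scalar c."""
    return [c * x for x in a]

a = [2, -1, 4]
print(scale(3, a))    # stretch:  [6, -3, 12]
print(scale(0.5, a))  # compress: [1.0, -0.5, 2.0]
print(scale(-1, a))   # reverse:  [-2, 1, -4]
print(scale(0, a))    # collapse: [0, 0, 0]
```

In every case each output is a componentwise multiple of the same vector, so all four results lie on the line through the origin spanned by $\mathbf{a}$.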

Properties of Scalar Multiplication

Scalar multiplication satisfies its own set of algebraic rules that describe how scalars and vectors interact. Combined with the properties of addition, these rules form the complete algebraic framework for working with vectors.

Associativity with Scalars


$$c(d\mathbf{a}) = (cd)\mathbf{a}$$


Scaling a vector by $d$ and then by $c$ produces the same result as scaling once by the product $cd$. The operations collapse into a single multiplication on the scalar side.

Distributivity over Vector Addition


$$c(\mathbf{a} + \mathbf{b}) = c\mathbf{a} + c\mathbf{b}$$


A scalar applied to a sum distributes across both terms. Geometrically, scaling the diagonal of a parallelogram by $c$ yields the same vector as scaling both sides by $c$ and then forming the new parallelogram.

Distributivity over Scalar Addition


$$(c + d)\mathbf{a} = c\mathbf{a} + d\mathbf{a}$$


Two scalars summed before multiplication produce the same result as two separate scalings added afterward. This rule links the arithmetic of real numbers to the algebra of vectors.

Multiplicative Identity


$$1\mathbf{a} = \mathbf{a}$$


Scaling by $1$ leaves a vector unchanged. This ensures that the scalar $1$ acts neutrally, just as the zero vector acts neutrally under addition.

Consequences


Two special cases follow immediately from these rules. Setting $c = 0$ gives $0\mathbf{a} = \mathbf{0}$: scaling any vector by zero produces the zero vector. Setting $c = -1$ gives $(-1)\mathbf{a} = -\mathbf{a}$: scaling by $-1$ produces the additive inverse. Neither fact requires a separate axiom — both are consequences of the properties above.
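Both consequences can be derived in a few lines from the properties already listed; a sketch of the standard arguments:

```latex
% 0a = 0: start from 0 = 0 + 0 and use distributivity over scalar addition,
% then cancel one copy of 0a with its additive inverse.
0\mathbf{a} = (0 + 0)\mathbf{a} = 0\mathbf{a} + 0\mathbf{a}
\;\Longrightarrow\;
\mathbf{0} = 0\mathbf{a} + (-(0\mathbf{a}))
           = \bigl(0\mathbf{a} + 0\mathbf{a}\bigr) + (-(0\mathbf{a}))
           = 0\mathbf{a}

% (-1)a = -a: show that (-1)a cancels a, so it must be the additive inverse.
\mathbf{a} + (-1)\mathbf{a}
  = 1\mathbf{a} + (-1)\mathbf{a}
  = \bigl(1 + (-1)\bigr)\mathbf{a}
  = 0\mathbf{a}
  = \mathbf{0}
\quad\Longrightarrow\quad
(-1)\mathbf{a} = -\mathbf{a}
```

The first derivation uses distributivity over scalar addition and the additive inverse; the second adds the multiplicative identity and the result just proved.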

The Algebraic Foundation

Addition and scalar multiplication are not just two operations among many — they are the two operations on which the entire algebraic theory of vectors rests. Every other construction in this section is built from them. The dot product multiplies corresponding components and sums the results — a sequence of scalar multiplications followed by real-number addition. The cross product combines components through differences of pairwise products — again, scalar multiplication and addition in a specific pattern. A linear combination $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$ is nothing more than repeated scalar multiplication followed by repeated addition.
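To make the "built from these two operations" claim concrete, here is a sketch in plain Python in which the dot product and a linear combination are expressed using nothing but componentwise addition and scaling (function names are illustrative):

```python
def add(a, b):
    """Componentwise vector addition."""
    return [x + y for x, y in zip(a, b)]

def scale(c, a):
    """Scalar multiplication: scale every component by c."""
    return [c * x for x in a]

def dot(a, b):
    # Multiply corresponding components, then sum the results.
    return sum(x * y for x, y in zip(a, b))

def linear_combination(coeffs, vectors):
    # c1*v1 + c2*v2 + ... : repeated scaling followed by repeated addition.
    total = [0] * len(vectors[0])
    for c, v in zip(coeffs, vectors):
        total = add(total, scale(c, v))
    return total

print(dot([1, 2, 3], [4, 5, 6]))                      # 32
print(linear_combination([2, -1], [[1, 0], [0, 1]]))  # [2, -1]
```

The second call expresses $(2, -1)$ as a combination of the standard basis vectors — exactly the pattern the linear-combination page builds on.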

The eight properties listed on this page — four for addition and four for scalar multiplication, including the two distributive laws — are not arbitrary. They are precisely the axioms that define a vector space. Any collection of objects that satisfies these rules qualifies as a vector space, whether the objects are arrows in $\mathbb{R}^3$, polynomials, matrices, or functions. The vectors in $\mathbb{R}^n$ studied throughout this section are the most concrete example, but the algebraic structure they exhibit extends far beyond ordered tuples of numbers.