Before diving into operations and products, it is worth stepping back to ask what a vector possesses simply by existing and how two vectors can be compared. Every vector in Rn carries intrinsic attributes — magnitude, direction, and a fixed number of components — that belong to the vector itself, independent of any operation performed on it. Vectors also stand in structural relationships to one another: they may be equal, parallel, or orthogonal. This page collects both the intrinsic and relational properties in a single reference, with each developed fully on the page where it naturally belongs.
Magnitude
Every vector has a magnitude — a single non-negative number that measures its size. For a vector v=(v1,v2,…,vn) in Rn, the magnitude is given by the Euclidean norm:
∥v∥ = √(v1² + v2² + ⋯ + vn²)
Magnitude is an intrinsic property: it belongs to the vector and does not depend on any other vector or operation. It is always non-negative, and the only vector with magnitude zero is the zero vector 0. This makes magnitude a reliable indicator of whether a vector is trivial — a vector is the zero vector if and only if its magnitude vanishes.
The norm satisfies two additional structural properties. Under scalar multiplication, the magnitude scales predictably: ∥cv∥=∣c∣∥v∥. Under addition, it obeys the triangle inequality: ∥a+b∥≤∥a∥+∥b∥. These properties, along with normalization and the distance formula, are developed on the magnitude page.
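The norm and its two structural properties can be checked numerically. The following is a minimal sketch in Python, with plain tuples standing in for vectors; the helper name `norm` is illustrative, not from the text:

```python
import math

def norm(v):
    """Euclidean norm: sqrt(v1^2 + v2^2 + ... + vn^2)."""
    return math.sqrt(sum(x * x for x in v))

v = (3.0, 4.0)
print(norm(v))            # 5.0
print(norm((0.0, 0.0)))   # 0.0 -- only the zero vector has zero magnitude

# Scaling: ||c v|| = |c| ||v||
c = -2.0
cv = tuple(c * x for x in v)
print(norm(cv), abs(c) * norm(v))   # 10.0 10.0

# Triangle inequality: ||a + b|| <= ||a|| + ||b||
a, b = (1.0, 2.0), (3.0, -1.0)
s = tuple(x + y for x, y in zip(a, b))
print(norm(s) <= norm(a) + norm(b))  # True
```

Because the definition is just a sum over components, the same function works unchanged in any Rn.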
Direction
Every nonzero vector has a direction — an orientation in space that specifies which way the vector points. Direction is what separates a vector from a scalar: a scalar carries size alone, while a vector carries size together with a spatial orientation.
In R2, direction can be described by the angle α that the vector makes with the positive x-axis. A vector (v1,v2) points in the direction α = arctan(v2/v1), with appropriate adjustment for quadrant. In R3, a single angle is insufficient — direction is instead specified by two angles or, more commonly, by the unit vector v/∥v∥ that points the same way with magnitude stripped away.
The unit vector representation generalizes to any Rn. Dividing a nonzero vector by its magnitude isolates its direction as a vector on the unit sphere. Two vectors that differ only in magnitude share the same unit vector and therefore point the same way.
The zero vector has no direction. Its components are all zero, its magnitude is zero, and there is no meaningful orientation to extract. The expression 0/∥0∥ involves division by zero and is undefined. This is not a technicality — the zero vector genuinely has no directional content.
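A short Python sketch of the two direction representations; the helper name `unit` is illustrative. Note that `math.atan2` performs the quadrant adjustment that a bare arctangent of v2/v1 does not:

```python
import math

def unit(v):
    """Direction of a nonzero vector: v / ||v||."""
    m = math.sqrt(sum(x * x for x in v))
    if m == 0:
        # The zero vector has no direction: 0/||0|| is undefined.
        raise ValueError("the zero vector has no direction")
    return tuple(x / m for x in v)

# Direction angle in R2, quadrant handled by atan2:
v = (1.0, 1.0)
print(round(math.degrees(math.atan2(v[1], v[0])), 6))  # 45.0

# Vectors differing only in magnitude share the same unit vector:
u1, u2 = unit((1.0, 1.0)), unit((5.0, 5.0))
print(all(math.isclose(x, y) for x, y in zip(u1, u2)))  # True
```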
Dimension
Every vector belongs to a specific Rn, and the number n — the count of components — is its dimension. A vector in R2 has two components, a vector in R5 has five, and the distinction is absolute: there is no natural way to add or compare vectors from different dimensions because there is no way to match up their components.
In R2 and R3, dimension has a direct geometric meaning — it is the number of independent spatial directions available. Two-component vectors live in a plane; three-component vectors live in the full space we inhabit visually. For n>3, the geometric picture breaks down, but the algebraic structure does not. Operations, norms, and dot products work in R100 exactly as they do in R3 — only the number of terms in each sum changes.
Dimension is fixed by the space, not by the vector. Every vector in Rn has exactly n components, and all operations within Rn produce vectors with n components. A vector cannot gain or lose components through addition, scaling, or any linear combination. This rigidity is what keeps the algebra consistent.
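This rigidity is easy to make explicit in code. In the sketch below (the helper name `add` is illustrative), addition is simply undefined across dimensions, and the result of a valid addition always has the same number of components as its inputs:

```python
def add(a, b):
    """Componentwise addition; defined only when dimensions match."""
    if len(a) != len(b):
        raise ValueError("cannot add vectors from different dimensions")
    return tuple(x + y for x, y in zip(a, b))

s = add((1, 2, 3), (4, 5, 6))
print(s)            # (5, 7, 9)
print(len(s) == 3)  # True -- addition cannot change the component count

# add((1, 2), (1, 2, 3)) would raise ValueError: there is no way
# to match up the components, so the sum is simply not defined.
```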
Equality
Two vectors are equal when every one of their corresponding components matches. For a=(a1,a2,…,an) and b=(b1,b2,…,bn):
a=b⟺ai=bi for every i=1,2,…,n
There is no partial equality for vectors. If even a single component differs, the vectors are not equal. This all-or-nothing criterion is the algebraic version of the geometric requirement: equal vectors must have the same magnitude and the same direction.
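The all-or-nothing criterion translates directly into code. A minimal sketch, with the illustrative helper name `equal`:

```python
def equal(a, b):
    """Vectors are equal iff every corresponding component matches."""
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

print(equal((1, 2, 3), (1, 2, 3)))  # True
print(equal((1, 2, 3), (1, 2, 4)))  # False -- one differing component is enough
```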
For free vectors — vectors defined by magnitude and direction alone, without a fixed position — equality is independent of location. An arrow drawn at one corner of a diagram represents the same vector as an identical arrow drawn elsewhere, as long as the length and orientation agree. Moving a vector without rotating or rescaling it does not change the vector. This is the principle that allows vectors to be repositioned freely in tip-to-tail constructions and parallelogram diagrams without altering the result of an addition.
Parallelism
Two vectors are parallel when one is a scalar multiple of the other. For a and b with b≠0:
a∥b⟺a=cb for some scalar c
When c>0, the two vectors point in the same direction — they are parallel in the strict sense. When c<0, they point in opposite directions — anti-parallel. Both cases fall under the umbrella of parallelism because the vectors lie along the same line through the origin, differing only in scale and possibly in sign.
Parallelism can be detected without computing the scalar c explicitly. If the components of a and b satisfy a1/b1 = a2/b2 = ⋯ = an/bn (with appropriate handling when a component of b is zero), the vectors are parallel. Alternatively, in R3, the cross product provides a definitive test: a×b = 0 if and only if a and b are parallel.
By convention, the zero vector is considered parallel to every vector, since 0=0b for any b. This convention avoids the need to exclude 0 as a special case in statements about parallelism.
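A Python sketch of a parallelism test (the helper name `parallel` is illustrative). Cross-multiplying components avoids the division-by-zero handling that the ratio test requires, and it automatically honors the zero-vector convention:

```python
def parallel(a, b):
    """True when one vector is a scalar multiple of the other.
    a ∥ b iff ai*bj == aj*bi for every pair of indices, which is the
    ratio test with all divisions cleared."""
    n = len(a)
    return all(a[i] * b[j] == a[j] * b[i]
               for i in range(n) for j in range(i + 1, n))

print(parallel((1, 2, 3), (-2, -4, -6)))  # True: c = -2, anti-parallel
print(parallel((1, 2, 3), (2, 4, 5)))     # False
print(parallel((0, 0, 0), (7, 8, 9)))     # True: zero vector convention
```

In R3 the pairwise products ai*bj − aj*bi are exactly the components of the cross product, so this test coincides with checking a×b = 0.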
Parallelism is the simplest instance of linear dependence. Two vectors are linearly dependent precisely when one is a scalar multiple of the other — precisely when they are parallel. The concept generalizes: for three or more vectors, dependence means at least one vector is a linear combination of the others, but for a pair, dependence reduces to parallelism.
Orthogonality
Two vectors are orthogonal when their dot product equals zero:
a⊥b⟺a⋅b=0
In R2 and R3, orthogonality corresponds to perpendicularity — the two vectors meet at a right angle. The algebraic condition a1b1+a2b2+⋯+anbn=0 translates the geometric concept of a 90° angle into a computation that works in any dimension, including those beyond visualization.
Orthogonality is in a sense the opposite extreme of parallelism. Parallel vectors are maximally aligned — one lies entirely along the direction of the other. Orthogonal vectors have zero alignment — projecting one onto the other yields the zero vector. Between these extremes lies every other angular relationship, measured quantitatively by the dot product.
The zero vector is orthogonal to every vector, since 0⋅v=0 for all v. This is consistent with the convention that the zero vector is also parallel to every vector. The zero vector is the only vector that is simultaneously parallel and orthogonal to everything — a consequence of its having no magnitude and no direction.
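The orthogonality test is a single sum, identical in every dimension. A minimal sketch, with illustrative helper names `dot` and `orthogonal`:

```python
def dot(a, b):
    """Dot product: a1*b1 + a2*b2 + ... + an*bn."""
    return sum(x * y for x, y in zip(a, b))

def orthogonal(a, b):
    """a ⊥ b iff a · b = 0."""
    return dot(a, b) == 0

print(orthogonal((1, 2), (-2, 1)))       # True: 1*(-2) + 2*1 = 0
print(orthogonal((1, 1, 0), (0, 0, 5)))  # True, same test in R3
print(orthogonal((0, 0), (3, 4)))        # True: 0 is orthogonal to everything
```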
Orthogonality grows in importance well beyond this section. Orthogonal bases simplify coordinate computations, orthogonal projections decompose vectors into independent components, and the Gram–Schmidt process converts an arbitrary basis into an orthogonal one. These ideas are developed in the orthogonality section.
Algebraic Properties
In addition to the intrinsic and relational properties above, vectors in Rn obey a collection of algebraic rules governing how they interact with addition and scalar multiplication. These rules are not properties of individual vectors but of the operations themselves — they describe the behavior of addition and scaling as applied to any vectors in the space.
Under addition, vectors are commutative (a+b=b+a), associative ((a+b)+c=a+(b+c)), have an identity element (a+0=a), and every vector has an additive inverse (a+(−a)=0). Under scalar multiplication, associativity holds (c(da)=(cd)a), the scalar 1 acts as an identity (1a=a), and two distributive laws connect the two operations: c(a+b)=ca+cb and (c+d)a=ca+da.
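Because these rules hold for all vectors, they can be spot-checked on any particular triple. A sketch in Python verifying each identity on small integer vectors (exact arithmetic, so `==` is safe; all names are illustrative):

```python
def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(c, a):
    return tuple(c * x for x in a)

a, b, v = (1, -2, 3), (4, 0, -5), (2, 7, 1)
c, d = 4, -7
zero = (0, 0, 0)

assert add(a, b) == add(b, a)                    # commutativity
assert add(add(a, b), v) == add(a, add(b, v))    # associativity
assert add(a, zero) == a                         # additive identity
assert add(a, scale(-1, a)) == zero              # additive inverse
assert scale(c, scale(d, a)) == scale(c * d, a)  # scalar associativity
assert scale(1, a) == a                          # scalar identity
assert scale(c, add(a, b)) == add(scale(c, a), scale(c, b))  # distributes over vectors
assert scale(c + d, a) == add(scale(c, a), scale(d, a))      # distributes over scalars
print("all eight identities hold")
```

A check like this is evidence, not proof — the axioms are established componentwise from the arithmetic of real numbers — but it makes each rule concrete.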
These eight identities, together with closure of the space under addition and scalar multiplication, make up the ten axioms that define a vector space — and they are not specific to vectors in Rn. Any mathematical structure satisfying the same rules qualifies as a vector space, whether its elements are arrows in the plane, polynomials, matrices, or functions. The vectors in Rn are the most tangible example, but the algebraic framework they satisfy is far more general.