The dot product is unlike any operation encountered so far in this section. It takes two vectors and returns not a vector but a single number — a scalar. That number encodes something geometric: the degree to which the two vectors point in the same direction. Through its algebraic definition, the dot product is a straightforward computation — multiply corresponding components and sum. Through its geometric definition, it connects directly to the angle between the vectors, opening the door to orthogonality, projection, and the Cauchy–Schwarz inequality. Both definitions are equivalent, and both are essential.
Algebraic Definition
The dot product of two vectors a=(a₁,a₂,…,aₙ) and b=(b₁,b₂,…,bₙ) in ℝⁿ is the scalar obtained by multiplying corresponding components and summing the results:
a⋅b=a₁b₁+a₂b₂+⋯+aₙbₙ=∑ᵢ₌₁ⁿ aᵢbᵢ
The notation uses a centered dot between the two vectors. Other names for the same operation are the inner product and the scalar product — the latter emphasizing that the output is a scalar, not a vector.
This definition works in any dimension. In ℝ², the dot product of (3,4) and (1,−2) is 3(1)+4(−2)=−5. In ℝ⁵, the same rule applies with five terms instead of two. The computation is mechanical, but the meaning it carries — revealed by the geometric definition — is what makes the dot product central to the rest of linear algebra.
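A concrete check never hurts. Here is a minimal sketch in Python (NumPy assumed available; np.dot is one standard way to form the sum of products):

```python
import numpy as np

# Dot product in R^2: multiply corresponding components, then sum.
a = np.array([3.0, 4.0])
b = np.array([1.0, -2.0])
print(np.dot(a, b))  # 3*1 + 4*(-2) = -5.0

# The same rule in R^5, with five terms instead of two.
u = np.array([1.0, 0.0, 2.0, -1.0, 3.0])
v = np.array([2.0, 5.0, -1.0, 0.0, 1.0])
print(np.dot(u, v))  # 2 + 0 - 2 + 0 + 3 = 3.0
```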
The dot product differs fundamentally from vector addition and scalar multiplication. Those operations take vectors and produce vectors. The dot product takes two vectors and collapses them into a single number, discarding the directional structure in favor of a measurement of alignment.
Geometric Definition
There is a second way to compute the dot product that bypasses components entirely and works directly with lengths and angles. If a and b are both nonzero and θ denotes the angle they form when drawn from a common point, then:
a⋅b=∥a∥∥b∥cosθ
The right-hand side multiplies the magnitudes of both vectors and adjusts the product by cosθ. That cosine factor does all the interpretive work: it equals 1 when θ=0 (full alignment), drops to 0 at θ=π/2 (no alignment at all), and reaches −1 at θ=π (complete opposition).
Why does this agree with the component-wise formula? Place a and b tail to tail and form the triangle completed by a−b. The law of cosines gives the squared length of the third side:
∥a−b∥²=∥a∥²+∥b∥²−2∥a∥∥b∥cosθ
Now expand ∥a−b∥² by writing it as (a−b)⋅(a−b) and distributing. The result is ∥a∥²−2(a⋅b)+∥b∥². Setting the two expressions equal and cancelling the squared norms leaves a⋅b=∥a∥∥b∥cosθ. The component-based sum of products and the length-angle formula are two faces of a single quantity — one assembled from arithmetic on coordinates, the other encoding geometric information about orientation.
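The agreement is easy to confirm numerically. The sketch below (NumPy assumed) recovers a⋅b from side lengths alone, using the rearranged law of cosines rather than any cross-products of components:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, -2.0])

# Component route: multiply matching entries and sum.
componentwise = np.dot(a, b)  # -5.0

# Length-angle route: the law of cosines rearranges to
#   ||a|| ||b|| cos(theta) = (||a||^2 + ||b||^2 - ||a - b||^2) / 2,
# which needs only the three side lengths of the triangle.
na, nb, nab = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(a - b)
geometric = (na**2 + nb**2 - nab**2) / 2

print(componentwise, geometric)  # both print -5.0 (up to rounding)
```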
Properties of the Dot Product
The dot product obeys a set of algebraic rules that make it behave predictably in calculations.
Commutativity
a⋅b=b⋅a
The order of the two vectors does not matter. This follows directly from commutativity of real number multiplication: aᵢbᵢ=bᵢaᵢ for every component.
Distributivity
a⋅(b+c)=a⋅b+a⋅c
The dot product distributes over vector addition. This allows the dot product of a vector with a sum to be broken apart, a property used constantly when expanding expressions involving multiple vectors.
Scalar Factoring
(ca)⋅b=c(a⋅b)
A scalar multiplied into one of the vectors can be pulled out of the dot product entirely. Combined with commutativity, this also gives a⋅(cb)=c(a⋅b).
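All three rules are easy to spot-check on arbitrary inputs. A short sketch, assuming NumPy and randomly chosen vectors in ℝ⁴:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 4))  # three random vectors in R^4
s = 2.5                                # an arbitrary scalar

# Commutativity: the order of the two vectors does not matter.
assert np.isclose(np.dot(a, b), np.dot(b, a))

# Distributivity: a dot product with a sum breaks apart.
assert np.isclose(np.dot(a, b + c), np.dot(a, b) + np.dot(a, c))

# Scalar factoring: a scalar pulls out of either slot.
assert np.isclose(np.dot(s * a, b), s * np.dot(a, b))
assert np.isclose(np.dot(a, s * b), s * np.dot(a, b))
```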
Positive Definiteness
a⋅a=∥a∥²≥0, with equality if and only if a=0
The dot product of a vector with itself is the sum of squared components — always non-negative, and zero only for the zero vector. This property ties the dot product directly to the norm and ensures that the geometric notion of length is consistent with the algebraic framework.
Connection to Magnitude
Positive definiteness reveals that the norm is hiding inside the dot product. When a vector is dotted with itself, the result is exactly the quantity that sits under the square root in the norm formula:
v⋅v=v₁²+v₂²+⋯+vₙ²=∥v∥²
Squared length, in other words, is not a separate concept — it is a dot product in which both slots are filled by the same vector. Flipping this around gives an alternative expression for the norm: ∥v∥=√(v⋅v). Rather than defining magnitude independently and then discovering a coincidence, we can view the norm as something the dot product generates.
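A quick numerical illustration of the identity (NumPy assumed):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])

# Squared length is the dot product of v with itself.
print(np.dot(v, v))  # 1 + 4 + 4 = 9.0

# Taking the square root recovers the norm.
print(np.sqrt(np.dot(v, v)), np.linalg.norm(v))  # 3.0  3.0
```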
This observation has practical consequences. Several norm properties that would otherwise demand their own proofs fall out as corollaries of dot product rules. Scalar factoring implies ∥cv∥=∣c∣∥v∥ with no extra work. The triangle inequality ∥a+b∥≤∥a∥+∥b∥ follows once we control the size of a⋅b through Cauchy–Schwarz — again, a statement framed entirely in dot product language.
The dependence runs both ways. The geometric formula a⋅b=∥a∥∥b∥cosθ rebuilds the dot product from two norms and an angle. Neither concept stands alone: the dot product encodes magnitude, and magnitude participates in the dot product's geometric interpretation. Separating them into unrelated topics conceals the single algebraic mechanism they both rely on.
The Angle Between Vectors
Rearranging the geometric formula isolates the angle:
cosθ=(a⋅b)/(∥a∥∥b∥)
This formula is defined only when both vectors are nonzero, since division by zero is undefined. The angle θ lies in the interval [0,π], covering all possibilities from vectors pointing in the same direction (θ=0) to vectors pointing in exactly opposite directions (θ=π).
The right-hand side is the dot product of the two normalized vectors, â⋅b̂. Normalization strips away the magnitudes, leaving only the directional relationship. The cosine of the angle between two vectors depends solely on their directions, not on how long they are.
In ℝ² and ℝ³, this formula can be verified against the angle measured with a protractor. In higher dimensions, where geometric angles cannot be visualized, the formula serves as the definition of the angle between two vectors — extending a geometric concept into spaces where geometry alone cannot reach.
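A small helper makes the formula concrete. The sketch below assumes NumPy, and the name angle_between is an illustrative choice; the clip call is a defensive guard against floating-point values drifting just outside [−1, 1], a range the Cauchy–Schwarz inequality discussed below guarantees mathematically:

```python
import numpy as np

def angle_between(a, b):
    """Angle in [0, pi] between two nonzero vectors, from the dot product."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# In R^2 the result matches protractor geometry: 90 degrees.
print(np.degrees(angle_between(np.array([1.0, 0.0]), np.array([0.0, 1.0]))))

# In R^4 the same formula *defines* the angle: here cos(theta) = 1/2.
print(np.degrees(angle_between(np.array([1.0, 1.0, 0.0, 0.0]),
                               np.array([1.0, 0.0, 1.0, 0.0]))))  # 60.0
```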
Sign of the Dot Product
The sign of the dot product reveals the angular relationship between two vectors without requiring the angle itself to be computed. Since ∥a∥ and ∥b∥ are both positive for nonzero vectors, the sign of a⋅b is determined entirely by cosθ.
When a⋅b>0, the cosine is positive, which means θ lies in the interval (0,π/2). The vectors form an acute angle — they point in broadly the same direction.
When a⋅b=0, the cosine is zero, placing θ at exactly π/2. The vectors are perpendicular — neither has any component in the direction of the other.
When a⋅b<0, the cosine is negative, so θ lies in (π/2,π). The vectors form an obtuse angle — they point in broadly opposite directions.
This three-way classification is a fast diagnostic tool. Checking whether a dot product is positive, zero, or negative is often all that is needed to determine the geometric relationship between two vectors — no square roots, no inverse cosines, just the sign of a sum of products.
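As a sketch of the diagnostic in code (classify is a hypothetical helper name; NumPy assumed, and an exact zero test is used for clarity even though floating-point work usually calls for a tolerance):

```python
import numpy as np

def classify(a, b):
    """Acute, right, or obtuse from the sign of a.b alone."""
    d = np.dot(a, b)
    if d > 0:
        return "acute: broadly the same direction"
    if d < 0:
        return "obtuse: broadly opposite directions"
    return "right angle: orthogonal"

print(classify(np.array([1.0, 1.0]), np.array([2.0, 0.5])))   # acute
print(classify(np.array([1.0, 0.0]), np.array([0.0, 3.0])))   # right angle
print(classify(np.array([1.0, 1.0]), np.array([-2.0, 0.0])))  # obtuse
```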
Orthogonality
Two vectors are orthogonal when their dot product is zero:
a⋅b=0⟺a⊥b
In ℝ² and ℝ³, orthogonality corresponds to perpendicularity — the vectors meet at a right angle. In ℝⁿ for n>3, perpendicularity cannot be visualized, but the algebraic condition a⋅b=0 still serves as its definition. Orthogonality is the generalization of "right angle" to any number of dimensions.
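For instance, in ℝ⁴, where no right angle can be drawn, the algebraic test still settles the question (NumPy assumed):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0, -1.0])
b = np.array([2.0, -1.0, 5.0, 0.0])
print(np.dot(a, b))  # 2 - 2 + 0 + 0 = 0.0, so a and b are orthogonal
```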
The zero vector occupies a special position: 0⋅v=0 for every vector v, so the zero vector is orthogonal to everything. This consequence of the definition simplifies many statements — without it, theorems about orthogonal sets would need to exclude the zero vector as a separate case.
Orthogonality is far more than a geometric curiosity. Nonzero orthogonal vectors are linearly independent, and independent in an especially strong sense — projecting one onto the other yields zero, meaning neither contributes anything in the direction of the other. This idea scales up: orthogonal bases, orthogonal decompositions, and orthogonal complements form a central thread through the orthogonality section of linear algebra.
The Cauchy–Schwarz Inequality
The dot product of two vectors cannot grow arbitrarily large when their lengths are fixed. Cauchy–Schwarz makes this constraint precise:
∣a⋅b∣≤∥a∥∥b∥
The geometric angle formula offers the most transparent explanation. Writing a⋅b=∥a∥∥b∥cosθ and noting that ∣cosθ∣ never surpasses 1, the absolute dot product is automatically bounded by the product of the norms. This ceiling is reached only when cosθ hits ±1 — at θ=0 or θ=π — so equality corresponds to parallel vectors pointing along or against each other.
An entirely coordinate-based proof exists as well, requiring no notion of angle. Consider the expression ∥a−tb∥² for a variable scalar t. Because a squared norm is never negative, this expression defines a quadratic in t with no negative values. A real quadratic that stays non-negative must have a non-positive discriminant, and writing out that discriminant condition yields exactly the Cauchy–Schwarz bound. This argument extends to any setting where an inner product is defined, even when angles lack geometric substance.
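Spelled out (for b≠0; if b=0 both sides of the inequality are zero), the argument runs as follows. Expanding with the dot product rules gives

∥a−tb∥²=(a−tb)⋅(a−tb)=∥b∥²t²−2(a⋅b)t+∥a∥²≥0 for every t

A quadratic At²+Bt+C with A>0 that never dips below zero must satisfy B²−4AC≤0, which here reads

4(a⋅b)²−4∥a∥²∥b∥²≤0

Dividing by 4 and taking square roots gives ∣a⋅b∣≤∥a∥∥b∥.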
The inequality plays a structural role beyond bounding computations. Without it, the angle formula would be ill-posed: feeding (a⋅b)/(∥a∥∥b∥) into arccos requires the fraction to land in [−1,1], and Cauchy–Schwarz is what guarantees this. The norm's triangle inequality likewise depends on it — establishing ∥a+b∥≤∥a∥+∥b∥ involves controlling the mixed term a⋅b, which is exactly the job Cauchy–Schwarz performs.
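Both roles are easy to spot-check on random data. A minimal sketch assuming NumPy, with a small tolerance for floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    a = rng.standard_normal(6)
    b = rng.standard_normal(6)
    # Cauchy-Schwarz: |a.b| never exceeds the product of the norms.
    assert abs(np.dot(a, b)) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    # Triangle inequality, which the bound on a.b makes possible.
    assert np.linalg.norm(a + b) <= np.linalg.norm(a) + np.linalg.norm(b) + 1e-12
```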
Orthogonal Projection
It is frequently necessary to break a vector a into two pieces relative to a nonzero vector b: one piece aligned with b and another at a right angle to it. The dot product supplies the machinery for this decomposition.
Scalar Projection
The signed distance that a covers in the direction of b is captured by a single number:
comp_b a = (a⋅b)/∥b∥
When this quantity comes out positive, a tilts toward b. A negative result means a tilts away. Zero signals complete orthogonality — a contributes nothing at all in the direction of b.
Vector Projection
To obtain an actual vector rather than a bare number, the scalar projection is re-embedded along b:
proj_b a = ((a⋅b)/(b⋅b)) b = ((a⋅b)/∥b∥²) b
The coefficient (a⋅b)/(b⋅b) rescales b so that the output has the appropriate length and orientation. What results is a vector parallel to b whose norm equals ∣comp_b a∣.
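In code, both projections are one-liners. A sketch assuming NumPy; comp and proj are hypothetical helper names mirroring the notation above:

```python
import numpy as np

def comp(a, b):
    """Scalar projection of a onto nonzero b: signed length along b."""
    return np.dot(a, b) / np.linalg.norm(b)

def proj(a, b):
    """Vector projection of a onto nonzero b: the piece of a along b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])
print(comp(a, b))  # 3.0: a covers 3 units in the direction of b
print(proj(a, b))  # [3. 0.]: a vector along b with that signed length
```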
Orthogonal Decomposition
Removing the projected component from a leaves behind the perpendicular part:
a⊥ = a − proj_b a
Together, the two pieces reconstruct the original:
a = proj_b a + a⊥
The parallel component lies along b; the perpendicular component satisfies a⊥⋅b=0, a fact that follows by substituting the formula and simplifying. These two constituents are geometrically independent — adjusting one leaves the other unchanged, and each accounts for information the other entirely misses.
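A numerical sketch (NumPy assumed) confirms both facts at once: the two pieces rebuild a, and the leftover is orthogonal to b:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 1.0])

parallel = (np.dot(a, b) / np.dot(b, b)) * b  # proj_b a = [3.5, 3.5]
perp = a - parallel                           # a_perp   = [-0.5, 0.5]

print(parallel + perp)  # [3. 4.]: the decomposition reconstructs a
print(np.dot(perp, b))  # 0.0: the perpendicular part is orthogonal to b
```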
This two-part split reappears in more sophisticated forms throughout linear algebra. Projection onto a line generalizes to projection onto multi-dimensional subspaces. The Gram–Schmidt procedure repeats the same splitting step iteratively, peeling off parallel components one vector at a time to produce an orthogonal collection. Least-squares fitting identifies the closest approximation inside a subspace by projecting onto it. Each of these techniques, covered in the orthogonality section, traces back to the same principle at work here: partition a vector into what runs along a chosen direction and what runs across it.