An orthogonal set consists of vectors that are pairwise perpendicular. An orthonormal set adds the requirement that each vector has unit length. These sets are automatically linearly independent, and when they form a basis, coordinates are computed by dot products alone — no system solving, no row reduction, no matrix inversion.
Orthogonal Sets
A set of vectors {v1,v2,…,vk} is orthogonal if every pair has dot product zero:
vi⋅vj = 0 for all i ≠ j
The vectors in an orthogonal set must all be nonzero — the zero vector is excluded because including it would trivialize the structure (every vector is orthogonal to 0, so 0 carries no directional information).
For example, {(1,0,0),(0,2,0),(0,0,−3)} is orthogonal in R3: every pair of distinct vectors has dot product zero. The vectors need not have the same length, and their lengths can be anything nonzero.
A less obvious example: {(1,1,1),(1,−2,1),(1,0,−1)}. Checking: (1)(1)+(1)(−2)+(1)(1)=0, (1)(1)+(1)(0)+(1)(−1)=0, (1)(1)+(−2)(0)+(1)(−1)=0. All three pairwise products vanish — the set is orthogonal despite none of the vectors being aligned with the coordinate axes.
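This pairwise check is mechanical, so it is easy to automate. A minimal sketch in plain Python (the helper names `dot` and `is_orthogonal` are illustrative, not standard library functions):

```python
from itertools import combinations

# The less obvious example from above.
vectors = [(1, 1, 1), (1, -2, 1), (1, 0, -1)]

def dot(u, v):
    """Dot product of two same-length tuples."""
    return sum(a * b for a, b in zip(u, v))

def is_orthogonal(vecs):
    """True if every pair of distinct vectors has dot product zero."""
    return all(dot(u, v) == 0 for u, v in combinations(vecs, 2))

print(is_orthogonal(vectors))  # prints True for this set
```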
Orthogonal Sets Are Independent
Every orthogonal set of nonzero vectors is linearly independent. The proof is short and reveals exactly why orthogonality is so powerful.
Suppose c1v1+c2v2+⋯+ckvk=0. Dot both sides with vj:
c1(v1⋅vj)+c2(v2⋅vj)+⋯+ck(vk⋅vj)=0
Every term with i ≠ j vanishes because vi⋅vj = 0. Only the j-th term survives: cj∥vj∥² = 0. Since vj ≠ 0, ∥vj∥² > 0, so cj = 0. This works for every j, so all coefficients are zero.
The key mechanism is that orthogonality isolates each coefficient. Dotting with vj kills every other term, leaving cj alone. This is why orthogonal bases make coordinates computable by individual dot products — the same isolation principle that proves independence also extracts coordinates.
Orthonormal Sets
An orthonormal set is an orthogonal set where every vector additionally has unit length: ∥vi∥=1 for all i. The two conditions together can be written compactly using the Kronecker delta:
vi⋅vj = δij = { 1 if i = j;  0 if i ≠ j }
Any orthogonal set can be made orthonormal by normalizing each vector: v̂i = vi/∥vi∥. The directions are preserved; only the lengths change to 1.
The standard basis {e1,e2,…,en} for Rn is orthonormal: ei⋅ej = δij because each basis vector has a single 1 in a different position. It is the simplest orthonormal set, but far from the only one.
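Normalization is a one-liner per vector. A small sketch, reusing the orthogonal set from the earlier example (`normalize` is an illustrative helper name):

```python
import math

# The orthogonal (but not orthonormal) set from the earlier example.
vectors = [(1, 1, 1), (1, -2, 1), (1, 0, -1)]

def normalize(v):
    """Scale v to unit length; the direction is unchanged."""
    n = math.sqrt(sum(a * a for a in v))
    return tuple(a / n for a in v)

unit = [normalize(v) for v in vectors]
# Each vector in `unit` now has length 1 (up to floating point),
# and pairwise dot products are still zero: the set is orthonormal.
```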
Orthogonal and Orthonormal Bases
An orthogonal basis is an orthogonal set that spans the space. An orthonormal basis is an orthonormal set that spans the space.
In Rn, an orthogonal set of n nonzero vectors is automatically a basis — independence is guaranteed by orthogonality, and n independent vectors in an n-dimensional space automatically span. So the only check needed is: do I have n pairwise-orthogonal nonzero vectors? If yes, they form a basis.
Orthonormal bases exist for every finite-dimensional inner product space. The Gram-Schmidt process constructs one from any given basis. This means the computational advantages of orthonormal bases are always available — any space that has a basis at all has an orthonormal one.
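The construction can be sketched in a few lines. This is classical Gram-Schmidt for illustration only — in floating point, modified Gram-Schmidt or Householder QR is numerically safer — and the function name is an assumption:

```python
import math

def gram_schmidt(basis):
    """Orthonormalize a list of linearly independent vectors.

    Subtract from each vector its components along the orthonormal
    vectors produced so far, then normalize what remains.
    """
    ortho = []
    for v in basis:
        w = list(v)
        for q in ortho:
            c = sum(a * b for a, b in zip(w, q))   # coefficient w . q
            w = [a - c * b for a, b in zip(w, q)]  # remove that component
        norm = math.sqrt(sum(a * a for a in w))
        ortho.append(tuple(a / norm for a in w))
    return ortho

qs = gram_schmidt([(1, 1, 0), (1, 0, 1), (0, 1, 1)])
```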
Coordinates via Dot Products
The defining computational advantage of orthogonal bases is that coordinates are extracted by individual dot products.
For an orthogonal basis {v1,…,vn}, the coordinate of x along vi is
ci = (x⋅vi) / (vi⋅vi)
For an orthonormal basis {q1,…,qn}, the denominator is 1, and the formula simplifies to
ci=x⋅qi
No linear system needs to be solved. No matrix needs to be inverted. Each coordinate is computed independently by a single dot product.
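A short sketch of the orthogonal-basis formula, using the orthogonal set from earlier as the basis (the vector x here is an arbitrary choice):

```python
# Orthogonal (not normalized) basis from the earlier example.
basis = [(1, 1, 1), (1, -2, 1), (1, 0, -1)]
x = (2, 0, 1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Each coordinate is one dot product divided by one squared length.
coords = [dot(x, v) / dot(v, v) for v in basis]

# The weighted sum of basis vectors reconstructs x exactly.
recon = [sum(c * v[i] for c, v in zip(coords, basis)) for i in range(3)]
```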
Worked Example
Let {q1,q2,q3} be an orthonormal basis for R3 with q1 = (1/√2)(1,1,0), q2 = (1/√6)(1,−1,2), q3 = (1/√3)(−1,1,1).
For x = (3,1,2): c1 = x⋅q1 = (1/√2)(3+1+0) = 4/√2 = 2√2, c2 = x⋅q2 = (1/√6)(3−1+4) = 6/√6 = √6, c3 = x⋅q3 = (1/√3)(−3+1+2) = 0.
So x = 2√2 q1 + √6 q2 + 0⋅q3. The zero third coordinate means x has no component in the q3 direction — it is orthogonal to q3.
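The worked example checks out numerically; a sketch assuming NumPy is available:

```python
import numpy as np

q1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
q2 = np.array([1.0, -1.0, 2.0]) / np.sqrt(6)
q3 = np.array([-1.0, 1.0, 1.0]) / np.sqrt(3)
x = np.array([3.0, 1.0, 2.0])

# Each coordinate is a single dot product; no system is solved.
c = [x @ q for q in (q1, q2, q3)]

# The combination c1*q1 + c2*q2 + c3*q3 reconstructs x.
recon = c[0] * q1 + c[1] * q2 + c[2] * q3
```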
Orthogonal Matrices
An n×n matrix Q is orthogonal if its columns form an orthonormal set. This is equivalent to QᵀQ = QQᵀ = I, which is equivalent to Q⁻¹ = Qᵀ.
The rows of an orthogonal matrix also form an orthonormal set — for a square matrix, orthonormal columns and orthonormal rows come together.
The determinant of an orthogonal matrix is ±1, since 1 = det(I) = det(QᵀQ) = det(Q)². When det(Q) = +1, the matrix represents a rotation. When det(Q) = −1, it represents a rotation composed with a reflection.
The defining geometric property is that orthogonal matrices preserve the dot product: (Qu)⋅(Qv) = (Qu)ᵀ(Qv) = uᵀQᵀQv = uᵀv = u⋅v. Preserving the dot product automatically preserves lengths (∥Qx∥ = ∥x∥), angles, and distances. An orthogonal matrix is a rigid motion of Rn — it rearranges vectors without distorting any geometric relationship.
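A quick numerical illustration with a 2×2 rotation matrix (the angle is chosen arbitrarily), assuming NumPy:

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([3.0, 4.0])

gram = Q.T @ Q                        # should be the identity
det = np.linalg.det(Q)                # +1: a pure rotation
length_before = np.linalg.norm(x)
length_after = np.linalg.norm(Q @ x)  # unchanged by Q
```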
Matrices with Orthonormal Columns
An m×n matrix Q with m > n can have orthonormal columns without being square. Such a matrix satisfies QᵀQ = In but QQᵀ ≠ Im (the product QQᵀ is m×m and has rank n < m).
The matrix QQᵀ is the projection matrix onto the column space of Q. For any b∈Rm, the vector QQᵀb is the orthogonal projection of b onto the n-dimensional subspace spanned by the columns of Q.
These rectangular matrices with orthonormal columns are the natural output of the Gram-Schmidt process applied to the columns of a matrix. If A is m×n with independent columns, Gram-Schmidt produces an m×n matrix Q with orthonormal columns and an n×n upper triangular matrix R such that A=QR. This is the thin QR decomposition.
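In NumPy, `numpy.linalg.qr` returns the thin factorization by default; a sketch with an arbitrary 4×2 example matrix:

```python
import numpy as np

# A 4x2 matrix with independent columns (chosen for illustration).
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

Q, R = np.linalg.qr(A)  # Q is 4x2 with orthonormal columns, R is 2x2 upper triangular
P = Q @ Q.T             # projection onto the column space of A

b = np.array([1.0, 2.0, 3.0, 4.0])
p = P @ b               # orthogonal projection of b onto col(A)
# The residual b - p is orthogonal to every column of A.
```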
Parseval's Identity and Bessel's Inequality
For an orthonormal basis {q1,…,qn} of Rn and any vector x, the coordinates ci=x⋅qi satisfy Parseval's identity:
∥x∥² = c1² + c2² + ⋯ + cn² = ∑i=1..n (x⋅qi)²
The squared length of x equals the sum of the squares of its coordinates. This is the Pythagorean theorem applied to the orthonormal decomposition x=c1q1+⋯+cnqn.
When the orthonormal set does not span — when k<n — the sum accounts for only part of the length:
∑i=1..k (x⋅qi)² ≤ ∥x∥²
This is Bessel's inequality. The left side is the squared length of the projection of x onto Span{q1,…,qk}. The deficit ∥x∥² − ∑(x⋅qi)² is the squared length of the component orthogonal to the span. Equality holds if and only if x is already in the span, leaving no perpendicular remainder.
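Both identities are easy to confirm numerically with the orthonormal basis from the worked example; truncating the sum to q1 alone is an arbitrary illustration of the partial case:

```python
import numpy as np

q1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
q2 = np.array([1.0, -1.0, 2.0]) / np.sqrt(6)
q3 = np.array([-1.0, 1.0, 1.0]) / np.sqrt(3)
x = np.array([3.0, 1.0, 2.0])

# Parseval: summing over the full basis recovers ||x||^2 exactly.
full = sum((x @ q) ** 2 for q in (q1, q2, q3))

# Bessel: a partial sum can only fall short of ||x||^2.
partial = (x @ q1) ** 2
```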