

Orthogonal Sets






Bases Where Coordinates Come Free

An orthogonal set consists of vectors that are pairwise perpendicular. An orthonormal set adds the requirement that each vector has unit length. These sets are automatically linearly independent, and when they form a basis, coordinates are computed by dot products alone — no system solving, no row reduction, no matrix inversion.



Orthogonal Sets

A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is orthogonal if every pair has dot product zero:

$$\mathbf{v}_i \cdot \mathbf{v}_j = 0 \quad \text{for all } i \neq j$$


The vectors in an orthogonal set must all be nonzero — the zero vector is excluded because including it would trivialize the structure (every vector is orthogonal to $\mathbf{0}$, so $\mathbf{0}$ carries no directional information).

For example, $\{(1, 0, 0), (0, 2, 0), (0, 0, -3)\}$ is orthogonal in $\mathbb{R}^3$: every pair of distinct vectors has dot product zero. The vectors need not have the same length, and their lengths can be anything nonzero.

A less obvious example: $\{(1, 1, 1), (1, -2, 1), (1, 0, -1)\}$. Checking: $(1)(1) + (1)(-2) + (1)(1) = 0$, $(1)(1) + (1)(0) + (1)(-1) = 0$, and $(1)(1) + (-2)(0) + (1)(-1) = 0$. All three pairwise products vanish — the set is orthogonal despite none of the vectors being aligned with the coordinate axes.
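The pairwise check is purely mechanical, so it is easy to automate. A minimal sketch in Python (the helper names `dot` and `is_orthogonal_set` are illustrative, not part of the text):

```python
from itertools import combinations

def dot(u, v):
    """Dot product of two vectors given as tuples or lists."""
    return sum(a * b for a, b in zip(u, v))

def is_orthogonal_set(vectors):
    """True if every pair of distinct vectors has dot product zero."""
    return all(dot(u, v) == 0 for u, v in combinations(vectors, 2))

S = [(1, 1, 1), (1, -2, 1), (1, 0, -1)]
print(is_orthogonal_set(S))  # True
```

For $k$ vectors this checks all $\binom{k}{2}$ pairs, exactly as the hand computation above does.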

Orthogonal Sets Are Independent

Every orthogonal set of nonzero vectors is linearly independent. The proof is short and reveals exactly why orthogonality is so powerful.

Suppose $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$. Dot both sides with $\mathbf{v}_j$:

$$c_1(\mathbf{v}_1 \cdot \mathbf{v}_j) + c_2(\mathbf{v}_2 \cdot \mathbf{v}_j) + \cdots + c_k(\mathbf{v}_k \cdot \mathbf{v}_j) = 0$$


Every term with $i \neq j$ vanishes because $\mathbf{v}_i \cdot \mathbf{v}_j = 0$. Only the $j$-th term survives: $c_j \|\mathbf{v}_j\|^2 = 0$. Since $\mathbf{v}_j \neq \mathbf{0}$, we have $\|\mathbf{v}_j\|^2 > 0$, so $c_j = 0$. This works for every $j$, so all coefficients are zero.

The key mechanism is that orthogonality isolates each coefficient. Dotting with $\mathbf{v}_j$ kills every other term, leaving $c_j$ alone. This is why orthogonal bases make coordinates computable by individual dot products — the same isolation principle that proves independence also extracts coordinates.

Orthonormal Sets

An orthonormal set is an orthogonal set in which every vector additionally has unit length: $\|\mathbf{v}_i\| = 1$ for all $i$. The two conditions together can be written compactly using the Kronecker delta:

$$\mathbf{v}_i \cdot \mathbf{v}_j = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$


Any orthogonal set can be made orthonormal by normalizing each vector: $\hat{\mathbf{v}}_i = \mathbf{v}_i / \|\mathbf{v}_i\|$. The directions are preserved; only the lengths change to $1$.
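Normalization is a single scaling per vector. A short sketch, reusing the orthogonal set from the earlier example (the helper name `normalize` is hypothetical):

```python
import math

def normalize(v):
    """Scale a nonzero vector to unit length, preserving its direction."""
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

# The orthogonal set from earlier, made orthonormal:
S = [(1, 1, 1), (1, -2, 1), (1, 0, -1)]
Q = [normalize(v) for v in S]
# Each vector in Q now has length 1; pairwise dot products are still 0.
```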

The standard basis $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$ for $\mathbb{R}^n$ is orthonormal: $\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$ because each basis vector has a single $1$ in a different position. It is the simplest orthonormal set, but far from the only one.

Orthogonal and Orthonormal Bases

An orthogonal basis is an orthogonal set that spans the space. An orthonormal basis is an orthonormal set that spans the space.

In $\mathbb{R}^n$, an orthogonal set of $n$ nonzero vectors is automatically a basis — independence is guaranteed by orthogonality, and $n$ independent vectors in an $n$-dimensional space automatically span. So the only check needed is: do I have $n$ pairwise-orthogonal nonzero vectors? If yes, they form a basis.

Orthonormal bases exist for every finite-dimensional inner product space. The Gram-Schmidt process constructs one from any given basis. This means the computational advantages of orthonormal bases are always available — any space that has a basis at all has an orthonormal one.

Coordinates via Dot Products

The defining computational advantage of orthogonal bases is that coordinates are extracted by individual dot products.

For an orthogonal basis $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$, the coordinate of $\mathbf{x}$ along $\mathbf{v}_i$ is

$$c_i = \frac{\mathbf{x} \cdot \mathbf{v}_i}{\mathbf{v}_i \cdot \mathbf{v}_i}$$


For an orthonormal basis $\{\mathbf{q}_1, \dots, \mathbf{q}_n\}$, the denominator is $1$, and the formula simplifies to

$$c_i = \mathbf{x} \cdot \mathbf{q}_i$$


No linear system needs to be solved. No matrix needs to be inverted. Each coordinate is computed independently by a single dot product.

Worked Example


Let $\{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3\}$ be an orthonormal basis for $\mathbb{R}^3$ with $\mathbf{q}_1 = \frac{1}{\sqrt{2}}(1, 1, 0)$, $\mathbf{q}_2 = \frac{1}{\sqrt{6}}(1, -1, 2)$, $\mathbf{q}_3 = \frac{1}{\sqrt{3}}(-1, 1, 1)$.

For $\mathbf{x} = (3, 1, 2)$: $c_1 = \mathbf{x} \cdot \mathbf{q}_1 = \frac{1}{\sqrt{2}}(3 + 1 + 0) = \frac{4}{\sqrt{2}} = 2\sqrt{2}$, $c_2 = \mathbf{x} \cdot \mathbf{q}_2 = \frac{1}{\sqrt{6}}(3 - 1 + 4) = \frac{6}{\sqrt{6}} = \sqrt{6}$, and $c_3 = \mathbf{x} \cdot \mathbf{q}_3 = \frac{1}{\sqrt{3}}(-3 + 1 + 2) = 0$.

So $\mathbf{x} = 2\sqrt{2}\,\mathbf{q}_1 + \sqrt{6}\,\mathbf{q}_2 + 0 \cdot \mathbf{q}_3$. The zero third coordinate means $\mathbf{x}$ has no component in the $\mathbf{q}_3$ direction — it is orthogonal to $\mathbf{q}_3$.
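The worked example can be replayed numerically: each coordinate is one dot product, and summing $c_i \mathbf{q}_i$ recovers $\mathbf{x}$. A sketch (variable names are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s2, s6, s3 = math.sqrt(2), math.sqrt(6), math.sqrt(3)
q1 = (1 / s2, 1 / s2, 0)
q2 = (1 / s6, -1 / s6, 2 / s6)
q3 = (-1 / s3, 1 / s3, 1 / s3)
x = (3, 1, 2)

# Each coordinate is a single dot product -- no system solving.
coords = [dot(x, q) for q in (q1, q2, q3)]  # approx [2*sqrt(2), sqrt(6), 0]

# Reconstruct x = c1*q1 + c2*q2 + c3*q3.
recon = tuple(sum(c * q[i] for c, q in zip(coords, (q1, q2, q3)))
              for i in range(3))
```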

Orthogonal Matrices

An $n \times n$ matrix $Q$ is orthogonal if its columns form an orthonormal set. This is equivalent to $Q^TQ = QQ^T = I$, which in turn is equivalent to $Q^{-1} = Q^T$.

The rows of an orthogonal matrix also form an orthonormal set: $Q^TQ = I$ says the columns are orthonormal, and $QQ^T = I$ says the same of the rows, so the two properties come together.

The determinant of an orthogonal matrix is $\pm 1$, since $1 = \det(I) = \det(Q^TQ) = \det(Q)^2$. When $\det(Q) = +1$, the matrix represents a rotation. When $\det(Q) = -1$, it represents a rotation composed with a reflection.

The defining geometric property is that orthogonal matrices preserve the dot product: $(Q\mathbf{u}) \cdot (Q\mathbf{v}) = (Q\mathbf{u})^T(Q\mathbf{v}) = \mathbf{u}^TQ^TQ\mathbf{v} = \mathbf{u}^T\mathbf{v} = \mathbf{u} \cdot \mathbf{v}$. Preserving the dot product automatically preserves lengths ($\|Q\mathbf{x}\| = \|\mathbf{x}\|$), angles, and distances. An orthogonal matrix is a rigid motion of $\mathbb{R}^n$ — it rearranges vectors without distorting any geometric relationship.
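A quick numeric check of dot-product preservation, using a 2D rotation as the orthogonal matrix $Q$ (the angle and test vectors are arbitrary illustrative choices):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

t = 0.7  # arbitrary angle; rotation matrices have det = +1
Q = [(math.cos(t), -math.sin(t)),
     (math.sin(t),  math.cos(t))]

u, v = (3.0, -1.0), (2.0, 5.0)
Qu, Qv = matvec(Q, u), matvec(Q, v)

# Dot products (hence lengths and angles) survive the transformation:
print(abs(dot(Qu, Qv) - dot(u, v)) < 1e-12)  # True
```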

Matrices with Orthonormal Columns

An $m \times n$ matrix $Q$ with $m > n$ can have orthonormal columns without being square. Such a matrix satisfies $Q^TQ = I_n$ but $QQ^T \neq I_m$ (the product $QQ^T$ is $m \times m$ and has rank $n < m$).

The matrix $QQ^T$ is the projection matrix onto the column space of $Q$. For any $\mathbf{b} \in \mathbb{R}^m$, the vector $QQ^T\mathbf{b}$ is the orthogonal projection of $\mathbf{b}$ onto the $n$-dimensional subspace spanned by the columns of $Q$.

These rectangular matrices with orthonormal columns are the natural output of the Gram-Schmidt process applied to the columns of a matrix. If $A$ is $m \times n$ with independent columns, Gram-Schmidt produces an $m \times n$ matrix $Q$ with orthonormal columns and an $n \times n$ upper triangular matrix $R$ such that $A = QR$. This is the thin QR decomposition.
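A compact sketch of classical Gram-Schmidt producing the thin QR factors from a list of independent columns. The function name and the columns-as-tuples representation are illustrative assumptions, and the classical variant is shown for clarity rather than the numerically safer modified Gram-Schmidt:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt_qr(cols):
    """Thin QR of a matrix given by its independent columns.

    Returns Q (orthonormal columns, as lists) and an upper-triangular
    R (as a list of rows) with A = QR.
    """
    n = len(cols)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        w = list(a)
        for i, q in enumerate(Q):
            R[i][j] = dot(q, a)                 # component of a along q_i
            w = [wk - R[i][j] * qk for wk, qk in zip(w, q)]
        R[j][j] = math.sqrt(dot(w, w))          # length of the remainder
        Q.append([wk / R[j][j] for wk in w])    # normalize the remainder
    return Q, R

# A 3x2 example: two independent columns in R^3.
Q, R = gram_schmidt_qr([(1, 1, 0), (1, 0, 1)])
```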

Parseval's Identity and Bessel's Inequality

For an orthonormal basis $\{\mathbf{q}_1, \dots, \mathbf{q}_n\}$ of $\mathbb{R}^n$ and any vector $\mathbf{x}$, the coordinates $c_i = \mathbf{x} \cdot \mathbf{q}_i$ satisfy Parseval's identity:

$$\|\mathbf{x}\|^2 = c_1^2 + c_2^2 + \cdots + c_n^2 = \sum_{i=1}^{n} (\mathbf{x} \cdot \mathbf{q}_i)^2$$


The squared length of $\mathbf{x}$ equals the sum of the squares of its coordinates. This is the Pythagorean theorem applied to the orthonormal decomposition $\mathbf{x} = c_1\mathbf{q}_1 + \cdots + c_n\mathbf{q}_n$.

When the orthonormal set $\{\mathbf{q}_1, \dots, \mathbf{q}_k\}$ does not span — when $k < n$ — the sum accounts for only part of the length:

$$\sum_{i=1}^{k} (\mathbf{x} \cdot \mathbf{q}_i)^2 \leq \|\mathbf{x}\|^2$$


This is Bessel's inequality. The left side is the squared length of the projection of $\mathbf{x}$ onto $\text{Span}\{\mathbf{q}_1, \dots, \mathbf{q}_k\}$. The deficit $\|\mathbf{x}\|^2 - \sum_{i=1}^{k}(\mathbf{x} \cdot \mathbf{q}_i)^2$ is the squared length of the component orthogonal to the span. Equality holds if and only if $\mathbf{x}$ is already in the span, leaving no perpendicular remainder.
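Both statements are easy to verify numerically with the orthonormal basis from the worked example. The test vector $(1, 2, 3)$ here is a fresh illustrative choice, with $\|\mathbf{x}\|^2 = 1 + 4 + 9 = 14$:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s2, s6, s3 = math.sqrt(2), math.sqrt(6), math.sqrt(3)
q1 = (1 / s2, 1 / s2, 0)
q2 = (1 / s6, -1 / s6, 2 / s6)
q3 = (-1 / s3, 1 / s3, 1 / s3)
x = (1, 2, 3)

# Parseval: summing over the full basis recovers ||x||^2 = 14.
full = sum(dot(x, q) ** 2 for q in (q1, q2, q3))

# Bessel: summing over only q1, q2 gives the squared length of the
# projection onto their span, which falls short of 14 here.
partial = sum(dot(x, q) ** 2 for q in (q1, q2))
```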