Linear Transformations






Functions That Preserve Linear Structure

A linear transformation is a function between vector spaces that respects addition and scalar multiplication. Every matrix defines one, and every linear transformation between finite-dimensional spaces can be encoded as a matrix. This correspondence is the bridge between abstract maps and concrete computation — it turns geometric questions into algebraic ones and algebraic results into geometric insight.



What a Linear Transformation Is

A linear transformation is a function $T: V \to W$ between vector spaces that preserves the two fundamental operations. For all vectors $\mathbf{u}, \mathbf{v} \in V$ and all scalars $c, d$:

$$T(c\mathbf{u} + d\mathbf{v}) = cT(\mathbf{u}) + dT(\mathbf{v})$$


This single condition packages two requirements: $T$ preserves addition ($T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$) and $T$ preserves scalar multiplication ($T(c\mathbf{v}) = cT(\mathbf{v})$). A function satisfying both is called linear. A function violating either is not.
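The condition can be checked numerically at sample points. A minimal sketch, where the map $T$ and the test vectors are arbitrary illustrations chosen for this example:

```python
# Numeric spot-check of the linearity condition
# T(c*u + d*v) == c*T(u) + d*T(v) for a sample linear map.

def T(v):
    """A sample linear map R^2 -> R^2: T(x, y) = (2x + y, 3y)."""
    x, y = v
    return (2 * x + y, 3 * y)

def combo(c, u, d, v):
    """Return the linear combination c*u + d*v, componentwise."""
    return tuple(c * ui + d * vi for ui, vi in zip(u, v))

u, v = (1.0, 2.0), (-3.0, 0.5)
c, d = 4.0, -2.0

lhs = T(combo(c, u, d, v))        # T(c*u + d*v)
rhs = combo(c, T(u), d, T(v))     # c*T(u) + d*T(v)
assert lhs == rhs                 # the single condition holds here
```

A few sample points do not prove linearity, but a single failing point disproves it, which is how the non-examples below are dispatched.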

The space $V$ is the domain and $W$ is the codomain, both vector spaces over the same field. The terms "linear map," "linear operator" (when $V = W$), and "linear transformation" are all synonymous. The full set of properties that linearity entails — and the strategies for verifying or disproving it — are developed on their own page.

Examples in Rⁿ

The prototypical example is matrix multiplication: $T(\mathbf{x}) = A\mathbf{x}$ for a fixed $m \times n$ matrix $A$. Linearity follows from the distributive properties of matrix-vector multiplication: $A(c\mathbf{u} + d\mathbf{v}) = cA\mathbf{u} + dA\mathbf{v}$.

Several familiar operations are special cases. The zero transformation $T(\mathbf{v}) = \mathbf{0}$ sends every vector to the origin — it corresponds to the zero matrix. The identity transformation $T(\mathbf{v}) = \mathbf{v}$ leaves every vector unchanged — it corresponds to the identity matrix. The projection $T(x, y) = (x, 0)$ drops the second coordinate — it corresponds to $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. Rotation by a fixed angle $\theta$ in $\mathbb{R}^2$ is linear, with matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$.

In each case, linearity can be verified directly from the definition. The matrix formulation makes the verification automatic — every matrix-vector product is linear by construction.
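Two of the matrices above, applied as plain matrix-vector products. This is only an illustrative sketch; `matvec` is a helper defined here, not a library function:

```python
# The projection and rotation examples as matrix-vector products.
import math

def matvec(A, x):
    """Multiply a matrix A (given as a list of rows) by a vector x."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

projection = [[1, 0],
              [0, 0]]                 # drops the second coordinate

theta = math.pi / 2                   # rotate by 90 degrees
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

print(matvec(projection, (3, 5)))     # (3, 0)
rx, ry = matvec(rotation, (1, 0))     # e1 rotates onto e2 (up to rounding)
print(round(rx, 10), round(ry, 10))
```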

Non-Examples

Translation $T(\mathbf{v}) = \mathbf{v} + \mathbf{b}$ with $\mathbf{b} \neq \mathbf{0}$ is the most common non-example. It fails immediately: $T(\mathbf{0}) = \mathbf{b} \neq \mathbf{0}$, but every linear transformation must send $\mathbf{0}$ to $\mathbf{0}$.

The squaring function $T(x) = x^2$ from $\mathbb{R}$ to $\mathbb{R}$ fails additivity: $T(1 + 1) = 4$ but $T(1) + T(1) = 2$. The absolute value function $T(x) = |x|$ fails homogeneity: $T(-1 \cdot 2) = 2$ but $-1 \cdot T(2) = -2$. Norms $T(\mathbf{v}) = \|\mathbf{v}\|$ fail additivity: taking $\mathbf{u} \neq \mathbf{0}$ and its negation, $\|\mathbf{u} + (-\mathbf{u})\| = 0$ while $\|\mathbf{u}\| + \|-\mathbf{u}\| = 2\|\mathbf{u}\|$.

Affine maps $T(\mathbf{v}) = A\mathbf{v} + \mathbf{b}$ are linear only when $\mathbf{b} = \mathbf{0}$. The matrix part preserves linearity; the constant shift breaks it. Affine maps are important in geometry and optimization, but they are not linear transformations in the sense used here.
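Each failure above comes down to one concrete counterexample, which a few lines can exhibit directly. A sketch with hypothetical sample values:

```python
# Concrete failures of linearity for the non-examples.

# Squaring fails additivity: T(1 + 1) != T(1) + T(1).
sq = lambda x: x ** 2
assert sq(1 + 1) == 4 and sq(1) + sq(1) == 2

# Absolute value fails homogeneity: T(-1 * 2) != -1 * T(2).
assert abs(-1 * 2) == 2 and -1 * abs(2) == -2

# Translation by b != 0 fails the zero test: T(0) != 0.
b = (1.0, 1.0)
translate = lambda v: tuple(vi + bi for vi, bi in zip(v, b))
assert translate((0.0, 0.0)) != (0.0, 0.0)
```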

Examples Beyond Rⁿ

Linear transformations are not limited to matrix multiplication on column vectors. Any function between vector spaces that respects addition and scaling qualifies.

Differentiation $T(p) = p'$ on the polynomial space $\mathcal{P}_n$ is linear: $(p + q)' = p' + q'$ and $(cp)' = cp'$. Integration $T(f) = \int_a^x f(t)\,dt$ on $C[a, b]$ is linear by the linearity of the integral. The transpose map $T(A) = A^T$ on $\mathbb{R}^{n \times n}$ is linear: $(A + B)^T = A^T + B^T$ and $(cA)^T = cA^T$. The trace $T(A) = \operatorname{tr}(A)$ from $\mathbb{R}^{n \times n}$ to $\mathbb{R}$ is linear by additivity and scalar homogeneity of the trace.

These examples show that the concept reaches far beyond columns of numbers. Whenever a mathematical operation respects addition and scaling — and many fundamental operations do — it is a linear transformation, and the entire theory applies.
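The differentiation example becomes computational once polynomials are identified with their coefficient vectors. A sketch, assuming coefficients are listed lowest degree first:

```python
# Differentiation on P_3 as a linear map on coefficient vectors
# [a0, a1, a2, a3] representing a0 + a1*x + a2*x^2 + a3*x^3.

def derivative(coeffs):
    """Return the coefficients of p' given the coefficients of p."""
    # d/dx of a_k x^k is k * a_k x^(k-1); pad with 0 to keep the length.
    return [k * a for k, a in enumerate(coeffs)][1:] + [0]

p = [5, 3, 0, 2]          # p(x) = 5 + 3x + 2x^3
q = [1, -1, 4, 0]         # q(x) = 1 - x + 4x^2

# Additivity: (p + q)' == p' + q'
p_plus_q = [a + b for a, b in zip(p, q)]
lhs = derivative(p_plus_q)
rhs = [a + b for a, b in zip(derivative(p), derivative(q))]
assert lhs == rhs
```

Because `derivative` is linear in the coefficients, it could equally be written as a matrix acting on the coefficient vector, which is exactly the point of the next sections.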

Determined by Action on a Basis

A linear transformation is completely determined by what it does to a basis. If $\mathcal{B} = \{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is a basis for $V$ and the images $T(\mathbf{v}_1), \dots, T(\mathbf{v}_n)$ are specified, then $T(\mathbf{v})$ is determined for every vector $\mathbf{v} \in V$.

The reason is that every vector in $V$ has a unique expression $\mathbf{v} = c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n$, and linearity forces

$$T(\mathbf{v}) = c_1T(\mathbf{v}_1) + \cdots + c_nT(\mathbf{v}_n)$$


Conversely, any choice of images for the basis vectors — any $n$ vectors in $W$, with no constraints — defines a unique linear transformation. There are no compatibility conditions to satisfy; the basis images can be chosen freely.

This is the bridge to matrix representation. The columns of the matrix are precisely the images of the basis vectors, and the matrix encodes the entire transformation in a rectangular array of numbers.
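A sketch of this extension-by-linearity, using freely chosen (hypothetical) images of the standard basis of $\mathbb{R}^2$:

```python
# Extending a linear map from its action on the standard basis.
# Freely pick T(e1) = (1, 2) and T(e2) = (3, 4); these become the
# columns of the matrix of T.

images = [(1, 2), (3, 4)]

def T(v):
    """Extend linearly: T(c1*e1 + c2*e2) = c1*T(e1) + c2*T(e2)."""
    return tuple(sum(c * col[i] for c, col in zip(v, images))
                 for i in range(len(images[0])))

# T is now defined on every vector, e.g. (5, 6) = 5*e1 + 6*e2:
assert T((5, 6)) == (5 * 1 + 6 * 3, 5 * 2 + 6 * 4)   # (23, 34)
```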

Properties

Linearity has immediate consequences that go beyond the defining condition. The zero vector always maps to zero: $T(\mathbf{0}) = \mathbf{0}$. Negation is preserved: $T(-\mathbf{v}) = -T(\mathbf{v})$. Arbitrary linear combinations are preserved: $T\left(\sum c_i \mathbf{v}_i\right) = \sum c_i T(\mathbf{v}_i)$.

The composition of two linear transformations is linear. If $T: U \to V$ and $S: V \to W$ are both linear, then $S \circ T: U \to W$ satisfies $(S \circ T)(c\mathbf{u} + d\mathbf{v}) = c(S \circ T)(\mathbf{u}) + d(S \circ T)(\mathbf{v})$. When both maps are represented by matrices, composition corresponds to matrix multiplication.
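The composition-equals-product fact can be checked concretely. A $2 \times 2$ sketch with two hypothetical maps, a scaling followed by a rotation:

```python
# Composing linear maps vs. multiplying their matrices (2x2 case).

def matvec(A, x):
    """Apply matrix A (list of rows) to vector x."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

def matmul(S, T):
    """Product S @ T for 2x2 matrices given as lists of rows."""
    return [[sum(S[i][k] * T[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[0, -1], [1, 0]]                 # rotate by 90 degrees
T = [[2, 0], [0, 3]]                  # scale the axes by 2 and 3

x = (1.0, 1.0)
composed = matvec(S, matvec(T, x))    # apply T first, then S
product  = matvec(matmul(S, T), x)    # apply the single matrix S @ T
assert composed == product            # same vector either way
```

Note the order: $S \circ T$ means "$T$ first," matching the matrix product $ST$ acting on a column vector from the right.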

A linear transformation is invertible if and only if it is bijective — both injective (trivial kernel) and surjective (image equals the codomain). The inverse of a linear transformation is itself linear. The full development of these properties, including strategies for proving and disproving linearity, is on its own page.

The Matrix Connection

Every linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ can be written as $T(\mathbf{x}) = A\mathbf{x}$ for a unique $m \times n$ matrix $A$ whose columns are the images of the standard basis vectors: $A = [T(\mathbf{e}_1) \; T(\mathbf{e}_2) \; \cdots \; T(\mathbf{e}_n)]$.

This gives a one-to-one correspondence between linear maps $\mathbb{R}^n \to \mathbb{R}^m$ and $m \times n$ matrices. Every property of the transformation — its rank, determinant (when square), eigenvalues, image and kernel — can be read from the matrix. And every matrix operation — multiplication, inversion, decomposition — has a transformation-level interpretation.
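The recipe "columns are the images of the standard basis" is short enough to implement directly. A sketch; the map given as a formula here is a hypothetical example:

```python
# Recovering the standard matrix of T by feeding it e1, ..., en.

def standard_matrix(T, n):
    """Return the matrix of T: R^n -> R^m as a list of rows.

    The i-th column is T(e_i), so we apply T to each standard basis
    vector and transpose the resulting list of columns into rows.
    """
    cols = [T(tuple(1 if j == i else 0 for j in range(n)))
            for i in range(n)]
    return [list(row) for row in zip(*cols)]

# A map known only through its formula:
T = lambda v: (v[0] + 2 * v[1], 3 * v[0] - v[1])

A = standard_matrix(T, 2)
assert A == [[1, 2], [3, -1]]
```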

For transformations between abstract vector spaces, the matrix depends on the choice of bases for both domain and codomain. Changing the bases changes the matrix but not the transformation. The relationship between different matrix representations of the same map is governed by similarity.

Geometry

In $\mathbb{R}^2$ and $\mathbb{R}^3$, linear transformations have vivid geometric meanings. Rotations spin every vector around the origin by a fixed angle. Reflections mirror across a line or plane through the origin. Projections flatten space onto a subspace. Shears tilt one axis relative to another. Scalings stretch or compress along coordinate directions.

Each of these transformations has an explicit matrix that encodes its geometric action. The determinant of the matrix measures how the transformation scales areas or volumes: $|\det(A)|$ is the scaling factor, and the sign of $\det(A)$ indicates whether orientation is preserved ($+$) or reversed ($-$). Orthogonal matrices, which necessarily have $\det = \pm 1$, preserve all lengths and angles — they are the rigid motions of linear algebra.
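The area-scaling role of the determinant in a small sketch, using a hypothetical $2 \times 2$ shear-and-scale:

```python
# The determinant as an area-scaling (and orientation) factor in 2D.

def det2(A):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[2, 1],
     [0, 3]]
# A maps the unit square (area 1) to the parallelogram spanned by the
# columns (2, 0) and (1, 3); its area is |det(A)| = 6.
assert det2(A) == 6

# Swapping the columns reverses orientation: the sign of det flips.
B = [[1, 2],
     [3, 0]]
assert det2(B) == -6
```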

The singular value decomposition reveals the hidden geometry of any matrix: every linear transformation is a rotation, followed by a coordinate-axis scaling, followed by another rotation. Even the most complicated-looking matrix is just three simple geometric operations composed together.