A linear transformation is a function between vector spaces that respects addition and scalar multiplication. Every matrix defines one, and every linear transformation between finite-dimensional spaces can be encoded as a matrix. This correspondence is the bridge between abstract maps and concrete computation — it turns geometric questions into algebraic ones and algebraic results into geometric insight.
What a Linear Transformation Is
A linear transformation is a function T:V→W between vector spaces that preserves the two fundamental operations. For all vectors u,v∈V and all scalars c,d:
T(cu+dv)=cT(u)+dT(v)
This single condition packages two requirements: T preserves addition (T(u+v)=T(u)+T(v)) and T preserves scalar multiplication (T(cv)=cT(v)). A function satisfying both is called linear. A function violating either is not.
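With vectors represented numerically, the defining condition can be checked directly. A small NumPy sketch — the matrix and the test vectors here are arbitrary illustrative choices:

```python
import numpy as np

# Check T(cu + dv) = c T(u) + d T(v) for a matrix map T(x) = Ax.
# A, u, v, c, d are arbitrary choices for illustration.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def T(x):
    return A @ x

u = np.array([1.0, -2.0])
v = np.array([4.0, 0.5])
c, d = 3.0, -1.5

lhs = T(c * u + d * v)
rhs = c * T(u) + d * T(v)
print(np.allclose(lhs, rhs))  # True: matrix maps satisfy the condition
```

A numerical check like this can only illustrate linearity, not prove it; the proof is the algebraic identity A(cu+dv)=cAu+dAv.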
The space V is the domain and W is the codomain, both vector spaces over the same field. The terms "linear map," "linear operator" (when V=W), and "linear transformation" are all synonymous. The full set of properties that linearity entails — and the strategies for verifying or disproving it — are developed on their own page.
Examples in Rⁿ
The prototypical example is matrix multiplication: T(x)=Ax for a fixed m×n matrix A. Linearity follows from the distributive properties of matrix-vector multiplication: A(cu+dv)=cAu+dAv.
Several familiar operations are special cases. The zero transformation T(v)=0 sends every vector to the origin — it corresponds to the zero matrix. The identity transformation T(v)=v leaves every vector unchanged — it corresponds to the identity matrix. The projection T(x,y)=(x,0) drops the second coordinate — it corresponds to the matrix [1 0; 0 0]. Rotation by a fixed angle θ in R² is linear, with matrix [cos θ  −sin θ; sin θ  cos θ].
In each case, linearity can be verified directly from the definition. The matrix formulation makes the verification automatic — every matrix-vector product is linear by construction.
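The projection and rotation matrices above can be exercised directly; a short sketch, with the angle and test vectors chosen for illustration:

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees (illustrative choice)

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # projection onto the x-axis

print(R @ np.array([1.0, 0.0]))  # approximately (0, 1): e1 rotated a quarter turn
print(P @ np.array([3.0, 7.0]))  # (3, 0): the second coordinate is dropped
```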
Non-Examples
Translation T(v)=v+b with b≠0 is the most common non-example. It fails immediately: T(0)=b≠0, but every linear transformation must send 0 to 0.
The squaring function T(x)=x² from R to R fails additivity: T(1+1)=4 but T(1)+T(1)=2. The absolute value function T(x)=|x| fails homogeneity: T(−1⋅2)=2 but −1⋅T(2)=−2. Norms T(v)=∥v∥ fail additivity by the triangle inequality.
Affine maps T(v)=Av+b are linear only when b=0. The matrix part preserves linearity; the constant shift breaks it. Affine maps are important in geometry and optimization, but they are not linear transformations in the sense used here.
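The failure of translation is easy to observe numerically; a minimal sketch, with b chosen arbitrarily:

```python
import numpy as np

# Translation T(v) = v + b is affine, not linear, when b != 0.
b = np.array([1.0, 1.0])

def T(v):
    return v + b

zero = np.zeros(2)
print(T(zero))  # (1, 1): a linear map would have to send 0 to 0

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])
print(np.allclose(T(u + v), T(u) + T(v)))  # False: additivity fails (b is added twice)
```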
Examples Beyond Rⁿ
Linear transformations are not limited to matrix multiplication on column vectors. Any function between vector spaces that respects addition and scaling qualifies.
Differentiation T(p)=p′ on the polynomial space Pₙ is linear: (p+q)′=p′+q′ and (cp)′=cp′. Integration T(f)=∫ₐˣ f(t) dt on C[a,b] is linear by the linearity of the integral. The transpose map T(A)=Aᵀ on the space of n×n matrices is linear: (A+B)ᵀ=Aᵀ+Bᵀ and (cA)ᵀ=cAᵀ. The trace T(A)=tr(A), from the n×n matrices to R, is linear by additivity and scalar homogeneity of the trace.
These examples show that the concept reaches far beyond columns of numbers. Whenever a mathematical operation respects addition and scaling — and many fundamental operations do — it is a linear transformation, and the entire theory applies.
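The differentiation example can itself be written as a matrix once a basis is fixed. A sketch on P₃ in the monomial basis {1, x, x², x³}, with coefficients stored lowest degree first (an ordering chosen here for illustration):

```python
import numpy as np

# Differentiation p -> p' on P3 as a matrix in the basis {1, x, x^2, x^3}.
# Column j holds the coefficients of the derivative of x^j.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

p = np.array([5.0, -1.0, 2.0, 4.0])  # 5 - x + 2x^2 + 4x^3
print(D @ p)                         # [-1, 4, 12, 0], i.e. -1 + 4x + 12x^2
```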
Determined by Action on a Basis
A linear transformation is completely determined by what it does to a basis. If B={v₁,…,vₙ} is a basis for V and the images T(v₁),…,T(vₙ) are specified, then T(v) is determined for every vector v∈V.
The reason is that every vector in V has a unique expression v=c₁v₁+⋯+cₙvₙ, and linearity forces
T(v)=c₁T(v₁)+⋯+cₙT(vₙ)
Conversely, any choice of images for the basis vectors — any n vectors in W, with no constraints — defines a unique linear transformation. There are no compatibility conditions to satisfy; the basis images can be chosen freely.
This is the bridge to matrix representation. The columns of the matrix are precisely the images of the basis vectors, and the matrix encodes the entire transformation in a rectangular array of numbers.
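A small sketch of this construction, with hypothetical images chosen for the standard basis vectors of R²:

```python
import numpy as np

# Suppose (hypothetically) T(e1) = (2, 1) and T(e2) = (-1, 3).
# Stacking the images as columns gives the matrix of T.
A = np.column_stack([[2.0, 1.0], [-1.0, 3.0]])

# Any v = c1*e1 + c2*e2 then maps to c1*T(e1) + c2*T(e2):
v = np.array([4.0, -2.0])
print(A @ v)                              # (10, -2)
print(4.0 * A[:, 0] + (-2.0) * A[:, 1])   # the same vector, by linearity
```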
Properties
Linearity has immediate consequences that go beyond the defining condition. The zero vector always maps to zero: T(0)=0. Negation is preserved: T(−v)=−T(v). Arbitrary linear combinations are preserved: T(∑cᵢvᵢ)=∑cᵢT(vᵢ).
The composition of two linear transformations is linear. If T:U→V and S:V→W are both linear, then S∘T:U→W satisfies (S∘T)(cu+dv)=c(S∘T)(u)+d(S∘T)(v). When both maps are represented by matrices, composition corresponds to matrix multiplication.
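The correspondence between composition and matrix multiplication can be confirmed numerically; a sketch with random matrices of compatible shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # represents T: R^4 -> R^3
B = rng.standard_normal((2, 3))  # represents S: R^3 -> R^2

x = rng.standard_normal(4)

# Applying T then S agrees with applying the single matrix BA.
print(np.allclose(B @ (A @ x), (B @ A) @ x))  # True
```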
A linear transformation is invertible if and only if it is bijective — both injective (trivial kernel) and surjective (image equals the codomain). The inverse of a linear transformation is itself linear. The full development of these properties, including strategies for proving and disproving linearity, is on its own page.
The Matrix Connection
Every linear transformation T:Rⁿ→Rᵐ can be written as T(x)=Ax for a unique m×n matrix A whose columns are the images of the standard basis vectors: A=[T(e₁) T(e₂) ⋯ T(eₙ)].
This gives a one-to-one correspondence between linear maps Rⁿ→Rᵐ and m×n matrices. Every property of the transformation — its rank, determinant (when square), eigenvalues, image and kernel — can be read from the matrix. And every matrix operation — multiplication, inversion, decomposition — has a transformation-level interpretation.
For transformations between abstract vector spaces, the matrix depends on the choice of bases for both domain and codomain. Changing the bases changes the matrix but not the transformation. The relationship between different matrix representations of the same map is governed by similarity.
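One way to see this invariance numerically: if the columns of an invertible P are the new basis vectors, the same map is represented by P⁻¹AP, and basis-independent quantities such as trace and determinant agree. The matrices below are illustrative choices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # matrix of the map in the standard basis
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns = a hypothetical new basis

B = np.linalg.inv(P) @ A @ P  # matrix of the same map in the new basis

# The representation changed, but the transformation did not:
print(np.isclose(np.trace(A), np.trace(B)))            # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
```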
Geometry
In R² and R³, linear transformations have vivid geometric meanings. Rotations spin every vector around the origin by a fixed angle. Reflections mirror across a line or plane through the origin. Projections flatten space onto a subspace. Shears tilt one axis relative to another. Scalings stretch or compress along coordinate directions.
Each of these transformations has an explicit matrix that encodes its geometric action. The determinant of the matrix measures how the transformation scales areas or volumes: ∣det(A)∣ is the scaling factor, and the sign of det(A) indicates whether orientation is preserved (+) or reversed (−). Orthogonal matrices (those satisfying AᵀA=I, which forces det(A)=±1) preserve all lengths and angles — they are the rigid motions of linear algebra.
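A sketch of the determinant as a scaling factor, with matrices chosen for illustration:

```python
import numpy as np

# |det A| is the factor by which A scales area in R^2.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
print(abs(np.linalg.det(A)))  # 6: the unit square maps to a parallelogram of area 6

# A reflection across the x-axis reverses orientation, so its determinant is negative.
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])
print(np.linalg.det(F))       # negative: orientation reversed
```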
The singular value decomposition reveals the hidden geometry of any matrix: every linear transformation is an orthogonal map (a rotation or reflection), followed by a coordinate-axis scaling, followed by another orthogonal map. Even the most complicated-looking matrix is just three simple geometric operations composed together.
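NumPy's SVD makes this three-factor picture checkable on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))

U, s, Vt = np.linalg.svd(A)  # A = U @ diag(s) @ Vt

# U and Vt are orthogonal (rotations or reflections); diag(s) scales the axes.
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True: the factors reconstruct A
print(np.allclose(U.T @ U, np.eye(2)))      # True: U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(2)))    # True: Vt is orthogonal
```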