Every linear transformation between finite-dimensional spaces can be represented by a matrix, and every matrix defines a linear transformation. The columns of the matrix are the images of the basis vectors — this single recipe converts an abstract function into a concrete array of numbers from which every property of the transformation can be extracted.
Every Linear Map from Rⁿ to Rᵐ Is Matrix Multiplication
If T : Rn → Rm is linear, there exists a unique m×n matrix A such that
T(x) = Ax for every x ∈ Rn.
This is not an optional representation — it is forced by linearity. Any vector x = x1e1 + ⋯ + xnen maps to T(x) = x1T(e1) + ⋯ + xnT(en), and this is exactly the matrix-vector product Ax with A = [ T(e1) | T(e2) | ⋯ | T(en) ].
The converse is equally immediate: every m×n matrix A defines a linear transformation x↦Ax. The correspondence is one-to-one — different matrices define different transformations, and different transformations produce different matrices. Linear maps Rn→Rm and m×n matrices are the same objects viewed from two perspectives.
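The converse direction can be checked numerically. A minimal NumPy sketch (the matrix and vectors below are arbitrary, chosen only for illustration) confirming that x ↦ Ax respects linear combinations:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # any 2x3 matrix: a map from R^3 to R^2

u, v = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.5, -1.5

# x -> A x respects linear combinations: A(au + bv) = a(Au) + b(Av)
assert np.allclose(A @ (a*u + b*v), a*(A @ u) + b*(A @ v))
```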
Constructing the Standard Matrix
The recipe is direct: apply T to each standard basis vector e1,e2,…,en and arrange the results as columns:
A = [ T(e1) | T(e2) | ⋯ | T(en) ]
Worked Example
Let T:R3→R2 be defined by T(x,y,z)=(2x−y+3z,4x+5z).
T(e1)=T(1,0,0)=(2,4)
T(e2)=T(0,1,0)=(−1,0)
T(e3)=T(0,0,1)=(3,5)
A = [ 2  −1  3
      4   0  5 ]
Verification: for any (x, y, z), A(x, y, z)ᵀ = (2x − y + 3z, 4x + 5z)ᵀ = T(x, y, z).
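The whole recipe can be sketched in a few lines of NumPy. The helper T below is a hypothetical implementation of the worked example's formula; the standard matrix is assembled column by column from the images of the standard basis vectors:

```python
import numpy as np

def T(v):
    # The worked example's map: T(x, y, z) = (2x - y + 3z, 4x + 5z)
    x, y, z = v
    return np.array([2*x - y + 3*z, 4*x + 5*z])

# Columns of A are the images of the standard basis vectors
# (the rows of the identity matrix are e1, e2, e3)
A = np.column_stack([T(e) for e in np.eye(3)])

# A @ v reproduces T(v) for any v
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(A @ v, T(v))
```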
Reading the Matrix
Every piece of information about a linear transformation is encoded in its matrix.
Column j tells you where ej goes. If the first column of a 2×2 matrix is (3,−1)T, then (1,0) maps to (3,−1). The matrix is a complete lookup table: the image of any vector is computed by multiplication.
The size m×n records the dimensions of codomain (m rows) and domain (n columns). A 3×2 matrix represents a map from R2 to R3 — it embeds a plane into three-dimensional space. A 2×3 matrix represents a map from R3 to R2 — it compresses three dimensions down to two.
A square matrix (m=n) represents a transformation from a space to itself — a linear operator. Only operators can have eigenvalues, determinants, and traces.
Matrices for Abstract Vector Spaces
For a linear transformation T:V→W between abstract vector spaces, the matrix depends on a choice of basis for both V and W.
Fix a basis B={v1,…,vn} for V and a basis C={w1,…,wm} for W. Column j of the matrix [T]C←B is the C-coordinate vector of T(vj) — the scalars needed to express T(vj) as a linear combination of w1,…,wm.
Different bases give different matrices for the same transformation. The standard matrix for maps between Rn and Rm is the special case where both bases are standard. For abstract spaces like polynomial or function spaces, there is no "standard" basis in the same sense — every basis choice produces a different but equally valid matrix representation.
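A small NumPy sketch of this dependence, using an arbitrary non-standard basis of R2 chosen for illustration: the same operator gets a different matrix, though basis-independent quantities such as the trace agree:

```python
import numpy as np

# Same map T(x) = A x, expressed in two bases of R^2
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # standard-basis matrix

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # columns are a non-standard basis {b1, b2}

# Column j of the B-basis matrix holds the B-coordinates of T(b_j),
# i.e. the solution c of B c = A b_j; all columns at once:
T_in_B = np.linalg.solve(B, A @ B)

# A different matrix for the same transformation, but the trace
# (a basis-independent invariant) is unchanged
assert np.isclose(np.trace(T_in_B), np.trace(A))
```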
Worked Example: Differentiation
Let T:P2→P1 be defined by T(p)=p′ (differentiation). Choose the monomial basis {1,x,x2} for P2 and {1,x} for P1.
T(1)=0=0⋅1+0⋅x, so column 1 is (0,0)T.
T(x)=1=1⋅1+0⋅x, so column 2 is (1,0)T.
T(x2)=2x=0⋅1+2⋅x, so column 3 is (0,2)T.
[T] = [ 0  1  0
       0  0  2 ]
The 2×3 shape reflects dim(P1)=2 rows and dim(P2)=3 columns. The rank is 2 — differentiation maps P2 onto all of P1. The null space is one-dimensional, spanned by the constant polynomial 1 — the only polynomials whose derivative is zero.
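A short NumPy check of this example, storing polynomials as coefficient vectors in the monomial basis (an assumption of this sketch):

```python
import numpy as np

# Matrix of d/dx : P2 -> P1 in the monomial bases {1, x, x^2} and {1, x}
D = np.array([[0, 1, 0],
              [0, 0, 2]])

# p(x) = 3 + 5x - 4x^2, stored as coefficients [3, 5, -4] in {1, x, x^2}
p = np.array([3, 5, -4])
dp = D @ p                   # coefficients of p'(x) = 5 - 8x in {1, x}

# Rank 2 confirms the map is onto P1; nullity 3 - 2 = 1 matches the constants
assert np.linalg.matrix_rank(D) == 2
```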
Composition Corresponds to Matrix Multiplication
If T:U→V has matrix A (relative to appropriate bases) and S:V→W has matrix B, then the composition S∘T:U→W has matrix BA.
The order matches the composition: S acts after T, and B multiplies from the left. This is why matrix multiplication is defined as it is — the row-times-column rule encodes function composition.
Associativity of matrix multiplication, (CB)A = C(BA), mirrors associativity of composition, (R∘S)∘T = R∘(S∘T). Non-commutativity of multiplication, AB ≠ BA in general, mirrors non-commutativity of composition, S∘T ≠ T∘S in general.
The identity transformation I:V→V has the identity matrix In in any basis, and InA = AIn = A — the identity matrix is the multiplicative identity precisely because the identity transformation is the compositional identity.
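A quick numerical sketch of the correspondence, with arbitrary random matrices (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # matrix of T : R^4 -> R^3
B = rng.standard_normal((2, 3))   # matrix of S : R^3 -> R^2

x = rng.standard_normal(4)
# Applying T then S equals multiplying by the product BA
assert np.allclose(B @ (A @ x), (B @ A) @ x)

# Matrix multiplication is generally not commutative
M = rng.standard_normal((2, 2))
N = rng.standard_normal((2, 2))
assert not np.allclose(M @ N, N @ M)
```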
The Identity and the Inverse
The identity transformation I:V→V sends every vector to itself. In any basis, its matrix is In.
If T has matrix A and T is invertible, then T−1 has matrix A−1. The transformation T is invertible if and only if A is invertible — the geometric and algebraic conditions coincide exactly. Composing T with T−1 gives the identity, and multiplying A by A−1 gives I. The determinant test (det(A) ≠ 0) simultaneously answers whether the transformation is bijective and whether the matrix has an inverse.
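A minimal sketch with a concrete invertible matrix (a shear, chosen only for illustration):

```python
import numpy as np

# A 2x2 shear: invertible because det(A) != 0
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert np.linalg.det(A) != 0

A_inv = np.linalg.inv(A)
v = np.array([3.0, 4.0])

# The matrix inverse undoes the transformation: T^{-1}(T(v)) = v
assert np.allclose(A_inv @ (A @ v), v)
# Multiplying A by its inverse gives the identity matrix
assert np.allclose(A @ A_inv, np.eye(2))
```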
The Matrix Encodes Everything
Once a linear transformation is represented by a matrix, every property of the transformation becomes a matrix computation.
The rank of A equals the dimension of the image of T. The nullity equals the dimension of the kernel. The determinant (for square matrices) tells whether T is invertible and how it scales volumes. The eigenvalues reveal the scaling factors along invariant directions. The trace equals the sum of the eigenvalues. The singular values measure the maximum stretching in each orthogonal direction.
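Each of these quantities is a one-line NumPy call; the matrix below is arbitrary, chosen only to exercise the computations:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  3.0]])

rank = np.linalg.matrix_rank(A)            # dim of the image of T
nullity = A.shape[1] - rank                # dim of the kernel (rank-nullity)
det = np.linalg.det(A)                     # invertibility test and volume scaling
eigvals = np.linalg.eigvals(A)             # scaling along invariant directions
trace = np.trace(A)                        # equals the sum of the eigenvalues
sing = np.linalg.svd(A, compute_uv=False)  # stretching in orthogonal directions

assert np.isclose(trace, eigvals.sum().real)
# |det| equals the product of the singular values
assert np.isclose(abs(det), sing.prod())
```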
This is why matrices dominate computational linear algebra. Abstract transformations are conceptually powerful, but matrices are what computers operate on. The matrix representation converts every question about a linear map into a question about an array of numbers — and arrays of numbers are what algorithms are designed to handle.