

Matrix Representation of a Linear Transformation






Encoding a Transformation as a Matrix

Every linear transformation between finite-dimensional spaces can be represented by a matrix, and every matrix defines a linear transformation. The columns of the matrix are the images of the basis vectors — this single recipe converts an abstract function into a concrete array of numbers from which every property of the transformation can be extracted.



Every Linear Map from Rⁿ to Rᵐ Is Matrix Multiplication

If $T: \mathbb{R}^n \to \mathbb{R}^m$ is linear, there exists a unique $m \times n$ matrix $A$ such that

$$T(\mathbf{x}) = A\mathbf{x} \quad \text{for every } \mathbf{x} \in \mathbb{R}^n$$


This is not an optional representation — it is forced by linearity. Any vector $\mathbf{x} = x_1\mathbf{e}_1 + \cdots + x_n\mathbf{e}_n$ maps to $T(\mathbf{x}) = x_1T(\mathbf{e}_1) + \cdots + x_nT(\mathbf{e}_n)$, and this is exactly the matrix-vector product $A\mathbf{x}$ with $A = [T(\mathbf{e}_1) \; T(\mathbf{e}_2) \; \cdots \; T(\mathbf{e}_n)]$.

The converse is equally immediate: every $m \times n$ matrix $A$ defines a linear transformation $\mathbf{x} \mapsto A\mathbf{x}$. The correspondence is one-to-one — different matrices define different transformations, and different transformations produce different matrices. Linear maps $\mathbb{R}^n \to \mathbb{R}^m$ and $m \times n$ matrices are the same objects viewed from two perspectives.
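The theorem can be checked numerically. The sketch below (using NumPy, with a quarter-turn rotation of the plane as an illustrative sample map) builds $A$ column by column from the images of the standard basis vectors and confirms that $A\mathbf{x}$ agrees with $T(\mathbf{x})$:

```python
import numpy as np

# Sample linear map: counterclockwise rotation of R^2 by 90 degrees.
def T(v):
    x, y = v
    return np.array([-y, x])

# The columns of A are the images of the standard basis vectors.
A = np.column_stack([T(np.array([1.0, 0.0])),
                     T(np.array([0.0, 1.0]))])

v = np.array([3.0, 4.0])
# Linearity forces A @ x to reproduce T(x) for every input.
assert np.allclose(A @ v, T(v))
```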

Constructing the Standard Matrix

The recipe is direct: apply $T$ to each standard basis vector $\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n$ and arrange the results as columns:

$$A = \begin{pmatrix} | & | & & | \\ T(\mathbf{e}_1) & T(\mathbf{e}_2) & \cdots & T(\mathbf{e}_n) \\ | & | & & | \end{pmatrix}$$


Worked Example


Let $T: \mathbb{R}^3 \to \mathbb{R}^2$ be defined by $T(x, y, z) = (2x - y + 3z, \; 4x + 5z)$.

$T(\mathbf{e}_1) = T(1, 0, 0) = (2, 4)$

$T(\mathbf{e}_2) = T(0, 1, 0) = (-1, 0)$

$T(\mathbf{e}_3) = T(0, 0, 1) = (3, 5)$

$$A = \begin{pmatrix} 2 & -1 & 3 \\ 4 & 0 & 5 \end{pmatrix}$$

Verification: $A\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2x - y + 3z \\ 4x + 5z \end{pmatrix} = T(x, y, z)$.
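The verification can also be run numerically — the matrix assembled from the three columns above reproduces $T$ on an arbitrary input:

```python
import numpy as np

# The standard matrix from the worked example, columns T(e_1), T(e_2), T(e_3).
A = np.array([[2.0, -1.0, 3.0],
              [4.0,  0.0, 5.0]])

def T(x, y, z):
    return np.array([2*x - y + 3*z, 4*x + 5*z])

v = np.array([1.0, -2.0, 0.5])
# Matrix multiplication agrees with the original formula.
assert np.allclose(A @ v, T(*v))
```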

Reading the Matrix

Every piece of information about a linear transformation is encoded in its matrix.

Column $j$ tells you where $\mathbf{e}_j$ goes. If the first column of a $2 \times 2$ matrix is $(3, -1)^T$, then $(1, 0)$ maps to $(3, -1)$. The matrix is a complete lookup table: the image of any vector is computed by multiplication.

The size $m \times n$ records the dimensions of the codomain ($m$ rows) and domain ($n$ columns). A $3 \times 2$ matrix represents a map from $\mathbb{R}^2$ to $\mathbb{R}^3$ — it embeds a plane into three-dimensional space. A $2 \times 3$ matrix represents a map from $\mathbb{R}^3$ to $\mathbb{R}^2$ — it compresses three dimensions down to two.
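The shape bookkeeping shows up directly in code. Using two illustrative matrices (an inclusion of the plane and a coordinate projection, chosen here only as examples), a $3 \times 2$ matrix accepts 2-vectors and returns 3-vectors, and a $2 \times 3$ matrix does the reverse:

```python
import numpy as np

# 3x2: embeds the plane R^2 into R^3 (third coordinate zero).
embed = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])

# 2x3: projects R^3 onto R^2 by dropping the third coordinate.
project = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# Input dimension = number of columns, output dimension = number of rows.
assert (embed @ np.array([2.0, 5.0])).shape == (3,)
assert (project @ np.array([2.0, 5.0, 7.0])).shape == (2,)
```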

A square matrix ($m = n$) represents a transformation from a space to itself — a linear operator. Only operators can have eigenvalues, determinants, and traces.

Matrices for Abstract Vector Spaces

For a linear transformation $T: V \to W$ between abstract vector spaces, the matrix depends on a choice of basis for both $V$ and $W$.

Fix a basis $\mathcal{B} = \{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ for $V$ and a basis $\mathcal{C} = \{\mathbf{w}_1, \dots, \mathbf{w}_m\}$ for $W$. Column $j$ of the matrix $[T]_{\mathcal{C} \leftarrow \mathcal{B}}$ is the $\mathcal{C}$-coordinate vector of $T(\mathbf{v}_j)$ — the scalars needed to express $T(\mathbf{v}_j)$ as a linear combination of $\mathbf{w}_1, \dots, \mathbf{w}_m$.

Different bases give different matrices for the same transformation. The standard matrix for maps between $\mathbb{R}^n$ and $\mathbb{R}^m$ is the special case where both bases are standard. For abstract spaces like polynomial or function spaces, there is no "standard" basis in the same sense — every basis choice produces a different but equally valid matrix representation.

Worked Example: Differentiation

Let $T: \mathcal{P}_2 \to \mathcal{P}_1$ be defined by $T(p) = p'$ (differentiation). Choose the monomial basis $\{1, x, x^2\}$ for $\mathcal{P}_2$ and $\{1, x\}$ for $\mathcal{P}_1$.

$T(1) = 0 = 0 \cdot 1 + 0 \cdot x$, so column $1$ is $(0, 0)^T$.

$T(x) = 1 = 1 \cdot 1 + 0 \cdot x$, so column $2$ is $(1, 0)^T$.

$T(x^2) = 2x = 0 \cdot 1 + 2 \cdot x$, so column $3$ is $(0, 2)^T$.

$$[T] = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

The $2 \times 3$ shape reflects $\dim(\mathcal{P}_1) = 2$ rows and $\dim(\mathcal{P}_2) = 3$ columns. The rank is $2$ — differentiation maps $\mathcal{P}_2$ onto all of $\mathcal{P}_1$. The null space is one-dimensional, spanned by the constant polynomial $1$ — the only polynomials whose derivative is zero.
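Once polynomials are identified with their coordinate vectors in the monomial basis, the differentiation matrix acts by ordinary matrix multiplication. A short NumPy check (the sample polynomial is arbitrary):

```python
import numpy as np

# Differentiation matrix from the worked example: P_2 -> P_1
# in the monomial bases {1, x, x^2} and {1, x}.
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

coeffs = np.array([5.0, 3.0, 4.0])      # p(x) = 5 + 3x + 4x^2
# D maps the coordinates of p to the coordinates of p'.
assert np.allclose(D @ coeffs, [3.0, 8.0])   # p'(x) = 3 + 8x

# Rank 2 (onto P_1); nullity 1 (constants differentiate to zero).
assert np.linalg.matrix_rank(D) == 2
assert np.allclose(D @ np.array([1.0, 0.0, 0.0]), [0.0, 0.0])
```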

Composition Corresponds to Matrix Multiplication

If $T: U \to V$ has matrix $A$ (relative to appropriate bases) and $S: V \to W$ has matrix $B$, then the composition $S \circ T: U \to W$ has matrix $BA$.

The order matches the composition: $S$ acts after $T$, and $B$ multiplies from the left. This is why matrix multiplication is defined as it is — the row-times-column rule encodes function composition.

Associativity of matrix multiplication $(BC)A = B(CA)$ mirrors associativity of composition $(R \circ S) \circ T = R \circ (S \circ T)$. Non-commutativity of multiplication $AB \neq BA$ mirrors non-commutativity of composition $S \circ T \neq T \circ S$.

The identity transformation $I: V \to V$ has the identity matrix $I_n$ in any basis, and $I_n A = AI_n = A$ — the identity matrix is the multiplicative identity precisely because the identity transformation is the compositional identity.
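The correspondence between composition and multiplication can be confirmed numerically. With two randomly generated sample matrices (the shapes are chosen so the composition is defined), applying $A$ then $B$ gives the same result as applying $BA$ once:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # T: R^3 -> R^4
B = rng.standard_normal((2, 4))   # S: R^4 -> R^2

x = rng.standard_normal(3)
# S(T(x)) equals (BA) x: composition is matrix multiplication.
assert np.allclose(B @ (A @ x), (B @ A) @ x)

# Note the order: B A is 2x3 (R^3 -> R^2), while A @ B is not even
# defined here, since the inner dimensions (3 and 2) do not match.
```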

The Identity and the Inverse

The identity transformation $I: V \to V$ sends every vector to itself. In any basis, its matrix is $I_n$.

If $T$ has matrix $A$ and $T$ is invertible, then $T^{-1}$ has matrix $A^{-1}$. The transformation $T$ is invertible if and only if $A$ is invertible — the geometric and algebraic conditions coincide exactly. Composing $T$ with $T^{-1}$ gives the identity, and multiplying $A$ by $A^{-1}$ gives $I$. The determinant test ($\det(A) \neq 0$) simultaneously answers whether the transformation is bijective and whether the matrix has an inverse.
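A quick numerical illustration, using a horizontal shear as the sample invertible operator: the determinant test passes, and composing the matrix with its inverse gives the identity in both orders:

```python
import numpy as np

# Horizontal shear of the plane: (x, y) -> (x + y, y). det = 1, invertible.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

assert abs(np.linalg.det(A)) > 1e-12   # determinant test for invertibility
A_inv = np.linalg.inv(A)               # matrix of the inverse transformation

# T composed with T^{-1} (in either order) is the identity.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```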

The Matrix Encodes Everything

Once a linear transformation is represented by a matrix, every property of the transformation becomes a matrix computation.

The rank of $A$ equals the dimension of the image of $T$. The nullity equals the dimension of the kernel. The determinant (for square matrices) tells whether $T$ is invertible and how it scales volumes. The eigenvalues reveal the scaling factors along invariant directions. The trace equals the sum of the eigenvalues. The singular values measure the maximum stretching in each orthogonal direction.
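Each of these properties is one library call away once the transformation is a matrix. A sketch on a small sample operator (the matrix is arbitrary; any square matrix works the same way):

```python
import numpy as np

# A sample 2x2 operator; upper triangular, so its eigenvalues are
# the diagonal entries 3 and 2.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

rank = np.linalg.matrix_rank(A)     # dimension of the image
nullity = A.shape[1] - rank         # dimension of the kernel
det = np.linalg.det(A)              # volume scaling factor
eigvals = np.linalg.eigvals(A)      # scaling along invariant directions
trace = np.trace(A)                 # equals the sum of the eigenvalues
sing = np.linalg.svd(A, compute_uv=False)  # stretching factors

assert rank == 2 and nullity == 0
assert np.isclose(det, 6.0)                    # 3 * 2
assert np.isclose(trace, eigvals.sum().real)   # trace = sum of eigenvalues
```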

This is why matrices dominate computational linear algebra. Abstract transformations are conceptually powerful, but matrices are what computers operate on. The matrix representation converts every question about a linear map into a question about an array of numbers — and arrays of numbers are what algorithms are designed to handle.