Every linear transformation splits its domain into two complementary pieces: the kernel, consisting of everything that maps to zero, and a complement that maps bijectively onto the image. The dimensions of the kernel and image are locked together by the rank-nullity theorem, and their relationship determines whether the transformation is injective, surjective, or neither.
The Image
The image (or range) of a linear transformation T:V→W is the set of all outputs:
Im(T)={T(v):v∈V}
The image is a subspace of W. It contains T(0)=0, and if T(u) and T(v) are in the image, then so is cT(u)+dT(v)=T(cu+dv) — closure under both operations follows from linearity.
When T(x)=Ax for a matrix A, the image is the column space of A: the set of all vectors expressible as linear combinations of the columns. The dimension of the image equals the rank of A.
The image answers the reachability question: a vector w∈W is in the image if and only if the equation T(v)=w — equivalently, Ax=w — has a solution.
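The reachability test can be carried out numerically: w lies in the image exactly when appending it as an extra column does not raise the rank. A minimal sketch with NumPy, using a hypothetical 3×2 matrix whose columns span a plane in R^3:

```python
import numpy as np

# Example matrix (chosen for illustration): its two independent columns
# span a plane in R^3, so not every w is reachable.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])

def in_image(A, w):
    """w is in Im(T) iff adjoining w as a column leaves the rank unchanged."""
    return np.linalg.matrix_rank(np.column_stack([A, w])) == np.linalg.matrix_rank(A)

print(in_image(A, np.array([3.0, 1.0, 4.0])))  # True: the sum of the two columns
print(in_image(A, np.array([1.0, 0.0, 0.0])))  # False: off the column-space plane
```

This is the rank criterion for consistency of Ax=w: the augmented matrix [A | w] has the same rank as A exactly when a solution exists.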
The Kernel
The kernel (or null space) of T:V→W is the set of all inputs that map to zero:
ker(T)={v∈V:T(v)=0}
The kernel is a subspace of V. It contains 0 (since T(0)=0), and if T(u)=0 and T(v)=0, then T(cu+dv)=cT(u)+dT(v)=0, so cu+dv∈ker(T).
When T(x)=Ax, the kernel is the null space of A: all solutions to the homogeneous system Ax=0. Its dimension is the nullity, equal to n−rank(A).
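A basis for the null space can be extracted numerically from the singular value decomposition: the right-singular vectors belonging to (numerically) zero singular values span ker(A). A sketch with NumPy, using an example matrix of rank 2:

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Orthonormal basis for ker(A): right-singular vectors whose
    singular values are numerically zero."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))     # number of nonzero singular values
    return Vt[rank:].T              # remaining rows of Vt span the kernel

# Example 3x3 matrix of rank 2 (third column = first + second).
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])
N = null_space_basis(A)
print(N.shape[1])             # 1: nullity = n - rank(A) = 3 - 2
print(np.allclose(A @ N, 0))  # True: every kernel basis vector maps to zero
```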
The kernel measures the information lost by T. Vectors in the kernel are collapsed to 0 — they represent directions that the transformation annihilates. A larger kernel means more information is destroyed.
Injectivity
A linear transformation T is injective (one-to-one) if different inputs always produce different outputs: T(u)=T(v) implies u=v.
For linear maps, injectivity has an elegant equivalent: T is injective if and only if ker(T)={0}. The proof is short. If T(u)=T(v), then T(u−v)=T(u)−T(v)=0, so u−v∈ker(T). If the kernel is trivial, u−v=0 and u=v. Conversely, if T is injective, then since T(0)=0 already, no nonzero vector can also map to 0, so the kernel is trivial.
For matrix transformations, injectivity is equivalent to full column rank: rank(A)=n. This means every column is a pivot column, no free variables exist in Ax=0, the columns are linearly independent, and the determinant is nonzero (in the square case).
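The full-column-rank test is a one-line check. A sketch with NumPy, reusing the example matrices from above (the 3×3 one has a dependent third column, so it fails):

```python
import numpy as np

def is_injective(A):
    """x -> Ax is injective iff rank(A) equals the number of columns
    (equivalently, ker(A) = {0})."""
    return np.linalg.matrix_rank(A) == A.shape[1]

# Columns (1,0,1) and (2,1,3) are independent: injective.
print(is_injective(np.array([[1., 2.], [0., 1.], [1., 3.]])))              # True
# A dependent third column (the sum of the first two) destroys injectivity.
print(is_injective(np.array([[1., 2., 3.], [0., 1., 1.], [1., 3., 4.]])))  # False
```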
Injectivity means the transformation preserves distinctness — no two different inputs are confused with each other.
Surjectivity
A linear transformation T:V→W is surjective (onto) if Im(T)=W — every vector in the codomain is the image of some vector in the domain.
For matrix transformations, surjectivity is equivalent to full row rank: rank(A)=m. This means every row contains a pivot, the column space is all of Rm, and the system Ax=b has a solution for every right-hand side b.
Surjectivity means the transformation has no blind spots — every output is reachable from some input. Failure of surjectivity means the image is a proper subspace of the codomain: certain vectors in W are inherently unreachable, no matter what input is chosen.
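The full-row-rank test is the mirror image of the injectivity check. A sketch with NumPy, using a hypothetical 2×3 matrix that does cover its codomain and the rank-2 example that does not:

```python
import numpy as np

def is_surjective(A):
    """x -> Ax maps R^n onto R^m iff rank(A) equals the number of rows."""
    return np.linalg.matrix_rank(A) == A.shape[0]

# A 2x3 matrix with two independent rows covers all of R^2.
print(is_surjective(np.array([[1., 0., 2.], [0., 1., 1.]])))                # True
# The rank-2 example: only a plane's worth of R^3 is reachable.
print(is_surjective(np.array([[1., 2., 3.], [0., 1., 1.], [1., 3., 4.]])))  # False
```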
Bijectivity and Isomorphisms
A linear transformation that is both injective and surjective is bijective. A bijective linear transformation is called an isomorphism — it establishes that the domain and codomain are structurally identical as vector spaces.
For a map T:V→W between spaces of equal dimension (dim(V)=dim(W)=n), the three conditions collapse: injective ⟺ surjective ⟺ bijective. Checking any one of the three establishes the other two. This is because the rank-nullity theorem forces dim(Im(T))+dim(ker(T))=n, and dim(Im(T))≤n=dim(W). If the kernel is trivial (injective), the image has dimension n and must equal all of W (surjective). If the image is all of W (surjective), the kernel must have dimension 0 (injective).
For matrix transformations between spaces of the same dimension, bijectivity is equivalent to the matrix being square and invertible.
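As a quick numerical illustration (example matrix chosen for the purpose), a square matrix of full rank is an isomorphism, and its inverse matrix realizes the inverse map:

```python
import numpy as np

# A square matrix with nonzero determinant: an isomorphism of R^2.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.matrix_rank(A))   # 2: full rank, hence injective and surjective
A_inv = np.linalg.inv(A)
x = np.array([3.0, -1.0])
print(np.allclose(A_inv @ (A @ x), x))   # True: the inverse map undoes A
```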
The Rank-Nullity Theorem for Maps
For a linear transformation T:V→W with V finite-dimensional:
dim(Im(T))+dim(ker(T))=dim(V)
The dimension of the domain splits between what the map preserves and what it destroys. The image captures the dimensions that survive; the kernel captures the dimensions that are annihilated.
For matrix transformations T(x)=Ax, this becomes rank(A)+nullity(A)=n — the familiar rank-nullity theorem in concrete language.
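The identity can be checked numerically on an arbitrary matrix. A sketch with NumPy, using a random 4×6 integer matrix as a stand-in and the SVD to count nonzero singular values (the rank) and to produce a kernel basis:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)  # arbitrary 4x6 example
n = A.shape[1]

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))   # dim Im(T): number of nonzero singular values
K = Vt[rank:].T                 # columns: an orthonormal basis of ker(A)

print(np.allclose(A @ K, 0))    # True: kernel basis vectors really map to zero
print(rank + K.shape[1] == n)   # True: rank + nullity = dim of the domain
```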
The theorem constrains the interplay between injectivity and surjectivity. If dim(V)>dim(W), the image can have at most dim(W) dimensions, forcing the kernel to have at least dim(V)−dim(W) dimensions — the map cannot be injective. If dim(V)<dim(W), the image cannot fill all of W — the map cannot be surjective.
Dimension Constraints
The rank-nullity theorem imposes hard limits on what a linear transformation can achieve.
T:V→W can be injective only if dim(V)≤dim(W). A map from a larger space to a smaller one must collapse some directions — the kernel is forced to be nontrivial.
T:V→W can be surjective only if dim(V)≥dim(W). A map from a smaller space to a larger one cannot cover all directions — the image is a proper subspace.
T can be bijective only if dim(V)=dim(W). This is necessary but not sufficient — even with equal dimensions, the map must still have full rank.
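These limits are visible in the shape of a matrix alone. A sketch with NumPy, using random matrices as placeholders: a map out of R^5 into R^3 has rank at most 3 (so a kernel of dimension at least 2), and a map out of R^2 into R^4 has rank at most 2 (so it cannot fill R^4):

```python
import numpy as np

rng = np.random.default_rng(1)

# R^5 -> R^3: rank <= 3, so nullity >= 5 - 3 = 2; never injective.
wide = rng.standard_normal((3, 5))
print(np.linalg.matrix_rank(wide) <= 3)    # True, whatever the entries

# R^2 -> R^4: rank <= 2 < 4, so the image is a proper subspace; never surjective.
tall = rng.standard_normal((4, 2))
print(np.linalg.matrix_rank(tall) < 4)     # True, whatever the entries
```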
These constraints apply to all linear maps, not just matrix transformations. They are consequences of the rank-nullity theorem and the dimension theory of vector spaces.
Computing the Image and Kernel
For a matrix transformation T(x)=Ax, the image and kernel are computed by row reduction.
The kernel is the null space of A: solve Ax=0, reduce to echelon form, and express the solution in parametric vector form. Each free variable contributes one basis vector for ker(T).
The image is the column space of A: row reduce A, identify the pivot columns, and take the corresponding columns of the original matrix A as a basis for Im(T).
Worked Example
For the matrix A with rows (1, 2, 3), (0, 1, 1), (1, 3, 4), row reduction gives the echelon form with rows (1, 2, 3), (0, 1, 1), (0, 0, 0). Pivots in columns 1 and 2. The image has basis {(1,0,1),(2,1,3)}, the first two columns of A — two-dimensional. The kernel has one free variable (x3=t), giving ker(T)=Span{(−1,−1,1)} — one-dimensional. Check: 2+1=3=n.
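The worked example can be checked mechanically. A sketch assuming SymPy is available, whose exact-arithmetic Matrix.rref and nullspace mirror the row-reduction procedure described above:

```python
from sympy import Matrix

# The worked-example matrix, rows (1,2,3), (0,1,1), (1,3,4).
A = Matrix([[1, 2, 3],
            [0, 1, 1],
            [1, 3, 4]])

_, pivots = A.rref()                       # reduced echelon form, pivot column indices
image_basis = [A.col(j) for j in pivots]   # pivot columns of the ORIGINAL matrix
kernel_basis = A.nullspace()               # one basis vector per free variable

print(pivots)                              # (0, 1): pivots in columns 1 and 2
print([list(v) for v in kernel_basis])     # [[-1, -1, 1]]
```

Note that the image basis is read off from the original columns, not from the reduced matrix, exactly as the procedure above prescribes.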
The Fundamental Decomposition
The rank-nullity theorem has a structural interpretation that goes beyond dimension counting. The domain V decomposes as a direct sum:
V=ker(T)⊕(a complement of ker(T))
The transformation T kills everything in the kernel and maps the complement bijectively onto the image. Every vector v∈V splits as v=vk+vc where vk∈ker(T) and vc is in the complement. Then T(v)=T(vc), and the restriction of T to the complement is a bijection onto Im(T).
For matrix transformations, the four fundamental subspaces provide the natural complement: the row space of A is the orthogonal complement of the null space in Rn, and A maps the row space bijectively onto the column space. The null-space component is destroyed; the row-space component survives intact.
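The decomposition can be exhibited numerically: projecting a vector onto the row space separates the component that survives from the component that is annihilated. A sketch with NumPy, reusing the rank-2 worked-example matrix and an arbitrary test vector:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

# Rows of Vt corresponding to nonzero singular values span the row space.
_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
row_basis = Vt[:r].T                     # orthonormal basis of the row space

x = np.array([2.0, -1.0, 5.0])           # arbitrary test vector
x_row = row_basis @ (row_basis.T @ x)    # projection onto the row space
x_null = x - x_row                       # component in the null space

print(np.allclose(A @ x_null, 0))        # True: the null-space part is destroyed
print(np.allclose(A @ x, A @ x_row))     # True: only the row-space part matters
```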