Image and Kernel of a Linear Transformation






What a Transformation Hits and What It Kills

Every linear transformation splits its domain into two complementary pieces: the kernel, consisting of everything that maps to zero, and a complement that maps bijectively onto the image. The dimensions of the kernel and image are locked together by the rank-nullity theorem, and their relationship determines whether the transformation is injective, surjective, or neither.



The Image

The image (or range) of a linear transformation $T: V \to W$ is the set of all outputs:

$$\text{Im}(T) = \{T(\mathbf{v}) : \mathbf{v} \in V\}$$

The image is a subspace of $W$. It contains $T(\mathbf{0}) = \mathbf{0}$, and if $T(\mathbf{u})$ and $T(\mathbf{v})$ are in the image, then so is $cT(\mathbf{u}) + dT(\mathbf{v}) = T(c\mathbf{u} + d\mathbf{v})$ — closure under both operations follows from linearity.

When $T(\mathbf{x}) = A\mathbf{x}$ for a matrix $A$, the image is the column space of $A$: the set of all vectors expressible as linear combinations of the columns. The dimension of the image equals the rank of $A$.

The image answers the reachability question: a vector $\mathbf{w} \in W$ is in the image if and only if the equation $T(\mathbf{v}) = \mathbf{w}$ — equivalently, $A\mathbf{x} = \mathbf{w}$ — has a solution.
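The reachability test can be run numerically: $\mathbf{w}$ lies in the column space exactly when appending it to $A$ does not increase the rank. A minimal NumPy sketch (the matrix and vectors are illustrative choices, not from the text):

```python
import numpy as np

def in_image(A, w):
    # w is reachable iff [A | w] has the same rank as A
    return np.linalg.matrix_rank(np.column_stack([A, w])) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])

reachable = A @ np.array([2.0, -1.0])      # a combination of the columns
unreachable = np.array([0.0, 0.0, 1.0])    # not in the plane spanned by the columns

print(in_image(A, reachable))    # True
print(in_image(A, unreachable))  # False
```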

The Kernel

The kernel (or null space) of $T: V \to W$ is the set of all inputs that map to zero:

$$\ker(T) = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}\}$$

The kernel is a subspace of $V$. It contains $\mathbf{0}$ (since $T(\mathbf{0}) = \mathbf{0}$), and if $T(\mathbf{u}) = \mathbf{0}$ and $T(\mathbf{v}) = \mathbf{0}$, then $T(c\mathbf{u} + d\mathbf{v}) = cT(\mathbf{u}) + dT(\mathbf{v}) = \mathbf{0}$, so $c\mathbf{u} + d\mathbf{v} \in \ker(T)$.

When $T(\mathbf{x}) = A\mathbf{x}$, the kernel is the null space of $A$: all solutions to the homogeneous system $A\mathbf{x} = \mathbf{0}$. Its dimension is the nullity, equal to $n - \text{rank}(A)$.

The kernel measures the information lost by $T$. Vectors in the kernel are collapsed to $\mathbf{0}$ — they represent directions that the transformation annihilates. A larger kernel means more information is destroyed.
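A null-space basis can be extracted from the singular value decomposition: the right singular vectors belonging to (numerically) zero singular values span $\ker(A)$. A sketch with NumPy, using an illustrative $3 \times 3$ matrix:

```python
import numpy as np

def null_space(A, tol=1e-10):
    # Right singular vectors with zero singular value span ker(A)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T   # columns: an orthonormal basis of ker(A)

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

N = null_space(A)
print(N.shape[1])             # 1 -> nullity is one
print(np.allclose(A @ N, 0))  # True -> every basis vector is annihilated
```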

Injectivity

A linear transformation $T$ is injective (one-to-one) if different inputs always produce different outputs: $T(\mathbf{u}) = T(\mathbf{v})$ implies $\mathbf{u} = \mathbf{v}$.

For linear maps, injectivity has an elegant equivalent: $T$ is injective if and only if $\ker(T) = \{\mathbf{0}\}$. The proof is short. If $T(\mathbf{u}) = T(\mathbf{v})$, then $T(\mathbf{u} - \mathbf{v}) = T(\mathbf{u}) - T(\mathbf{v}) = \mathbf{0}$, so $\mathbf{u} - \mathbf{v} \in \ker(T)$. If the kernel is trivial, $\mathbf{u} - \mathbf{v} = \mathbf{0}$ and $\mathbf{u} = \mathbf{v}$.

For matrix transformations, injectivity is equivalent to full column rank: $\text{rank}(A) = n$. This means every column is a pivot column, no free variables exist in $A\mathbf{x} = \mathbf{0}$, the columns are linearly independent, and the determinant is nonzero (in the square case).

Injectivity means the transformation preserves distinctness — no two different inputs are confused with each other.
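The full-column-rank criterion is a one-line check in code. A minimal sketch with NumPy (both matrices are illustrative, not from the text):

```python
import numpy as np

def is_injective(A):
    # T(x) = Ax is injective iff A has full column rank (ker = {0})
    return np.linalg.matrix_rank(A) == A.shape[1]

tall = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])       # independent columns: injective
wide = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])  # 3 columns landing in R^2: cannot be injective

print(is_injective(tall))  # True
print(is_injective(wide))  # False
```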

Surjectivity

A linear transformation $T: V \to W$ is surjective (onto) if $\text{Im}(T) = W$ — every vector in the codomain is the image of some vector in the domain.

For matrix transformations, surjectivity is equivalent to full row rank: $\text{rank}(A) = m$. This means every row contains a pivot, the column space is all of $\mathbb{R}^m$, and the system $A\mathbf{x} = \mathbf{b}$ has a solution for every right-hand side $\mathbf{b}$.

Surjectivity means the transformation has no blind spots — every output is reachable from some input. Failure of surjectivity means the image is a proper subspace of the codomain: certain vectors in $W$ are inherently unreachable, no matter what input is chosen.
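The row-rank criterion mirrors the column-rank test for injectivity. A sketch with NumPy, on the same kind of illustrative matrices:

```python
import numpy as np

def is_surjective(A):
    # T(x) = Ax is surjective iff A has full row rank (Im = R^m)
    return np.linalg.matrix_rank(A) == A.shape[0]

wide = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])  # rank 2 = number of rows: onto R^2
tall = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])       # image is a plane inside R^3: not onto

print(is_surjective(wide))  # True
print(is_surjective(tall))  # False
```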

Bijectivity and Isomorphisms

A linear transformation that is both injective and surjective is bijective. A bijective linear transformation is called an isomorphism — it establishes that the domain and codomain are structurally identical as vector spaces.

For a map $T: V \to W$ between spaces of equal dimension ($\dim(V) = \dim(W) = n$), the three conditions collapse: injective $\iff$ surjective $\iff$ bijective. Checking any one of the three establishes the other two. This is because the rank-nullity theorem forces $\dim(\text{Im}(T)) + \dim(\ker(T)) = n$, and $\dim(\text{Im}(T)) \leq n = \dim(W)$. If the kernel is trivial (injective), the image has dimension $n$ and must equal all of $W$ (surjective). If the image is all of $W$ (surjective), the kernel must have dimension $0$ (injective).

For matrix transformations between spaces of the same dimension, bijectivity is equivalent to the matrix being square and invertible.
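For square matrices this reduces to a determinant (or invertibility) check. A sketch with NumPy, using illustrative $2 \times 2$ matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # det = 1: invertible, hence bijective
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # det = 0: neither injective nor surjective

print(abs(np.linalg.det(A)) > 1e-10)  # True
print(abs(np.linalg.det(B)) > 1e-10)  # False

# Bijectivity in action: solving Ay = Ax recovers x exactly
x = np.array([3.0, -1.0])
print(np.allclose(np.linalg.solve(A, A @ x), x))  # True
```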

The Rank-Nullity Theorem for Maps

For a linear transformation $T: V \to W$ with $V$ finite-dimensional:

$$\dim(\text{Im}(T)) + \dim(\ker(T)) = \dim(V)$$


The domain's dimension splits between what the map preserves and what it destroys. The image captures the dimensions that survive; the kernel captures the dimensions that are annihilated.

For matrix transformations $T(\mathbf{x}) = A\mathbf{x}$, this becomes $\text{rank}(A) + \text{nullity}(A) = n$ — the familiar rank-nullity theorem in concrete language.

The theorem constrains the interplay between injectivity and surjectivity. If $\dim(V) > \dim(W)$, the image can have at most $\dim(W)$ dimensions, forcing the kernel to have at least $\dim(V) - \dim(W)$ dimensions — the map cannot be injective. If $\dim(V) < \dim(W)$, the image cannot fill all of $W$ — the map cannot be surjective.
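The identity can be verified numerically for matrices of several shapes, computing the rank from the singular values and the nullity as the number of remaining right singular vectors. A sketch (random matrices are illustrative):

```python
import numpy as np

def rank_and_nullity(A, tol=1e-10):
    # rank = number of nonzero singular values; the remaining
    # rows of Vt span the null space, so they count the nullity
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return rank, Vt.shape[0] - rank

rng = np.random.default_rng(0)
for m, n in [(3, 5), (5, 3), (4, 4)]:
    rank, nullity = rank_and_nullity(rng.standard_normal((m, n)))
    print(m, n, rank + nullity == n)  # rank + nullity always equals n
```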

Dimension Constraints

The rank-nullity theorem imposes hard limits on what a linear transformation can achieve.

$T: V \to W$ can be injective only if $\dim(V) \leq \dim(W)$. A map from a larger space to a smaller one must collapse some directions — the kernel is forced to be nontrivial.

$T: V \to W$ can be surjective only if $\dim(V) \geq \dim(W)$. A map from a smaller space to a larger one cannot cover all directions — the image is a proper subspace.

$T$ can be bijective only if $\dim(V) = \dim(W)$. This is necessary but not sufficient — even with equal dimensions, the map must still have full rank.

These constraints apply to all linear maps, not just matrix transformations. They are consequences of the rank-nullity theorem and the dimension theory of vector spaces.

Computing the Image and Kernel

For a matrix transformation $T(\mathbf{x}) = A\mathbf{x}$, the image and kernel are computed by row reduction.

The kernel is the null space of $A$: solve $A\mathbf{x} = \mathbf{0}$, reduce to echelon form, and express the solution in parametric vector form. Each free variable contributes one basis vector for $\ker(T)$.

The image is the column space of $A$: row reduce $A$, identify the pivot columns, and take the corresponding columns of the original matrix $A$ as a basis for $\text{Im}(T)$.

Worked Example


For $A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 1 & 3 & 4 \end{pmatrix}$, row reduction gives $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}$. Pivots lie in columns 1 and 2. The image has basis $\{(1, 0, 1), (2, 1, 3)\}$ — two-dimensional. The kernel has one free variable ($x_3 = t$), giving $\ker(T) = \text{Span}\{(-1, -1, 1)\}$ — one-dimensional. Check: $2 + 1 = 3 = n$.
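The worked example can be confirmed with a few NumPy checks:

```python
import numpy as np

# The matrix from the worked example
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

print(np.linalg.matrix_rank(A))                  # 2: the image is two-dimensional
print(np.allclose(A @ [-1.0, -1.0, 1.0], 0))     # True: (-1, -1, 1) lies in the kernel
print(np.allclose(A[:, 2], A[:, 0] + A[:, 1]))   # True: column 3 = column 1 + column 2,
                                                 # so the two pivot columns span the image
```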

The Fundamental Decomposition

The rank-nullity theorem has a structural interpretation that goes beyond dimension counting. The domain $V$ decomposes as a direct sum:

$$V = \ker(T) \oplus (\text{a complement of } \ker(T))$$

The transformation $T$ kills everything in the kernel and maps the complement bijectively onto the image. Every vector $\mathbf{v} \in V$ splits as $\mathbf{v} = \mathbf{v}_k + \mathbf{v}_c$, where $\mathbf{v}_k \in \ker(T)$ and $\mathbf{v}_c$ lies in the complement. Then $T(\mathbf{v}) = T(\mathbf{v}_c)$, and the restriction of $T$ to the complement is a bijection onto $\text{Im}(T)$.

For matrix transformations, the four fundamental subspaces provide the natural complement: the row space of $A$ is the orthogonal complement of the null space in $\mathbb{R}^n$, and $A$ maps the row space bijectively onto the column space. The null-space component is destroyed; the row-space component survives intact.
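The split $\mathbf{v} = \mathbf{v}_k + \mathbf{v}_c$ can be computed by orthogonally projecting onto the row space. A sketch using the matrix from the worked example, with the projector built from the nonzero right singular vectors (the test vector $\mathbf{v}$ is an illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))     # rank
P_row = Vt[:r].T @ Vt[:r]      # orthogonal projector onto the row space

v = np.array([1.0, 2.0, 3.0])
v_row = P_row @ v              # row-space component: survives
v_null = v - v_row             # null-space component: annihilated

print(np.allclose(A @ v_null, 0))     # True: the kernel part is destroyed
print(np.allclose(A @ v, A @ v_row))  # True: only the row-space part matters
```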