

Homogeneous Systems of Equations






Systems with Zero Right-Hand Side

A homogeneous system $A\mathbf{x} = \mathbf{0}$ always has the trivial solution $\mathbf{x} = \mathbf{0}$. The real question is whether nontrivial solutions exist — and when they do, the solution set is a subspace of $\mathbb{R}^n$ whose structure is governed entirely by the rank of the coefficient matrix.



Definition

A homogeneous linear system is one where every equation has zero on the right-hand side:

$$A\mathbf{x} = \mathbf{0}$$


The augmented matrix is $[A \mid \mathbf{0}]$. Since the last column is all zeros, row operations on the augmented matrix never produce a contradictory row $[0 \; \cdots \; 0 \mid d]$ with $d \neq 0$. A homogeneous system is always consistent.

The vector $\mathbf{x} = \mathbf{0}$ satisfies every equation — this is the trivial solution. It always exists. The central question for a homogeneous system is never "does a solution exist?" but "does a nontrivial solution exist?"

When Do Nontrivial Solutions Exist?

Nontrivial solutions to $A\mathbf{x} = \mathbf{0}$ exist if and only if the rank of $A$ is less than $n$, the number of unknowns. When $\text{rank}(A) < n$, at least one free variable appears in the echelon form, and that free variable parametrizes a family of nonzero solutions.

For a square $n \times n$ matrix, nontrivial solutions exist if and only if $\det(A) = 0$. A nonzero determinant means full rank, which means no free variables, which means only the trivial solution.

One case is automatic: if the system has more unknowns than equations ($n > m$), nontrivial solutions always exist. The rank of an $m \times n$ matrix cannot exceed $m$, and when $m < n$, the rank is strictly less than $n$. This guarantees at least $n - m$ free variables, producing an infinite family of nontrivial solutions. Fewer equations than unknowns always leaves room.
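The rank criterion is easy to check numerically. A small sketch (the matrices here are hypothetical examples, not from the text above), using NumPy's rank computation:

```python
import numpy as np

# Hypothetical example matrices illustrating the rank criterion.
A_square_full = np.array([[1.0, 2.0], [3.0, 4.0]])  # det = -2, full rank
A_singular    = np.array([[1.0, 2.0], [2.0, 4.0]])  # det = 0, rank 1
A_wide        = np.array([[1.0, 2.0, 3.0]])         # more unknowns than equations

def has_nontrivial(A):
    """Nontrivial solutions of Ax = 0 exist iff rank(A) < n, the column count."""
    return np.linalg.matrix_rank(A) < A.shape[1]

print(has_nontrivial(A_square_full))  # False: only the trivial solution
print(has_nontrivial(A_singular))     # True: det = 0, a free variable exists
print(has_nontrivial(A_wide))         # True: rank <= 1 < 3
```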

The Solution Set Is the Null Space

The set of all solutions to $A\mathbf{x} = \mathbf{0}$ is the null space of $A$:

$$\text{Null}(A) = \{\mathbf{x} \in \mathbb{R}^n : A\mathbf{x} = \mathbf{0}\}$$


The null space is a subspace of $\mathbb{R}^n$. It contains $\mathbf{0}$, and it is closed under addition and scalar multiplication: if $A\mathbf{u} = \mathbf{0}$ and $A\mathbf{v} = \mathbf{0}$, then $A(\mathbf{u} + \mathbf{v}) = \mathbf{0}$ and $A(c\mathbf{u}) = \mathbf{0}$.

The dimension of the null space is the nullity: $\text{nullity}(A) = n - \text{rank}(A)$. When the nullity is $0$, the null space is $\{\mathbf{0}\}$ and only the trivial solution exists. When the nullity is $k > 0$, the null space is a $k$-dimensional subspace, and the solution set contains infinitely many vectors forming a $k$-dimensional flat through the origin.
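The rank–nullity relation can be confirmed with exact arithmetic. A short sketch on a hypothetical $3 \times 4$ matrix (chosen here for illustration, with one dependent row), using SymPy:

```python
import sympy as sp

# Hypothetical 3x4 matrix whose third row is the sum of the first two,
# so rank(A) = 2 and nullity(A) = n - rank(A) = 4 - 2 = 2.
A = sp.Matrix([[1, 0, 2, 1],
               [0, 1, 1, 1],
               [1, 1, 3, 2]])

print(A.rank())            # rank = 2
print(len(A.nullspace()))  # nullity = 2, one basis vector per free variable
```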

Finding the Null Space

The algorithm is a direct application of Gaussian elimination. Row reduce $A$ to echelon form (reducing just $A$ — the zero augmented column adds nothing). Identify the pivot variables and the free variables. For each free variable, set it to $1$ with all other free variables at $0$, and solve for the pivot variables by back substitution. Each setting produces one basis vector for the null space.

Worked Example


$$A = \begin{pmatrix} 1 & 2 & -1 & 0 & 3 \\ 2 & 4 & 0 & 2 & 8 \\ -1 & -2 & 2 & 1 & 0 \end{pmatrix}$$


Row reduce:

$$\xrightarrow{R_2 - 2R_1,\; R_3 + R_1} \begin{pmatrix} 1 & 2 & -1 & 0 & 3 \\ 0 & 0 & 2 & 2 & 2 \\ 0 & 0 & 1 & 1 & 3 \end{pmatrix} \xrightarrow{R_3 - \frac{1}{2}R_2} \begin{pmatrix} 1 & 2 & -1 & 0 & 3 \\ 0 & 0 & 2 & 2 & 2 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}$$


Pivots in columns $1$, $3$, $5$. Free variables: $x_2 = s$, $x_4 = t$. Row $3$: $2x_5 = 0 \Rightarrow x_5 = 0$. Row $2$: $2x_3 + 2t = 0 \Rightarrow x_3 = -t$. Row $1$: $x_1 + 2s + t + 0 = 0 \Rightarrow x_1 = -2s - t$.

Setting $s = 1, t = 0$: $\mathbf{v}_1 = (-2, 1, 0, 0, 0)$. Setting $s = 0, t = 1$: $\mathbf{v}_2 = (-1, 0, -1, 1, 0)$.

The null space is $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2\}$, a two-dimensional subspace of $\mathbb{R}^5$.
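The hand computation above can be double-checked with SymPy, whose `nullspace` method runs the same row-reduction-and-back-substitution algorithm in exact arithmetic:

```python
import sympy as sp

# The coefficient matrix from the worked example.
A = sp.Matrix([[1, 2, -1, 0, 3],
               [2, 4, 0, 2, 8],
               [-1, -2, 2, 1, 0]])

basis = A.nullspace()        # one basis vector per free variable
print(len(basis))            # 2, matching nullity = 5 - rank = 5 - 3

for v in basis:
    print(v.T)               # basis vectors, as rows
    assert (A * v).is_zero_matrix  # each one genuinely solves Ax = 0
```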

Parametric Vector Form

The general solution to $A\mathbf{x} = \mathbf{0}$ is a linear combination of the null-space basis vectors:

$$\mathbf{x} = t_1\mathbf{v}_1 + t_2\mathbf{v}_2 + \cdots + t_k\mathbf{v}_k$$


where $\mathbf{v}_1, \dots, \mathbf{v}_k$ are the basis vectors found by the algorithm above and $t_1, \dots, t_k$ are free parameters ranging over all real numbers. The number of parameters $k = n - \text{rank}(A)$ is the nullity.

When $k = 0$, the only solution is $\mathbf{x} = \mathbf{0}$. When $k = 1$, the solutions form a line through the origin in $\mathbb{R}^n$. When $k = 2$, a plane through the origin. In general, the solution set is a $k$-dimensional subspace passing through the origin.

There is no particular solution $\mathbf{x}_p$ to add because the right-hand side is $\mathbf{0}$ — the zero vector is itself the particular solution. The entire solution set is the null space, unshifted.

The Superposition Principle

If $\mathbf{x}_1$ and $\mathbf{x}_2$ are solutions to $A\mathbf{x} = \mathbf{0}$, then any linear combination $c_1\mathbf{x}_1 + c_2\mathbf{x}_2$ is also a solution:

$$A(c_1\mathbf{x}_1 + c_2\mathbf{x}_2) = c_1 A\mathbf{x}_1 + c_2 A\mathbf{x}_2 = c_1\mathbf{0} + c_2\mathbf{0} = \mathbf{0}$$


This is precisely the statement that the solution set is a subspace — it is closed under addition and scalar multiplication. The superposition principle is the reason the general solution is a linear combination of basis vectors, and it is the reason the null space has the clean structure of a vector space rather than an arbitrary collection of points.

Superposition holds only for homogeneous systems. For a non-homogeneous system $A\mathbf{x} = \mathbf{b}$ with $\mathbf{b} \neq \mathbf{0}$, the sum of two solutions is generally not a solution: $A(\mathbf{x}_1 + \mathbf{x}_2) = \mathbf{b} + \mathbf{b} = 2\mathbf{b} \neq \mathbf{b}$.
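Superposition is easy to verify numerically. A quick check on the matrix and basis vectors from the worked example (the combination coefficients below are arbitrary):

```python
import numpy as np

# The example matrix and its null-space basis vectors.
A = np.array([[1, 2, -1, 0, 3],
              [2, 4, 0, 2, 8],
              [-1, -2, 2, 1, 0]], dtype=float)
v1 = np.array([-2, 1, 0, 0, 0], dtype=float)
v2 = np.array([-1, 0, -1, 1, 0], dtype=float)

combo = 3.0 * v1 - 5.0 * v2   # an arbitrary linear combination of solutions
print(A @ combo)              # the zero vector: superposition holds
```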

Homogeneous vs. Non-Homogeneous

The homogeneous system $A\mathbf{x} = \mathbf{0}$ and the non-homogeneous system $A\mathbf{x} = \mathbf{b}$ are deeply connected. If $\mathbf{x}_p$ is any particular solution to $A\mathbf{x} = \mathbf{b}$, then every solution has the form

$$\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$$


where $\mathbf{x}_h \in \text{Null}(A)$ is a solution to the homogeneous system. The particular solution accounts for $\mathbf{b}$; the null-space component accounts for the freedom.

This decomposition has two immediate consequences. If the null space is trivial ($\text{nullity} = 0$), the non-homogeneous system has at most one solution — either $\mathbf{x}_p$ alone or nothing. If the null space is nontrivial ($\text{nullity} > 0$), then either $A\mathbf{x} = \mathbf{b}$ has no solution or it has infinitely many — there is no middle ground.

The solution set of $A\mathbf{x} = \mathbf{b}$ is therefore a translated copy of the null space: the null space shifted by $\mathbf{x}_p$. The homogeneous system determines the shape and dimension of the solution set; the particular solution determines its position.
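The decomposition $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$ can be demonstrated on the example matrix. The right-hand side below is a hypothetical choice, constructed so that the first standard basis vector is a particular solution:

```python
import sympy as sp

A = sp.Matrix([[1, 2, -1, 0, 3],
               [2, 4, 0, 2, 8],
               [-1, -2, 2, 1, 0]])

# b is chosen as A times (1,0,0,0,0)^T, so x_p = e_1 is a particular solution.
x_p = sp.Matrix([1, 0, 0, 0, 0])
b = A * x_p

# Shifting x_p by any null-space vector yields another solution of Ax = b.
for v in A.nullspace():
    assert A * (x_p + v) == b

print("every x_p + x_h solves Ax = b")
```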

Homogeneous Systems and Linear Independence

Testing whether vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are linearly independent is equivalent to checking whether the homogeneous system $A\mathbf{c} = \mathbf{0}$ has only the trivial solution, where $A = [\mathbf{v}_1 \; \mathbf{v}_2 \; \cdots \; \mathbf{v}_k]$.

The equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$ is literally the system $A\mathbf{c} = \mathbf{0}$. If the null space of $A$ is trivial, the only solution is $\mathbf{c} = \mathbf{0}$ and the vectors are independent. If the null space is nontrivial, some nonzero $\mathbf{c}$ satisfies the equation, providing an explicit dependence relation — the entries of $\mathbf{c}$ are the coefficients that express one vector as a combination of the others.

This is the computational link between homogeneous systems and independence. Row reducing AA and checking for free variables is the standard algorithm for deciding independence, and the null-space basis vectors encode the dependence relations when they exist.
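The independence test reduces to a rank check. A sketch on three hypothetical vectors, one of which is deliberately a combination of the other two:

```python
import numpy as np

# Three hypothetical vectors; v3 = 2*v1 + v2 is a built-in dependence.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, 1.0, 5.0])

A = np.column_stack([v1, v2, v3])    # vectors become the columns of A

# Independent iff Ac = 0 has only the trivial solution, i.e. rank(A) = k.
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)                   # False: a nonzero c exists
```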

The Eigenvalue Connection

The eigenvalue equation $A\mathbf{x} = \lambda\mathbf{x}$ can be rewritten as

$$(A - \lambda I)\mathbf{x} = \mathbf{0}$$


This is a homogeneous system with coefficient matrix $A - \lambda I$. Eigenvectors are precisely the nontrivial solutions. They exist when and only when $A - \lambda I$ is singular — that is, when $\det(A - \lambda I) = 0$.

The values of $\lambda$ satisfying this determinant condition are the eigenvalues. For each eigenvalue $\lambda$, the set of all solutions to $(A - \lambda I)\mathbf{x} = \mathbf{0}$ is the eigenspace — the null space of $A - \lambda I$. The dimension of this eigenspace is the nullity of $A - \lambda I$, which equals $n - \text{rank}(A - \lambda I)$.

This rewriting connects homogeneous systems directly to spectral theory. Every eigenvalue problem is, at its core, a question about when a particular homogeneous system has nontrivial solutions. The machinery of row reduction, null spaces, and rank that governs homogeneous systems is the same machinery that computes eigenvectors and eigenspaces.
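The connection can be made concrete: computing an eigenspace is nothing more than computing the null space of $A - \lambda I$. A sketch on a small hypothetical symmetric matrix:

```python
import sympy as sp

# Hypothetical 2x2 matrix with eigenvalues 1 and 3.
A = sp.Matrix([[2, 1],
               [1, 2]])

for lam in A.eigenvals():                    # eigenvalues 1 and 3
    E = A - lam * sp.eye(2)                  # singular: det(A - lam*I) = 0
    assert E.det() == 0
    for v in E.nullspace():                  # eigenspace = Null(A - lam*I)
        assert A * v == lam * v              # each null-space vector is an eigenvector

print("eigenspaces recovered as null spaces")
```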