Linear Independence






When No Vector Is Redundant

A set of vectors is linearly independent if none of them can be built from the others. This is the formal way of saying that every vector in the set contributes something genuinely new — remove any one of them and the span shrinks. Independence is one half of what makes a basis: the half that eliminates waste.



Definition

A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ in a vector space $V$ is linearly independent if the equation

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$


has only the trivial solution $c_1 = c_2 = \cdots = c_k = 0$. If a nontrivial solution exists — any assignment of scalars, not all zero, that produces $\mathbf{0}$ — the set is linearly dependent.

Dependence means at least one vector in the set can be expressed as a linear combination of the others. If $c_j \neq 0$ in a nontrivial relation, then dividing by $c_j$ isolates $\mathbf{v}_j$:

$$\mathbf{v}_j = -\frac{c_1}{c_j}\mathbf{v}_1 - \cdots - \frac{c_{j-1}}{c_j}\mathbf{v}_{j-1} - \frac{c_{j+1}}{c_j}\mathbf{v}_{j+1} - \cdots - \frac{c_k}{c_j}\mathbf{v}_k$$


A single nonzero vector is always independent — the equation $c\mathbf{v} = \mathbf{0}$ with $\mathbf{v} \neq \mathbf{0}$ forces $c = 0$. The zero vector, by contrast, is always dependent: $1 \cdot \mathbf{0} = \mathbf{0}$ is a nontrivial relation. Any set containing the zero vector is therefore dependent.

Geometric Interpretation in Rⁿ

In $\mathbb{R}^2$ and $\mathbb{R}^3$, independence has clean geometric meaning.

Two vectors in $\mathbb{R}^2$ are dependent if and only if they are parallel — one is a scalar multiple of the other. They point along the same line through the origin, so neither adds a direction that the other does not already cover. Two non-parallel vectors are independent and span all of $\mathbb{R}^2$.

Three vectors in $\mathbb{R}^3$ are dependent if and only if they are coplanar — all three lie in a single plane through the origin. At least one of them can then be written as a combination of the other two, so it provides no new reach. Three non-coplanar vectors are independent and span all of $\mathbb{R}^3$.

In $\mathbb{R}^n$, at most $n$ vectors can be independent. A set of $n + 1$ or more vectors in $\mathbb{R}^n$ is automatically dependent, regardless of what the vectors are. This is because $\mathbb{R}^n$ has dimension $n$, and no independent set can exceed the dimension.

Testing Independence: The Homogeneous System

For vectors in $\mathbb{R}^m$, independence can be tested by row reduction. Arrange $\mathbf{v}_1, \dots, \mathbf{v}_k$ as columns of an $m \times k$ matrix $A$. The independence equation $c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$ is equivalent to the homogeneous system $A\mathbf{c} = \mathbf{0}$.

Row reduce $A$. The vectors are independent if and only if every column contains a pivot — that is, there are no free variables. If any column lacks a pivot, the corresponding variable is free, a nontrivial solution exists, and the set is dependent.

Worked Example: Independent Set


Test whether $\mathbf{v}_1 = (1, 2, -1)$, $\mathbf{v}_2 = (0, 1, 3)$, $\mathbf{v}_3 = (2, 0, 1)$ are independent. Form the matrix and reduce:

$$A = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 1 & 0 \\ -1 & 3 & 1 \end{pmatrix} \xrightarrow{R_2 - 2R_1,\; R_3 + R_1} \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & -4 \\ 0 & 3 & 3 \end{pmatrix} \xrightarrow{R_3 - 3R_2} \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 15 \end{pmatrix}$$


Three pivots in three columns — no free variables. The set is independent.
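The pivot count equals the rank of $A$, so the conclusion can be double-checked numerically. A minimal sketch using NumPy (the library choice is an illustration, not part of the article):

```python
import numpy as np

# Columns are v1, v2, v3 from the worked example.
A = np.array([[1, 0, 2],
              [2, 1, 0],
              [-1, 3, 1]])

# Independent iff rank equals the number of columns (every column has a pivot).
rank = np.linalg.matrix_rank(A)
print(rank == A.shape[1])  # → True
```

Note that `matrix_rank` uses a floating-point tolerance internally, so for exact hand verification the row reduction above remains the authoritative check.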

Worked Example: Dependent Set


Test $\mathbf{v}_1 = (1, 0, 2)$, $\mathbf{v}_2 = (3, 1, 4)$, $\mathbf{v}_3 = (5, 2, 6)$, $\mathbf{v}_4 = (0, 1, -2)$ in $\mathbb{R}^3$. The matrix is $3 \times 4$ — four columns but only three rows, so at most three pivots. At least one column must be free. The set is dependent without any computation, simply because four vectors in $\mathbb{R}^3$ cannot be independent.
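The rank bound can be confirmed in code; a short NumPy sketch (my own illustration of the argument above):

```python
import numpy as np

# Columns are v1, v2, v3, v4: four vectors in R^3.
A = np.array([[1, 3, 5, 0],
              [0, 1, 2, 1],
              [2, 4, 6, -2]])

rank = np.linalg.matrix_rank(A)
print(rank < A.shape[1])  # → True: some column lacks a pivot, so the set is dependent
```

The rank can be at most 3 here, but it may be even smaller if additional relations hold among the columns, as they do for this particular set.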

Testing Independence: The Determinant

When the number of vectors equals the dimension of the space, the independence question reduces to a single number. Arrange $n$ vectors in $\mathbb{R}^n$ as columns of an $n \times n$ matrix $A$. The set is independent if and only if

$$\det(A) \neq 0$$


This follows from the invertibility equivalence: $A$ is invertible if and only if its columns are independent, and $A$ is invertible if and only if $\det(A) \neq 0$.

Worked Example


Test whether $(1, 3)$ and $(2, 5)$ are independent in $\mathbb{R}^2$:

$$\det\begin{pmatrix} 1 & 2 \\ 3 & 5 \end{pmatrix} = 5 - 6 = -1 \neq 0$$


The vectors are independent.
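The same determinant is easy to evaluate in code; a NumPy sketch (my addition, not the article's):

```python
import numpy as np

# Columns are (1, 3) and (2, 5).
A = np.array([[1, 2],
              [3, 5]])

d = np.linalg.det(A)
print(d)  # ≈ -1.0: nonzero, so the columns are independent
```

`np.linalg.det` works in floating point, so in practice one compares $|\det(A)|$ against a small tolerance rather than testing exact equality with zero.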

The determinant test applies only to the square case — exactly $n$ vectors in $\mathbb{R}^n$. For fewer than $n$ vectors, or for vectors in an abstract vector space, the row-reduction approach or the definition itself must be used.

Properties of Independent Sets

Several structural facts constrain how independence behaves under set operations.

Every subset of an independent set is independent. If no nontrivial combination of $\{\mathbf{v}_1, \dots, \mathbf{v}_k\}$ gives $\mathbf{0}$, then no nontrivial combination of a subset can either — fewer vectors means fewer coefficients, all of which must still be zero.

Adding a vector $\mathbf{w}$ to an independent set $\{\mathbf{v}_1, \dots, \mathbf{v}_k\}$ preserves independence if and only if $\mathbf{w} \notin \text{Span}\{\mathbf{v}_1, \dots, \mathbf{v}_k\}$. If $\mathbf{w}$ is already in the span, it can be written as a combination of the existing vectors, creating a dependence relation. If it is outside the span, no such relation exists.

In an $n$-dimensional space, any independent set has at most $n$ elements. An independent set with exactly $n$ elements is automatically a basis — the spanning condition comes for free once the count reaches the dimension. An independent set with fewer than $n$ elements can always be extended to a basis by adding more vectors.
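The extension to a basis can be carried out mechanically: append the standard basis vectors as extra columns, row reduce, and keep the pivot columns. A sketch using SymPy (the tooling and the example vector are my choices for illustration):

```python
from sympy import Matrix, eye

# Start from the independent set {v} with v = (1, 2, -1) in R^3.
v = Matrix([1, 2, -1])

# Append e1, e2, e3 as extra columns, then row reduce.
M = v.row_join(eye(3))
_, pivots = M.rref()

# The pivot columns of M form a basis that contains v.
basis = [M.col(j) for j in pivots]
print(pivots)  # column 0 (v itself) is always a pivot, since v is nonzero
```

Because the appended columns span the whole space, row reduction always finds exactly $n$ pivots, and the original independent columns are kept first.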

Removing a vector from a dependent set may or may not restore independence. It depends on which vector is removed and which vectors participate in the dependence relation.

Dependence Relations

When a set is dependent, a nontrivial solution of $c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$ is called a dependence relation. It identifies which vectors participate in the redundancy: those with nonzero coefficients.

A common misconception is that dependence means every vector in the set is a combination of the others. This is false. Consider $\{(1, 0), (2, 0), (0, 1)\}$ in $\mathbb{R}^2$. The set is dependent because $(2, 0) = 2(1, 0)$, but $(0, 1)$ is not a combination of the other two — every combination of $(1, 0)$ and $(2, 0)$ lies on the $x$-axis.

The dependence relation $2(1, 0) + (-1)(2, 0) + 0 \cdot (0, 1) = (0, 0)$ has a zero coefficient on $(0, 1)$, so $(0, 1)$ does not participate in the redundancy. Only a vector with a nonzero coefficient in some dependence relation can be written in terms of the others, and only such a vector can be removed without shrinking the span. Here either $(1, 0)$ or $(2, 0)$ may be removed; removing $(0, 1)$ would collapse the span to a line.

When the null space of the column matrix has dimension greater than $1$, there are multiple linearly independent dependence relations, and different vectors can be expressed in terms of different subsets. The pivot/free column structure from row reduction clarifies which vectors are redundant.
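Null-space vectors of the column matrix are exactly the dependence relations. A SymPy sketch (my own illustration): take the three vectors $(1, 0)$, $(0, 1)$, $(2, 3)$ in $\mathbb{R}^2$.

```python
from sympy import Matrix

# Columns are (1, 0), (0, 1), (2, 3).
A = Matrix([[1, 0, 2],
            [0, 1, 3]])

# Each null-space basis vector gives the coefficients of a dependence relation.
ns = A.nullspace()
print(ns[0].T)  # (c1, c2, c3) with c1*v1 + c2*v2 + c3*v3 = 0
```

SymPy sets each free variable to $1$ in turn, so here the single basis vector $(-2, -3, 1)$ encodes the relation $-2\,(1,0) - 3\,(0,1) + (2,3) = (0,0)$.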

The Wronskian Test for Functions

In function spaces, the column-matrix test does not apply directly because the vectors are functions, not finite tuples. The Wronskian provides an alternative.

Given $n$ functions $f_1, \dots, f_n$, each differentiable at least $n - 1$ times, the Wronskian is the determinant of the matrix whose rows are successive derivatives:

$$W(f_1, \dots, f_n)(x) = \det\begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}$$


If $W(f_1, \dots, f_n)(x_0) \neq 0$ at any single point $x_0$, the functions are linearly independent.

The converse requires caution. A Wronskian that vanishes identically does not always imply dependence unless the functions are known to be solutions of a single linear ordinary differential equation with continuous coefficients. Without that structural guarantee, counterexamples exist where the Wronskian is zero everywhere yet the functions are independent — the classic pair is $f_1(x) = x^2$ and $f_2(x) = x\lvert x\rvert$ on $\mathbb{R}$.
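As a quick check of the test, the Wronskian of $\sin x$ and $\cos x$ is constant and nonzero, so they are independent. A SymPy sketch (my addition, building the Wronskian matrix directly from the definition):

```python
from sympy import symbols, sin, cos, Matrix, simplify

x = symbols('x')
f, g = sin(x), cos(x)

# Wronskian matrix: first row the functions, second row their derivatives.
W = Matrix([[f, g],
            [f.diff(x), g.diff(x)]])

w = simplify(W.det())
print(w)  # simplifies to -1: nonzero at every point, so sin and cos are independent
```

The raw determinant is $-\sin^2 x - \cos^2 x$; the Pythagorean identity collapses it to $-1$, which is why `simplify` is needed before reading off the result.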

Independence in Abstract Vector Spaces

The definition of independence — the only combination giving $\mathbf{0}$ is the trivial one — applies in any vector space, not just $\mathbb{R}^n$.

In the polynomial space $\mathcal{P}_n$, the set $\{1, x, x^2, \dots, x^n\}$ is independent. The equation $c_0 + c_1 x + c_2 x^2 + \cdots + c_n x^n = 0$ for all $x$ forces every coefficient to be zero, because a nonzero polynomial of degree at most $n$ can have at most $n$ roots, but this equation must hold for every real number. Since there are $n + 1$ vectors and $\dim(\mathcal{P}_n) = n + 1$, this independent set is a basis.

In the matrix space $\mathbb{R}^{2 \times 2}$, the four matrices

$$E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad E_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad E_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$


are independent. The equation $c_1 E_{11} + c_2 E_{12} + c_3 E_{21} + c_4 E_{22} = O$ gives $\begin{pmatrix} c_1 & c_2 \\ c_3 & c_4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$, which forces all four scalars to zero.
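Even in $\mathbb{R}^{2 \times 2}$ a coordinate check is possible: flatten each matrix into a vector of length 4 and apply the rank test to the resulting columns. A NumPy sketch (my addition):

```python
import numpy as np

E11 = np.array([[1, 0], [0, 0]])
E12 = np.array([[0, 1], [0, 0]])
E21 = np.array([[0, 0], [1, 0]])
E22 = np.array([[0, 0], [0, 1]])

# Flatten each 2x2 matrix into a length-4 column; stack into a 4x4 matrix.
A = np.column_stack([E.flatten() for E in (E11, E12, E21, E22)])

print(np.linalg.matrix_rank(A) == 4)  # → True: the four matrices are independent
```

Flattening is a coordinate map with respect to the basis above, so independence of the flattened columns is equivalent to independence of the matrices themselves.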

In abstract spaces where vectors are not columns of numbers, the row-reduction shortcut is unavailable. Verification returns to the definition: write down the combination equal to $\mathbf{0}$ and show that all coefficients must vanish.