Linear Systems Calculator


Solve and visualize systems of linear equations


Gaussian Elimination

Gaussian elimination reduces an augmented matrix $[A|\mathbf{b}]$ to row echelon form (REF) through three elementary row operations: swapping rows, scaling a row by a nonzero constant, and adding a multiple of one row to another.

The process works column by column from left to right. For each column, a pivot element is selected (using partial pivoting for numerical stability), and all entries below the pivot are eliminated. The result is an upper triangular system that is solved via back-substitution, working from the last equation upward.

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots &= b_1 \\
a_{22}x_2 + \cdots &= b_2 \\
&\;\;\ddots
\end{aligned}
$$


Gaussian elimination has $O(n^3)$ time complexity and is the foundation of most direct solvers for linear systems.
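A minimal NumPy sketch of the procedure described above (for illustration only; the calculator's own implementation may differ):

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])  # augmented matrix [A|b]

    # Forward elimination: column by column, left to right.
    for k in range(n):
        # Partial pivoting: swap in the row with the largest |entry| in column k.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            factor = M[i, k] / M[k, k]
            M[i, k:] -= factor * M[k, k:]  # zero out the entry below the pivot

    # Back-substitution, working from the last equation upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:n]) / M[i, i]
    return x

# 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3  →  x=2, y=3, z=-1
print(gaussian_solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
```

The three nested loops over an $n \times n$ matrix are where the $O(n^3)$ operation count comes from.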

Gauss-Jordan Elimination

Gauss-Jordan elimination extends Gaussian elimination by continuing to reduce the matrix to reduced row echelon form (RREF). After creating zeros below each pivot (forward elimination), it also eliminates all entries above each pivot (backward elimination) and scales each pivot to 1.

The result is an identity-like structure on the left side of the augmented matrix, and the solution vector appears directly in the rightmost column — no back-substitution is needed.

Gauss-Jordan is slightly more expensive than standard Gaussian elimination (roughly 50% more operations) but produces a cleaner final form. It is also the method used to compute matrix inverses by applying the process to $[A|I]$.
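The forward-and-backward elimination can be sketched as follows (an illustrative NumPy version, not the calculator's actual code):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce [A|b] to RREF; the solution sits in the last column."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])

    for k in range(n):
        # Partial pivoting for numerical stability.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]          # scale the pivot to 1
        for i in range(n):       # eliminate entries above AND below the pivot
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, n]               # read the solution directly; no back-substitution

print(gauss_jordan([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
```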

Cramer's Rule

Cramer's Rule solves a square system $A\mathbf{x} = \mathbf{b}$ by expressing each unknown as a ratio of determinants:

$$
x_i = \frac{\det(A_i)}{\det(A)}
$$


where $A_i$ is the matrix $A$ with its $i$-th column replaced by $\mathbf{b}$. The method requires $\det(A) \neq 0$, meaning the system must have a unique solution.

While Cramer's Rule is elegant and useful for theoretical analysis and small systems (2×2, 3×3), it becomes computationally impractical for large systems due to the $O(n!)$ complexity of naive determinant expansion. For practical computation, Gaussian elimination is preferred.
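The determinant-ratio formula translates directly into code. A small NumPy sketch (using `np.linalg.det` rather than naive cofactor expansion):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's Rule: x_i = det(A_i) / det(A)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: the system has no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / d
    return x

# 2x + y = 3, 5x + 3y = 8  →  x=1, y=1
print(cramer_solve([[2, 1], [5, 3]], [3, 8]))  # → [1. 1.]
```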

Inverse Method

The inverse method solves $A\mathbf{x} = \mathbf{b}$ by computing $\mathbf{x} = A^{-1}\mathbf{b}$ directly. This requires $A$ to be square and non-singular ($\det(A) \neq 0$).

The calculator finds $A^{-1}$ using Gauss-Jordan elimination on the augmented matrix $[A|I]$. Row operations transform the left side into $I$, and the right side becomes $A^{-1}$.

The inverse method is most useful when solving multiple systems with the same coefficient matrix but different right-hand sides, since $A^{-1}$ only needs to be computed once. For a single system, Gaussian elimination is more efficient because computing the full inverse requires more operations than directly solving the system.
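The reuse pattern looks like this in NumPy (here `np.linalg.inv` stands in for the Gauss-Jordan computation of $A^{-1}$):

```python
import numpy as np

# One coefficient matrix, several right-hand sides:
# compute the inverse once, then each solve is a cheap matrix-vector product.
A = np.array([[2.0, 1.0], [5.0, 3.0]])
A_inv = np.linalg.inv(A)        # computed once, e.g. via Gauss-Jordan on [A|I]

b1 = np.array([3.0, 8.0])
b2 = np.array([1.0, 1.0])
print(A_inv @ b1)  # → [1. 1.]
print(A_inv @ b2)  # → [ 2. -3.]
```

In production code, an LU factorization reused across right-hand sides is usually preferred over forming the explicit inverse, for both speed and accuracy.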

Types of Solutions

A system of linear equations has exactly one of three solution types:

Unique solution -- the system has exactly one set of values satisfying all equations. This occurs when the coefficient matrix has full rank (rank equals the number of unknowns). The graph shows lines intersecting at a single point.

Infinitely many solutions -- the system is consistent but underdetermined. This occurs when the rank is less than the number of unknowns and no contradictions exist. Solutions form a line, plane, or higher-dimensional subspace.

No solution -- the system is inconsistent. This occurs when row reduction produces a row of the form $[0 \; 0 \; \cdots \; 0 \; | \; c]$ where $c \neq 0$. Geometrically, this means parallel lines or planes that never intersect.
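The three cases above can be distinguished by comparing the rank of $A$, the rank of $[A|\mathbf{b}]$, and the number of unknowns (the Rouché–Capelli criterion). A small NumPy sketch:

```python
import numpy as np

def classify(A, b):
    """Classify a linear system by comparing rank(A), rank([A|b]), and n."""
    A = np.atleast_2d(np.array(A, dtype=float))
    b = np.array(b, dtype=float)
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b.reshape(-1, 1)]))
    n = A.shape[1]
    if rA < rAb:
        return "no solution"                 # inconsistent row [0 ... 0 | c]
    if rA == n:
        return "unique solution"             # full rank: one intersection point
    return "infinitely many solutions"       # consistent but underdetermined

print(classify([[1, 1], [2, 2]], [2, 4]))    # same line twice
print(classify([[1, 1], [2, 2]], [2, 5]))    # parallel lines
print(classify([[1, 1], [1, -1]], [2, 0]))   # two crossing lines
```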

Augmented Matrix Representation

The augmented matrix $[A|\mathbf{b}]$ combines the coefficient matrix and the constants vector into a single matrix for efficient manipulation. For a system with $m$ equations and $n$ unknowns:

$$
\left[\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{array}\right]
$$


Row operations on the augmented matrix correspond exactly to valid algebraic manipulations of the equations. The calculator supports both equation input (showing the algebraic form) and matrix input (showing the augmented matrix directly).
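The correspondence between row operations and algebraic manipulation can be checked numerically: applying a row operation to $[A|\mathbf{b}]$ leaves the solution unchanged. A small illustrative example:

```python
import numpy as np

# Build the augmented matrix [A|b] for a 2x2 system and apply one
# row operation; the solution set is unchanged.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])
M = np.hstack([A, b.reshape(-1, 1)])

M2 = M.copy()
M2[1] -= 3 * M2[0]   # R2 <- R2 - 3*R1: zeroes the entry below the first pivot

x1 = np.linalg.solve(M[:, :2], M[:, 2])    # solve the original system
x2 = np.linalg.solve(M2[:, :2], M2[:, 2])  # solve the row-reduced system
print(np.allclose(x1, x2))  # → True
```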

Row Echelon Form vs Reduced Row Echelon Form

Row echelon form (REF) is the result of Gaussian elimination. Requirements: all zero rows are at the bottom, each leading entry (pivot) is to the right of the one above it, and all entries below each pivot are zero. REF requires back-substitution to find the solution.

Reduced row echelon form (RREF) is the result of Gauss-Jordan elimination. In addition to REF requirements: every pivot equals 1, and every pivot is the only nonzero entry in its column. The solution can be read directly from the final matrix.

RREF is unique for any given matrix, while REF is not (different pivot choices produce different REF forms). Both forms preserve the solution set of the original system.

Related Tools and Concepts

This calculator solves linear systems using the four methods above, with step-by-step breakdowns and a 2D graph for two-variable systems. For matrix-specific operations like determinants, inverses, LU decomposition, and Kronecker products, use the Matrix Operations Calculator.

For vector-level computations including dot products, cross products, projections, Gram-Schmidt orthogonalization, and linear independence checks, use the Vector Operations Calculator. Related topics include eigenvalues, rank, null space, and least squares solutions.