

Matrices






Rectangular Arrays of Numbers

A matrix is one of the most versatile objects in mathematics. It encodes systems of equations, represents linear transformations, stores data in structured form, and serves as the computational backbone of nearly every topic in linear algebra. Understanding what matrices are, how to read them, and how their parts relate to one another is the starting point for everything that follows.



What a Matrix Is

A matrix is a rectangular array of numbers arranged in rows and columns. The standard notation uses a capital letter for the matrix and a lowercase letter with two subscripts for its entries: the matrix A has entry a_{ij} in row i and column j. The shorthand A = (a_{ij}) means "the matrix whose (i, j) entry is a_{ij}."

In full generality, an m \times n matrix looks like

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}


The entries can be real numbers, complex numbers, or elements of any algebraic field. Throughout this site, entries are real unless explicitly stated otherwise.

Dimensions, Rows, and Columns

The size of a matrix is described by two numbers: m rows and n columns, written m \times n. The notation A \in \mathbb{R}^{m \times n} states that A is an m \times n matrix with real entries. The total number of entries is m \cdot n. Order matters: a 3 \times 5 matrix and a 5 \times 3 matrix have different shapes and are never equal, regardless of their entries.

Row i of A is the horizontal slice (a_{i1}, a_{i2}, \dots, a_{in}), a 1 \times n vector. Column j is the vertical slice (a_{1j}, a_{2j}, \dots, a_{mj})^T, an m \times 1 vector. The main diagonal consists of the entries where the row index equals the column index: a_{11}, a_{22}, \dots, a_{kk} with k = \min(m, n). The diagonal is defined for any matrix, not just square ones, though it is most prominent in the square case.
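These slices can be sketched in NumPy (the matrix values here are arbitrary illustrations; note that NumPy indexes from zero, so row i corresponds to index i - 1):

```python
import numpy as np

# A hypothetical 2 x 3 matrix.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

row_1 = A[0, :]    # row i = 1: the 1 x n slice [1, 2, 3]
col_2 = A[:, 1]    # column j = 2: the m x 1 slice [2, 5]
diag = np.diag(A)  # main diagonal, length min(m, n) = 2: [1, 5]

print(A.shape)  # (2, 3): m rows, n columns
```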

A matrix with m = n is called square, and square matrices occupy a special position. Only square matrices can have a determinant, an inverse, eigenvalues, or a trace. A column vector in \mathbb{R}^n is simply an n \times 1 matrix, a row vector is a 1 \times n matrix, and a scalar is a 1 \times 1 matrix. Matrices unify all of these objects under a single framework.

Matrix Equality and the Zero Matrix

Two matrices A and B are equal if and only if they have the same dimensions and every pair of corresponding entries matches: a_{ij} = b_{ij} for all i and j. A single mismatched entry makes the matrices unequal. If the dimensions differ, the matrices are never equal — a 2 \times 3 matrix cannot equal a 3 \times 2 matrix no matter what numbers they contain.

The zero matrix is the m \times n matrix whose every entry is zero, written O or 0_{m \times n}. It serves as the additive identity: A + O = A for any matrix A of the same size. Strictly speaking, there is a different zero matrix for each pair (m, n), but the same symbol is used for all of them, with the dimensions understood from context.
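A small NumPy sketch of both ideas (the entries are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
C = np.array([[1.0, 2.0],
              [3.0, 9.0]])  # one mismatched entry

O = np.zeros((2, 2))  # the 2 x 2 zero matrix

assert np.array_equal(A, B)      # same shape, every entry matches
assert not np.array_equal(A, C)  # a single differing entry breaks equality
assert np.array_equal(A + O, A)  # additive identity: A + O = A
```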

Matrices as Collections of Vectors

An m \times n matrix can be viewed as a collection of n column vectors in \mathbb{R}^m, arranged side by side:

A = \begin{pmatrix} | & | & & | \\ \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \\ | & | & & | \end{pmatrix}


Equivalently, it is a stack of m row vectors in \mathbb{R}^n. Both perspectives are useful, and choosing the right one often simplifies a problem considerably. The column view connects the matrix to concepts like span, linear independence, and column space. The row view connects it to systems of equations and row space.

The column perspective also gives a powerful interpretation of the matrix-vector product. If \mathbf{x} = (x_1, x_2, \dots, x_n)^T, then

A\mathbf{x} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n


The product A\mathbf{x} is a linear combination of the columns of A, weighted by the entries of \mathbf{x}. This single observation underlies the theory of linear systems, transformations, and virtually everything else involving matrices.
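This identity can be checked numerically in NumPy (the matrix and vector below are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1, 0],
              [2, 3],
              [4, 5]])
x = np.array([2, -1])

# Direct matrix-vector product.
direct = A @ x

# The same result, built as a linear combination of A's columns
# weighted by the entries of x.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]

assert np.array_equal(direct, combo)
print(direct)  # [2 1 3]
```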

Matrix Arithmetic at a Glance

Matrices support several operations, each with its own rules and dimension requirements.

Addition is entry-by-entry: (A + B)_{ij} = a_{ij} + b_{ij}. Both matrices must have the same dimensions. Scalar multiplication scales every entry: (cA)_{ij} = c \cdot a_{ij}. These two operations together give the set of all m \times n matrices the structure of a vector space of dimension mn.

Matrix multiplication is more involved. For A of size m \times n and B of size n \times p, the product AB has size m \times p, with each entry computed as the dot product of a row of A with a column of B: (AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. The number of columns of A must equal the number of rows of B; otherwise the product is undefined.

The transpose A^T swaps rows and columns: (A^T)_{ij} = a_{ji}. An m \times n matrix becomes n \times m.

One property that distinguishes matrix arithmetic from ordinary arithmetic is that multiplication is not commutative. In general, AB \neq BA, even when both products are defined. This asymmetry has far-reaching consequences throughout linear algebra.
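The operations above can be sketched in NumPy with two arbitrary 2 \times 2 matrices, including a concrete pair for which AB and BA differ:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

# Entry-by-entry addition and scalar multiplication.
assert np.array_equal(A + B, np.array([[2, 1], [1, 2]]))
assert np.array_equal(3 * A, np.array([[3, 3], [0, 3]]))

# The (i, j) entry of AB is the dot product of row i of A with column j of B.
AB = A @ B
assert AB[0, 1] == A[0, :] @ B[:, 1]

# Multiplication is not commutative: here AB and BA differ.
assert not np.array_equal(A @ B, B @ A)

# The transpose swaps rows and columns.
assert np.array_equal(A.T, np.array([[1, 0], [1, 1]]))
```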

Special Matrix Shapes

Certain structural patterns appear so frequently that they have their own names and dedicated theory. A diagonal matrix has nonzero entries only on the main diagonal, making its arithmetic trivially simple — products, powers, and inverses all reduce to operations on the diagonal entries alone. The identity matrix I is the diagonal matrix with every diagonal entry equal to 1, serving as the multiplicative identity: AI = IA = A.

A symmetric matrix satisfies A = A^T, meaning it is unchanged by transposition. Symmetric matrices have the remarkable property that all their eigenvalues are real and their eigenvectors can be chosen to be mutually orthogonal. A triangular matrix has all entries either above or below the diagonal equal to zero, making its determinant and eigenvalues readable directly from the diagonal.

An orthogonal matrix satisfies Q^T Q = I, meaning its columns form an orthonormal set and its transpose is its inverse. Orthogonal matrices preserve lengths and angles, making them the algebraic counterpart of rotations and reflections.
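A rotation matrix is a standard concrete example; the sketch below (with an arbitrary angle) checks both defining properties numerically:

```python
import numpy as np

theta = np.pi / 3  # an arbitrary angle; any 2D rotation matrix is orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q^T Q = I, so the transpose is the inverse.
assert np.allclose(Q.T @ Q, np.eye(2))

# Orthogonal matrices preserve lengths: ||Qx|| = ||x||.
x = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```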

These and several other types — including skew-symmetric, nilpotent, idempotent, and permutation matrices — each carry structural guarantees that simplify computation and deepen understanding.

The Inverse of a Matrix

A square matrix A is called invertible if there exists a matrix A^{-1} satisfying AA^{-1} = A^{-1}A = I. When it exists, the inverse is unique and effectively "undoes" the action of A: if A maps \mathbf{x} to \mathbf{b}, then A^{-1} maps \mathbf{b} back to \mathbf{x}.

Not every square matrix has an inverse. The dividing line is the determinant: A is invertible if and only if \det(A) \neq 0. For a 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the inverse has the explicit formula

A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}


which breaks down precisely when ad - bc = 0. For larger matrices, the inverse can be computed by row reduction or through the adjugate formula, though in practice solving A\mathbf{x} = \mathbf{b} directly is almost always more efficient than computing A^{-1}.
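The 2 \times 2 formula can be verified directly in NumPy (the entries below are an arbitrary invertible example with ad - bc = 10):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
a, b, c, d = A.ravel()

det = a * d - b * c  # ad - bc = 10, nonzero, so A is invertible
A_inv = (1.0 / det) * np.array([[ d, -b],
                                [-c,  a]])

# The formula really does produce the inverse.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))

# It agrees with NumPy's general-purpose routine.
assert np.allclose(A_inv, np.linalg.inv(A))
```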

Rank and Trace

Two scalar quantities extracted from a matrix appear throughout linear algebra.

The rank of an m \times n matrix is the number of linearly independent rows, which always equals the number of linearly independent columns. It measures the "effective dimensionality" of the matrix — how many of its rows or columns carry genuinely new information. The rank satisfies 0 \leq \text{rank}(A) \leq \min(m, n). When \text{rank}(A) = \min(m, n), the matrix is said to have full rank, meaning no row or column is redundant.

The trace is defined only for square matrices: \text{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}, the sum of the diagonal entries. Despite its simplicity, the trace encodes deep information. It equals the sum of the eigenvalues, it is invariant under changes of basis, and it satisfies the cyclic property \text{tr}(AB) = \text{tr}(BA), which makes it a fundamental tool in both theoretical and applied contexts.
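Both quantities, and the cyclic property, can be checked in NumPy (all matrices below are arbitrary illustrations; note the duplicated row that lowers the rank):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],   # twice row 1: carries no new information
              [0, 1, 1]])

assert np.linalg.matrix_rank(A) == 2  # only 2 linearly independent rows
assert np.trace(A) == 1 + 4 + 1       # sum of the diagonal entries

# The cyclic property tr(BC) = tr(CB) holds even for rectangular factors,
# where BC is 3 x 3 but CB is 2 x 2.
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])
C = np.array([[1, 2, 0],
              [0, 1, 3]])
assert np.trace(B @ C) == np.trace(C @ B)
```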

Matrices and Systems of Equations

A system of m linear equations in n unknowns can be written compactly as

A\mathbf{x} = \mathbf{b}


where A is the m \times n coefficient matrix, \mathbf{x} is the n \times 1 vector of unknowns, and \mathbf{b} is the m \times 1 vector of right-hand sides. The augmented matrix [A \mid \mathbf{b}] appends \mathbf{b} as an extra column, creating the compact representation used in Gaussian elimination.

Whether the system has no solution, exactly one solution, or infinitely many solutions depends entirely on the rank of A relative to the rank of the augmented matrix [A \mid \mathbf{b}]. When A is square and invertible, the unique solution is \mathbf{x} = A^{-1}\mathbf{b}. When A is rectangular or singular, the analysis requires the rank and the structure of the null space.
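The invertible case can be sketched in NumPy with an arbitrary 2 \times 2 system; solving directly is generally preferred over forming the inverse explicitly:

```python
import numpy as np

# The system  x + 2y = 5,  3x + 4y = 6  in matrix form A x = b.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

# Solve A x = b directly (no explicit inverse).
x = np.linalg.solve(A, b)

assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.inv(A) @ b)  # same answer, less efficiently
```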

Matrices as Linear Transformations

Every m \times n matrix A defines a function from \mathbb{R}^n to \mathbb{R}^m by the rule \mathbf{x} \mapsto A\mathbf{x}. This function is a linear transformation: it preserves addition (A(\mathbf{x} + \mathbf{y}) = A\mathbf{x} + A\mathbf{y}) and scalar multiplication (A(c\mathbf{x}) = cA\mathbf{x}).

The columns of A reveal exactly what the transformation does to the standard basis. The first column \mathbf{a}_1 is the image of \mathbf{e}_1, the second column \mathbf{a}_2 is the image of \mathbf{e}_2, and so on. Once the images of the basis vectors are known, the image of any vector follows by linearity.

When A is square and invertible, the transformation is bijective — every output has exactly one input, and the inverse transformation is given by A^{-1}. When A is singular, the transformation collapses at least one dimension, mapping \mathbb{R}^n onto a proper subspace of \mathbb{R}^m. The rank of A is the dimension of this image, and the null space captures everything that gets sent to zero.
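The columns-as-images-of-basis-vectors fact is easy to confirm in NumPy (the matrix below is an arbitrary map from \mathbb{R}^3 to \mathbb{R}^2):

```python
import numpy as np

A = np.array([[2, 0, 1],
              [1, 3, 0]])  # a 2 x 3 matrix: maps R^3 to R^2

e1 = np.array([1, 0, 0])
e2 = np.array([0, 1, 0])
e3 = np.array([0, 0, 1])

# Each column of A is the image of the corresponding standard basis vector.
assert np.array_equal(A @ e1, A[:, 0])
assert np.array_equal(A @ e2, A[:, 1])
assert np.array_equal(A @ e3, A[:, 2])
```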

This perspective transforms matrices from static tables of numbers into active geometric objects that rotate, stretch, compress, reflect, and project.