Operations on Matrices






Manipulating Matrices

Matrices support a family of operations — addition, scalar multiplication, matrix multiplication, transposition, and exponentiation — each with its own rules and dimension requirements. Matrix multiplication stands apart from the rest: it is not commutative, it demands compatible dimensions, and it admits several geometric and algebraic interpretations that make it one of the richest operations in all of mathematics.



Matrix Addition

Two matrices of the same size can be added entry by entry. If A and B are both m \times n, their sum is the m \times n matrix with entries

(A + B)_{ij} = a_{ij} + b_{ij}


For example,

\begin{pmatrix} 1 & 4 \\ -2 & 3 \\ 0 & 5 \end{pmatrix} + \begin{pmatrix} 3 & -1 \\ 6 & 0 \\ 2 & -4 \end{pmatrix} = \begin{pmatrix} 4 & 3 \\ 4 & 3 \\ 2 & 1 \end{pmatrix}


If the dimensions do not match, the sum is undefined — there is no way to add a 2 \times 3 matrix to a 3 \times 2 matrix.

Addition is commutative (A + B = B + A) and associative ((A + B) + C = A + (B + C)). The zero matrix O of the same size serves as the additive identity (A + O = A), and the additive inverse of A is -A = (-a_{ij}), so A + (-A) = O.
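These rules are easy to verify numerically. Below is a minimal sketch in Python, assuming NumPy is available; the arrays reuse the example above. (One caveat: NumPy broadcasts some mismatched shapes rather than rejecting them, so strict entrywise addition is only guaranteed when the shapes are identical.)

import numpy as np

A = np.array([[1, 4], [-2, 3], [0, 5]])
B = np.array([[3, -1], [6, 0], [2, -4]])

print(A + B)                                     # [[4 3] [4 3] [2 1]], matching the example
print(np.array_equal(A + B, B + A))              # True: addition is commutative
print(np.array_equal(A + np.zeros((3, 2)), A))   # True: O is the additive identity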

Matrix Subtraction

Subtraction is defined as addition of the negative:

A - B = A + (-B)


Entry by entry, (A - B)_{ij} = a_{ij} - b_{ij}. The same dimension requirement applies — both matrices must have identical shapes. There is nothing deeper here than combining addition and negation, but it appears often enough to warrant its own notation.

Scalar Multiplication

Multiplying a matrix by a scalar c scales every entry:

(cA)_{ij} = c \cdot a_{ij}


For example,

-2 \begin{pmatrix} 1 & 3 & -4 \\ 0 & 5 & 2 \end{pmatrix} = \begin{pmatrix} -2 & -6 & 8 \\ 0 & -10 & -4 \end{pmatrix}


Scalar multiplication distributes over matrix addition (c(A + B) = cA + cB), distributes over scalar addition ((c + d)A = cA + dA), associates with itself (c(dA) = (cd)A), and has 1 as its identity (1 \cdot A = A). Multiplying by 0 produces the zero matrix.
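A quick numerical check of these identities, again a Python/NumPy sketch rather than part of the formal development (the second matrix B is an arbitrary same-size example):

import numpy as np

A = np.array([[1, 3, -4], [0, 5, 2]])
B = np.array([[2, 0, 1], [1, -1, 3]])
c, d = -2, 5

print(-2 * A)                                      # matches the example above
print(np.array_equal(c * (A + B), c * A + c * B))  # True: distributes over matrix addition
print(np.array_equal((c + d) * A, c * A + d * A))  # True: distributes over scalar addition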

Linear Combinations of Matrices

Given matrices A_1, A_2, \dots, A_k of the same size and scalars c_1, c_2, \dots, c_k, the expression

c_1 A_1 + c_2 A_2 + \cdots + c_k A_k


is a linear combination of matrices. Addition and scalar multiplication together give the set of all m \times n matrices the structure of a vector space. The dimension of this space is mn — one degree of freedom for each entry. The standard basis consists of the mn matrices that have a single 1 in one position and zeros everywhere else.
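The standard-basis claim can be checked directly: any matrix is the linear combination of the single-entry basis matrices, weighted by its own entries. A short NumPy sketch (the names E_ij and total are illustrative):

import numpy as np

A = np.array([[1, 3, -4], [0, 5, 2]])   # any 2 x 3 matrix
m, n = A.shape

total = np.zeros((m, n))
for i in range(m):
    for j in range(n):
        E_ij = np.zeros((m, n))
        E_ij[i, j] = 1                  # standard basis matrix: single 1 at (i, j)
        total += A[i, j] * E_ij         # weight by the corresponding entry of A

print(np.array_equal(total, A))         # True: A = sum of a_ij * E_ij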

Matrix Multiplication — Definition

For A of size m \times n and B of size n \times p, the product AB is an m \times p matrix whose (i,j) entry is the dot product of row i of A with column j of B:

(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}


The number of columns of A must equal the number of rows of B. If this compatibility condition fails, the product is undefined.
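The summation formula translates directly into a triple loop. Here is a sketch in plain Python (the function name matmul is our own; production code would call an optimized library routine instead):

def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B, both lists of lists."""
    m, n, p = len(A), len(B), len(B[0])
    if len(A[0]) != n:
        raise ValueError("columns of A must equal rows of B")
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):          # (AB)_ij = sum over k of a_ik * b_kj
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 0, 3], [2, -1, 4]], [[5, 1], [2, -3], [0, 6]]))
# [[5, 19], [8, 29]]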

Worked Example


\begin{pmatrix} 1 & 0 & 3 \\ 2 & -1 & 4 \end{pmatrix} \begin{pmatrix} 5 & 1 \\ 2 & -3 \\ 0 & 6 \end{pmatrix}


The left matrix is 2 \times 3 and the right is 3 \times 2, so the product is 2 \times 2. Computing each entry:

(1)(5) + (0)(2) + (3)(0) = 5, \quad (1)(1) + (0)(-3) + (3)(6) = 19

(2)(5) + (-1)(2) + (4)(0) = 8, \quad (2)(1) + (-1)(-3) + (4)(6) = 29

AB = \begin{pmatrix} 5 & 19 \\ 8 & 29 \end{pmatrix}

Each entry required n = 3 multiplications and n - 1 = 2 additions. The full product required m \times p = 4 such computations.
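The same product can be confirmed with NumPy's @ operator, which implements exactly this definition (a quick sanity check, assuming NumPy is available):

import numpy as np

A = np.array([[1, 0, 3], [2, -1, 4]])
B = np.array([[5, 1], [2, -3], [0, 6]])

print(A @ B)   # [[ 5 19] [ 8 29]], agreeing with the hand computation
# The naive algorithm above uses m * p * n = 12 multiplications in total.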

Matrix Multiplication — Properties

Matrix multiplication obeys several familiar algebraic rules and violates one that is deeply ingrained from scalar arithmetic.

Associativity holds: (AB)C = A(BC) whenever all products are defined. Distribution holds on both sides: A(B + C) = AB + AC and (A + B)C = AC + BC. Scalars pass through freely: c(AB) = (cA)B = A(cB). The identity matrix satisfies AI = IA = A whenever the dimensions are compatible.

Commutativity, however, fails. In general, AB \neq BA, even when both products happen to be defined. For a concrete counterexample, take A = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} and B = \begin{pmatrix} 0 & 0 \\ 3 & 4 \end{pmatrix}. Then AB = \begin{pmatrix} 6 & 8 \\ 0 & 0 \end{pmatrix} while BA = \begin{pmatrix} 0 & 0 \\ 3 & 6 \end{pmatrix}.

Two further properties distinguish matrix multiplication from scalar multiplication. The product of two nonzero matrices can be zero: if A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} and B = \begin{pmatrix} 2 & -4 \\ -1 & 2 \end{pmatrix}, then AB = O even though neither A nor B is zero. Cancellation also fails: AB = AC does not imply B = C unless A is invertible.
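Both failures are easy to reproduce numerically. A short NumPy sketch using the matrices above:

import numpy as np

A = np.array([[1, 2], [0, 0]])
B = np.array([[0, 0], [3, 4]])
print(A @ B)   # [[6 8] [0 0]]
print(B @ A)   # [[0 0] [3 6]]: AB and BA differ

A = np.array([[1, 2], [2, 4]])
B = np.array([[2, -4], [-1, 2]])
print(A @ B)   # [[0 0] [0 0]]: zero product from nonzero factors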

Matrix Multiplication — Column and Row Interpretations

The entry-by-entry formula is the most common way to define matrix multiplication, but two alternative viewpoints often provide sharper insight.

The column interpretation says that column j of AB is obtained by multiplying A times column j of B:

AB = \begin{pmatrix} A\mathbf{b}_1 & A\mathbf{b}_2 & \cdots & A\mathbf{b}_p \end{pmatrix}


Each column of the product is a linear combination of the columns of A, with weights given by the corresponding column of B. This is the view that connects matrix multiplication to linear transformations: the product AB applies the transformation A to each column of B independently.

The row interpretation says that row i of AB equals row i of A times the entire matrix B. Each row of the product is a linear combination of the rows of B, weighted by the entries in the corresponding row of A.

A third perspective writes the product as a sum of rank-one outer products:

AB = \sum_{k=1}^{n} (\text{column } k \text{ of } A)(\text{row } k \text{ of } B)


Each term is an m \times p matrix of rank at most one, and their sum is the full product. This decomposition appears in low-rank approximation theory and in the analysis of the singular value decomposition.
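All three viewpoints compute the same product, which can be confirmed numerically. A NumPy sketch reusing the worked-example matrices:

import numpy as np

A = np.array([[1, 0, 3], [2, -1, 4]])        # 2 x 3
B = np.array([[5, 1], [2, -3], [0, 6]])      # 3 x 2

# Column view: column j of AB is A times column j of B
cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

# Outer-product view: AB = sum over k of (column k of A)(row k of B)
outers = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))

print(np.array_equal(cols, A @ B))     # True
print(np.array_equal(outers, A @ B))   # True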

The Transpose

The transpose of an m \times n matrix A is the n \times m matrix A^T obtained by converting rows into columns:

(A^T)_{ij} = a_{ji}


For example,

A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \quad \Longrightarrow \quad A^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}


The transpose satisfies (A^T)^T = A, distributes over addition ((A + B)^T = A^T + B^T), and commutes with scalar multiplication ((cA)^T = cA^T). The product rule reverses the order:

(AB)^T = B^T A^T


This reversal is a frequent source of errors and is worth memorizing as a pattern: transposing a product is like reading it backward.

A matrix satisfying A = A^T is called symmetric. For any matrix A of any shape, the products A^T A and AA^T are both symmetric — this is immediate from the product rule, since (A^T A)^T = A^T (A^T)^T = A^T A.
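Both the reversal rule and the symmetry of A^T A are quick to confirm. A NumPy sketch with arbitrary small matrices:

import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])         # 2 x 3
B = np.array([[1, 0], [2, -1], [0, 3]])      # 3 x 2

print(np.array_equal((A @ B).T, B.T @ A.T))  # True: the order reverses

G = A.T @ A                                  # 3 x 3
print(np.array_equal(G, G.T))                # True: A^T A is symmetric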

Matrix Powers

For a square matrix A, powers are defined by repeated multiplication:

A^0 = I, \quad A^1 = A, \quad A^k = \underbrace{A \cdot A \cdots A}_{k \text{ factors}}


The usual exponent laws hold: A^j A^k = A^{j+k} and (A^j)^k = A^{jk}. When A is invertible, negative powers are defined as A^{-k} = (A^{-1})^k, extending the exponent laws to all integers.

One rule from scalar arithmetic does not carry over. Since matrix multiplication is not commutative, the identity (AB)^k = A^k B^k is false in general. Expanding (AB)^2 = ABAB, there is no way to rearrange this into A^2 B^2 = AABB without commutativity.

Powers of specific matrix types are particularly well-behaved. For a diagonal matrix D = \text{diag}(d_1, \dots, d_n), the k-th power is D^k = \text{diag}(d_1^k, \dots, d_n^k) — each diagonal entry is raised to the k-th power independently. This simplicity is one of the main reasons diagonalization is so useful: writing A = PDP^{-1} gives A^k = PD^kP^{-1}, reducing an expensive matrix power to a cheap diagonal power.
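A numerical illustration of the diagonalization shortcut, assuming NumPy (np.linalg.eig returns the eigenvalues and the matrix of eigenvectors):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])       # symmetric, hence diagonalizable

direct = np.linalg.matrix_power(A, 5)        # repeated multiplication

w, P = np.linalg.eig(A)                      # A = P D P^{-1} with D = diag(w)
via_diag = P @ np.diag(w**5) @ np.linalg.inv(P)

print(np.allclose(direct, via_diag))         # True: A^5 = P D^5 P^{-1}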

Elementary Matrices

An elementary matrix is the result of performing a single row operation on the identity matrix. There are three types, corresponding to the three row operations: swapping two rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another.

The key property is that left-multiplying a matrix A by an elementary matrix E performs the corresponding row operation on A. If E swaps rows 2 and 3 of the identity, then EA swaps rows 2 and 3 of A. If E scales row 1 of the identity by 5, then EA scales row 1 of A by 5.

Every elementary matrix is invertible, and its inverse is another elementary matrix of the same type: the inverse of a row swap is the same row swap, the inverse of scaling by k is scaling by 1/k, and the inverse of adding c times row i to row j is subtracting c times row i from row j.
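Constructing elementary matrices and applying them is straightforward. The sketch below (NumPy, with 0-based row indices) builds a swap and a row-addition matrix and checks one inverse:

import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])

E_swap = np.eye(3)
E_swap[[0, 1]] = E_swap[[1, 0]]              # swap rows 0 and 1 of the identity
print(E_swap @ A)                            # rows 0 and 1 of A are swapped

E_add = np.eye(3)
E_add[2, 0] = 2.0                            # add 2 * (row 0) to row 2
E_add_inv = np.eye(3)
E_add_inv[2, 0] = -2.0                       # inverse: subtract 2 * (row 0) from row 2
print(np.allclose(E_add_inv @ (E_add @ A), A))  # True: the operation is undone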

This leads to a structural result: every invertible matrix can be written as a product of elementary matrices. Since Gaussian elimination reduces an invertible matrix to the identity through a sequence of row operations, each operation corresponds to an elementary matrix, and reversing the sequence expresses the original matrix as their product. This factorization is more conceptual than computational, but it underpins the theoretical foundations of the determinant and the inverse.

Matrix Decompositions

A matrix decomposition (or factorization) expresses a matrix as a product of simpler matrices with known structure. Decompositions are among the most powerful tools in computational linear algebra, converting hard problems into sequences of easy ones.

The LU decomposition writes A = LU where L is lower triangular and U is upper triangular. It captures the essence of Gaussian elimination in matrix form and makes solving linear systems with multiple right-hand sides efficient: once L and U are known, each system reduces to two triangular solves.
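In code, the factor-once, solve-many pattern looks like the sketch below, assuming SciPy is available (lu_factor performs LU with partial pivoting; lu_solve reuses the stored factors):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4., 3.], [6., 3.]])
lu, piv = lu_factor(A)                     # factor once

for b in (np.array([1., 0.]), np.array([0., 1.])):
    x = lu_solve((lu, piv), b)             # two triangular solves per right-hand side
    print(np.allclose(A @ x, b))           # True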

The QR decomposition writes A = QR where Q is orthogonal and R is upper triangular. It is the foundation of least-squares computation and several eigenvalue algorithms.
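A minimal least-squares sketch using NumPy's QR, with made-up data points for illustration: fitting a line y = c_0 + c_1 t reduces to the triangular system Rx = Q^T y.

import numpy as np

t = np.array([0., 1., 2., 3.])
y = np.array([1., 2.1, 2.9, 4.2])          # illustrative data
A = np.column_stack([np.ones_like(t), t])  # design matrix for y = c0 + c1*t

Q, R = np.linalg.qr(A)                     # A = QR, Q with orthonormal columns
coeffs = np.linalg.solve(R, Q.T @ y)       # solve the triangular system
print(coeffs)                              # intercept and slope of the best fit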

The Cholesky decomposition writes A = LL^T for symmetric positive definite matrices, achieving the work of LU in roughly half the computation by exploiting symmetry.

The eigendecomposition writes A = PDP^{-1} where D is diagonal, placing the eigenvalues on the diagonal and the eigenvectors in the columns of P. It applies only to diagonalizable matrices.

The singular value decomposition writes A = U\Sigma V^T where U and V are orthogonal and \Sigma is diagonal with nonnegative entries. Unlike the eigendecomposition, the SVD exists for every matrix of every shape. It reveals the rank, the fundamental subspaces, and the best low-rank approximation to A, making it one of the most broadly applicable tools in the subject.
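A short SVD sketch with NumPy, showing the rank and a best rank-one approximation (the example matrix has rank 2, since its rows form an arithmetic progression):

import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])

U, s, Vt = np.linalg.svd(A)                # A = U @ diag(s) @ Vt, any shape
print(np.sum(s > 1e-10))                   # numerical rank: 2

A1 = s[0] * np.outer(U[:, 0], Vt[0, :])    # keep only the largest singular value
print(np.linalg.norm(A - A1, 2))           # spectral-norm error equals s[1]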

Each of these decompositions has its own page with full derivations and worked examples.