Matrix Calculator


Professional matrix operations calculator

Matrix Transpose

The transpose of a matrix $A$ swaps its rows and columns. If $A$ is an $m \times n$ matrix, then $A^T$ is $n \times m$, with each entry satisfying:

$$(A^T)_{ij} = A_{ji}$$

Key properties include $(A^T)^T = A$, $(AB)^T = B^T A^T$, and $(A + B)^T = A^T + B^T$. A matrix equal to its own transpose is called symmetric. The transpose is fundamental in computing dot products, defining orthogonal matrices, and constructing the normal equations for least squares problems.
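
The row/column swap can be sketched in a few lines of pure Python (the function name `transpose` is ours, not the calculator's internal API):

```python
def transpose(A):
    """Swap rows and columns: (A^T)[i][j] = A[j][i]."""
    return [list(row) for row in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]              # 2x3
At = transpose(A)            # 3x2
print(At)                    # [[1, 4], [2, 5], [3, 6]]
print(transpose(At) == A)    # (A^T)^T = A -> True
```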

Matrix Determinant

The determinant is a scalar value computed from a square matrix that captures essential properties of the transformation it represents. A matrix is invertible if and only if its determinant is nonzero.

For a $2 \times 2$ matrix:

$$\det(A) = a_{11}a_{22} - a_{12}a_{21}$$

For larger matrices, the determinant is computed via cofactor expansion or row reduction. Geometrically, $|\det(A)|$ measures the factor by which the matrix scales areas or volumes. Key properties: $\det(AB) = \det(A)\det(B)$, $\det(A^T) = \det(A)$, and $\det(cA) = c^n \det(A)$ for an $n \times n$ matrix.
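
Cofactor expansion along the first row translates directly into a short recursive sketch (fine for small matrices; row reduction is faster for large ones):

```python
def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[3, 8], [4, 6]]))   # 3*6 - 8*4 = -14
```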

Matrix Inverse

The inverse of a square matrix $A$, denoted $A^{-1}$, satisfies $AA^{-1} = A^{-1}A = I$. A matrix has an inverse only when its determinant is nonzero.

$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$

The calculator finds the inverse using Gauss-Jordan elimination on the augmented matrix $[A \mid I]$. Row operations transform the left side into $I$, and the right side becomes $A^{-1}$. The inverse is used to solve linear systems ($x = A^{-1}b$), compute matrix equations, and analyze transformations.
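
The Gauss-Jordan procedure on $[A \mid I]$ can be sketched exactly as described, here with `Fraction` for exact arithmetic (a choice of ours; the calculator's actual numerics may differ):

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = len(A)
    # Build [A | I] with exact rational entries
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot (raises StopIteration if A is singular)
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]         # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:      # clear the rest of the column
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                # right half is A^{-1}

A = [[2, 1], [1, 1]]
print(inverse(A) == [[1, -1], [-1, 2]])   # True
```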

Matrix Trace

The trace of a square matrix is the sum of its diagonal elements:

$$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$$

The trace has several important properties. It is linear: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$ and $\operatorname{tr}(cA) = c \cdot \operatorname{tr}(A)$. It is cyclic: $\operatorname{tr}(ABC) = \operatorname{tr}(BCA) = \operatorname{tr}(CAB)$. The trace equals the sum of the eigenvalues of the matrix, making it useful in spectral analysis and matrix diagnostics.
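
The trace and its cyclic property $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ are easy to check numerically (helper names are illustrative):

```python
def trace(A):
    """Sum of the diagonal elements."""
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
print(trace(A))                                     # 1 + 4 = 5
print(trace(matmul(A, B)) == trace(matmul(B, A)))   # cyclic: True
```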

Matrix Rank

The rank of a matrix is the number of linearly independent rows or columns. It equals the dimension of the column space (or row space) and is found by reducing the matrix to row echelon form and counting pivot positions.

$$\operatorname{rank}(A) \leq \min(m, n)$$

A matrix with rank equal to its smaller dimension has full rank. Rank determines whether a linear system $Ax = b$ has a unique solution, infinitely many solutions, or no solution. The rank-nullity theorem states $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$, where $n$ is the number of columns.
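
Counting pivots after row reduction can be sketched as follows (a simple float version with a tolerance; assumptions: partial pivoting, entries of modest magnitude):

```python
def rank(A, tol=1e-10):
    """Rank via reduction to row echelon form, counting pivots."""
    M = [[float(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                                   # next pivot row
    for c in range(cols):
        if r == rows:
            break
        # Largest entry in column c at or below row r (partial pivoting)
        p = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < tol:
            continue                        # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],   # 2 x row 1 -> linearly dependent
     [1, 0, 1]]
print(rank(A))    # 2
```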

LU Decomposition

LU decomposition factors a square matrix into a product of a lower triangular matrix $L$ and an upper triangular matrix $U$, with a permutation matrix $P$ tracking row swaps:

$$PA = LU$$

$L$ has ones on the diagonal with elimination multipliers below. $U$ is the result of Gaussian elimination. This factorization is efficient for solving multiple systems with the same coefficient matrix but different right-hand sides, since forward and back substitution are much cheaper than full elimination. LU decomposition is also used to compute determinants efficiently: $\det(A) = \det(L) \cdot \det(U) \cdot \det(P)$.
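
A minimal Doolittle-style sketch, under the simplifying assumption that no row swaps are needed (so $P = I$); a production routine would pivot:

```python
def lu_decompose(A):
    """LU factorization without pivoting (assumes nonzero pivots, P = I)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[float(A[i][j]) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]     # elimination multiplier, stored in L
            L[i][k] = m
            U[i] = [a - m * b for a, b in zip(U[i], U[k])]
    return L, U

L, U = lu_decompose([[4, 3], [6, 3]])
print(L)   # [[1.0, 0.0], [1.5, 1.0]]
print(U)   # [[4.0, 3.0], [0.0, -1.5]]
```

With $P = I$, the determinant falls out as the product of the diagonal of $U$: here $4 \cdot (-1.5) = -6$.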

Matrix Addition

Matrix addition sums corresponding elements of two matrices with identical dimensions. If $A$ and $B$ are both $m \times n$:

$$(A + B)_{ij} = a_{ij} + b_{ij}$$

Addition is commutative ($A + B = B + A$) and associative ($(A + B) + C = A + (B + C)$). The zero matrix acts as the additive identity. Subtraction works identically, replacing each sum with a difference: $(A - B)_{ij} = a_{ij} - b_{ij}$. Both operations preserve matrix dimensions.
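
Both element-wise operations are one-liners in pure Python (illustrative helper names):

```python
def mat_add(A, B):
    """Element-wise sum; A and B must share dimensions."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    """Element-wise difference."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))                    # [[6, 8], [10, 12]]
print(mat_add(A, B) == mat_add(B, A))   # commutative: True
```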

Matrix Subtraction

Matrix subtraction computes the element-wise difference between two matrices of the same dimensions:

$$(A - B)_{ij} = a_{ij} - b_{ij}$$

Subtraction is equivalent to adding the negation: $A - B = A + (-B)$. Unlike addition, subtraction is not commutative: $A - B \neq B - A$ in general. The result has the same dimensions as both input matrices. Subtraction is used in computing residuals, error matrices, and the commutator operation $[A, B] = AB - BA$.

Matrix Multiplication

Matrix multiplication combines an $m \times n$ matrix $A$ with an $n \times p$ matrix $B$ to produce an $m \times p$ result. The number of columns in $A$ must equal the number of rows in $B$.

$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$

Each entry is the dot product of a row from $A$ with a column from $B$. Multiplication is associative and distributive over addition, but not commutative: $AB \neq BA$ in general. Matrix multiplication models composition of linear transformations and is the core operation in systems of equations, computer graphics, and machine learning.
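
The row-times-column rule maps directly onto a triple loop (written here as nested comprehensions; a hypothetical `matmul`, not the calculator's implementation):

```python
def matmul(A, B):
    """(AB)[i][j] = sum_k A[i][k] * B[k][j]; cols(A) must equal rows(B)."""
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]
print(matmul(B, A))   # [[23, 34], [31, 46]] -- not commutative
```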

Element-wise (Hadamard) Product

The Hadamard product, denoted $A \odot B$, multiplies corresponding elements of two matrices with the same dimensions:

$$(A \odot B)_{ij} = a_{ij} \cdot b_{ij}$$

Unlike standard matrix multiplication, the Hadamard product is commutative ($A \odot B = B \odot A$) and preserves dimensions. It appears in signal processing, neural network computations (gradient masking, attention mechanisms), and statistics. The Schur product theorem states that if $A$ and $B$ are both positive semidefinite, then $A \odot B$ is also positive semidefinite.
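
The contrast with standard multiplication is easiest to see side by side; a minimal sketch:

```python
def hadamard(A, B):
    """Element-wise product of same-shaped matrices."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(hadamard(A, B))                     # [[5, 12], [21, 32]]
print(hadamard(A, B) == hadamard(B, A))   # commutative: True
```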

Kronecker Product

The Kronecker product $A \otimes B$ replaces each element $a_{ij}$ of $A$ with the block $a_{ij} \cdot B$. If $A$ is $m \times n$ and $B$ is $p \times q$, the result is $mp \times nq$.

The Kronecker product is bilinear and associative but not commutative. Key properties include $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$ when dimensions allow, and $\det(A \otimes B) = \det(A)^q \cdot \det(B)^m$ for square matrices ($A$ of size $m \times m$, $B$ of size $q \times q$). It is used in quantum computing (tensor product of state spaces), control theory, and signal processing.
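
The block structure corresponds to simple index arithmetic: entry $(i, j)$ of $A \otimes B$ is $a_{\lfloor i/p \rfloor, \lfloor j/q \rfloor} \cdot b_{i \bmod p,\, j \bmod q}$. A minimal sketch:

```python
def kron(A, B):
    """Kronecker product: each a_ij is replaced by the block a_ij * B."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

A = [[1, 2]]          # 1x2
B = [[0, 3],
     [4, 5]]          # 2x2
print(kron(A, B))     # [[0, 3, 0, 6], [4, 5, 8, 10]]  -- 2x4
```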

Commutator

The commutator of two square matrices is defined as:

$$[A, B] = AB - BA$$

When $[A, B] = 0$ (the zero matrix), $A$ and $B$ commute, meaning their multiplication order does not matter. The commutator is antisymmetric: $[A, B] = -[B, A]$, and satisfies the Jacobi identity: $[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0$. The commutator is fundamental in Lie algebras, quantum mechanics (where it relates to the uncertainty principle), and differential geometry.
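
A small sketch, which also illustrates that $\operatorname{tr}([A, B]) = 0$ always (since $\operatorname{tr}(AB) = \operatorname{tr}(BA)$); `matmul` is repeated so the snippet stands alone:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def commutator(A, B):
    """[A, B] = AB - BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [0, 0]]
C = commutator(A, B)
print(C)                      # [[-3, -3], [0, 3]] -- nonzero: A and B do not commute
print(C[0][0] + C[1][1])      # trace of a commutator is always 0
```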

Anti-commutator

The anti-commutator of two square matrices is defined as:

$$\{A, B\} = AB + BA$$

It is the symmetric counterpart of the commutator and is always symmetric: $\{A, B\} = \{B, A\}$. In quantum mechanics, anti-commutation relations define fermionic operators (particles obeying the Pauli exclusion principle), while commutation relations define bosonic operators. The anti-commutator also appears in Clifford algebras and Jordan algebras.
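
A quick sketch using the Pauli matrices, which anti-commute pairwise ($\{\sigma_x, \sigma_y\} = 0$) while each squares to the identity:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def anticommutator(A, B):
    """{A, B} = AB + BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

sx = [[0, 1], [1, 0]]         # Pauli sigma_x
sy = [[0, -1j], [1j, 0]]      # Pauli sigma_y
print(anticommutator(sx, sy) == [[0, 0], [0, 0]])   # True: they anti-commute
print(anticommutator(sx, sx) == [[2, 0], [0, 2]])   # True: {A, A} = 2A^2
```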

Scalar Multiplication

Scalar multiplication multiplies every element of a matrix by a single number $c$:

$$(cA)_{ij} = c \cdot a_{ij}$$

This scales the entire matrix uniformly. For the determinant, $\det(cA) = c^n \det(A)$, where $n$ is the matrix size. Scalar multiplication is commutative ($cA = Ac$), associative ($c(dA) = (cd)A$), and distributes over matrix addition ($c(A + B) = cA + cB$). Geometrically, it scales the linear transformation represented by the matrix.
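
A short sketch that also verifies the determinant rule $\det(cA) = c^n \det(A)$ for a $2 \times 2$ case (helper names are illustrative):

```python
def scalar_mul(c, A):
    """Multiply every entry by c."""
    return [[c * x for x in row] for row in A]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]                  # det = -2
print(det2(scalar_mul(3, A)))         # c^n * det(A) = 3^2 * (-2) = -18
print(3 ** 2 * det2(A))               # -18
```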

Scalar Addition and Subtraction

Scalar addition adds a constant $c$ to every element of a matrix:

$$(A + c)_{ij} = a_{ij} + c$$

Scalar subtraction works the same way with a minus sign. Note that this is different from adding $cI$ (scalar times identity), which only affects diagonal elements. Scalar addition shifts all entries by the same amount and is used in data normalization, bias adjustments, and preprocessing steps in numerical computation.
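
The distinction between $A + c$ (every entry) and $A + cI$ (diagonal only) is easy to demonstrate (both helpers are ours, for illustration):

```python
def scalar_add(A, c):
    """Add c to every entry: (A + c)[i][j] = A[i][j] + c."""
    return [[x + c for x in row] for row in A]

def add_cI(A, c):
    """Add c only on the diagonal (A + cI) -- a different operation."""
    return [[x + (c if i == j else 0) for j, x in enumerate(row)]
            for i, row in enumerate(A)]

A = [[1, 2], [3, 4]]
print(scalar_add(A, 10))   # [[11, 12], [13, 14]]
print(add_cI(A, 10))       # [[11, 2], [3, 14]]
```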

Matrix Power

Matrix power raises a square matrix to a non-negative integer exponent through repeated multiplication:

$$A^n = \underbrace{A \cdot A \cdots A}_{n \text{ times}}$$

By convention, $A^0 = I$ (the identity matrix). The calculator uses exponentiation by squaring for efficiency, reducing the cost for exponent $n$ from $O(n)$ matrix multiplications to $O(\log n)$. Matrix powers appear in Markov chains (transition probabilities over $n$ steps), solving linear recurrences, graph theory (counting paths of length $n$), and computing the matrix exponential.
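
Exponentiation by squaring walks the bits of the exponent, squaring the base each step. A minimal sketch, using the Fibonacci step matrix as the classic linear-recurrence example (`matmul` repeated so the snippet stands alone):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, n):
    """A^n by exponentiation by squaring: O(log n) multiplications."""
    size = len(A)
    result = [[1 if i == j else 0 for j in range(size)]
              for i in range(size)]          # A^0 = I
    base = [row[:] for row in A]
    while n > 0:
        if n & 1:                            # this bit of n is set
            result = matmul(result, base)
        base = matmul(base, base)            # square for the next bit
        n >>= 1
    return result

F = [[1, 1], [1, 0]]       # Fibonacci step matrix
print(mat_pow(F, 10))      # [[89, 55], [55, 34]] -- entries are Fibonacci numbers
```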

Related Tools and Concepts

This matrix calculator covers single-matrix analysis, two-matrix operations, and scalar operations. For solving systems of linear equations (Gaussian elimination, Gauss-Jordan, Cramer's Rule, and inverse method), use the dedicated Linear Systems Calculator.

Related linear algebra topics include eigenvalues and eigenvectors, singular value decomposition (SVD), QR decomposition, vector operations, and matrix norms. These tools and concepts build on the foundational operations available in this calculator.