

Types of Matrices






Special Forms and Their Properties

Certain matrices have structural patterns — zeros in prescribed positions, symmetry across the diagonal, orthonormal columns — that guarantee specific algebraic and geometric behaviors. Recognizing these patterns often transforms a difficult computation into a straightforward one and determines which theorems apply.



Square Matrices

A matrix with equal numbers of rows and columns — $n$ rows and $n$ columns — is called square, and is said to have order $n$. Square matrices occupy a privileged position in linear algebra because several fundamental concepts are defined exclusively for them.

Only square matrices have a determinant. Only square matrices can be invertible. Only square matrices have eigenvalues and a trace. Powers $A^k$ are defined only when $A$ is square, since the product $A \cdot A$ requires the number of columns to equal the number of rows. Every type discussed on this page is a square matrix with additional structure imposed on top.

The Identity Matrix

The $n \times n$ identity matrix $I_n$ has ones on the main diagonal and zeros elsewhere:

$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$


It is the multiplicative identity: $AI = IA = A$ for any matrix $A$ with compatible dimensions. As a linear transformation, $I$ is the map that sends every vector to itself.

The identity is simultaneously diagonal, symmetric, orthogonal, upper triangular, and lower triangular. Its determinant is $1$, its inverse is itself, every eigenvalue is $1$, its trace equals $n$, and $I^k = I$ for every non-negative integer $k$. The subscript $n$ is dropped when the size is clear from context.
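
These properties are easy to confirm numerically. A short NumPy sketch (the matrix $A$ is an arbitrary illustrative example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
I = np.eye(2)  # the 2x2 identity matrix

# Multiplicative identity: AI = IA = A
assert np.allclose(A @ I, A) and np.allclose(I @ A, A)

# Determinant 1, inverse is itself, trace equals n
assert np.isclose(np.linalg.det(I), 1.0)
assert np.allclose(np.linalg.inv(I), I)
assert np.isclose(np.trace(I), 2.0)
```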

Diagonal Matrices

A diagonal matrix has nonzero entries only on the main diagonal:

$$D = \text{diag}(d_1, d_2, \dots, d_n) = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}$$


Diagonal matrices are the easiest matrices to work with. Their arithmetic reduces to operations on the diagonal entries alone:

$$\text{diag}(d_1, \dots, d_n) \cdot \text{diag}(e_1, \dots, e_n) = \text{diag}(d_1 e_1, \dots, d_n e_n)$$


$$D^k = \text{diag}(d_1^k, \dots, d_n^k)$$


$$D^{-1} = \text{diag}(1/d_1, \dots, 1/d_n)$$


The inverse exists if and only if every diagonal entry is nonzero. The determinant is $\det(D) = d_1 d_2 \cdots d_n$, and the eigenvalues are the diagonal entries themselves. As a transformation, a diagonal matrix scales each coordinate axis independently — stretching along axes where $|d_i| > 1$ and compressing where $|d_i| < 1$.
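
All of these reductions to the diagonal entries can be checked directly; here is a NumPy sketch with illustrative entries:

```python
import numpy as np

d = np.array([2.0, 3.0, 5.0])
e = np.array([1.0, 4.0, 2.0])
D, E = np.diag(d), np.diag(e)

# Product, powers, and inverse all act entrywise on the diagonal
assert np.allclose(D @ E, np.diag(d * e))
assert np.allclose(np.linalg.matrix_power(D, 3), np.diag(d ** 3))
assert np.allclose(np.linalg.inv(D), np.diag(1.0 / d))

# Determinant is the product of the diagonal; eigenvalues are the entries
assert np.isclose(np.linalg.det(D), d.prod())
assert np.allclose(np.sort(np.linalg.eigvals(D)), np.sort(d))
```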

Triangular Matrices

An upper triangular matrix has all entries below the main diagonal equal to zero:

$$U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}$$


A lower triangular matrix has all entries above the main diagonal equal to zero:

$$L = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}$$


Triangular matrices share several convenient properties with diagonal matrices. The determinant is the product of the diagonal entries. The eigenvalues are the diagonal entries. The product of two upper triangular matrices is upper triangular, and the same holds for lower triangular matrices. The inverse of an invertible upper triangular matrix is also upper triangular.

These properties make triangular matrices the natural endpoint of Gaussian elimination. Row reduction converts a general matrix into upper triangular form, and the LU decomposition factors a matrix into lower and upper triangular components, reducing system-solving to two simple back-substitution passes.
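
The back-substitution pass mentioned above is short enough to write out; the following NumPy sketch (with an illustrative system) solves $U\mathbf{x} = \mathbf{b}$ for upper triangular $U$:

```python
import numpy as np

def back_substitute(U, b):
    """Solve U x = b for upper triangular U by back substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract contributions of already-solved unknowns, then divide
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 8.0])
x = back_substitute(U, b)
assert np.allclose(U @ x, b)

# Determinant is the product of the diagonal: 2 * 3 * 4 = 24
assert np.isclose(np.linalg.det(U), 24.0)
```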

Symmetric Matrices

A square matrix is symmetric if it equals its own transpose: $A = A^T$, meaning $a_{ij} = a_{ji}$ for every pair of indices. The matrix is determined by its entries on and above the diagonal — everything below is a mirror image.

Symmetric matrices arise constantly in practice. Covariance matrices, Hessians in optimization, adjacency matrices of undirected graphs, and distance matrices are all symmetric. Any product of the form $A^T A$ or $A A^T$ is symmetric regardless of the shape of $A$, since $(A^T A)^T = A^T (A^T)^T = A^T A$.

The spectral properties of real symmetric matrices are exceptionally clean. Every eigenvalue is real — no complex eigenvalues can appear. Eigenvectors corresponding to distinct eigenvalues are automatically orthogonal. And the spectral theorem guarantees that every real symmetric matrix can be diagonalized by an orthogonal matrix: $A = Q D Q^T$ where $Q$ is orthogonal and $D$ is diagonal. This is a much stronger conclusion than ordinary diagonalizability, which requires only an invertible change-of-basis matrix.

A symmetric matrix is called positive definite if $\mathbf{x}^T A \mathbf{x} > 0$ for every nonzero vector $\mathbf{x}$. Positive definiteness is equivalent to all eigenvalues being strictly positive, and it guarantees the existence of the Cholesky decomposition $A = LL^T$.
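
The spectral theorem and the Cholesky decomposition can both be demonstrated numerically. A NumPy sketch (the construction $B^T B + 4I$ is one illustrative way to obtain a positive definite matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B + 4 * np.eye(4)   # B^T B is symmetric PSD; the shift makes it positive definite

assert np.allclose(A, A.T)

# Spectral theorem: A = Q D Q^T with Q orthogonal, D diagonal, eigenvalues real
w, Q = np.linalg.eigh(A)
assert np.allclose(Q @ np.diag(w) @ Q.T, A)
assert np.allclose(Q.T @ Q, np.eye(4))

# Positive definite: all eigenvalues > 0, so Cholesky succeeds: A = L L^T
assert np.all(w > 0)
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)
```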

Skew-Symmetric Matrices

A square matrix is skew-symmetric if $A^T = -A$, meaning $a_{ij} = -a_{ji}$ for all $i, j$. Setting $i = j$ forces $a_{ii} = -a_{ii}$, so every diagonal entry must be zero.

Every square matrix admits a unique decomposition into a symmetric part and a skew-symmetric part:

$$A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T)$$


The first term is symmetric, the second is skew-symmetric, and this splitting is unique.
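
The splitting is mechanical to compute. A NumPy sketch with an arbitrary illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 7.0, 2.0],
              [3.0, 5.0, 8.0],
              [6.0, 4.0, 9.0]])

S = 0.5 * (A + A.T)   # symmetric part
K = 0.5 * (A - A.T)   # skew-symmetric part

assert np.allclose(S, S.T)                # symmetric
assert np.allclose(K, -K.T)               # skew-symmetric
assert np.allclose(S + K, A)              # the parts recombine to A
assert np.allclose(np.diag(K), 0.0)       # skew part has zero diagonal
```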

The eigenvalues of a real skew-symmetric matrix are either zero or purely imaginary — they come in conjugate pairs $\pm bi$, with real eigenvalues restricted to zero. For matrices of odd order, the determinant is always zero: $\det(A) = \det(A^T) = \det(-A) = (-1)^n \det(A)$, and when $n$ is odd, this forces $\det(A) = 0$. For even order, the determinant can be nonzero.

In $\mathbb{R}^3$, the cross product $\mathbf{a} \times \mathbf{b}$ can be written as $[\mathbf{a}]_\times \mathbf{b}$, where $[\mathbf{a}]_\times$ is the $3 \times 3$ skew-symmetric matrix

$$[\mathbf{a}]_\times = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}$$


This reformulates the cross product as a matrix-vector multiplication.
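
The correspondence is easy to verify against NumPy's own cross product (the vectors below are illustrative):

```python
import numpy as np

def cross_matrix(a):
    """Skew-symmetric matrix [a]_x such that [a]_x b = a x b."""
    return np.array([[    0, -a[2],  a[1]],
                     [ a[2],     0, -a[0]],
                     [-a[1],  a[0],     0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

K = cross_matrix(a)
assert np.allclose(K, -K.T)                 # skew-symmetric, zero diagonal
assert np.allclose(K @ b, np.cross(a, b))   # matches the cross product
```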

Orthogonal Matrices

A square matrix $Q$ is orthogonal if its transpose equals its inverse:

$$Q^T Q = Q Q^T = I \qquad \text{equivalently,} \quad Q^{-1} = Q^T$$


This means the columns of $Q$ form an orthonormal set: each column has unit length, and distinct columns are perpendicular. The same is true of the rows.

The determinant of an orthogonal matrix is $\pm 1$, since $1 = \det(I) = \det(Q^T Q) = \det(Q)^2$. When $\det(Q) = +1$, the matrix is a rotation. When $\det(Q) = -1$, it involves a reflection.

The defining geometric property is that orthogonal matrices preserve lengths: $\|Q\mathbf{x}\| = \|\mathbf{x}\|$ for every vector $\mathbf{x}$. They also preserve dot products ($Q\mathbf{x} \cdot Q\mathbf{y} = \mathbf{x} \cdot \mathbf{y}$) and therefore angles between vectors. A transformation that preserves all distances and angles is called an isometry, and the orthogonal matrices are precisely the linear isometries.

Common examples include rotation matrices in $\mathbb{R}^2$ and $\mathbb{R}^3$, reflection matrices across any line or plane through the origin, and permutation matrices that reorder coordinates. The inverse of an orthogonal matrix is its transpose — making it the cheapest matrix inverse to compute.
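
A $2 \times 2$ rotation matrix makes all of these properties concrete. A NumPy sketch (the angle and test vectors are illustrative):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

# Orthogonality: Q^T Q = I, so the inverse is just the transpose
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(np.linalg.inv(Q), Q.T)
assert np.isclose(np.linalg.det(Q), 1.0)          # rotation: det = +1

# Lengths and dot products (hence angles) are preserved
x, y = np.array([3.0, 4.0]), np.array([-1.0, 2.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
assert np.isclose((Q @ x) @ (Q @ y), x @ y)
```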

Nilpotent and Idempotent Matrices

A square matrix $A$ is nilpotent if some positive power of it equals the zero matrix: $A^k = O$ for some integer $k \geq 1$. The smallest such $k$ is called the index of nilpotency. Every eigenvalue of a nilpotent matrix is zero, which forces both the determinant and the trace to vanish.

Nilpotent matrices have a useful algebraic consequence: the matrix $I - A$ is always invertible, with inverse given by the finite geometric series

$$(I - A)^{-1} = I + A + A^2 + \cdots + A^{k-1}$$


The series terminates because $A^k = O$, so there is no convergence issue.
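
A strictly upper triangular matrix is a standard source of nilpotent examples; the one below has index $3$, so the series needs only three terms:

```python
import numpy as np

# Strictly upper triangular => nilpotent; here A^3 = O (index of nilpotency 3)
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(np.linalg.matrix_power(A, 3), 0.0)

# Finite geometric series inverts I - A
inv = np.eye(3) + A + A @ A
assert np.allclose((np.eye(3) - A) @ inv, np.eye(3))
assert np.allclose(inv, np.linalg.inv(np.eye(3) - A))
```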

A square matrix $A$ is idempotent if $A^2 = A$ — applying the transformation twice is the same as applying it once. The eigenvalues of an idempotent matrix can only be $0$ or $1$, since $\lambda^2 = \lambda$ implies $\lambda = 0$ or $\lambda = 1$. A striking identity links the rank and the trace: $\text{rank}(A) = \text{tr}(A)$, because the trace counts the eigenvalues equal to $1$, which is the dimension of the image.

Geometrically, idempotent matrices are projections. They project $\mathbb{R}^n$ onto the column space of $A$ along the null space. If $A$ is also symmetric, the projection is orthogonal.
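
The classic construction of a symmetric idempotent matrix is the orthogonal projection onto a line, $P = \mathbf{u}\mathbf{u}^T / (\mathbf{u}^T\mathbf{u})$. A NumPy sketch (the direction $\mathbf{u}$ is illustrative):

```python
import numpy as np

# Orthogonal projection onto the line spanned by u
u = np.array([[1.0], [2.0]])
P = (u @ u.T) / (u.T @ u)

assert np.allclose(P @ P, P)      # idempotent: applying twice = applying once
assert np.allclose(P, P.T)        # symmetric => the projection is orthogonal
assert np.isclose(np.trace(P), np.linalg.matrix_rank(P))  # rank = trace = 1
```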

Involutory and Permutation Matrices

A square matrix is involutory if $A^2 = I$ — it is its own inverse. The eigenvalues of an involutory matrix must satisfy $\lambda^2 = 1$, so they are restricted to $+1$ and $-1$. Reflections are the prototypical example: reflecting twice across the same line or plane returns every vector to its starting point.

The matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is involutory — it swaps the two coordinates, and swapping twice restores the original. More generally, any matrix of the form $2P - I$, where $P$ is idempotent, is involutory.

A permutation matrix is a square matrix with exactly one entry equal to $1$ in each row and each column, and all other entries zero. Left-multiplying a matrix $A$ by a permutation matrix $P$ reorders the rows of $A$ according to the permutation. Right-multiplying reorders the columns.

Permutation matrices are orthogonal ($P^{-1} = P^T$), their determinant is $+1$ or $-1$ depending on whether the permutation is even or odd, and the product of two permutation matrices is another permutation matrix. They appear in the LU decomposition with partial pivoting, where row swaps are tracked by a permutation matrix: $PA = LU$.
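
A convenient way to build a permutation matrix is to reorder the rows of the identity; a NumPy sketch with an illustrative permutation:

```python
import numpy as np

# Permutation matrix whose rows are e2, e0, e1
P = np.eye(3)[[2, 0, 1]]

A = np.arange(9.0).reshape(3, 3)
assert np.allclose(P @ A, A[[2, 0, 1]])        # left-multiplying reorders rows
assert np.allclose(A @ P.T, A[:, [2, 0, 1]])   # right-multiplying reorders columns

assert np.allclose(P.T @ P, np.eye(3))         # orthogonal: P^{-1} = P^T
assert np.isclose(abs(np.linalg.det(P)), 1.0)  # determinant is +1 or -1
```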

Singular and Nonsingular Matrices

The classification of a square matrix as singular or nonsingular is not a structural pattern like symmetry or triangularity — it is a behavioral property that depends on the values of the entries.

A singular matrix has determinant zero. Its columns are linearly dependent, its rank is less than $n$, and the system $A\mathbf{x} = \mathbf{b}$ fails to have a unique solution for every $\mathbf{b}$. As a transformation, a singular matrix collapses at least one dimension — its image is a proper subspace of $\mathbb{R}^n$.

A nonsingular (invertible) matrix has nonzero determinant, full rank, and linearly independent columns and rows. The system $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every right-hand side, and the inverse $A^{-1}$ exists.

Any matrix type can be singular or nonsingular depending on its entries. A diagonal matrix is singular if any diagonal entry is zero, and the same holds for a triangular matrix. An orthogonal matrix is never singular, since its determinant is $\pm 1$. A nilpotent matrix is always singular, since all its eigenvalues are zero.
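
The diagonal case makes the contrast concrete; a NumPy sketch with illustrative entries:

```python
import numpy as np

# A diagonal matrix with a zero entry is singular: det = 0, rank < n
D = np.diag([2.0, 0.0, 5.0])
assert np.isclose(np.linalg.det(D), 0.0)
assert np.linalg.matrix_rank(D) < 3

# Making every diagonal entry nonzero restores invertibility
D2 = np.diag([2.0, 3.0, 5.0])
assert np.isclose(np.linalg.det(D2), 30.0)
assert np.allclose(D2 @ np.linalg.inv(D2), np.eye(3))
```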

Summary of Matrix Types

The defining property of each type, together with its most important consequence, can be collected for quick reference.

The identity matrix ($I_{ij} = \delta_{ij}$) is the multiplicative identity. Diagonal matrices (off-diagonal entries all zero) have trivially simple powers, products, and inverses. Upper and lower triangular matrices (zeros below or above the diagonal) have eigenvalues visible on the diagonal. Symmetric matrices ($A = A^T$) have real eigenvalues and orthogonal eigenvectors. Skew-symmetric matrices ($A = -A^T$) have zero diagonal and purely imaginary eigenvalues. Orthogonal matrices ($Q^T = Q^{-1}$) preserve lengths and angles. Nilpotent matrices ($A^k = O$) have all eigenvalues zero. Idempotent matrices ($A^2 = A$) are projections with $\text{rank} = \text{tr}$. Involutory matrices ($A^2 = I$) are their own inverse. Permutation matrices (one $1$ per row and column) reorder coordinates and are always orthogonal.

These categories are not mutually exclusive. The identity matrix is diagonal, symmetric, orthogonal, triangular, idempotent, and involutory simultaneously. A $1 \times 1$ zero matrix is diagonal, symmetric, skew-symmetric, triangular, nilpotent, and singular. Recognizing which types a given matrix belongs to is often the fastest route to understanding its behavior.