Certain matrices have structural patterns — zeros in prescribed positions, symmetry across the diagonal, orthonormal columns — that guarantee specific algebraic and geometric behaviors. Recognizing these patterns often transforms a difficult computation into a straightforward one and determines which theorems apply.
Square Matrices
A matrix with equal numbers of rows and columns — n rows and n columns — is called square, and is said to have order n. Square matrices occupy a privileged position in linear algebra because several fundamental concepts are defined exclusively for them.
Only square matrices have a determinant. Only square matrices can be invertible. Only square matrices have eigenvalues and a trace. Powers A^k are defined only when A is square, since the product A·A requires the number of columns to equal the number of rows. Every type discussed on this page is a square matrix with additional structure imposed on top.
The Identity Matrix
The n×n identity matrix I_n has ones on the main diagonal and zeros elsewhere:

I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
It is the multiplicative identity: AI=IA=A for any matrix A with compatible dimensions. As a linear transformation, I is the map that sends every vector to itself.
The identity is simultaneously diagonal, symmetric, orthogonal, upper triangular, and lower triangular. Its determinant is 1, its inverse is itself, every eigenvalue is 1, its trace equals n, and I^k = I for every non-negative integer k. The subscript n is dropped when the size is clear from context.
Diagonal Matrices
A diagonal matrix has nonzero entries only on the main diagonal:

D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}

The inverse exists if and only if every diagonal entry is nonzero. The determinant is det(D) = d_1 d_2 ⋯ d_n, and the eigenvalues are the diagonal entries themselves. As a transformation, a diagonal matrix scales each coordinate axis independently — stretching along axes where |d_i| > 1 and compressing where |d_i| < 1.
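These facts are easy to verify numerically. A minimal sketch using NumPy (the library choice and the diagonal entries 2, 3, 5 are illustrative, not from the text):

```python
import numpy as np

# A 3x3 diagonal matrix with illustrative entries d1=2, d2=3, d3=5
D = np.diag([2.0, 3.0, 5.0])

# Determinant is the product of the diagonal entries: 2*3*5 = 30
det_D = np.linalg.det(D)

# Eigenvalues are the diagonal entries themselves
eigvals = np.linalg.eigvals(D)

# The inverse reciprocates each diagonal entry
D_inv = np.linalg.inv(D)
```

Inverting a diagonal matrix costs only n divisions, which is why diagonalization is so valuable computationally.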
Triangular Matrices
An upper triangular matrix has all entries below the main diagonal equal to zero:
U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}
A lower triangular matrix has all entries above the main diagonal equal to zero:
L = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
Triangular matrices share several convenient properties with diagonal matrices. The determinant is the product of the diagonal entries. The eigenvalues are the diagonal entries. The product of two upper triangular matrices is upper triangular, and the same holds for lower triangular matrices. The inverse of an invertible upper triangular matrix is also upper triangular.
These properties make triangular matrices the natural endpoint of Gaussian elimination. Row reduction converts a general matrix into upper triangular form, and the LU decomposition factors a matrix into lower and upper triangular components, reducing system-solving to a forward-substitution pass followed by a back-substitution pass.
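Back-substitution itself takes only a few lines. A sketch in NumPy, assuming an invertible upper triangular U (the function name and the example system are illustrative):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U by back-substitution."""
    n = len(b)
    x = np.zeros(n)
    # Work from the last equation upward; each step uses already-solved entries
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([9.0, 13.0, 8.0])
x = back_substitute(U, b)
```

Each unknown is read off in turn from the bottom row up, which is exactly why a triangular system is so cheap to solve.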
Symmetric Matrices
A square matrix is symmetric if it equals its own transpose: A = A^T, meaning a_{ij} = a_{ji} for every pair of indices. The matrix is determined by its entries on and above the diagonal — everything below is a mirror image.
Symmetric matrices arise constantly in practice. Covariance matrices, Hessians in optimization, adjacency matrices of undirected graphs, and distance matrices are all symmetric. Any product of the form A^T A or A A^T is symmetric regardless of the shape of A, since (A^T A)^T = A^T (A^T)^T = A^T A.
The spectral properties of real symmetric matrices are exceptionally clean. Every eigenvalue is real — no complex eigenvalues can appear. Eigenvectors corresponding to distinct eigenvalues are automatically orthogonal. And the spectral theorem guarantees that every real symmetric matrix can be diagonalized by an orthogonal matrix: A = QDQ^T where Q is orthogonal and D is diagonal. This is a much stronger conclusion than ordinary diagonalizability, which requires only an invertible change-of-basis matrix.
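The spectral theorem can be checked numerically. A sketch using NumPy's `eigh`, a routine specialized for symmetric matrices (the 2×2 example matrix is illustrative):

```python
import numpy as np

# A real symmetric matrix (illustrative entries)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh exploits symmetry: it returns real eigenvalues (ascending)
# and orthonormal eigenvectors as the columns of Q
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)
# Now A = Q D Q^T with Q orthogonal
```

For this matrix the eigenvalues are 1 and 3, and the eigenvectors point along the diagonals y = −x and y = x, which are indeed perpendicular.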
A symmetric matrix is called positive definite if x^T A x > 0 for every nonzero vector x. Positive definiteness is equivalent to all eigenvalues being strictly positive, and it guarantees the existence of the Cholesky decomposition A = LL^T.
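Both characterizations can be demonstrated together. A NumPy sketch (the example matrix is illustrative):

```python
import numpy as np

# A symmetric positive definite matrix (illustrative entries)
B = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# All eigenvalues strictly positive confirms positive definiteness
assert np.all(np.linalg.eigvalsh(B) > 0)

# Cholesky decomposition: lower triangular L with B = L L^T
L = np.linalg.cholesky(B)
```

`cholesky` raises an error on a matrix that is not positive definite, so it doubles as a practical definiteness test.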
Skew-Symmetric Matrices
A square matrix is skew-symmetric if A^T = −A, meaning a_{ij} = −a_{ji} for all i, j. Setting i = j forces a_{ii} = −a_{ii}, so every diagonal entry must be zero.
Every square matrix admits a unique decomposition into a symmetric part and a skew-symmetric part:
A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T)
The first term is symmetric, the second is skew-symmetric, and this splitting is unique.
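The splitting can be computed directly. A NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

S = (A + A.T) / 2   # symmetric part
K = (A - A.T) / 2   # skew-symmetric part, zero diagonal
```

The two parts recombine to A exactly, and K's diagonal is forced to zero just as the definition predicts.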
The eigenvalues of a real skew-symmetric matrix are either zero or purely imaginary — they come in conjugate pairs ±bi, and any real eigenvalue must be zero. For matrices of odd order, the determinant is always zero: det(A) = det(A^T) = det(−A) = (−1)^n det(A), and when n is odd, this forces det(A) = 0. For even order, the determinant can be nonzero.
In R^3, the cross product a × b can be written as [a]_× b, where [a]_× is the 3×3 skew-symmetric matrix

[a]_× = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}
This reformulates the cross product as a matrix-vector multiplication.
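A small NumPy sketch makes the equivalence concrete; the helper name `skew` is illustrative, not standard library API:

```python
import numpy as np

def skew(a):
    """Return the 3x3 skew-symmetric matrix [a]_x with [a]_x b = a x b."""
    return np.array([[0.0,  -a[2],  a[1]],
                     [a[2],  0.0,  -a[0]],
                     [-a[1], a[0],  0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
# skew(a) @ b reproduces np.cross(a, b)
```

This matrix form is what makes the cross product composable with other linear maps, a trick used heavily in rigid-body kinematics.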
Orthogonal Matrices
A square matrix Q is orthogonal if its transpose equals its inverse:
Q^T Q = Q Q^T = I, equivalently Q^{-1} = Q^T
This means the columns of Q form an orthonormal set: each column has unit length, and distinct columns are perpendicular. The same is true of the rows.
The determinant of an orthogonal matrix is ±1, since 1 = det(I) = det(Q^T Q) = det(Q)^2. When det(Q) = +1, the matrix is a rotation. When det(Q) = −1, it involves a reflection.
The defining geometric property is that orthogonal matrices preserve lengths: ∥Qx∥=∥x∥ for every vector x. They also preserve dot products (Qx⋅Qy=x⋅y) and therefore angles between vectors. A transformation that preserves all distances and angles is called an isometry, and the orthogonal matrices are precisely the linear isometries.
Common examples include rotation matrices in R2 and R3, reflection matrices across any line or plane through the origin, and permutation matrices that reorder coordinates. The inverse of an orthogonal matrix is its transpose — making it the cheapest matrix inverse to compute.
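A 2×2 rotation illustrates all of these properties at once. A NumPy sketch (the angle 0.7 is arbitrary):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle in radians
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([3.0, 4.0])
# Q.T @ Q = I, lengths are preserved, and det(Q) = +1 (pure rotation)
```

The inverse rotation is just the transpose, i.e. rotation by −theta, with no arithmetic beyond swapping entries.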
Nilpotent and Idempotent Matrices
A square matrix A is nilpotent if some positive power of it equals the zero matrix: A^k = O for some integer k ≥ 1. The smallest such k is called the index of nilpotency. Every eigenvalue of a nilpotent matrix is zero, which forces both the determinant and the trace to vanish.
Nilpotent matrices have a useful algebraic consequence: the matrix I−A is always invertible, with inverse given by the finite geometric series
(I − A)^{-1} = I + A + A^2 + ⋯ + A^{k-1}

The series terminates because A^k = O, so there is no convergence issue.
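A strictly upper triangular matrix gives a concrete check, since its powers climb toward the corner and vanish. A NumPy sketch (example entries are illustrative):

```python
import numpy as np

# Strictly upper triangular, hence nilpotent: here N^3 = O
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

I = np.eye(3)
# Finite geometric series for the inverse of (I - N); stops at N^2
inv = I + N + N @ N
```

Multiplying (I − N) by the truncated series collapses to the identity because every term involving N^3 is zero.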
A square matrix A is idempotent if A^2 = A — applying the transformation twice is the same as applying it once. The eigenvalues of an idempotent matrix can only be 0 or 1, since λ^2 = λ implies λ = 0 or λ = 1. A striking identity links the rank and the trace: rank(A) = tr(A), because the trace counts the eigenvalues equal to 1, which is the dimension of the image.
Geometrically, idempotent matrices are projections. They project R^n onto the column space of A along the null space. If A is also symmetric, the projection is orthogonal.
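A rank-one orthogonal projection illustrates both identities. A NumPy sketch (the direction vector is arbitrary):

```python
import numpy as np

# Orthogonal projection onto the line spanned by the unit vector u
u = np.array([[1.0], [2.0]]) / np.sqrt(5.0)
P = u @ u.T   # 2x2 projection matrix

# Idempotent (P^2 = P), symmetric, and rank(P) = tr(P) = 1
```

Projecting once lands every vector on the line; projecting again moves nothing, which is exactly the idempotent property.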
Involutory and Permutation Matrices
A square matrix is involutory if A^2 = I — it is its own inverse. The eigenvalues of an involutory matrix must satisfy λ^2 = 1, so they are restricted to +1 and −1. Reflections are the prototypical example: reflecting twice across the same line or plane returns every vector to its starting point.
The matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} is involutory — it swaps the two coordinates, and swapping twice restores the original. More generally, any matrix of the form 2P − I, where P is idempotent, is involutory.
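Both claims can be verified numerically. A NumPy sketch (the idempotent P below is an arbitrary rank-one projection, chosen for illustration):

```python
import numpy as np

# The coordinate swap is involutory
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Any idempotent P yields an involutory 2P - I (a reflection)
u = np.array([[1.0], [2.0]]) / np.sqrt(5.0)
P = u @ u.T               # idempotent projection onto a line
R = 2.0 * P - np.eye(2)   # reflection across that line
```

The identity (2P − I)^2 = 4P^2 − 4P + I = I follows directly from P^2 = P, which is what the test below confirms.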
A permutation matrix is a square matrix with exactly one entry equal to 1 in each row and each column, and all other entries zero. Left-multiplying a matrix A by a permutation matrix P reorders the rows of A according to the permutation. Right-multiplying reorders the columns.
Permutation matrices are orthogonal (P^{-1} = P^T), their determinant is +1 or −1 depending on whether the permutation is even or odd, and the product of two permutation matrices is another permutation matrix. They appear in the LU decomposition with partial pivoting, where row swaps are tracked by a permutation matrix: PA = LU.
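A NumPy sketch of row and column reordering (the particular permutation, a 3-cycle, is illustrative):

```python
import numpy as np

# Permutation matrix cycling the rows: new row order is (2, 0, 1)
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

A = np.arange(9.0).reshape(3, 3)

rows_reordered = P @ A   # left multiplication permutes the rows
cols_reordered = A @ P   # right multiplication permutes the columns
```

A 3-cycle is an even permutation, so det(P) = +1 here, and P^T undoes the shuffle.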
Singular and Nonsingular Matrices
The classification of a square matrix as singular or nonsingular is not a structural pattern like symmetry or triangularity — it is a behavioral property that depends on the values of the entries.
A singular matrix has determinant zero. Its columns are linearly dependent, its rank is less than n, and the system Ax = b fails to have a unique solution for every b. As a transformation, a singular matrix collapses at least one dimension — its image is a proper subspace of R^n.
A nonsingular (invertible) matrix has nonzero determinant, full rank, and linearly independent columns and rows. The system Ax = b has exactly one solution for every right-hand side, and the inverse A^{-1} exists.
Any matrix type can be singular or nonsingular depending on its entries. A diagonal matrix is singular if any diagonal entry is zero, and the same is true of a triangular matrix. An orthogonal matrix is never singular, since its determinant is ±1. A nilpotent matrix is always singular, since all its eigenvalues are zero.
Summary of Matrix Types
The defining property of each type, together with its most important consequence, can be collected for quick reference.
The identity matrix (I_{ij} = δ_{ij}) is the multiplicative identity. Diagonal matrices (off-diagonal entries all zero) have trivially simple powers, products, and inverses. Upper and lower triangular matrices (zeros below or above the diagonal) have eigenvalues visible on the diagonal. Symmetric matrices (A = A^T) have real eigenvalues and orthogonal eigenvectors. Skew-symmetric matrices (A = −A^T) have zero diagonal and eigenvalues that are zero or purely imaginary. Orthogonal matrices (Q^T = Q^{-1}) preserve lengths and angles. Nilpotent matrices (A^k = O) have all eigenvalues zero. Idempotent matrices (A^2 = A) are projections with rank = trace. Involutory matrices (A^2 = I) are their own inverse. Permutation matrices (one 1 per row and column) reorder coordinates and are always orthogonal.
These categories are not mutually exclusive. The identity matrix is diagonal, symmetric, orthogonal, triangular, idempotent, and involutory simultaneously. A 1×1 zero matrix is diagonal, symmetric, skew-symmetric, triangular, nilpotent, and singular. Recognizing which types a given matrix belongs to is often the fastest route to understanding its behavior.