Vectors Calculator


Professional vector operations calculator

Vector Magnitude

The magnitude (or length) of a vector $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ measures its distance from the origin:

$$||\mathbf{v}|| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$


The magnitude is always non-negative and equals zero only for the zero vector. It is also called the Euclidean norm or L2 norm. In physics, the magnitude of a velocity vector gives speed, and the magnitude of a force vector gives the force strength.
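As a minimal sketch in Python with NumPy (an illustrative choice; the example values are made up):

```python
import numpy as np

v = np.array([3.0, 4.0])
magnitude = np.sqrt(np.sum(v**2))   # sqrt(3^2 + 4^2) = 5
print(magnitude)                    # 5.0
print(np.linalg.norm(v))            # same value via NumPy's built-in norm
```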

Unit Vector

A unit vector has magnitude 1 and indicates direction only. Given a nonzero vector $\mathbf{v}$, the corresponding unit vector is:

$$\hat{\mathbf{v}} = \frac{\mathbf{v}}{||\mathbf{v}||}$$


The standard basis vectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ are unit vectors pointing along each coordinate axis. Unit vectors are essential in defining directions for projections, constructing orthonormal bases, and representing orientations in physics and computer graphics.

Vector Normalization

Normalization is the process of converting a vector to a unit vector by dividing each component by the magnitude. The result preserves direction while setting the length to 1.

Normalization is undefined for the zero vector since division by zero is not possible. It is widely used in machine learning (feature normalization), computer graphics (surface normals), and physics (direction vectors). Normalized vectors simplify many formulas because $||\hat{\mathbf{v}}|| = 1$ eliminates magnitude terms.
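A small NumPy sketch that normalizes a vector and guards against the zero-vector case (the function name and values are illustrative):

```python
import numpy as np

def normalize(v):
    """Return the unit vector in the direction of v; reject the zero vector."""
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("cannot normalize the zero vector")
    return v / n

v = np.array([3.0, 0.0, 4.0])
v_hat = normalize(v)
print(v_hat)                    # [0.6 0.  0.8]
print(np.linalg.norm(v_hat))    # 1.0 -- direction preserved, length rescaled
```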

Sum of Components

The sum of components adds all entries of a vector:

$$S = v_1 + v_2 + \cdots + v_n$$


This operation is useful for checking whether a vector sums to zero (important in certain probability and physics contexts), computing averages when divided by $n$, and verifying conservation laws. It is a special case of the dot product with the all-ones vector: $S = \mathbf{v} \cdot \mathbf{1}$.
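The equivalence with the all-ones dot product is easy to check in NumPy (example values are illustrative):

```python
import numpy as np

v = np.array([2.0, -1.0, 4.0])
s = v.sum()                  # 2 - 1 + 4 = 5
s_dot = v @ np.ones_like(v)  # same value as the dot product with (1, 1, 1)
print(s, s_dot)              # 5.0 5.0
```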

L1 Norm (Manhattan Norm)

The L1 norm sums the absolute values of all vector components:

$$||\mathbf{v}||_1 = |v_1| + |v_2| + \cdots + |v_n|$$


Also called the Manhattan norm or taxicab norm, it measures distance along axis-aligned paths rather than straight lines. The L1 norm is central to sparse optimization problems, LASSO regression, and compressed sensing, where it promotes solutions with many zero entries. It is always greater than or equal to the L2 norm.

L2 Norm (Euclidean Norm)

The L2 norm is the standard Euclidean length of a vector, identical to the magnitude:

$$||\mathbf{v}||_2 = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$


It is the default norm in most mathematical and engineering contexts. The L2 norm measures straight-line distance from the origin and is used in least squares problems, Ridge regression, and defining orthogonality. The relationship $||\mathbf{v}||_2^2 = \mathbf{v} \cdot \mathbf{v}$ connects it to the dot product.

Infinity Norm (Max Norm)

The infinity norm returns the largest absolute component value:

$$||\mathbf{v}||_\infty = \max(|v_1|, |v_2|, \ldots, |v_n|)$$


It measures the worst-case component magnitude and is used in numerical analysis for bounding errors, defining matrix norms, and evaluating convergence criteria. The infinity norm is always less than or equal to the L1 norm and represents the limit of the $L_p$ norms as $p$ approaches infinity.
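The three norms can be compared on one vector via NumPy's `ord` parameter; note the ordering $||\mathbf{v}||_\infty \le ||\mathbf{v}||_2 \le ||\mathbf{v}||_1$ (example values are illustrative):

```python
import numpy as np

v = np.array([3.0, -4.0, 1.0])
l1   = np.linalg.norm(v, 1)       # |3| + |-4| + |1| = 8
l2   = np.linalg.norm(v, 2)       # sqrt(9 + 16 + 1) = sqrt(26)
linf = np.linalg.norm(v, np.inf)  # max(3, 4, 1) = 4
print(l1, l2, linf)
assert linf <= l2 <= l1           # the general ordering of these norms
```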

Vector Addition

Vector addition sums corresponding components of two vectors with the same dimensionality:

$$(\mathbf{A} + \mathbf{B})_i = a_i + b_i$$


Geometrically, placing vector $\mathbf{B}$ at the tip of $\mathbf{A}$ gives the resultant. Addition is commutative ($\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}$) and associative. The zero vector is the additive identity. Vector addition models superposition of forces, velocities, and displacements in physics.

Vector Subtraction

Vector subtraction computes the component-wise difference:

$$(\mathbf{A} - \mathbf{B})_i = a_i - b_i$$


The result vector points from the tip of $\mathbf{B}$ to the tip of $\mathbf{A}$. Subtraction is equivalent to adding the negation: $\mathbf{A} - \mathbf{B} = \mathbf{A} + (-\mathbf{B})$. It is used to compute displacement vectors, distances, and relative positions between points.
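Both component-wise operations are one-liners in NumPy (example vectors are made up):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 0.0, -1.0])
print(A + B)   # component-wise sum:        [5. 2. 2.]
print(A - B)   # displacement from B to A:  [-3.  2.  4.]
assert np.array_equal(A + B, B + A)         # commutativity
assert np.array_equal(A + np.zeros(3), A)   # zero vector is the identity
```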

Dot Product

The dot product (inner product) of two vectors produces a scalar:

$$\mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^{n} a_i b_i = ||\mathbf{A}|| \, ||\mathbf{B}|| \cos\theta$$


A dot product of zero means the vectors are orthogonal (perpendicular). The sign indicates whether the angle between them is acute (positive) or obtuse (negative). The dot product is foundational in projections, computing work in physics, and defining angles in any dimension.
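A short NumPy sketch of the component-wise sum and the orthogonality test (example values are illustrative):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, -5.0, 6.0])
d = A @ B      # 1*4 + 2*(-5) + 3*6 = 12; positive, so the angle is acute
print(d)       # 12.0

# Perpendicular vectors have a dot product of exactly zero.
assert np.array([1.0, 0.0]) @ np.array([0.0, 1.0]) == 0.0
```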

Cross Product

The cross product is defined only in 3D and produces a vector perpendicular to both inputs:

$$\mathbf{A} \times \mathbf{B} = (a_2 b_3 - a_3 b_2, \; a_3 b_1 - a_1 b_3, \; a_1 b_2 - a_2 b_1)$$


The magnitude $||\mathbf{A} \times \mathbf{B}|| = ||\mathbf{A}|| \, ||\mathbf{B}|| \sin\theta$ equals the area of the parallelogram formed by the two vectors. The direction follows the right-hand rule. The cross product is anticommutative: $\mathbf{A} \times \mathbf{B} = -(\mathbf{B} \times \mathbf{A})$. It is used in physics for torque, angular momentum, and surface normals.
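The perpendicularity and anticommutativity properties can be checked with `np.cross` (the unit vectors here are just a convenient example):

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.cross(A, B)
print(C)                                # [0. 0. 1.] -- follows the right-hand rule
assert np.allclose(np.cross(B, A), -C)  # anticommutativity
assert A @ C == 0 and B @ C == 0        # perpendicular to both inputs
```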

Angle Between Vectors

The angle between two nonzero vectors is computed using the dot product:

$$\theta = \arccos\left(\frac{\mathbf{A} \cdot \mathbf{B}}{||\mathbf{A}|| \, ||\mathbf{B}||}\right)$$


The result is between 0 and $\pi$ radians (0 to 180 degrees). An angle of $\pi/2$ (90 degrees) indicates orthogonality. This formula generalizes the concept of angle to any number of dimensions and is used in similarity measures (cosine similarity), physics, and geometry.
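A NumPy sketch of the formula; the clipping step guards against floating-point round-off pushing the cosine slightly outside $[-1, 1]$ (the function name and inputs are illustrative):

```python
import numpy as np

def angle_between(A, B):
    """Angle in radians between two nonzero vectors."""
    cos_theta = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

theta = angle_between(np.array([1.0, 0.0]), np.array([0.0, 2.0]))
print(np.degrees(theta))   # 90.0 -- the vectors are orthogonal
```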

Distance Between Vectors

The Euclidean distance between two vectors is the magnitude of their difference:

$$d(\mathbf{A}, \mathbf{B}) = ||\mathbf{A} - \mathbf{B}|| = \sqrt{\sum_{i=1}^{n}(a_i - b_i)^2}$$


This is the straight-line distance between the two points in $n$-dimensional space. Distance is always non-negative, equals zero only when the vectors are identical, and satisfies the triangle inequality. It is the foundation of clustering algorithms like k-means, nearest neighbor methods, and error measurement.
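Since the distance is just the norm of the difference, it is a one-liner in NumPy (a classic 3-4-5 example):

```python
import numpy as np

A = np.array([1.0, 2.0])
B = np.array([4.0, 6.0])
d = np.linalg.norm(A - B)   # sqrt((1-4)^2 + (2-6)^2) = sqrt(9 + 16) = 5
print(d)                    # 5.0
```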

Vector Projection

The projection of $\mathbf{A}$ onto $\mathbf{B}$ gives the component of $\mathbf{A}$ in the direction of $\mathbf{B}$:

$$\text{proj}_{\mathbf{B}}(\mathbf{A}) = \frac{\mathbf{A} \cdot \mathbf{B}}{\mathbf{B} \cdot \mathbf{B}} \, \mathbf{B}$$


The scalar projection (the coefficient) tells how much of $\mathbf{A}$ lies along $\mathbf{B}$. Projection is central to the Gram-Schmidt process, least squares approximation, and decomposing forces in physics. Together with the rejection, it splits any vector into parallel and perpendicular components.
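A direct translation of the formula into NumPy (the helper name and inputs are illustrative; $\mathbf{B}$ must be nonzero):

```python
import numpy as np

def project(A, B):
    """Vector projection of A onto a nonzero vector B."""
    return ((A @ B) / (B @ B)) * B

A = np.array([3.0, 4.0])
B = np.array([1.0, 0.0])
p = project(A, B)
print(p)   # [3. 0.] -- the component of A along the x-axis
```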

Vector Rejection

The rejection of $\mathbf{A}$ from $\mathbf{B}$ is the component of $\mathbf{A}$ perpendicular to $\mathbf{B}$:

$$\text{rej}_{\mathbf{B}}(\mathbf{A}) = \mathbf{A} - \text{proj}_{\mathbf{B}}(\mathbf{A})$$


The rejection vector is always orthogonal to $\mathbf{B}$. Together, projection and rejection form an orthogonal decomposition: $\mathbf{A} = \text{proj}_{\mathbf{B}}(\mathbf{A}) + \text{rej}_{\mathbf{B}}(\mathbf{A})$. This decomposition is used in the Gram-Schmidt process and in computing the distance from a point to a line.
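The decomposition can be verified numerically: the rejection is orthogonal to $\mathbf{B}$, and projection plus rejection recovers $\mathbf{A}$ (example values are made up):

```python
import numpy as np

A = np.array([3.0, 4.0])
B = np.array([1.0, 0.0])
proj = ((A @ B) / (B @ B)) * B
rej = A - proj
print(rej)                          # [0. 4.]
assert abs(rej @ B) < 1e-12         # rejection is orthogonal to B
assert np.allclose(proj + rej, A)   # the decomposition recovers A
```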

Linear Combination

A linear combination multiplies each vector by a scalar coefficient and sums the results:

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$$


The set of all possible linear combinations of a set of vectors forms their span. Linear combinations are the fundamental building block of linear algebra: solving systems of equations, expressing transformations, and constructing subspaces all reduce to finding appropriate coefficients.

Span Check

A span check determines whether a target vector can be expressed as a linear combination of the given vectors. The calculator sets up the system $[\mathbf{v}_1 \,|\, \mathbf{v}_2 \,|\, \cdots \,|\, \mathbf{v}_k]\, \mathbf{x} = \mathbf{t}$ and solves via row reduction.

If a solution exists, the target vector is in the span, and the coefficients are reported. If the system is inconsistent, the target is not reachable from the given vectors. Span checking is fundamental to understanding vector spaces and subspaces.
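A sketch of the same idea in NumPy, using a least-squares solve instead of explicit row reduction: the target is in the span exactly when the residual is (numerically) zero (vectors and target are made up for the example):

```python
import numpy as np

# Columns are the given vectors; t is the target.
M = np.column_stack([np.array([1.0, 0.0, 1.0]),
                     np.array([0.0, 1.0, 1.0])])
t = np.array([2.0, 3.0, 5.0])

x, *_ = np.linalg.lstsq(M, t, rcond=None)
in_span = np.allclose(M @ x, t)   # does some combination reproduce t exactly?
print(x, in_span)                 # coefficients [2. 3.], True
```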

Linear Independence

A set of vectors is linearly independent if no vector can be written as a linear combination of the others. Equivalently, the only solution to:

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$

is $c_1 = c_2 = \cdots = c_k = 0$. The calculator checks this by forming the matrix with the vectors as columns and computing its rank. If the rank equals the number of vectors, they are independent. A set of $n$ linearly independent vectors in $\mathbb{R}^n$ forms a basis.
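The rank test described above is a few lines in NumPy (the three example vectors are made up):

```python
import numpy as np

vectors = [np.array([1.0, 0.0, 0.0]),
           np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 1.0, 1.0])]
M = np.column_stack(vectors)                # vectors as matrix columns
independent = np.linalg.matrix_rank(M) == len(vectors)
print(independent)                          # True -- rank 3 equals the count
```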

Orthogonality Check

A set of vectors is orthogonal if every pair has a dot product of zero. The calculator computes $\mathbf{v}_i \cdot \mathbf{v}_j$ for all $i \neq j$ and reports whether all results are zero.

A set of nonzero orthogonal vectors is always linearly independent. An orthogonal set where every vector also has unit length is called orthonormal. Orthogonal and orthonormal bases simplify projections, decompositions, and coordinate transformations because components can be found independently via dot products.
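The pairwise test translates directly to a double loop over index pairs with $i < j$ (example vectors are illustrative; a small tolerance absorbs floating-point noise):

```python
import numpy as np

vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, -1.0, 0.0]),
           np.array([0.0, 0.0, 2.0])]

# Every distinct pair must have a (numerically) zero dot product.
orthogonal = all(abs(vectors[i] @ vectors[j]) < 1e-12
                 for i in range(len(vectors))
                 for j in range(i + 1, len(vectors)))
print(orthogonal)   # True
```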

Gram-Schmidt Process

The Gram-Schmidt process converts a set of linearly independent vectors into an orthonormal basis. For each vector in sequence:

1. Subtract the projections onto all previously computed basis vectors
2. Normalize the result to unit length

$$\mathbf{u}_k = \mathbf{v}_k - \sum_{j=1}^{k-1} \text{proj}_{\mathbf{u}_j}(\mathbf{v}_k), \quad \hat{\mathbf{u}}_k = \frac{\mathbf{u}_k}{||\mathbf{u}_k||}$$


Gram-Schmidt is the foundation of QR decomposition and is used in numerical methods, signal processing, and constructing coordinate systems.
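The two steps above can be sketched in NumPy; this version subtracts each projection as it goes, which is the numerically preferred "modified" variant of the same formula (function name and inputs are illustrative, and the input vectors are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        u = v.astype(float)
        for b in basis:          # step 1: subtract projections onto earlier vectors
            u = u - (u @ b) * b  # b already has unit length, so proj = (u.b) b
        basis.append(u / np.linalg.norm(u))  # step 2: normalize
    return basis

basis = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
Q = np.column_stack(basis)
assert np.allclose(Q.T @ Q, np.eye(2))   # columns are orthonormal
```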

Matrix Form

The matrix form operation arranges a set of vectors as columns of a matrix. If vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ each have $n$ components, the resulting matrix is $n \times k$.

This representation connects vector operations to matrix operations: the column space of the matrix equals the span of the vectors, the rank tells how many are linearly independent, and the determinant (for square matrices) indicates whether the vectors form a basis. Matrix form is the bridge between vector geometry and matrix algebra.
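A brief NumPy illustration of the bridge from vectors to matrix properties (the two example vectors are made up):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 3.0])
M = np.column_stack([v1, v2])    # 3 x 2 matrix with the vectors as columns
print(M.shape)                   # (3, 2)
print(np.linalg.matrix_rank(M))  # 2 -- both vectors are linearly independent
```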

Related Tools and Concepts

This vector calculator covers single-vector analysis, two-vector operations, and multi-vector computations. For matrix-specific operations like determinants, inverses, LU decomposition, and scalar operations, use the Matrix Operations Calculator.

For solving systems of linear equations with methods like Gaussian elimination and Cramer's Rule, use the Linear Systems Calculator. Related topics include eigenvalues and eigenvectors, singular value decomposition, matrix norms, and linear transformations.