Gram-Schmidt Calculator

Enter your vectors using the Number of Vectors, Vector Size, and vector component fields to apply the Gram-Schmidt orthogonalization process. You get back a complete orthonormal basis — each orthogonalized vector u₁, u₂, u₃ along with its normalized form, displayed in a step-friendly results table.

How many vectors to orthogonalize

Number of components per vector

Results

‖u₁‖ (Norm of First Orthogonal Vector)

--

‖u₂‖ (Norm of Second Orthogonal Vector)

--

‖u₃‖ (Norm of Third Orthogonal Vector)

--

u₁ · u₂ (Orthogonality Check, ≈ 0)

--

u₁ · u₃ (Orthogonality Check, ≈ 0)

--

u₂ · u₃ (Orthogonality Check, ≈ 0)

--

Norms of Orthogonal Basis Vectors

Results Table

Frequently Asked Questions

What is the Gram-Schmidt process?

The Gram-Schmidt process is an algorithm that takes a set of linearly independent vectors and converts them into an orthogonal (or orthonormal) set that spans the same subspace. It works by iteratively subtracting the projection of each new vector onto all previously computed orthogonal vectors.
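The iteration described above can be sketched in a few lines of Python. This is an illustrative implementation (the function name `gram_schmidt` is ours, not part of the calculator), using NumPy for the dot products and norms:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        # Subtract the projection of v onto each previously computed orthonormal vector.
        for q in basis:
            u = u - np.dot(q, u) * q
        norm = np.linalg.norm(u)
        if norm < 1e-12:
            raise ValueError("input vectors are linearly dependent")
        basis.append(u / norm)
    return basis
```

For example, `gram_schmidt([[3, -2, 4], [4, 2, 1], [1, 0, 0]])` returns three unit vectors whose pairwise dot products are (numerically) zero.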

What does orthogonal mean for vectors?

Two vectors are orthogonal if their dot product equals zero, meaning they are perpendicular to each other. An orthogonal basis is a set of vectors that are all mutually orthogonal — every pair has a dot product of zero.
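A quick way to check this condition is to compute the dot product directly. A minimal sketch in Python (the `dot` helper is ours):

```python
def dot(a, b):
    """Dot product of two vectors of equal length."""
    return sum(x * y for x, y in zip(a, b))

# (1, 0) and (0, 1) are perpendicular: their dot product is 0.
print(dot([1, 0], [0, 1]))         # 0
# (3, -2, 4) and (4, 2, 1) are not: 12 - 4 + 4 = 12.
print(dot([3, -2, 4], [4, 2, 1]))  # 12
```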

What is the difference between orthogonal and orthonormal?

An orthogonal set of vectors has all pairs mutually perpendicular (dot product = 0). An orthonormal set goes further — each vector is also normalized to have a length (norm) of exactly 1. The Gram-Schmidt process produces an orthonormal basis by first orthogonalizing the vectors and then dividing each one by its norm.
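The normalization step is just division by the norm. A small illustrative helper (assuming nothing beyond the standard library):

```python
import math

def normalize(v):
    """Scale a nonzero vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# u₁ = (3, -2, 4) has norm √29 ≈ 5.385; after normalizing, its length is 1.
u = normalize([3, -2, 4])
```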

Can I apply Gram-Schmidt to linearly dependent vectors?

No, the Gram-Schmidt process requires linearly independent vectors as input. If one or more vectors are linearly dependent, the process will produce a zero vector at some step, since the projection will fully account for that vector. In practice you'd need to discard linearly dependent vectors before applying the algorithm.
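You can see this collapse in a two-vector case. Here v₂ = 2·v₁, so subtracting the projection of v₂ onto u₁ = v₁ leaves the zero vector (illustrative code, standard library only):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v1 = [1.0, 2.0]
v2 = [2.0, 4.0]  # = 2 * v1, so linearly dependent on v1

# Projection coefficient (u·v / ‖u‖²) with u = v1: here 10 / 5 = 2.
coef = dot(v1, v2) / dot(v1, v1)
u2 = [b - coef * a for a, b in zip(v1, v2)]
print(u2)  # [0.0, 0.0] — the process degenerates to the zero vector
```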

How do I perform Gram-Schmidt orthogonalization manually?

Start with u₁ = v₁. For each subsequent vector vₖ, subtract its projection onto every previously computed uⱼ: uₖ = vₖ − Σ(proj_{uⱼ}(vₖ)). The projection of v onto u is (u·v / ‖u‖²) × u. Finally, normalize each uₖ by dividing by its norm to get unit vectors.
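The manual recipe above translates line for line into code. This sketch (helper names `proj` and `gram_schmidt_steps` are ours) returns the orthogonal vectors uₖ before normalization, matching the hand procedure:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def proj(u, v):
    # Projection of v onto u: (u·v / ‖u‖²) × u
    c = dot(u, v) / dot(u, u)
    return [c * x for x in u]

def gram_schmidt_steps(vectors):
    """Return the orthogonal (not yet normalized) vectors u₁, u₂, ..."""
    us = []
    for v in vectors:
        u = [float(x) for x in v]
        # uₖ = vₖ − Σⱼ proj_{uⱼ}(vₖ) over all previously computed uⱼ
        for prev in us:
            p = proj(prev, v)
            u = [a - b for a, b in zip(u, p)]
        us.append(u)
    return us
```

To finish by hand, divide each returned uₖ by its norm to obtain the orthonormal basis.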

How do I find the second basis vector if v₂ = (4, 2, 1) and u₁ = (3, -2, 4)?

Compute the projection of v₂ onto u₁: proj = (u₁·v₂ / ‖u₁‖²) × u₁. Here u₁·v₂ = 3×4 + (-2)×2 + 4×1 = 12, and ‖u₁‖² = 9 + 4 + 16 = 29. So proj = (12/29)(3, -2, 4) ≈ (1.241, -0.828, 1.655). Then u₂ = v₂ − proj ≈ (2.759, 2.828, -0.655).
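The arithmetic in this worked example can be verified with a few lines of Python (standard library only):

```python
u1 = [3, -2, 4]
v2 = [4, 2, 1]

# Projection coefficient: u₁·v₂ / ‖u₁‖² = 12 / 29
c = sum(a * b for a, b in zip(u1, v2)) / sum(a * a for a in u1)

# u₂ = v₂ − (12/29) u₁
u2 = [b - c * a for a, b in zip(u1, v2)]
print([round(x, 3) for x in u2])  # [2.759, 2.828, -0.655]
```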

What is an orthonormal basis used for?

Orthonormal bases are fundamental in linear algebra, signal processing, machine learning, and quantum mechanics. They simplify computations — for example, expressing a vector in an orthonormal basis requires only simple dot products, and transformations like QR decomposition rely on them. They also form the basis of techniques like Principal Component Analysis (PCA).

What does the orthogonality check (dot product ≈ 0) mean in the results?

After orthogonalization, the dot products between any two output vectors u₁, u₂, u₃ should equal zero (or very close to zero due to floating-point rounding). A result near zero confirms the process succeeded and the vectors are truly orthogonal to each other.

More Math Tools