Question about Gram-Schmidt orthogonalization

In summary, the Gram-Schmidt process takes independent vectors u, v, w and produces orthonormal vectors U, V, W. It is also true that V is orthogonal to u, and that W is orthogonal to both u and v. The reason is that the process preserves spans at every step: span{U} = span{u} and span{U, V} = span{u, v}, so W, being orthogonal to the plane spanned by U and V, is orthogonal to u and v as well. This is exactly why the QR factorization produces an upper triangular matrix R. Note that while U points in the same direction as u, the later vectors V and W are generally not parallel to v and w.
  • #1
pamparana
Hello everyone,

I have a query regarding the Gram-Schmidt orthogonalization process:

Say I have 3 independent vectors u, v, w, and I use the Gram-Schmidt scheme to get vectors U, V, W that form an orthonormal set.

So U, V, W are orthogonal to each other.

Is it also true that V is orthogonal to u (lowercase u), and that W is orthogonal to both u and v?

In my mind, I am quite convinced it is so. However, what is the exact mathematical reason for this? I think that W is going to be orthogonal to the whole plane spanned by u and v, and V is going to be orthogonal to the line spanned by u. I am just a tad unsure why this would be.

I am trying to understand why we get an upper triangular matrix when we do the QR factorization, and that seems to depend on the above statement being true (a numerical check of this is sketched at the end of this post).

Many thanks,

Luca

Edit: I thought about this a bit more and have the following explanation. Please let me know if it sounds plausible.

If we consider the vectors U and V, they form a basis for a 2D subspace of the higher-dimensional space. The vector W lies in the null space of the matrix whose rows are U and V, and that null space is orthogonal to the row space. Since u and v lie in that same row space (the subspace spanned by U and V), W is orthogonal to u and v as well.

Does this make sense?
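
Here is the numerical check mentioned above (a minimal sketch, assuming NumPy; the example vectors and the helper name gram_schmidt are just for illustration). If A has columns u, v, w and Q has columns U, V, W, then the below-diagonal entries of R = QᵀA are exactly V·u, W·u, and W·v, so R being upper triangular is precisely the statement in question:

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt on the columns of A; returns Q with orthonormal columns."""
    Q = np.zeros_like(A)
    for j in range(A.shape[1]):
        q = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])  # remove components along earlier U's
        Q[:, j] = q / np.linalg.norm(q)
    return Q

A = np.array([[1.0, 1.0, 0.0],   # columns are u, v, w
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
R = Q.T @ A
print(np.round(R, 12))  # below-diagonal entries V·u, W·u, W·v are all zero
```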
 
  • #2
Yes, it is true. When you look at the process, you can see two things going on (I'm assuming you keep the vectors in their current order u, v, w):

1) You orthogonalize your current vector by removing its components in the space spanned by your previous (now orthonormal) vectors
2) You normalize the result


Ignore step 2. Here's your job:
Prove that if you have linearly independent vectors u1, u2, ..., uk and you apply Gram-Schmidt to get U1, ..., Uk, then the spaces spanned by u1, ..., ur and by U1, ..., Ur are the same for each r (use induction; the base case is trivial).

Then, since the inner product of Ur+1 with any element of the space spanned by U1, ..., Ur must come out to 0 (it is orthogonal to each of those vectors), Ur+1 must be orthogonal to u1, ..., ur, as you suspected.

If you're really and truly only interested in the case of three vectors, you don't even need induction and can prove the above directly, but it's not as satisfying.
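
A quick way to sanity-check both claims is with random vectors (a sketch assuming NumPy; note that np.linalg.qr's Q matches the Gram-Schmidt output up to the signs of the columns, which affects neither claim):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))  # columns u1..u4; independent with probability 1
Q, _ = np.linalg.qr(A)           # columns U1..U4 (Gram-Schmidt output up to sign)

for r in range(1, 4):
    # span check: u_r lies in span{U1,...,Ur}, so projecting onto it recovers u_r
    proj = Q[:, :r] @ (Q[:, :r].T @ A[:, r - 1])
    assert np.allclose(proj, A[:, r - 1])
    # orthogonality check: U_{r+1} is orthogonal to u1, ..., ur
    assert np.allclose(Q[:, r] @ A[:, :r], 0)

print("span{u1..ur} = span{U1..Ur}, and U_{r+1} is orthogonal to u1..ur, for every r")
```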
 
  • #3
The Gram-Schmidt process starts with vector u and finds U that has length 1 and points in the same direction as u. Then, given vector v, it finds v' that is orthogonal to U, and then V that has length 1 and points in the direction of v', not v. Finally, given vector w, it finds w' that is orthogonal to both U and V, and then W that has length 1 and points in the direction of w', not w. So U points in the same direction as u, but V and W are not necessarily in the same directions as v and w.
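
In symbols, the steps just described are the standard Gram-Schmidt formulas:

$$U = \frac{u}{\|u\|}, \qquad v' = v - (v \cdot U)\,U, \quad V = \frac{v'}{\|v'\|}, \qquad w' = w - (w \cdot U)\,U - (w \cdot V)\,V, \quad W = \frac{w'}{\|w'\|}.$$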

Of course, you could run the Gram-Schmidt orthogonalization starting with v or w instead of u, which would change the result.
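
A short sketch of this order dependence (again assuming NumPy; np.linalg.qr stands in for Gram-Schmidt, since its Q agrees with the Gram-Schmidt output up to column signs):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])        # columns u, v, w
Q1, _ = np.linalg.qr(A)                # orthonormalize in the order u, v, w
Q2, _ = np.linalg.qr(A[:, [1, 0, 2]])  # start with v instead: order v, u, w
# put Q2's columns back in u, v, w order and compare (abs() ignores sign flips)
print(np.allclose(np.abs(Q1), np.abs(Q2[:, [1, 0, 2]])))  # False: the bases differ
```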
 

FAQ: Question about Gram-Schmidt orthogonalization

What is Gram-Schmidt orthogonalization?

Gram-Schmidt orthogonalization is a mathematical process for finding an orthogonal basis of the space spanned by a given set of vectors. It takes a set of linearly independent vectors and transforms them into a set of mutually perpendicular (orthogonal) vectors spanning the same space. This process is commonly used in fields such as linear algebra and signal processing.

Why is Gram-Schmidt orthogonalization important?

Gram-Schmidt orthogonalization is important because it simplifies many problems involving vector spaces. With an orthogonal basis, dot products, projections, and coordinate computations become straightforward. The process also underlies the QR factorization, which is widely used for solving linear systems and least-squares problems.

How does Gram-Schmidt orthogonalization work?

The Gram-Schmidt orthogonalization process works as follows. First, we normalize the first vector in the set. Next, we subtract from the second vector its projection onto the first vector; this produces a new vector that is orthogonal to the first. We repeat this for each subsequent vector, subtracting its projections onto all the vectors constructed so far. Finally, we normalize the vectors to obtain an orthonormal basis.
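
For instance, a tiny worked example in the plane (numbers chosen purely for illustration): with u = (3, 4) and v = (1, 0),

$$U = \frac{u}{\|u\|} = (0.6, 0.8), \qquad v' = v - (v \cdot U)\,U = (1, 0) - 0.6\,(0.6, 0.8) = (0.64, -0.48), \qquad V = \frac{v'}{\|v'\|} = (0.8, -0.6),$$

and indeed U · V = 0.48 - 0.48 = 0.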

What are the applications of Gram-Schmidt orthogonalization?

Gram-Schmidt orthogonalization has many applications in mathematics and science. It is commonly used in linear algebra to find orthogonal bases, solve systems of equations, and compute eigenvalues and eigenvectors. It is also used in signal processing to filter and analyze data. Additionally, this process has applications in physics, engineering, and computer science.

Are there any limitations to Gram-Schmidt orthogonalization?

While Gram-Schmidt orthogonalization is a powerful tool, it does have limitations. One is that the classical process can be numerically unstable: in floating-point arithmetic, rounding errors cause the computed vectors to gradually lose orthogonality. The process also breaks down if the input vectors are linearly dependent. In such cases, alternatives such as modified Gram-Schmidt or Householder-based QR decomposition are more suitable.
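
A small illustration of that instability (a sketch; the Hilbert matrix is a standard ill-conditioned test case, and the names cgs/mgs are just labels):

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: each projection uses the original column."""
    Q = np.zeros_like(A)
    for j in range(A.shape[1]):
        q = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = q / np.linalg.norm(q)
    return Q

def mgs(A):
    """Modified Gram-Schmidt: projections are subtracted from a running remainder."""
    Q = A.copy()
    for j in range(Q.shape[1]):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        Q[:, j + 1:] -= np.outer(Q[:, j], Q[:, j] @ Q[:, j + 1:])
    return Q

n = 10
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
for name, Q in [("classical", cgs(H)), ("modified", mgs(H))]:
    # how far Q is from having orthonormal columns
    print(name, np.linalg.norm(Q.T @ Q - np.eye(n)))
```

On an ill-conditioned input like this, the classical variant typically loses several more orders of magnitude of orthogonality than the modified one.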
