Why Are My Gram-Schmidt Results Different on Wolfram|Alpha and MATLAB?

  • MHB
  • Thread starter: nedf
In summary, Wolfram|Alpha is a computational knowledge engine that provides solutions for various computations. However, when computing the second term v2 = x2 - (x2.v1)/(v1.v1) * v1, the result may differ from the one given by other tools such as MATLAB. This is because the dot product for complex vectors requires a conjugation, and different sources may conjugate different arguments. Additionally, an orthonormal (normalized) basis is typically not unique for a given matrix.
  • #2
nedf said:
Wolframalpha provides this solution:
Wolfram|Alpha: Computational Knowledge Engine

However, when I compute the second term v2 = x2 - (x2.v1)/(v1.v1) * v1,
the result is different from that of the link above. What's wrong?
Wolfram|Alpha: Computational Knowledge Engine

Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
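
A quick Python sketch (the vectors here are made-up examples, not from the thread) showing that with this convention $\mathbf a \cdot \mathbf b$ and $\mathbf b \cdot \mathbf a$ are complex conjugates of each other:

```python
# Complex dot product with conjugation on the second argument:
# a . b = sum(a_i * conj(b_i))
def cdot(a, b):
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 2j, 3 - 1j]
b = [2 - 1j, 1j]

print(cdot(a, b))  # (-1+2j)
print(cdot(b, a))  # (-1-2j) -- the conjugate of a . b
print(cdot(a, a))  # (15+0j) -- real and non-negative, as an inner product must be
```

Without the conjugation, $\mathbf a \cdot \mathbf a$ could come out complex or negative, so it could not serve as a squared length.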
 
  • #3
I like Serena said:
Hi nedf! Welcome to MHB! ;)

Did you take into account that the dot product for complex numbers requires a conjugation?
That is, $\mathbf a \cdot \mathbf b = \sum a_i\overline{b_i}$.
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html,
why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum\overline{a_i}\, b_i$

Also, I computed the orthonormal basis with orth in MATLAB:

[image: MATLAB output of orth]

Why is it different from the Wolfram|Alpha result?

[image: Wolfram|Alpha output]

Is the orthonormal (normalized) basis unique for a given matrix?
 
  • #4
nedf said:
Thanks.
At the end of the page on https://www.mathworks.com/help/matlab/ref/dot.html,
why is the dot product defined this way instead? Wouldn't the answer be different?
$\mathbf a \cdot \mathbf b = \sum\overline{a_i}\, b_i$

It's a matter of convention. Both dot products are valid inner products, and one is the conjugate of the other.
However, your formula for the Gram-Schmidt process assumes the $\sum a_i\overline {b_i}$ version;
otherwise the dot product in the fraction would have to be written the other way around.
So the results will be the same, provided the formula matches the convention.
As written, your Gram-Schmidt formula is incompatible with MathWorks's variant.

Note that with the standard $\sum a_i\overline{b_i}$ we have:
$$\mathbf v_1\cdot \mathbf v_2 = \mathbf v_1 \cdot\left( \mathbf x_2-\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \mathbf v_1 \cdot \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\mathbf v_1\right)
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf x_2 \cdot \mathbf v_1}{\mathbf v_1 \cdot \mathbf v_1}\right)^*\mathbf v_1 \cdot \mathbf v_1 \\
= \mathbf v_1 \cdot \mathbf x_2- \left(\frac{\mathbf v_1 \cdot \mathbf x_2}{\mathbf v_1 \cdot \mathbf v_1}\right)\mathbf v_1 \cdot \mathbf v_1
= \mathbf v_1 \cdot \mathbf x_2 - \mathbf v_1 \cdot \mathbf x_2 = 0
$$
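
As a numerical sanity check, here is a small Python sketch (with arbitrary example vectors, not the ones from the thread) of the step above: with the $\sum a_i\overline{b_i}$ convention, the Gram-Schmidt formula as written does produce $\mathbf v_2$ orthogonal to $\mathbf v_1$:

```python
# Gram-Schmidt step v2 = x2 - (x2 . v1)/(v1 . v1) * v1, using the
# convention a . b = sum(a_i * conj(b_i)); example vectors are arbitrary.
def cdot(a, b):
    return sum(x * y.conjugate() for x, y in zip(a, b))

x1 = [1 + 1j, 2j]
x2 = [3 + 0j, 1 - 1j]

v1 = x1
c = cdot(x2, v1) / cdot(v1, v1)           # projection coefficient
v2 = [x - c * v for x, v in zip(x2, v1)]  # subtract the projection onto v1

print(abs(cdot(v1, v2)))  # ~0 up to floating-point rounding
```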

Also, I computed the orthonormal basis with orth in MATLAB. Why is it different from the Wolfram|Alpha result? Is the orthonormal (normalized) basis unique for a given matrix?

And no, an orthonormal basis is typically not unique.
Consider for instance $\mathbb R^3$.
The standard orthonormal basis is $\{(1,0,0),(0,1,0),(0,0,1)\}$.
But $\{(1,0,0),(0,1/\sqrt 2,1/\sqrt 2),(0,1/\sqrt 2,-1/\sqrt 2)\}$ is also an orthonormal basis.
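
A small Python check on the two bases from the example confirms that both are orthonormal:

```python
# Verify that both bases are orthonormal: pairwise dot products are 0
# and each vector has unit length (dot product with itself equal to 1).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

s = 1 / math.sqrt(2)
basis1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
basis2 = [(1, 0, 0), (0, s, s), (0, s, -s)]

for basis in (basis1, basis2):
    for i, u in enumerate(basis):
        for j, v in enumerate(basis):
            expected = 1.0 if i == j else 0.0
            assert abs(dot(u, v) - expected) < 1e-12
print("both bases are orthonormal")
```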
 

FAQ: Why Are My Gram-Schmidt Results Different on Wolfram|Alpha and MATLAB?

1. What is the Gram-Schmidt process?

The Gram-Schmidt process is a mathematical procedure used to transform a set of linearly independent vectors into a set of mutually orthogonal vectors. It is commonly used in linear algebra and is named after the mathematicians Jørgen Pedersen Gram and Erhard Schmidt.

2. Why is the Gram-Schmidt process important?

The Gram-Schmidt process is important because it allows us to find an orthogonal basis for a given vector space. This is useful in many applications, such as solving systems of linear equations and finding eigenvalues and eigenvectors.

3. How does the Gram-Schmidt process work?

The Gram-Schmidt process involves a series of steps in which each vector in the original set has its projections onto the already-processed vectors subtracted, leaving only its component in their orthogonal complement. This yields a new set of orthogonal vectors. In floating-point arithmetic, the process can be repeated (reorthogonalization) to further improve the orthogonality of the computed vectors.
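
The steps above can be sketched in Python for real vectors (a minimal illustration, not MATLAB's or Wolfram|Alpha's implementation; different tools can legitimately return different orthonormal bases for the same input):

```python
# Classical Gram-Schmidt with normalization: subtract from each input
# vector its projections onto the already-computed unit vectors, then
# normalize the remainder to unit length.
import math

def gram_schmidt(vectors):
    ortho = []
    for x in vectors:
        v = list(x)
        for u in ortho:
            coef = sum(a * b for a, b in zip(v, u))  # v . u (u has unit length)
            v = [a - coef * b for a, b in zip(v, u)]
        norm = math.sqrt(sum(a * a for a in v))
        ortho.append([a / norm for a in v])
    return ortho

basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(basis)  # two orthonormal vectors spanning the same plane
```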

4. What are the benefits of using the Gram-Schmidt process?

One of the main benefits of the Gram-Schmidt process is that it lets us work with orthogonal vectors, which simplify calculations and make them easier to understand. It also helps reduce errors and makes computations more efficient.

5. Are there any limitations to the Gram-Schmidt process?

While the Gram-Schmidt process is a useful tool, it does have limitations. By itself it produces an orthogonal basis; an extra normalization step is needed to obtain an orthonormal basis, i.e. orthogonal vectors of unit length. The classical form of the process can also accumulate numerical errors, especially when the input vectors are nearly linearly dependent; the modified Gram-Schmidt variant is more stable in practice.
