Prove mutually non-zero orthogonal vectors are linearly independent

In summary: the proof is NOT "the sum of two vectors is non-zero" followed by "the sum of three is non-zero" followed by "...". Linear independence has a precise definition, and a proof must use it. And, of course, if the "vectors" are not actually vectors, it is not possible to prove that they are linearly independent: one cannot "prove" something about something that is not defined.
  • #1
unscientific

Homework Statement



Let ##a_1, a_2, \dots, a_n## be vectors in ##\mathbb{R}^n## and assume that they are mutually perpendicular and none of them equals 0. Prove that they are linearly independent.


Homework Equations





The Attempt at a Solution



Consider ##\beta_i a_i + \beta_j a_j \neq 0## for all ##i, j##

=> ##\beta_i a_i + \beta_j a_j + \beta_k a_k \neq 0## for all ##i, j, k##.

Therefore ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n \neq 0## (linearly independent)
 
  • #2
I do not understand your attempt.
 
  • #3
voko said:
I do not understand your attempt.

I first start off with adding 2 vectors, and showing the sum is non-zero. Then I add a third one, which is also non-zero. Then I add everything to show it is also non-zero.
 
  • #4
unscientific said:
I first start off with adding 2 vectors, and showing the sum is non-zero. Then I add a third one, which is also non-zero. Then I add everything to show it is also non-zero.

I do not see how you show that any of those sums is non-zero. You just state that it is. You might as well state the end result immediately; it will be just as (un)justified.
 
  • #5
unscientific said:
Therefore ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n \neq 0## (linearly independent)
This is NOT the definition of linear independence. The equation ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n = 0## appears in the definition for linear independence, and in the definition for linear dependence.

How then do we distinguish between a set of vectors that is linearly independent from one that is linearly dependent?

What you showed above, with the ≠ symbol, doesn't appear in either definition.
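
For the record, the two definitions differ only in which coefficient lists are allowed:

Linearly independent: ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n = 0## holds only when ##\beta_1 = \beta_2 = \dots = \beta_n = 0##.

Linearly dependent: ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n = 0## holds for some choice of ##\beta_1, \dots, \beta_n## that are not all zero.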
 
  • #6
Mark44 said:
This is NOT the definition of linear independence. The equation β1a1 + β2a2 + ... + βnan = 0 appears in the definition for linear independence, and in the definition for linear dependence.

How then do we distinguish between a set of vectors that is linearly independent from one that is linearly dependent?

What you showed above, with the ≠ symbol, doesn't appear in either definition.

I think the first step is to show that the vector sum of any 2 vectors is non-zero. But since all the vectors are mutually orthogonal, sum of both can't be zero?

Quick proof:

Assume ##\beta_i a_i + \beta_j a_j = 0##

This implies that ##a_i = -(\beta_j/\beta_i) a_j## is parallel to ##a_j##.
(##\Rightarrow\Leftarrow##)
So any two mutually orthogonal vectors are linearly independent. By mathematical induction, ##\sum_i \beta_i a_i \neq 0##.
 
  • #7
Consider the inner product of the vector ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n## with the vector ##a_i## and show that it is zero only if ##\beta_i = 0##.

Therefore ##\beta_1 a_1 + \beta_2 a_2 + \dots + \beta_n a_n = 0## iff all ##\beta_i## are zero.
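
Spelled out, this is a two-line sketch using only the linearity of the inner product and the mutual orthogonality: for each fixed ##i##,

[tex]\left\langle \beta_1 a_1 + \dots + \beta_n a_n,\ a_i \right\rangle = \sum_{j=1}^{n} \beta_j \langle a_j, a_i \rangle = \beta_i \|a_i\|^2,[/tex]

because ##\langle a_j, a_i \rangle = 0## whenever ##j \neq i##. If the sum is the zero vector, the left side is 0, and ##\|a_i\|^2 \neq 0## since ##a_i \neq 0##, so ##\beta_i = 0##.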
 
  • #8
unscientific said:
I think the first step is to show that the vector sum of any 2 vectors is non-zero. But since all the vectors are mutually orthogonal, sum of both can't be zero?

Quick proof:

Assume ##\beta_i a_i + \beta_j a_j = 0##

This implies that ##a_i = -(\beta_j/\beta_i) a_j## is parallel to ##a_j##.
(##\Rightarrow\Leftarrow##)
So any two mutually orthogonal vectors are linearly independent. By mathematical induction, ##\sum_i \beta_i a_i \neq 0##.
You have not said anything about the [itex]\beta_i[/itex] not all being 0. Obviously if [itex]\beta_i= 0[/itex] for all i, that sum is 0.
 
  • #9
By mathematical induction, ##\sum_i \beta_i a_i \neq 0##.

You have, almost, proved the base of induction. But you still have to prove the ##n \rightarrow n + 1## induction step.
 
  • #10
voko said:
You have, almost, proved the base of induction. But you still have to prove the ##n \rightarrow n + 1## induction step.

I don't think induction works here. Even though the sum of any 2 vectors is non-zero, it doesn't mean that adding a third vector won't make it zero.

Assume the sum of 3 vectors = 0; it implies the third vector added is parallel to the sum of the other two. (No contradiction, as the statement says that the vectors are mutually orthogonal, and nothing is said about the orthogonality between a vector and a sum of vectors.)

I think the right way is to take the inner product of any vector with respect to the entire sum.
 
  • #11
unscientific said:
I don't think induction works here. Even though the sum of any 2 vectors is non-zero, it doesn't mean that adding a third vector won't make it zero.

Assume the sum of 3 vectors = 0; it implies the third vector added is parallel to the sum of the other two. (No contradiction, as the statement says that the vectors are mutually orthogonal, and nothing is said about the orthogonality between a vector and a sum of vectors.)

Well, this can in fact be proved, too, but this is probably more difficult than what you have to do.

I think the right way is to take the inner product of any vector with respect to the entire sum.

Why does that have to be any vector?
 
  • #12
voko said:
Well, this can in fact be proved, too, but this is probably more difficult than what you have to do.



Why does that have to be any vector?

1. Add any 2 vectors, show that they are non-zero.

2. Add a third vector, take inner product of that sum with that vector just added. Non-zero.

3. Carry on process till last vector.

4. QED
 
  • #13
This does not prove linear independence. Use the definition of the latter.
 
  • #14
Unscientific, the basic problem appears to be that you have an incorrect idea of what the definition of "independent" is!

It is NOT
1. Add any 2 vectors, show that they are non-zero.

2. Add a third vector, take inner product of that sum with that vector just added. Non-zero.

3. Carry on process till last vector.

4. QED

A set of vectors [tex]\{v_1, v_2, ..., v_n\}[/tex] is "independent" if and only if the only way [tex]\beta_1v_1+ \beta_2v_2+ ... + \beta_nv_n= 0[/tex] can hold is if [tex]\beta_1= \beta_2= ...= \beta_n= 0[/tex].

It is NOT just a matter of adding vectors and saying the sum is not 0. It is that the only way a linear combination of them can be 0 is if all coefficients are 0. That is equivalent to the statement that no one of the vectors can be written as a linear combination of the others.

So to prove a set of vectors is linearly independent, start with "Suppose [tex]\beta_1v_1+ \beta_2v_2+ ...+ \beta_nv_n= 0[/tex]" and show that every one of the ##\beta##s is equal to 0. Here the only condition on the vectors is that they are "mutually perpendicular", and nowhere have you used that condition.

What would you get if you took the dot product of [tex]\beta_1v_1+ \beta_2v_2+ ...+ \beta_nv_n[/tex] with each of [tex]v_1[/tex], [tex]v_2[/tex], ..., [tex]v_n[/tex] in turn?
 
  • #15
HallsofIvy said:
Unscientific, the basic problem appears to be that you have an incorrect idea of what the definition of "independent" is!

It is NOT


A set of vectors [tex]\{v_1, v_2, ..., v_n\}[/tex] is "independent" if and only if the only way [tex]\beta_1v_1+ \beta_2v_2+ ... + \beta_nv_n= 0[/tex] can hold is if [tex]\beta_1= \beta_2= ...= \beta_n= 0[/tex].

It is NOT just a matter of adding vectors and saying the sum is not 0. It is that the only way a linear combination of them can be 0 is if all coefficients are 0. That is equivalent to the statement that no one of the vectors can be written as a linear combination of the others.

So to prove a set of vectors is linearly independent, start with "Suppose [tex]\beta_1v_1+ \beta_2v_2+ ...+ \beta_nv_n= 0[/tex]" and show that every one of the ##\beta##s is equal to 0. Here the only condition on the vectors is that they are "mutually perpendicular", and nowhere have you used that condition.

What would you get if you took the dot product of [tex]\beta_1v_1+ \beta_2v_2+ ...+ \beta_nv_n[/tex] with each of [tex]v_1[/tex], [tex]v_2[/tex], ..., [tex]v_n[/tex] in turn?

Yup, sorry for not being concise. What I meant by "add any 2 vectors" is adding the ##\beta_i a_i## terms.
 
  • #16
HallsofIvy said:
Unscientific, the basic problem appears to be that you have an incorrect idea of what the definition of "independent" is!

It is NOT

A set of vectors [tex]\{v_1, v_2, ..., v_n\}[/tex] is "independent" if and only if the only way [tex]\beta_1v_1+ \beta_2v_2+ ... + \beta_nv_n= 0[/tex] can hold is if [tex]\beta_1= \beta_2= ...= \beta_n= 0[/tex].

It is NOT just a matter of adding vectors and saying the sum is not 0. It is that the only way a linear combination of them can be 0 is if all coefficients are 0. That is equivalent to the statement that no one of the vectors can be written as a linear combination of the others.

So to prove a set of vectors is linearly independent, start with "Suppose [tex]\beta_1v_1+ \beta_2v_2+ ...+ \beta_nv_n= 0[/tex]" and show that every one of the ##\beta##s is equal to 0. Here the only condition on the vectors is that they are "mutually perpendicular", and nowhere have you used that condition.

What would you get if you took the dot product of [tex]\beta_1v_1+ \beta_2v_2+ ...+ \beta_nv_n[/tex] with each of [tex]v_1[/tex], [tex]v_2[/tex], ..., [tex]v_n[/tex] in turn?
1. Add any 2 vectors, show that they are non-zero.

##\beta_i a_i + \beta_j a_j## can't be zero, otherwise

##a_i = -(\beta_j/\beta_i) a_j##, implying they are parallel. Contradiction, as they are orthogonal.

2. Add a third vector, take inner product of that sum with that vector just added. Non-zero.

##\beta_i a_i + \beta_j a_j + \beta_k a_k##.

Taking the inner product, the first two inner products give 0, due to orthogonality. The last one, which is with itself, gives non-zero due to positivity of the norm.

3. Carry on process till last vector.

4. QED (assuming coefficients are non-zero)

I hope this is clear enough..thanks for the help guys!
 
  • #17
unscientific said:
1. Add any 2 vectors, show that they are non-zero.

##\beta_i a_i + \beta_j a_j## can't be zero, otherwise

What if ##\beta_i = \beta_j = 0##? Doesn't the expression give ##0##?

##a_i = -(\beta_j/\beta_i) a_j##, implying they are parallel. Contradiction, as they are orthogonal.

What if ##\beta_i=0##, won't you divide by ##0##?
Why are parallel vectors not orthogonal?
 
  • #18
micromass said:
What if ##\beta_i = \beta_j = 0##? Doesn't the expression give ##0##?



What if ##\beta_i=0##, won't you divide by ##0##?
Why are parallel vectors not orthogonal?

I'm assuming all coefficients are non-zero.
 
  • #19
unscientific said:
I'm assuming all coefficients are non-zero.

Well, you need to say this. And why can you assume this anyway?
 
  • #20
micromass said:
Well, you need to say this. And why can you assume this anyway?

Because I am choosing them to be non-zero, in order to work towards the proof of linear independence. No point choosing any of them to be zero. (I thought this was straightforward enough not to say..)
 
  • #21
unscientific said:
Because I am choosing them to be non-zero, in order to work towards the proof of linear independence. No point choosing any of them to be zero. (I thought this was straightforward enough not to say..)

That's not what linear independence states. It says that ##\beta_i\mathbf{v}_i + \beta_j\mathbf{v}_j \neq \mathbf{0}## whenever ##\beta_i## and ##\beta_j## are not both zero. So it can certainly happen that one of the ##\beta_i## is zero.
 
  • #22
micromass said:
That's not what linear independence states. It says that ##\beta_i\mathbf{v}_i + \beta_j\mathbf{v}_j \neq \mathbf{0}## whenever ##\beta_i## and ##\beta_j## are not both zero. So it can certainly happen that one of the ##\beta_i## is zero.

Yes, I get what you mean. But the question wants to show the linear independence of all vectors from ##a_1## to ##a_n##! If you let any of the coefficients be 0, then you are at most showing linear independence of the vectors other than the one whose coefficient you set to 0.
 
  • #23
unscientific said:
Yes, I get what you mean. But the question wants to show the linear independence of all vectors from ##a_1## to ##a_n##! If you let any of the coefficients be 0, then you are at most showing linear independence of the vectors other than the one whose coefficient you set to 0.

You are required to prove that the entire set of vectors is linearly independent. By definition, you must prove that their linear combination is zero only when all the coefficients are zero. If you prove any other statement, then you need also a proof that your statement leads to linear independence per the original definition.
 
  • #24
voko said:
You are required to prove that the entire set of vectors is linearly independent. By definition, you must prove that their linear combination is zero only when all the coefficients are zero. If you prove any other statement, then you need also a proof that your statement leads to linear independence per the original definition.

That's right, thanks for putting it in a more elegant way!
 

FAQ: Prove mutually non-zero orthogonal vectors are linearly independent

1. What does it mean for vectors to be mutually non-zero orthogonal?

Two vectors are mutually non-zero orthogonal if they are perpendicular to each other and both have non-zero magnitudes.

2. How do you prove that mutually non-zero orthogonal vectors are linearly independent?

To prove that mutually non-zero orthogonal vectors are linearly independent, we need to show that the only solution to the equation c1v1 + c2v2 = 0 (where v1 and v2 are the two vectors and c1 and c2 are constants) is when c1 = c2 = 0.
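
A minimal sketch of the two-vector case, using orthogonality: taking the inner product of both sides of ##c_1v_1 + c_2v_2 = 0## with ##v_1## gives ##c_1\langle v_1, v_1\rangle + c_2\langle v_2, v_1\rangle = c_1\|v_1\|^2 = 0##, and ##v_1 \neq 0## forces ##c_1 = 0##; the same computation with ##v_2## forces ##c_2 = 0##. Taking the inner product with each vector in turn extends the argument to any number of mutually orthogonal non-zero vectors.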

3. Can mutually non-zero orthogonal vectors be linearly dependent?

No, mutually non-zero orthogonal vectors cannot be linearly dependent. If they were, it would mean that one vector can be written as a multiple of the other, which is not possible if they are perpendicular.
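
In symbols: if ##v_1 = c\,v_2## with both vectors non-zero, then ##c \neq 0##, so ##\langle v_1, v_2\rangle = c\|v_2\|^2 \neq 0##, contradicting perpendicularity.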

4. What is the significance of proving that mutually non-zero orthogonal vectors are linearly independent?

Proving that mutually non-zero orthogonal vectors are linearly independent is important in many areas of mathematics and science, such as linear algebra, physics, and engineering. It allows us to determine the number of independent variables in a system and find solutions to equations involving these vectors.
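
As a minimal numerical illustration (assuming NumPy is available; the three vectors below are arbitrary examples chosen for the demonstration), mutually orthogonal non-zero vectors stack into a full-rank matrix, which is exactly linear independence:

[code]
# Minimal check, assuming NumPy: mutually orthogonal non-zero vectors
# form a full-rank matrix, i.e. a linearly independent set.
import numpy as np

# Three mutually orthogonal, non-zero vectors in R^3 (hypothetical examples).
a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0])
vectors = [a1, a2, a3]

# Pairwise inner products vanish, confirming mutual orthogonality.
for i in range(3):
    for j in range(i + 1, 3):
        assert np.isclose(np.dot(vectors[i], vectors[j]), 0.0)

# Rank 3 means b1*a1 + b2*a2 + b3*a3 = 0 only for b1 = b2 = b3 = 0.
A = np.column_stack(vectors)
print(np.linalg.matrix_rank(A))  # prints 3: the set is linearly independent
[/code]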

5. How can we use the concept of mutually non-zero orthogonal vectors in real-world applications?

The concept of mutually non-zero orthogonal vectors has many real-world applications, particularly in fields such as computer graphics, robotics, and signal processing. For example, in computer graphics, orthogonal vectors are used to represent the direction of light sources and the orientation of objects in 3D space.
