Proving Linear Independence of u & v in Inner-Product Space V

In summary: to prove that if u and v are given nonzero vectors in an arbitrary inner-product space V with <u,v>=0, then {u,v} is a linearly independent subset of V, we use the properties of inner products to show that the only solution to au+bv=0 is a=b=0. Taking the inner product of au+bv with u eliminates the v term and leaves a<u,u>=0; since the inner product of a nonzero vector with itself is strictly positive, a=0, and the same argument with v gives b=0. Therefore {u,v} is a linearly independent subset of V.
  • #1
DanielFaraday

Homework Statement


Prove that if u and v are given non-zero vectors in the arbitrary inner-product space V, and are such that <u,v>=0, then {u,v} is a linearly independent subset of V.

Homework Equations

The Attempt at a Solution


I have no idea where to start. It's hard to prove because the inner-product could be defined in various ways. I can't just pick one and go with it, since another equally valid definition may not work.

I am still working on this and I will post my attempt as soon as I have one, but until then I was hoping to get some ideas for a starting point.
 
  • #2
Hi Daniel - I would try starting with the definition of linear independence

u and v are linearly independent if the only solution to the equation
c1·u + c2·v = 0, for scalars c1 and c2, is c1 = c2 = 0.

Then you could set a linear combination of u and v equal to zero, and look at its inner product with u or with v.
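
That definition can be checked numerically in a concrete case. As a sketch (assuming the standard dot product on R^3, with hypothetical example vectors), the equation c1·u + c2·v = 0 has only the trivial solution exactly when the matrix with columns u and v has rank 2:

```python
# Numeric sanity check of the definition of linear independence for two
# orthogonal vectors in R^3 under the standard dot product.
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([-2.0, 1.0, 0.0])   # chosen so that <u, v> = 0

assert np.dot(u, v) == 0.0       # orthogonality, as in the problem statement

A = np.column_stack([u, v])      # columns are u and v
rank = np.linalg.matrix_rank(A)
print(rank)                      # 2 -> only c1 = c2 = 0 solves c1*u + c2*v = 0
```

A rank of 2 means the homogeneous system A·[c1, c2] = 0 has only the zero solution, which is exactly the definition above.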
 
  • #3
Thanks for the input! It was very helpful. Here's what I have so far. I'm not sure where the inner-product comes into play, though:

Let [tex]\pmb{u}=\left(u_1,u_2,u_3\right)[/tex] and let [tex]\pmb{v}=\left(v_1,v_2,v_3\right)[/tex].
If {u, v} is a linearly independent subset of V, then the only solution to [tex]c_1\pmb{u}+c_2\pmb{v}=0[/tex] is when [tex]c_1=c_2=0[/tex].

Thus, [tex]c_1\pmb{u}+c_2\pmb{v}=c_1\left(u_1,u_2,u_3\right)+c_2\left(v_1,v_2,v_3\right)=0[/tex].

Since [tex]c_1[/tex] and [tex]c_2[/tex] are coefficients for separate terms, and since it is given that u and v are non-zero vectors, the only way the above equation could be true is if [tex]c_1=c_2=0[/tex]
QED

Okay, this doesn't seem like much of a proof. Any ideas?
 
  • #4
It's not much of a proof. You don't need to split u and v into components. If c1*u+c2*v=0, what conclusions can you draw from <u,c1*u+c2*v>=0 and <v,c1*u+c2*v>=0, remembering <u,v>=0? That's what lanedance was suggesting. Use some properties of the inner product.
 
  • #5
Okay, this proof is actually easier than I thought. Tell me what you think. (I am using a and b for the constants to make it easier to type).

By definition, {u, v} is a linearly independent subset of V if the only solution to au+bv=0 is a=b=0. We seek to prove that this is the case.

Let u, v, and w be non-zero vectors.
The properties of inner products require that <au+bv,w>=a<u,w>+b<v,w>=0.
Since a and b are coefficients for separate terms, we know that a=b=0.
 
  • #6
Gack! No, that doesn't tell me anything since w could be anything. Put w=u. Try 0=<au+bv,u>=a<u,u>+b<v,u>. <v,u>=0. So 0=a<u,u>. What do the properties of the inner product tell you about <u,u> if u is nonzero?
 
  • #7
Dick said:
Gack! No, that doesn't tell me anything since w could be anything. Put w=u. Try 0=<au+bv,u>=a<u,u>+b<v,u>. <v,u>=0. So 0=a<u,u>. What do the properties of the inner product tell you about <u,u> if u is nonzero?

Let's try it again, then:

By definition, {u, v} is a linearly independent subset of V if the only solution to au+bv=0 is a=b=0. We seek to prove that this is the case.

Suppose au+bv=0. Then <au+bv,u>=<0,u>=0, and linearity of the inner product gives a<u,u>+b<v,u>=0.
Since <u,v>=0, we also have <v,u>=0 (in a real space <v,u>=<u,v>; in a complex space it is the complex conjugate, which is still zero).
We are left with a<u,u>=0.
However, the properties of inner products require that <u,u> be greater than zero for a nonzero vector u.
Thus a=0.

Now we repeat this process, replacing u with v in the first argument:

By linearity, <au+bv,v>=a<u,v>+b<v,v>=<0,v>=0.
Since <u,v>=0, we are left with b<v,v>=0.
However, the properties of inner products require that <v,v> be greater than zero for a nonzero vector v.
Thus b=0.

Thus, both a and b must be equal to zero.

QED
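
The algebra in this proof can also be illustrated numerically. As a sketch (not part of the proof; it assumes the standard dot product on R^3 and hypothetical example values for a and b), taking inner products of w = a·u + b·v with u and with v recovers a and b, so w = 0 would force a = b = 0:

```python
# Numeric illustration of the proof's key step: with <u, v> = 0, the
# coefficients of w = a*u + b*v are recovered by inner products with u and v.
import numpy as np

u = np.array([3.0, 0.0, 4.0])
v = np.array([0.0, 5.0, 0.0])               # <u, v> = 0
a, b = 2.5, -1.5
w = a * u + b * v

a_recovered = np.dot(w, u) / np.dot(u, u)   # <w,u> = a<u,u> since <v,u> = 0
b_recovered = np.dot(w, v) / np.dot(v, v)   # <w,v> = b<v,v> since <u,v> = 0
print(a_recovered, b_recovered)             # 2.5 -1.5
```

Since <u,u> and <v,v> are strictly positive, the only way w can be the zero vector is for both recovered coefficients to vanish.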
 
  • #8
Yes, yes, yes, yes. That's it. Exactly.
 
  • #9
Dick said:
Yes, yes, yes, yes. That's it. Exactly.

Thank you so much!
 

FAQ: Proving Linear Independence of u & v in Inner-Product Space V

What is the definition of linear independence in an inner-product space?

A set of vectors {u, v} in a vector space V is linearly independent if the only scalars a and b satisfying au+bv=0 are a=b=0. Equivalently, neither vector can be expressed as a scalar multiple of the other (or, for larger sets, as a linear combination of the others).

How do you prove linear independence in an inner-product space?

One method is the Gram-Schmidt process: orthogonalize the set, and if no vector is reduced to the zero vector along the way, the original set is linearly independent. Another method is to use the definition directly: set a linear combination of the vectors equal to zero and solve for the coefficients; if the only solution has all coefficients equal to zero, the set is linearly independent. When the vectors are nonzero and mutually orthogonal, as in this thread, taking inner products of the combination with each vector forces every coefficient to zero.
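
The Gram-Schmidt test can be sketched in a few lines. This is an illustrative implementation, assuming the standard dot product on R^n (the function name and tolerance are choices made for this example): a dependent input vector has zero residual after projection onto the earlier vectors and is discarded.

```python
# Minimal Gram-Schmidt sketch: dependent vectors collapse to zero residuals.
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - np.dot(w, q) * q       # subtract projection onto q
        norm = np.linalg.norm(w)
        if norm > tol:                     # keep only nonzero residuals
            basis.append(w / norm)
    return basis

independent = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
dependent   = [np.array([1.0, 2.0]), np.array([2.0, 4.0])]  # second = 2 * first
print(len(gram_schmidt(independent)))   # 2 -> linearly independent
print(len(gram_schmidt(dependent)))     # 1 -> linearly dependent
```

If the returned orthonormal set has as many vectors as the input, the input was linearly independent; any shortfall signals dependence.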

Can two vectors be linearly independent in one inner-product space but not in another?

No. Linear independence is a property of the underlying vector space and its scalars; it does not depend on the choice of inner product. Two vectors that are linearly independent remain so under any valid inner product on the same space. What can change with the inner product is whether the vectors are orthogonal.

What is the role of the inner product in proving linear independence?

The inner product is not part of the definition of linear independence, but it is a powerful tool for proving it. The inner product measures lengths and angles between vectors, and in particular detects orthogonality. Nonzero mutually orthogonal vectors are automatically linearly independent, as the proof in this thread shows, and the Gram-Schmidt process uses inner products to test for and enforce independence.

Can a set of vectors be linearly independent if one vector is a scalar multiple of another vector in the set?

No. If one vector is a scalar multiple of another, say v=cu, then cu-v=0 is a nontrivial linear combination equal to zero, which violates the definition of linear independence.
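
As a tiny numeric sketch of this (assuming the standard dot product on R^2 and an example scalar of 3): if v = 3u, then 3u + (-1)v is the zero vector even though the coefficients are nonzero, so {u, v} fails the definition.

```python
# If v is a scalar multiple of u, a nontrivial combination gives the zero vector.
import numpy as np

u = np.array([1.0, 2.0])
v = 3 * u                      # v is a scalar multiple of u
combo = 3 * u + (-1) * v       # nonzero coefficients 3 and -1
print(np.allclose(combo, 0))   # True -> {u, v} is linearly dependent
```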
