How Can We Prove Linear Dependence in Vector Spaces?

In summary, linear dependence is a mathematical concept describing a set of vectors in which at least one vector can be expressed as a linear combination of the others. Proving linear dependence is important because it tells us whether a set of vectors carries redundant information, for example when deciding whether the set can serve as a basis. It can be checked with methods such as computing the determinant or the rank of a matrix whose columns are the vectors. The difference between linear dependence and linear independence is that in the former at least one vector can be expressed as a linear combination of the others, while in the latter no vector can, so the only linear combination equal to the zero vector is the trivial one. A simple example of linear dependence is the pair of vectors (1, 2) and (2, 4) in the plane, since (2, 4) = 2·(1, 2).
  • #1
autre
I have to prove:

Let [itex]u_{1}[/itex] and [itex]u_{2}[/itex] be nonzero vectors in vector space [itex]U[/itex]. Show that {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} is linearly dependent iff [itex]u_{1}[/itex] is a scalar multiple of [itex]u_{2}[/itex] or vice-versa.

My attempt at a proof:

([itex]\rightarrow[/itex]) Let {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} be linearly dependent. Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1} \not= \alpha_{2} [/itex]...I'm stuck here in this direction

([itex]\leftarrow[/itex]) Fairly trivial. Let [itex]u_{1} = -u_{2}[/itex]. Then [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] but [itex]\alpha_{1} \not= \alpha_{2} [/itex].

Any ideas?
 
  • #2
autre said:
Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1} \not= \alpha_{2} [/itex]...I'm stuck here in this direction
Look at the definition of linear dependence again. That's not what linear dependence tells you about the scalars. It tells you that [itex] \alpha_1[/itex] and [itex] \alpha_2 [/itex] are not both...? Fixing this definition will also help finish the proof.

([itex]\leftarrow[/itex]) Fairly trivial. Let [itex]u_{1} = -u_{2}[/itex]. Then [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] but [itex]\alpha_{1} \not= \alpha_{2} [/itex].

Maybe I'm missing something, but you can't just assume that [itex] u_1 = -u_2[/itex] to prove the reverse direction. You're only given that one is a scalar multiple of the other, so you only know [itex] u_1 = c u_2 [/itex] for some scalar c.
 
  • #3
"[itex]\rightarrow[/itex]"

[itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex]

looking at the definition, what is the condition on [itex]\alpha_{1}[/itex] and [itex]\alpha_{2}[/itex] for {[itex]u_{1}, u_{2}[/itex]} to be linearly dependent?

"[itex]\leftarrow[/itex]"

in this part you have to assume [itex]u_{1} = c u_{2}[/itex]; perhaps the negative you put in your original will give you a hint as to what to do for the first part.
 
  • #4
Thanks for the input guys.

Look at the definition of linear dependence again.

([itex]\rightarrow[/itex]) Let {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} be linearly dependent. Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1}, \alpha_{2}[/itex] are not both [itex]0[/itex]. Therefore, if [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex], then [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex]. Is that good?

For part 2:

([itex]\leftarrow[/itex]) Let [itex]u_{1} = cu_{2}[/itex]. Why does that mean that the coefficients aren't both [itex]0[/itex]?
 
  • #5
now you're getting somewhere for the "[itex]\rightarrow[/itex]" part.
so if [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex] where either [itex]\alpha_{1}[/itex] or [itex]\alpha_{2}[/itex] is non-zero (or maybe both are non-zero), what can you do now that you couldn't before?

for the second part, you have to somehow relate [itex]u_{1} = c u_{2}[/itex] to your findings from part one.
 
  • #6
If [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex], then one is a scalar multiple of the other, as required for this direction, right? What more do I need to do?
 
  • #7
I know it seems obvious, but you have to explicitly state:
assume one of [itex]\alpha_{1}, \alpha_{2}[/itex] is non-zero (by the definition of linear dependence). for the sake of argument we take [itex]\alpha_{1}[/itex] to be the non-zero coefficient, and since it is non-zero we can divide both sides by that coefficient.
which leads us to: [itex] u_{1} = \frac{-\alpha_{2}u_{2}}{\alpha_{1}} [/itex]. therefore if the set {[itex]u_{1}, u_{2}[/itex]} is linearly dependent, one must be a scalar multiple of the other as desired.

the "[itex]\leftarrow[/itex]" is just a reversal of "[itex]\rightarrow[/itex]"
it's a lot more powerful to prove the analogous statement for a set of [itex]n[/itex] vectors rather than just 2; if you're looking for good practice I'd suggest trying that.
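For reference, a sketch of that more general statement (stated here for context, not part of the original exercise): a set {[itex]v_{1}, \dots, v_{n}[/itex]} is linearly dependent iff at least one [itex]v_{k}[/itex] is a linear combination of the remaining vectors. The ([itex]\rightarrow[/itex]) argument is the same idea as above: if [itex]\alpha_{1}v_{1} + \dots + \alpha_{n}v_{n} = 0[/itex] with some [itex]\alpha_{k} \neq 0[/itex], divide by [itex]\alpha_{k}[/itex] to get [itex]v_{k} = -\frac{1}{\alpha_{k}}\sum_{i \neq k}\alpha_{i}v_{i}[/itex]; for ([itex]\leftarrow[/itex]), move everything to one side to obtain a dependence relation whose coefficient on [itex]v_{k}[/itex] is [itex]1 \neq 0[/itex].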
 
  • #8
autre said:
For part 2:

([itex]\leftarrow[/itex]) Let [itex]u_{1} = cu_{2}[/itex]. Why does that mean that the coefficients aren't both [itex]0[/itex]?

So now you need to find scalars [itex] \alpha_1, \alpha_2[/itex] not both zero such that [itex] \alpha_1 u_1 + \alpha_2 u_2 =0 [/itex]. Can you see a way to use the information [itex] u_1 = c u_2 [/itex] to choose scalars so this is true? Try rearranging the equation in your post.
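To spell the hint out (one possible choice of scalars, written here as a sketch): from [itex]u_{1} = cu_{2}[/itex] we get [itex]1\cdot u_{1} + (-c)\cdot u_{2} = 0[/itex], and the coefficient on [itex]u_{1}[/itex] is [itex]1 \neq 0[/itex], so the scalars are not both zero and {[itex]u_{1}, u_{2}[/itex]} is linearly dependent by definition.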
 
  • #9
as gordonj005 pointed out, the proof of (→) breaks down into 2 cases.

you can avoid this difficulty by noting that, in point of fact:

[itex]\alpha_1u_1 = -\alpha_2u_2[/itex] (with [itex]\alpha_1,\alpha_2[/itex] not both zero) forces [itex]\alpha_1,\alpha_2 \neq 0[/itex] since, for example:

[itex]\alpha_1 = 0 \implies -\alpha_2u_2 = 0 \implies \alpha_2 = 0[/itex] since [itex]u_2 \neq 0[/itex].

so you are free to divide by α1 or α2.

you almost had the (←) in your first go-round. your mistake was fixing a particular scalar (in effect taking the multiple to be -1). just use "c", where c is the scalar with u1 = cu2.

why do you know that c ≠ 0 (because u1 is _______)?
 
  • #10
why do you know that c ≠ 0 (because u1 is _______)?

a non-zero vector! Just curious, how would I go about this part of the proof if it weren't specified that u1, u2 were nonzero vectors?
 
  • #11
suppose u1 = 0. then {u1,u2} is linearly dependent no matter what u2 is:

au1 + 0u2 = 0, for any non-zero value of a.

the same goes if u2 = 0.

so the statement "{u1,u2} is linearly dependent iff u1 is a scalar multiple of u2 (and vice-versa)" is no longer true.

however, in actual practice, no one ever tries to decide if the 0-vector is part of a basis, because including it automatically makes a set linearly dependent. so one just wants to decide if a set of non-zero vectors is linearly independent or not.
 

FAQ: How Can We Prove Linear Dependence in Vector Spaces?

What is linear dependence?

Linear dependence is a mathematical concept that describes a relationship among two or more vectors, where at least one vector can be expressed as a linear combination of the others. In other words, if one vector can be written as a sum of scalar multiples of the other vectors (in the two-vector case, simply as a scalar multiple of the other), then the set is linearly dependent.
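In symbols, the standard definition reads: vectors [itex]v_{1}, \dots, v_{n}[/itex] are linearly dependent if there exist scalars [itex]\alpha_{1}, \dots, \alpha_{n}[/itex], not all zero, such that [itex]\alpha_{1}v_{1} + \dots + \alpha_{n}v_{n} = 0[/itex]; they are linearly independent if the only such scalars are [itex]\alpha_{1} = \dots = \alpha_{n} = 0[/itex].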

Why is it important to prove linear dependence?

Proving linear dependence is important because it tells us whether a set of vectors contains redundant information: a dependent vector can be reconstructed from the others, so it adds nothing to the span. This is crucial in many scientific fields, such as physics and economics, where knowing which quantities are determined by others is essential for building accurate models, and in linear algebra itself when deciding whether a set of vectors can form a basis.

How do you prove linear dependence?

There are a few different methods for proving linear dependence. A common one, when you have n vectors in an n-dimensional space, is to arrange the vectors as the columns of a square matrix and compute its determinant: if the determinant equals zero, the vectors are linearly dependent. More generally, you can use the rank of the matrix whose columns are the vectors: if the rank is less than the number of vectors, they are linearly dependent.
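As a quick illustration of the rank test described above, here is a minimal Python sketch using NumPy; the two example vectors are made up for illustration (the second is twice the first, so the set is linearly dependent):

[code]
import numpy as np

# Example vectors (chosen so that v2 = 2 * v1, i.e., the set is linearly dependent).
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])

# Stack the vectors as the columns of a matrix.
A = np.column_stack([v1, v2])

# The vectors are linearly dependent exactly when rank(A) < number of vectors.
rank = np.linalg.matrix_rank(A)
print("rank =", rank)
print("linearly dependent" if rank < A.shape[1] else "linearly independent")
[/code]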

What is the difference between linear dependence and linear independence?

Linear dependence and linear independence are opposite concepts. As mentioned earlier, linear dependence means that at least one vector can be expressed as a linear combination of the others. Linear independence means that the vectors are not related in this way: no vector can be written as a linear combination of the others, so the only combination of them equal to the zero vector is the one with all coefficients zero.

Can you give an example of linear dependence?

A simple example of linear dependence is the pair of vectors u1 = (1, 2) and u2 = (2, 4) in the plane. They are linearly dependent because u2 = 2u1, so one is a scalar multiple of the other, which is exactly the situation treated in the thread above.
