sponsoredwalk
Hi, first off, go easy on me; I'm only learning.
My book is talking about linear independence.
As I understand the concept, it means that a vector has to point in a different direction, i.e. be non-collinear, with respect to the vector it is being compared with.
Mathematically, my book has defined linearly independent vectors as:
[tex] a\overline{u} + b\overline{v} = 0 \ \text{ with } \ a = 0 \ \text{ and } \ b = 0 [/tex]
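To make that concrete for myself (the vectors here are my own illustration, not the book's), take [tex]\overline{u} = (1,0)[/tex] and [tex]\overline{v} = (0,1)[/tex]:
[tex] a(1,0) + b(0,1) = (a, b) = (0, 0) [/tex]
and the only way that holds is if a = 0 and b = 0.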
The book says that linearly dependent vectors are those for which the scalars a & b in the above are not both equal to zero.
The reason is that if either scalar were non-zero you could solve for the corresponding vector, describing one vector as a scalar multiple of the other.
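For instance, spelling the step out myself: if a ≠ 0 then
[tex] \overline{u} = -\frac{b}{a} \overline{v} [/tex]
so [tex]\overline{u}[/tex] would be a scalar multiple of, i.e. collinear with, [tex]\overline{v}[/tex].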
Okay, that seems about right.
My problem comes from the following:
Assume [tex]\overline{u}[/tex] and [tex]\overline{v}[/tex] to be linearly independent, and suppose
[tex] a\overline{u} + b\overline{v} = \alpha\overline{u} + \beta\overline{v} [/tex]
Moving everything to one side gives
[tex] (a - \alpha)\overline{u} + (b - \beta)\overline{v} = 0 [/tex]
By the definition of linear independence, this must imply that a-α=0 & that b-β=0.
This in turn must imply that a=α & that b=β.
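Checking this myself with the same illustrative vectors as above, [tex]\overline{u} = (1,0)[/tex] and [tex]\overline{v} = (0,1)[/tex]:
[tex] a(1,0) + b(0,1) = (a, b) \ \text{ and } \ \alpha(1,0) + \beta(0,1) = (\alpha, \beta) [/tex]
and the two sides agree exactly when a=α & b=β.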
But shouldn't a & α, and b & β, all have to be equal to zero anyway?
Isn't the above just saying that zero minus zero equals zero?
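In other words, if all four scalars had to vanish, the equation above would (as I read it) just reduce to
[tex] (0 - 0)\overline{u} + (0 - 0)\overline{v} = 0 [/tex]
which doesn't seem to say anything at all.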
If all the scalar coefficients are not equal to zero, then the vectors must be linearly dependent. I get the feeling that we are trying to define a way of saying that a vector can have non-zero coefficients if you set it equal to itself & reference it to itself. But if you do this, aren't you defining linearly dependent vectors simply by the fact that the scalars are non-zero?
What am I not realizing?
Please go easy; I don't know anything about bases or anything like that, as the book is trying to define the concept using just the principle above.