Vector independence proof question

In summary: the vectors are linearly dependent if and only if some [tex]v_i[/tex] is a linear combination of the vectors before it, [tex]v_i=a_1v_1+\dots+a_{i-1}v_{i-1}[/tex].
  • #1
transgalactic
Prove that vectors [tex]v_1,\dots,v_n[/tex] in a vector space V over a field F
are linearly dependent if and only if there is an index [tex]1\le i\le n[/tex]
such that [tex]v_i[/tex] is a linear combination of the vectors preceding it,
[tex]v_1,\dots,v_{i-1}[/tex].

I got a proof but I can't fully understand it:

Suppose [tex]v_i[/tex] is a linear combination of the vectors before it,
[tex]v_i=a_1v_1+\dots+a_{i-1}v_{i-1}[/tex]

We move [tex]v_i[/tex] to the other side:
[tex]0=a_1v_1+\dots+a_{i-1}v_{i-1}+(-1)v_i[/tex]

Then they say that, after padding with zero coefficients,
[tex]0=a_1v_1+\dots+a_{i-1}v_{i-1}+(-1)v_i+0v_{i+1}+\dots+0v_n[/tex]
so the set is linearly dependent.

(why??)

For the other direction they pick an index:
all the coefficients past it are 0, but it itself is not,
[tex]i_0=\max\{i\mid a_i\neq 0\}[/tex], the largest index for which [tex]a_i[/tex] differs from 0.

But this works only for [tex]i_0\ge 2[/tex].

So we get the expression

[tex]v_{i_0}=\left(\frac{-a_1}{a_{i_0}}\right)v_1+\dots+\left(\frac{-a_{i_0-1}}{a_{i_0}}\right)v_{i_0-1}[/tex]

so [tex]v_{i_0}[/tex] is a linear combination of the previous vectors, and we have proved it.


The lecturer was in a hurry.
Can you fill the gaps and make sense of it?
 
  • #2
What does it mean for a set of vectors to be linearly dependent?
 
  • #3
It means that there is no vector in this set which could be written as a combination
of the other vectors.
 
  • #4
That's your own definition. What is the exact definition? That is, a set of vectors v1, v2, v3, ... , vn in a vector space over a field F is linearly dependent iff _____________. You fill in the blank.
 
  • #5
their determinant differs from zero
their row reduction doesn't give us a row of zeros
their dim(Ker)=0

that's the only option I can think of
 
  • #6
transgalactic said:
their determinant differs from zero
No, these are just vectors, not a matrix. Even if you created a matrix by entering these vectors as columns, there is no guarantee that the matrix would be square. A matrix has to be square in order for its determinant to be defined.
transgalactic said:
their row reduction doesn't give us a row of zeros
As above, these are just vectors.
transgalactic said:
their dim(Ker)=0
Still no matrix.
transgalactic said:
that's the only option I can think of

How about the definition of linear dependence? You have a textbook, right? It has the definition.
 
  • #7
Why do I need the definition?
 
  • #8
How in the world are you going to prove a statement like this:
prove that vectors [tex]v_1,\dots,v_n[/tex] in a vector space V over a field F
are linearly dependent if and only if ...
if you don't know what linear dependence means?
 
  • #9
I gave almost the complete proof.
Do you get the idea?
 
  • #10
You're the one who doesn't understand the proof. How can you expect to understand a proof that involves linear independence/linear dependence if you don't know what these terms mean?

I'm not asking you for the definition because I need to know it -- you need to know it.
 
  • #11
I know what independent vectors mean:
"a set of vectors that, in a linear combination, can represent every vector in a given vector space or free module, and such that no element of the set can be represented as a linear combination of the others. In other words, a basis is a linearly independent spanning set."

but it's not helping with understanding this proof
 
  • #12
transgalactic said:
I know what independent vectors mean:
"a set of vectors that, in a linear combination, can represent every vector in a given vector space or free module, and such that no element of the set can be represented as a linear combination of the others. In other words, a basis is a linearly independent spanning set."

but it's not helping with understanding this proof
There's part of your problem. That is NOT a definition of "independent"; it is a definition of "basis". What is the definition of "independent vectors"?
 
  • #13
That there is no linear combination of the three vectors that adds to zero unless the coefficients multiplying the three vectors (not their internal components) are individually zero.

The only way
av1 + bv2 + cv3 = 0
holds is if
a = b = c = 0.

How do I use it?
 
  • #14
OK, this is essentially the definition of linear independence for three vectors. A bit more generally, a set of vectors {v1, v2, v3, ... , vn} in a vector space over a field F is linearly independent iff the only solution for the equation c1*v1 + c2*v2 + ... + cn*vn = 0 is c1 = c2 = ... = cn = 0.

For the same set of vectors to be linearly dependent, the equation c1*v1 + c2*v2 + ... + cn*vn = 0 has a solution where at least one of the ci's is nonzero.
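For a concrete example (my own, not from your notes): in R^2, take v1 = (1, 0), v2 = (0, 1), v3 = (1, 1). Then
[tex]1\cdot v_1 + 1\cdot v_2 + (-1)\cdot v_3 = 0[/tex]
is a solution with nonzero coefficients, so the set is linearly dependent, and indeed [tex]v_3 = v_1 + v_2[/tex] is a combination of the vectors before it.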

Now, go back to your original post in this thread and see how this idea is being used.
 
  • #15
OK, I have this expression:
[tex]0=a_1v_1+\dots+a_{i-1}v_{i-1}+(-1)v_i[/tex]
Not all the vectors can have a 0 coefficient;
[tex]v_i[/tex] does not (it has coefficient -1).

Why do they pick the maximal index for which the coefficient differs from 0?
 
  • #16
What do I do next?
 
  • #17
transgalactic said:
OK, I have this expression:
[tex]0=a_1v_1+\dots+a_{i-1}v_{i-1}+(-1)v_i[/tex]
Not all the vectors can have a 0 coefficient;
[tex]v_i[/tex] does not (it has coefficient -1).
Why do they pick the maximal index for which the coefficient differs from 0?
Whoever wrote what you're reading is using the definition of linear dependence. The set {v1, v2, v3, ..., vn} is assumed to be linearly dependent, which means that the equation a1*v1 + a2*v2 + ... + an*vn = 0 has a solution where at least one ai is not 0. The definition guarantees that at least one such coefficient is nonzero, but it doesn't say which one; there might be just one nonzero constant or a bunch of them. The max() part says to pick the nonzero constant with the highest index. Let's call it ak instead of what he uses, which is a_{i_0}. The point of taking the maximum is that every coefficient after it must be zero: a[k+1] = ... = an = 0. Move the ak*vk term to the other side to get
-ak*vk = a1*v1 + a2*v2 + ... + a[k-1]*v[k-1]

Since ak is not zero, you can divide both sides of the equation by -ak, thereby showing that vk is a linear combination of the vectors before it, v1, ..., v[k-1]. That is exactly what the theorem asks for, and exactly why the maximal index is chosen.
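If it helps to see the recipe concretely, here is a small numeric sketch (my own illustration, not from the proof; it assumes numpy and scipy are available, and the example vectors are made up). It finds a nonzero solution of the dependence equation, picks the maximal index with a nonzero coefficient, and rebuilds that vector from the earlier ones:

[code]
import numpy as np
from scipy.linalg import null_space

# Hypothetical dependent set in R^3: v3 = v1 + 2*v2 by construction.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2
A = np.column_stack([v1, v2, v3])   # the vectors as the columns of A

# One nonzero solution a of a1*v1 + a2*v2 + a3*v3 = 0.
a = null_space(A)[:, 0]

# k plays the role of i_0 = max{i : a_i != 0} (0-based here).
k = max(i for i in range(len(a)) if abs(a[i]) > 1e-12)

# Divide through by -a_k: v_k = sum over i < k of (-a_i / a_k) * v_i.
coeffs = -a[:k] / a[k]
vk = A[:, :k] @ coeffs
print(np.allclose(vk, A[:, k]))     # True: v_k is a combination of the previous vectors
[/code]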
 
  • #18
Thanks, I got it!
:)
 

FAQ: Vector independence proof question

What is a vector independence proof?

A vector independence proof is a mathematical argument used to determine whether a set of vectors is linearly independent or linearly dependent. It involves setting a linear combination of the vectors equal to zero and then checking whether the only solution is to make all the coefficients zero (independent) or whether some nonzero choice of coefficients also works (dependent).

Why is proving vector independence important?

Proving vector independence is important because it helps us understand the relationships between vectors and determine their usefulness in solving mathematical problems. It is also a fundamental concept in linear algebra and is used in many other fields such as physics, engineering, and computer science.

How do you prove vector independence?

To prove vector independence, set a linear combination of the vectors equal to zero and show that the only solution is the one in which all the coefficients are zero. This can be done in various ways, such as arguing directly from the definition of linear independence, row-reducing the matrix whose columns are the vectors, or, when that matrix is square, computing its determinant.
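As a quick illustration of the matrix approach (a sketch of my own, assuming numpy; the example vectors are made up): n vectors are linearly independent exactly when the matrix having them as columns has rank n.

[code]
import numpy as np

# Hypothetical example: three vectors in R^2 are always dependent.
vectors = [np.array([1.0, 0.0]),
           np.array([0.0, 1.0]),
           np.array([1.0, 1.0])]
A = np.column_stack(vectors)

# Independent iff the rank equals the number of vectors.
print(np.linalg.matrix_rank(A) == len(vectors))   # False: linearly dependent
[/code]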

What is the difference between linearly independent and linearly dependent vectors?

Linearly independent vectors are a set of vectors in which no vector can be written as a linear combination of the others, while in a linearly dependent set at least one vector can be so expressed. In other words, if one vector in the set can be written as a combination of the others, the set is linearly dependent.
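For instance (an example of mine), in R^2 the set {(1, 0), (0, 1)} is linearly independent, while {(1, 0), (2, 0)} is linearly dependent because [tex](2,0)=2\cdot(1,0)[/tex].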

What are some real-life applications of vector independence?

Vector independence is used in various fields, such as physics, engineering, and computer science. In physics, it is used to model and solve problems related to forces, motion, and energy. In engineering, it is used in designing structures, analyzing circuits, and optimizing processes. In computer science, it is used in graphics, machine learning, and data analysis.
