Is This Proof of Linear Dependence Correct?

In summary, the set [itex]\{e_{1},e_{2},...,e_{n},\mathbf{v}\}[/itex] with [itex]\mathbf{v}\in V[/itex] must be linearly dependent: any vector [itex]\mathbf{v}\in V[/itex] can be written as a linear combination of [itex]\{e_{1},e_{2},...,e_{n}\}[/itex], so adding [itex]\mathbf{v}[/itex] to the set produces a linear combination that sums to the zero vector with at least one non-zero coefficient.
  • #1
autre
I have to prove:

Consider [itex]V=F^{n}[/itex]. Let [itex]\mathbf{v}\in V\setminus\{e_{1},e_{2},...,e_{n}\}[/itex]. Prove that [itex]\{e_{1},e_{2},...,e_{n},\mathbf{v}\}[/itex] is a linearly dependent set.

My attempts at a proof:

Since [itex]\{e_{1},e_{2},...,e_{n}\}[/itex] is a basis, it is a linearly independent spanning set. Therefore, any vector [itex]\mathbf{v}\in V[/itex] can be written as a linear combination of [itex]\{e_{1},e_{2},...,e_{n}\}[/itex], so the set [itex]\{e_{1},e_{2},...,e_{n},\mathbf{v}\}[/itex] with [itex]\mathbf{v}\in V[/itex] must be linearly dependent.

Am I on the right track?
 
  • #3
because...if we write

[itex]\mathbf{v} = v_{1}e_{1}+v_{2}e_{2}+...+v_{n}e_{n},[/itex]

then

[itex]v_{1}e_{1}+v_{2}e_{2}+...+v_{n}e_{n} - 1\mathbf{v}[/itex]

is a linear combination of [itex]\{e_{1},e_{2},...,e_{n},\mathbf{v}\}[/itex] that sums to the 0-vector, and yet not all the coefficients in this sum are 0 (the one for [itex]\mathbf{v}[/itex] is -1).
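
For readers who want to check this construction concretely, here is a minimal numerical sketch (assuming [itex]F=\mathbb{R}[/itex], [itex]n=3[/itex], NumPy, and an arbitrarily chosen [itex]\mathbf{v}[/itex]):

[code]
import numpy as np

# Minimal sketch: F = R, n = 3, standard basis e1, e2, e3 (assumed for illustration).
n = 3
e = np.eye(n)                   # rows are the standard basis vectors e1, e2, e3
v = np.array([2.0, -5.0, 7.0])  # an arbitrarily chosen vector v in R^3

# In the standard basis, the coordinates of v are just its entries.
coeffs = v

# The combination v1*e1 + v2*e2 + ... + vn*en - 1*v from the argument above:
combo = coeffs @ e - 1.0 * v

print(combo)                    # -> [0. 0. 0.], the zero vector
print(np.allclose(combo, 0))    # -> True, while the coefficient of v is -1 (nonzero)
[/code]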
 

FAQ: Is This Proof of Linear Dependence Correct?

What is linear dependence?

Linear dependence is a relationship among vectors in a vector space. A set of vectors is linearly dependent if at least one of them can be written as a linear combination of the others, that is, as a sum of scalar multiples of those other vectors. Equivalently, some linear combination of the vectors with coefficients that are not all zero equals the zero vector.

How do you prove linear dependence?

To prove linear dependence, exhibit scalars, not all zero, such that the corresponding linear combination of the vectors equals the zero vector (as with the coefficient -1 on [itex]\mathbf{v}[/itex] above). If such coefficients exist, the vectors are linearly dependent; if the only solution is all coefficients equal to zero, they are linearly independent.
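
Over the real or complex numbers there is also a common computational check (separate from the proof in this thread): stack the vectors as rows of a matrix and compare its rank to the number of vectors; a smaller rank means the vectors are linearly dependent. A minimal sketch, assuming NumPy and real vectors, with a hypothetical helper name:

[code]
import numpy as np

def is_linearly_dependent(vectors):
    """Return True if the given list of real vectors is linearly dependent.

    k vectors are dependent exactly when the matrix having them as rows
    has rank < k (checked numerically with NumPy).
    """
    a = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(a) < len(vectors)

# The set {e1, e2, e3, v} from the thread, with a sample v in R^3:
e1, e2, e3 = np.eye(3)
v = np.array([2.0, -5.0, 7.0])
print(is_linearly_dependent([e1, e2, e3, v]))  # -> True: 4 vectors in R^3 cannot be independent
[/code]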

What is a linear combination?

A linear combination is an expression formed by multiplying each vector in a set by a scalar and adding the results together. It is used to express one vector as a sum of scalar multiples of other vectors.
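
For instance, here is a tiny NumPy sketch of computing one linear combination (the vectors and scalars are chosen only for illustration):

[code]
import numpy as np

u = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

# The linear combination 3u - 2w: scale each vector, then add the results.
print(3 * u - 2 * w)  # -> [ 3. -2.]
[/code]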

Why is it important to prove linear dependence?

Proving linear dependence is important because it helps in determining the dimension of a vector space and identifying a basis for that space. It also allows for the simplification of vector equations and the understanding of relationships between vectors.

Can you give an example of proving linear dependence?

Yes. For example, if we have two vectors in a 2-dimensional vector space, v = [1, 2] and w = [2, 4], then multiplying w by the scalar 1/2 gives v. Therefore v = (1/2)w, which shows that v and w are linearly dependent.
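
The same example can be verified numerically; a minimal sketch, assuming NumPy:

[code]
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([2.0, 4.0])

print(np.allclose(v, 0.5 * w))                   # -> True: v = (1/2)w, so {v, w} is dependent
print(np.linalg.matrix_rank(np.vstack([v, w])))  # -> 1 (< 2), confirming dependence
[/code]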
