Vector Spaces .... Linear Dependence and Independence .... Basic Proof Required

In summary, Theorem 2.4.3 in Andrew McInerney's book states that if a set $S$ has $n$ elements and one of them can be expressed as a linear combination of the others, then the set is linearly dependent. However, if $n = 1$ and the single vector is nonzero, the set is linearly independent. Conversely, if a set $S$ is linearly dependent, then at least one of the elements can be expressed as a linear combination of the others. For example, the set $S = \{(0,1,0),(0,1,0),(2,5,0)\}$ is linearly dependent because the vector $(0,1,0)$ appears twice, so one occurrence can be written as a linear combination of the others.
  • #1
Math Amateur
In Andrew McInerney's book First Steps in Differential Geometry, Theorem 2.4.3 reads as follows: https://www.physicsforums.com/attachments/5252

McInerney leaves the proofs for the Theorem to the reader ...

I am having trouble formulating a proof for Part (3) of the theorem ...

Can someone help ...

Peter
 
  • #2
Suppose $S$ has $n$ elements, so $S = \{v_1,v_2,\dots,v_n\}$.

If one of these, say, $v_n$ (we can always "re-organize" our set $S$, so that the vector that is a linear combination of the others is the last one), is a linear combination of the others, we have:

$v_n = c_1v_1 + c_2v_2 + \cdots + c_{n-1}v_{n-1}$, for some scalars (field elements) $c_1,\dots,c_{n-1}$.

Hence:

$c_1v_1 + c_2v_2 +\cdots + c_{n-1}v_{n-1} + (-1)v_n = 0$.

These scalars cannot all be $0$: the coefficient of $v_n$ is $-1$, and in any field $1 \neq 0$, hence $-1 \neq -0 = 0$.

So, by the *definition* of linear dependence, $S$ is a linearly dependent set.
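
To see the argument concretely, here is a small numeric spot check (a Python/NumPy sketch; the vectors $v_1, v_2$ and the scalars $c_1, c_2$ are made up for illustration):

```python
# Spot check of the argument above: if v3 is built as a linear
# combination of v1 and v2, then c1*v1 + c2*v2 + (-1)*v3 = 0 is a
# dependence relation with a nonzero coefficient (namely -1).
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
c1, c2 = 2.0, 3.0

v3 = c1 * v1 + c2 * v2           # v3 is a combination of the others

relation = c1 * v1 + c2 * v2 + (-1.0) * v3
print(np.allclose(relation, 0))  # True: a nontrivial relation equals 0
```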

One caveat: $n = 1$ doesn't work. Why? Because if $v_1 \neq 0$, the set $\{v_1\}$ is linearly independent: if $c_1v_1 = 0$ with $c_1 \neq 0$, we could multiply by $c_1^{-1}$ to get $v_1 = 0$, a contradiction, so we must have $c_1 = 0$. (If $v_1 = 0$, then $\{v_1\}$ is linearly dependent, since $1 \cdot 0 = 0$.)

On the other hand, if $S$ is a linearly dependent set of $n$ vectors, then for some $c_1,\dots,c_n$, not ALL $0$, we have:

$c_1v_1 +\cdots + c_nv_n = 0$.

Choose any $c_j \neq 0$ (we have at least one).

Then $v_j = -\left(\dfrac{c_1}{c_j}\right)v_1 - \cdots - \left(\dfrac{c_{j-1}}{c_j}\right)v_{j-1} - \left(\dfrac{c_{j+1}}{c_j}\right)v_{j+1} - \cdots - \left(\dfrac{c_n}{c_j}\right)v_n$

which is a linear combination of the other $n-1$ vectors.
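
The formula above can be checked mechanically; here is a Python/NumPy sketch (the vectors and coefficients are invented for the example):

```python
# Given a dependence relation c1*v1 + ... + cn*vn = 0 with some cj != 0,
# recover vj as a linear combination of the remaining vectors,
# following the displayed formula vj = -sum_{i != j} (ci/cj) * vi.
import numpy as np

vectors = [np.array([1.0, 2.0, 0.0]),
           np.array([0.0, 1.0, 1.0]),
           np.array([2.0, 5.0, 1.0])]  # third = 2*first + 1*second
coeffs = [2.0, 1.0, -1.0]              # so 2*v1 + 1*v2 - 1*v3 = 0

j = next(i for i, c in enumerate(coeffs) if c != 0)  # any nonzero cj

vj = sum(-(coeffs[i] / coeffs[j]) * vectors[i]
         for i in range(len(vectors)) if i != j)
print(np.allclose(vj, vectors[j]))  # True: vj recovered from the others
```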

For example, the set $S \subseteq \Bbb R^3$ given by:

$S = \{(0,1,0),(0,1,0),(2,5,0)\}$ is linearly dependent, since $1\cdot(0,1,0) + (-1)\cdot(0,1,0) + 0\cdot(2,5,0) = (0,0,0)$ is a relation with coefficients not all zero.
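
One quick way to confirm this numerically (a sketch using NumPy's rank computation rather than the definition directly):

```python
# k vectors are linearly independent iff the matrix having them as
# rows has rank k; here the rank is 2 < 3, so S is dependent.
import numpy as np

S = np.array([[0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [2.0, 5.0, 0.0]])
print(np.linalg.matrix_rank(S))  # 2  =>  linearly dependent
```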
 

FAQ: Vector Spaces .... Linear Dependence and Independence .... Basic Proof Required

What is a vector space?

A vector space is a mathematical structure that consists of a set of vectors and a set of operations that can be performed on these vectors, such as addition and scalar multiplication. These operations must follow certain rules, including closure, associativity, and distributivity.

What does linear dependence and independence mean?

In a vector space, a set of vectors is considered linearly dependent if one or more of the vectors in the set can be expressed as a linear combination of the other vectors. In contrast, a set of vectors is considered linearly independent if none of the vectors can be expressed as a linear combination of the others.

How do you prove linear independence of a set of vectors?

To prove that a set of vectors is linearly independent, you can use the definition of linear independence: a set of vectors is linearly independent if and only if the only solution to the equation $c_1v_1 + c_2v_2 + \cdots + c_nv_n = 0$ is $c_1 = c_2 = \cdots = c_n = 0$. You can also use other methods, such as the determinant test or a rank computation (via the rank-nullity theorem).
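
As an illustration of the determinant test mentioned above (a Python/NumPy sketch; the matrices are made-up examples):

```python
# Determinant test: n vectors in R^n are linearly independent iff the
# n x n matrix having them as rows has nonzero determinant.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])    # rows u1, u2
print(np.linalg.det(A))       # 1.0 != 0  =>  independent

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second row = 2 * first row
print(np.linalg.det(B))       # 0.0  =>  dependent
```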

What is the basic proof required for vector spaces?

The basic proof required for vector spaces is to show that the set of vectors satisfies all the properties of a vector space, such as closure under addition and scalar multiplication, associativity, and distributivity. This is done by verifying, directly from the definitions of the operations, that each of the vector space axioms holds.
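
For instance, one can spot-check an axiom numerically before writing the proof (a sketch only; a numeric check is evidence, not a proof):

```python
# Spot check of one axiom, distributivity a*(u + v) = a*u + a*v,
# for random vectors in R^2 with the standard operations.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random()
u, v = rng.random(2), rng.random(2)
print(np.allclose(a * (u + v), a * u + a * v))  # True
```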

Can you give an example of linear dependence and independence?

Yes, an example of linear dependence is a set of three vectors in three-dimensional space: $v_1 = (1, 0, 0)$, $v_2 = (2, 0, 0)$, and $v_3 = (-1, 0, 0)$. These vectors are linearly dependent because $v_3$ can be expressed as a linear combination of $v_1$ and $v_2$: $v_3 = -1 \cdot v_1 + 0 \cdot v_2$. An example of linear independence is a set of two vectors in two-dimensional space: $u_1 = (1, 0)$ and $u_2 = (0, 1)$. These vectors are linearly independent because neither can be expressed as a linear combination of the other: the only solution to $c_1u_1 + c_2u_2 = (0, 0)$ is $c_1 = c_2 = 0$.
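
Both claims can be verified directly (a small Python/NumPy check of the vectors quoted above):

```python
# Verify the dependence v3 = -1*v1 + 0*v2 and the independence of u1, u2.
import numpy as np

v1, v2, v3 = np.array([1, 0, 0]), np.array([2, 0, 0]), np.array([-1, 0, 0])
print(np.array_equal(v3, -1 * v1 + 0 * v2))  # True: dependent

u1, u2 = np.array([1, 0]), np.array([0, 1])
print(np.linalg.matrix_rank(np.vstack([u1, u2])))  # 2 = full rank: independent
```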
