poochie_d
Hi all,
Here is the problem:
If T: V -> W is a linear transformation and S is a linearly dependent subset of V, then prove that T(S) is linearly dependent.
Now, I know that the usual proof goes as follows:
Since S is linearly dependent, there are distinct vectors [itex]v_1, ..., v_n[/itex] in S and scalars [itex]a_1, ..., a_n[/itex] (not all zero) such that [itex] \sum_{i=1}^n a_i v_i = 0. [/itex]
=> [itex] \sum_{i=1}^n a_i T(v_i) = T(\sum_{i=1}^n a_i v_i) = T(0) = 0 [/itex]
=> Since the vectors [itex]T(v_1), ..., T(v_n)[/itex] lie in T(S) and the scalars [itex]a_1, ..., a_n[/itex] (not all zero) give a nontrivial representation of 0, it follows that T(S) is dependent.
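(Just to convince myself, here is a quick numerical sanity check of this argument in the nice case where the images [itex]T(v_i)[/itex] happen to be distinct; the spaces, the map, and the vectors below are simply ones I made up for illustration.)
[code]
import numpy as np

# Sanity check of the usual proof in the case where the images T(v_i) are distinct.
# Everything here is made up for concreteness: V = R^3, W = R^2, T(v) = A @ v.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])              # T = projection onto the first two coordinates

def T(v):
    return A @ v

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])
a1, a2, a3 = 1.0, 1.0, -1.0                  # v1 + v2 - v3 = 0, so S = {v1, v2, v3} is dependent

print(a1*v1 + a2*v2 + a3*v3)                 # [0. 0. 0.] -- the dependence relation in V
print(a1*T(v1) + a2*T(v2) + a3*T(v3))        # [0. 0.]    -- the same scalars give a relation in W
# Here T(v1) = (1,0), T(v2) = (0,1), T(v3) = (1,1) are three *distinct* vectors in T(S),
# so the relation above really is a nontrivial representation of 0 and T(S) is dependent.
[/code]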
What I am wondering is whether the above proof is still valid if some of the [itex]v_i[/itex]'s take the same value under T. In that case, wouldn't the proof break down, since the definition of linear dependence requires the vectors in the relation to be distinct?
E.g. what if you have a situation where [itex]S = \{v_1,v_2,v_3\}[/itex] and [itex]v_1 + v_2 - 2v_3 = 0,[/itex] but [itex]T(v_1) = T(v_2) = T(v_3) = w[/itex] (say), so that
[itex]0 = T(v_1) + T(v_2) - 2T(v_3) = w + w - 2w = 0 \cdot w[/itex]? This doesn't prove that T(S) is dependent! (Or does it?)
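(To make the worry concrete: here is one explicit choice of spaces, map, and vectors realizing exactly this situation; the particular numbers are made up just for illustration.)
[code]
import numpy as np

# One concrete way the worrying situation above can arise (all choices made up):
# V = R^2, W = R, and T(x, y) = x, i.e. T(v) = A @ v with A = [[1, 0]].
A = np.array([[1.0, 0.0]])

def T(v):
    return A @ v

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 2.0])
v3 = np.array([1.0, 1.0])
a1, a2, a3 = 1.0, 1.0, -2.0                  # v1 + v2 - 2*v3 = 0, so S is dependent

print(a1*v1 + a2*v2 + a3*v3)                 # [0. 0.]  -- the relation in V
print(T(v1), T(v2), T(v3))                   # [1.] [1.] [1.] -- all three images equal w = 1
print(a1 + a2 + a3)                          # 0.0 -- collecting the coefficients on the single
                                             # distinct image w gives 1 + 1 - 2 = 0
# So the pushed-forward relation collapses to 0*w = 0, which by itself says nothing
# about the set T(S) = {w} -- exactly the gap I am asking about.
[/code]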
Any help would be much appreciated. Thanks!
PS: I am posting this here since it is related to linear algebra, but maybe this is a homework-type question; please feel free to move it to a different forum if it doesn't belong here.