Linear algebra: when do the implications hold?

In summary, the conversation discusses four implications about a linear transformation T from a vector space A to a vector space B and a set A* of n vectors in A. The first implication states that if T(A*) is linearly independent, then A* must be linearly independent, with no conditions on T. The second states that if T(A*) is linearly dependent, we can conclude that A* is linearly dependent only when T is injective. The third states that if span(T(A*)) = B, then span(A*) = A provided T is injective. The fourth states that if span(T(A*)) ≠ B, then span(A*) ≠ A provided T is surjective. Examples and counterexamples are provided to show when these implications hold and when they fail.
  • #1
bobby2k
Hi, I have 4 implications I am interested in. I think I know the answer to the first two, but the last two are not something I know; however, they are related to the first two, so I will include all of them to be sure.

Assume that T is a linear transformation from a vector space A to a vector space B.

T: A -> B
A* is a set of n vectors in A, that is, A* = {a1, a2, ..., an}

1.
T(A*) linearly independent -> A* linearly independent
If T(A*) is linearly independent, then A* must be linearly independent, without any requirements for T?

2.
T(A*) lindearly dependent -> A* linearly dependent
If T(A*) is linearly dependent, then we can conclude that A* is linearly dependent only if T is 1-1; if T is not 1-1, we cannot conclude anything?

3.
span(T(A*))=B -> span(A*)= A
If span(T(A*)) = B, what requirement must we have to conclude that span(A*) = A? That T is 1-1, surjective, both, or none?

4.
span(T(A*)) ≠ B -> span(A*) ≠ A
If span(T(A*)) is not B, what must we have to conclude that span(A*) is not A? Will this implication hold if T is 1-1, surjective, both, or none?

thanks
 
  • #2
1. If c1a1+...+cnan = 0, then 0 = T(c1a1+...+cnan) = c1T(a1)+...+cnT(an), and independence of T(A*) forces every ci = 0. So no restrictions on T.
2. Injectivity is not needed. Consider T: R^3->R^2. T(x,y,z)=(x,y). Surjectivity is not needed. Consider
T: R^2->R^3. T(x,y)=(x,y,0).
3. Let B = {0} and you'll see surjectivity isn't enough. 1-1 is: take any x in A; since span(T(A*)) = B, T(x) = c1T(a1)+c2T(a2)+...+cnT(an) = T(c1a1+...+cnan), and injectivity gives x = c1a1+...+cnan.
4. Your last statement is equivalent to its contrapositive, span(A*) = A -> span(T(A*)) = B. If dim(A) < dim(B) and you have n = dim(A) vectors, this is not true, so injectivity is not enough. Surjectivity is, though: consider y in B. Then there is an x in A such that T(x) = y, and writing x = c1a1+...+cnan gives y = T(c1a1+...+cnan) = c1T(a1)+...+cnT(an). (A quick numerical check of the two example maps above is sketched below.)
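For concreteness, here is a small rank check of the two example maps from point 2 (a sketch of my own, not part of the original post; the matrices simply encode the maps given above). It confirms that T(x,y,z) = (x,y) is surjective but not 1-1, while T(x,y) = (x,y,0) is 1-1 but not surjective.

```python
import numpy as np

# The two example maps above, written as matrices acting on column vectors.
T1 = np.array([[1, 0, 0],
               [0, 1, 0]])   # T1(x, y, z) = (x, y):    R^3 -> R^2
T2 = np.array([[1, 0],
               [0, 1],
               [0, 0]])      # T2(x, y)    = (x, y, 0): R^2 -> R^3

# For a matrix T: R^n -> R^m,
#   injective  <=> rank(T) == n  (full column rank, i.e. trivial kernel)
#   surjective <=> rank(T) == m  (full row rank, i.e. the image is all of R^m)
for name, T in (("T1", T1), ("T2", T2)):
    m, n = T.shape
    r = np.linalg.matrix_rank(T)
    print(name, "injective:", r == n, "surjective:", r == m)
# Expected output:
#   T1 injective: False surjective: True
#   T2 injective: True surjective: False
```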
 
  • #3
The word is 'linearly'.
 
  • #4
johnqwertyful said:
2. Injectivity is not needed. Consider T: R^3->R^2. T(x,y,z)=(x,y). Surjectivity is not needed. Consider
T: R^2->R^3. T(x,y)=(x,y,0).
Hey, thanks man.
I just have some more questions: are you allowed to prove a general statement with examples?

Also, I am not sure about the example.
T(x,y,z) = (x,y)
if A* = {(1,1,0),(1,1,1)}
then T(A*) = {(1,1),(1,1)}
so T(A*) is linearly dependent, but A* is not, so the implication is false here.
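A quick way to verify this example numerically (my own sketch, using the standard rank criterion: a finite set of vectors is linearly independent exactly when the matrix with those vectors as columns has rank equal to the number of vectors):

```python
import numpy as np

# T(x, y, z) = (x, y), written as a matrix acting on column vectors
T = np.array([[1, 0, 0],
              [0, 1, 0]])

# The example set from the post above
A_star = [np.array([1, 1, 0]), np.array([1, 1, 1])]
T_A_star = [T @ a for a in A_star]   # images: (1, 1) and (1, 1)

def independent(vectors):
    # Linearly independent iff the column matrix has full column rank.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent(A_star))    # True:  A* is linearly independent
print(independent(T_A_star))  # False: T(A*) is linearly dependent
```

This matches the observation in the post: the images collapse onto the line through (1, 1), so T(A*) is dependent even though A* is not.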
 
  • #5


The implications hold in different situations, depending on the properties of the linear transformation T.

1. The first implication states that if T(A*) is linearly independent, then A* must also be linearly independent. This holds with no requirements on T: if c1a1 + ... + cnan = 0, then applying T gives c1T(a1) + ... + cnT(an) = 0, and the independence of T(A*) forces every ci to be 0, so A* is linearly independent.

2. The second implication states that if T(A*) is linearly dependent, then A* is linearly dependent. This holds when T is 1-1: a non-trivial combination c1T(a1) + ... + cnT(an) = 0 means T(c1a1 + ... + cnan) = 0, and injectivity forces c1a1 + ... + cnan = 0 with the same non-trivial coefficients, so A* is dependent. If T is not 1-1, the implication can fail, because the images can collapse (as with T(x, y, z) = (x, y) above) even though A* itself is independent.

3. The third implication states that if span(T(A*)) = B, then span(A*) = A. Here it is enough that T is 1-1: for any x in A, T(x) lies in B = span(T(A*)), so T(x) = c1T(a1) + ... + cnT(an) = T(c1a1 + ... + cnan), and injectivity gives x = c1a1 + ... + cnan, so x is in span(A*). Surjectivity alone is not enough: with B = {0}, span(T(A*)) = B holds automatically, yet span(A*) need not be A.

4. The fourth implication states that if span(T(A*)) ≠ B, then span(A*) ≠ A. This is equivalent to its contrapositive: span(A*) = A must imply span(T(A*)) = B. Surjectivity of T is enough: any y in B equals T(x) for some x in A, and writing x = c1a1 + ... + cnan gives y = c1T(a1) + ... + cnT(an), so y is in span(T(A*)). Injectivity is not enough, since an injective map into a space with dim(B) > dim(A) can never have n = dim(A) image vectors spanning B. (A short numerical sketch of this point follows below.)
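To make the last point concrete, here is a small numerical sketch (the maps and vectors are my own illustrative choices, not from the thread). It checks the contrapositive direction with rank computations: a surjective map carries a spanning set of A to a spanning set of B, while an injective map into a larger space cannot.

```python
import numpy as np

def spans(vectors, dim):
    # The vectors span R^dim iff the matrix with them as columns has rank dim.
    return np.linalg.matrix_rank(np.column_stack(vectors)) == dim

# Surjective T: R^3 -> R^2 carries a spanning set of R^3 to a spanning set of R^2.
T_surj = np.array([[1, 0, 0],
                   [0, 1, 0]])
A_star = [np.array(v) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
print(spans(A_star, 3))                         # True: A* spans R^3
print(spans([T_surj @ a for a in A_star], 2))   # True: T(A*) spans R^2

# Injective (but not surjective) T: R^2 -> R^3 cannot do the same.
T_inj = np.array([[1, 0],
                  [0, 1],
                  [0, 0]])
B_star = [np.array(v) for v in ([1, 0], [0, 1])]
print(spans(B_star, 2))                         # True: the set spans R^2
print(spans([T_inj @ b for b in B_star], 3))    # False: the images never span R^3
```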
 

FAQ: Linear algebra, when do the implications hold?

What is linear algebra?

Linear algebra is a branch of mathematics that deals with the study of linear equations and their representations in vector spaces. It involves the manipulation and analysis of vectors, matrices, and systems of linear equations.

What are the applications of linear algebra?

Linear algebra has a wide range of applications in various fields such as physics, engineering, computer science, economics, and statistics. It is used to solve problems involving systems of linear equations, optimization, data analysis, and image processing.

When does the implication hold in linear algebra?

For the implications discussed in this thread, the answer depends on the properties of the linear transformation T. Linear independence of T(A*) always transfers back to A*. The reverse directions need extra hypotheses: concluding that A* is linearly dependent from T(A*) being linearly dependent, or that span(A*) = A from span(T(A*)) = B, requires T to be injective (1-1), while concluding that span(T(A*)) = B from span(A*) = A requires T to be surjective.

How is linear algebra used in machine learning?

Linear algebra is an essential tool in machine learning as it is used to represent and manipulate data in the form of matrices and vectors. It is used in algorithms such as linear regression, principal component analysis, and support vector machines to model and analyze data.
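As a minimal illustration of the linear-regression connection (toy data of my own, not from this thread), ordinary least squares is just solving an overdetermined linear system X w ≈ y:

```python
import numpy as np

# Ordinary least squares as a linear-algebra problem: solve X w ≈ y.
# Toy data: a column of ones (intercept) plus one feature.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.9, 5.1, 7.0])   # roughly y = 1 + 2 * feature

w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # approximately [1.0, 2.0]: intercept and slope
```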

What are the key concepts in linear algebra?

Some of the key concepts in linear algebra include vector spaces, linear transformations, eigenvalues and eigenvectors, determinants, and matrix operations. These concepts are fundamental to understanding and solving problems in linear algebra and form the basis for more advanced topics such as multivariable calculus and differential equations.
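A tiny example of a few of these operations in NumPy (the matrix is an arbitrary choice of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # an arbitrary symmetric 2x2 matrix

print(np.linalg.det(A))         # determinant: 3 (up to floating point)
vals, vecs = np.linalg.eig(A)   # eigenvalues and eigenvectors
print(vals)                     # eigenvalues 3 and 1 (order may vary)
print(vecs)                     # columns are the corresponding eigenvectors
```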
