A linear transformation is invertible if and only if

In summary, the thread works through proving that a linear transformation is invertible if and only if it maps any basis to a basis; the original poster is stuck on proving that the coefficients c must all be 0.
  • #1
baseball3030
Problem: A linear transformation T: Rm -> Rm is invertible if and only if, for any basis {v1, ..., vm} of Rm, {T(v1), ..., T(vm)} is also a basis for Rm.

Ideas: Since the inverse exists, we can say that the preimage of any vector u under T can be represented as a linear combination of the basis vectors:

T^-1(u) = c1v1 + c2v2 + ... + cmvm

Then we can take the transformation of both sides...

This is what I am not sure about; I am not sure where to go from here. Any help would be greatly appreciated! Thank you
 
  • #2
Does anyone know if we can assume Ker(T) = {0}, since we are staying in the same dimension, and then use 0 as the arbitrary vector spanned by the basis? Thank you so much, everyone; I have been thinking about this problem for days on end.
 
  • #3
baseball3030 said:
Problem: A linear transformation T: Rm->Rm is invertible if and only if, for any basis {v1, ...vm} of Rm, {T(v1),...,T(vm)} is also a basis for Rm.

Ideas: Since the inverse exists, we can say that the preimage of any vector u under T can be represented as a linear combination of the basis vectors:

T^-1(u) = c1v1 + c2v2 + ... + cmvm

Then we can take the transformation of both sides...

This is what I am not sure about, I am not sure where to go from here...
Well, applying $T$ to both sides gives you that an arbitrarily chosen $u=\sum_{i=1}^m c_iT(v_i)$, i.e., $\{T(v_1),\dots,T(v_m)\}$ spans $\mathbb{R}^m$. You also need to prove that $\{T(v_1),\dots,T(v_m)\}$ is linearly independent to show that it is a basis.
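In symbols, assuming as above that $T^{-1}(u) = c_1v_1 + \cdots + c_mv_m$:

$u = T(T^{-1}(u)) = T(c_1v_1 + \cdots + c_mv_m) = c_1T(v_1) + \cdots + c_mT(v_m).$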

Do you have an idea about the converse (showing that $T$ is invertible)?
 
  • #4
Hi,
Here's an entirely similar statement and proof. Note the format of the proof of the "if and only if" statement: you need to prove two things, labelled 1 and 2 in the attachment. Now have a go at your problem.

View attachment 1325
 

Attachments

  • MHBlinearalgebra1.png
  • #5
Here is what I have now:

Suppose T is invertible, let u be any vector in Rm, and let {v1, ..., vm} be a basis of Rm:

Then, T^-1(u) = c1v1 + c2v2 + ... + cmvm.

Now if we apply the transformation T to both sides of the equation we get...

T(T^-1(u)) = T(c1v1 + c2v2 + ... + cmvm)

(by linearity) = c1T(v1) + c2T(v2) + ... + cmT(vm)

Since T(T^-1(u)) = u, this says u = c1T(v1) + c2T(v2) + ... + cmT(vm), so we know that the T(v)'s form a spanning set for Rm.

Since we are assuming that the transformation is invertible,

we can say that Ker(T) = {0}, because a linear transformation has an inverse only if its kernel is trivial.

Now 0 = c1T(v1) + c2T(v2) + ... + cmT(vm) implies that all of the c's must be zero because Ker(T) = {0}; therefore the T(vi)'s are linearly independent and form a basis for Rm. ... This is where I am now stuck: I'm not sure how to prove that the T(vi)'s forming a basis implies there is an inverse...

Any help will be MUCH appreciated! Thank you,
 
  • #6
baseball3030 said:
Since we are assuming that the transformation is invertible,

we can say that Ker(T) = {0}, because a linear transformation has an inverse only if its kernel is trivial.

Now 0 = c1T(v1) + c2T(v2) + ... + cmT(vm) implies that all of the c's must be zero because Ker(T) = {0}; therefore the T(vi)'s are linearly independent and form a basis for Rm.

What you say is certainly true, but I'm not sure you can justify it as stated; can you provide more detail on why the c's must all be 0? Hope I'm wrong.
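(Spelled out, the needed detail goes through the kernel: from

$0 = c_1T(v_1) + \cdots + c_mT(v_m) = T(c_1v_1 + \cdots + c_mv_m)$,

the condition $\text{Ker}(T) = \{0\}$ gives $c_1v_1 + \cdots + c_mv_m = 0$, and then it is the linear independence of $\{v_1,\dots,v_m\}$, not the kernel condition alone, that forces $c_1 = \dots = c_m = 0$.)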

For the converse, assume the set of Tv's form a basis. Prove if v is in Ker(T), v = 0. Hint: let v be in Ker(T). Then there are c's with
\(\displaystyle v=c_1v_1+c_2v_2+\cdots+c_mv_m\)
Now prove each \(\displaystyle c_i=0\).

By the theorem in my previous post, T is then invertible.
 
  • #7
Some equivalences worth remembering:

1) T is injective (1-1) <=> T(B) is LI for any basis B.

2) T is surjective (onto) <=> T(B) spans the codomain for any basis B.

3) T is bijective (invertible) <=> T(B) is a basis for any basis B.

What you are being asked to prove is (3). It hopefully should be clear that:

(1) and (2) together <=> (3).

The way I would prove:

"If $B = \{v_1,\dots,v_n\}$ is a basis for $\Bbb R^n$, with $T(B)$ a basis as well, then $T$ is invertible,"

is like so:

Suppose we take any old $x \in \Bbb R^n$. By virtue of $T(B)$ being a basis, we have:

$x = c_1T(v_1) + \cdots + c_nT(v_n) = T(c_1v_1 + \cdots + c_nv_n)$, which shows $T$ is onto, since $x$ is arbitrary.

Now define, for any $x \in \Bbb R^n$ written in the $T(B)$ basis as above:

$S(x) = S(c_1T(v_1) + \cdots + c_nT(v_n)) = c_1v_1 + \cdots + c_nv_n$.

Since $B$ is a basis, the linear combination in $B$ on the right is UNIQUE (so $S$ is well-defined). And:

$T \circ S(x) = T(S(x)) = T(c_1v_1 + \cdots + c_nv_n) = c_1T(v_1) + \cdots + c_nT(v_n) = x$

$S \circ T(x) = S(T(x)) = S(T(c_1T(v_1) + \cdots + c_nT(v_n))) = S(c_1T(T(v_1)) + \cdots + c_nT(T(v_n)))$

$=c_1T(v_1) + \cdots + c_nT(v_n) = x$, that is:

$S = T^{-1}$.

Note this actually exhibits the inverse. Perhaps it might help to see an example:

Let $n = 3$, and let:

$T(x,y,z) = (x+y,x+z,y+z)$

Suppose $B = \{(1,0,0),(0,1,0),(0,0,1)\}$. Then $T(B) = \{(1,1,0),(1,0,1),(0,1,1)\}$.

I leave it to you to show this is indeed a basis. To explicitly give $T^{-1}$, we need to know how to express $(x,y,z)$ in terms of this basis. If:

$(x,y,z) = c_1(1,1,0) + c_2(1,0,1) + c_3(0,1,1) = (c_1+c_2,c_1+c_3,c_2+c_3)$

we get the 3 equations:

$x = c_1 + c_2$
$y = c_1 + c_3$
$z = c_2 + c_3$

which gives (how did I do this?):

$c_1 = \frac{1}{2}(x + y - z)$
$c_2 = \frac{1}{2}(x - y + z)$
$c_3 = \frac{1}{2}(-x + y + z)$

Thus $T^{-1}(x,y,z) = \frac{1}{2}(x + y - z,x - y + z,-x + y + z)$.
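As a quick numeric check of this example (a minimal sketch, assuming Python with numpy), the matrix of $T$ has non-zero determinant, and the formula above agrees with its inverse:

import numpy as np

# Matrix of T(x,y,z) = (x+y, x+z, y+z) in the standard basis;
# its columns are the images T(e1) = (1,1,0), T(e2) = (1,0,1), T(e3) = (0,1,1)
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

# T(B) is a basis iff det(A) != 0
print(np.linalg.det(A))  # about -2, non-zero: T(B) is a basis, T is invertible

# Claimed inverse: T^-1(x,y,z) = (1/2)(x+y-z, x-y+z, -x+y+z)
A_inv = 0.5 * np.array([[ 1,  1, -1],
                        [ 1, -1,  1],
                        [-1,  1,  1]])
print(np.allclose(np.linalg.inv(A), A_inv))  # True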
 
  • #8
I followed everything you did in the above steps except for this:

$T \circ S(x) = T(S(x)) = T(c_1v_1 + \cdots + c_nv_n) = c_1T(v_1) + \cdots + c_nT(v_n) = x$

$S \circ T(x) = S(T(x)) = S(T(c_1T(v_1) + \cdots + c_nT(v_n))) = S(c_1T(T(v_1)) + \cdots + c_nT(T(v_n)))$

How do you know that $T(S(x)) = x$?

Is it chosen arbitrarily? Thanks,
 
  • #9
Remember, we KNOW that $T(B) = \{T(v_1),\dots,T(v_n)\}$ is a basis, so in particular, it spans $\Bbb R^n$, and since $x \in \Bbb R^n$, $x$ is a linear combination of basis elements.

So we take the coordinates of $x$ in the $T(B)$ basis, and just take the SAME linear combination in the $B$ basis, that is:

$S(T(v_j)) = v_j$,

and extend by linearity. So for example, if $n = 2$:

$S(aT(v_1) + bT(v_2)) = av_1 + bv_2$.

All $S$ does is send $\displaystyle \sum_{i = 1}^n c_iT(v_i)$ "back where it came from".

The "tricky" part is showing $S$ is actually a function (that is, that $T(u)$ comes from "only ONE $u$"). And THAT is where $\{v_1,\dots,v_n\}$ being a basis comes in. For if:

$c_1v_1 + \cdots + c_nv_n = c'_1v_1 + \cdots + c'_nv_n$, then:

$(c_1 - c'_1)v_1 + \cdots + (c_n - c'_n)v_n = 0$

and by the linear independence of the basis, we must have:

$c_1 - c'_1 = \dots = c_n - c'_n = 0$, so for each $i, c_i = c'_i$.

This shows that each image $T(u)$ can have but ONE pre-image $u$ (only one of all the possible linear combinations over $B$ sums to the vector $u$).

*******

One can, of course, argue using kernels. That is:

1) T(A) is linearly independent whenever A is, for any subset A,
2) T is injective,
3) ker(T) = {0},

are all equivalent. johng's previous posts demonstrate the equivalence of (2) and (3). We can easily demonstrate the equivalence of (1) and (3):

Suppose ker(T) = {0}, and that A is linearly independent.

If $A = \{a_1,\dots,a_k\}$ with $T(A) = \{T(a_1),\dots,T(a_k)\}$, and:

$c_1T(a_1) + \cdots + c_kT(a_k) = 0$, then:

$T(c_1a_1 + \cdots + c_ka_k) = 0$ that is,

$c_1a_1 + \cdots + c_ka_k \in \text{ker}(T)$.

Since $\text{ker}(T) = \{0\}$, this forces $c_1a_1 + \cdots + c_ka_k = 0$, and by the linear independence of A we must have:

$c_1 = \dots = c_k = 0$, which then shows the linear independence of T(A).

This shows (3) implies (1).

On the other hand, if T(A) is linearly independent whenever A is, then given ANY $u \in \text{ker}(T)$, and letting $B = \{v_1,\dots,v_n\}$ be any basis for $\Bbb R^n$, we have:

$u = c_1v_1 + \cdots + c_nv_n$, for some $c_i \in \Bbb R$.

Thus:

$0 = T(u) = T(c_1v_1 + \cdots + c_nv_n) = c_1T(v_1) + \cdots + c_nT(v_n)$.

Since T(B) is linearly independent by assumption (because B is, being a basis), we MUST have:

$c_1 = \dots = c_n = 0$ which shows that $u = 0$, for ANY $u \in \text{ker}(T)$, which means $\text{ker}(T) = \{0\}$, which proves (1) implies (3).
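As a small numeric illustration of these equivalences (a sketch assuming Python with numpy; the projection example here is not from the thread), the map $T(x,y) = (x,0)$ has a non-trivial kernel, and correspondingly it sends the standard basis to a linearly dependent set:

import numpy as np

# Projection T(x,y) = (x,0): the kernel is the y-axis, so ker(T) != {0}
P = np.array([[1, 0],
              [0, 0]], dtype=float)

# Images of the standard basis vectors are the columns of P:
# T(e1) = (1,0), T(e2) = (0,0) -- a linearly dependent set
images = P @ np.eye(2)

print(np.linalg.matrix_rank(images))  # 1: T(B) is not linearly independent
print(np.linalg.det(P))               # 0.0: T is not invertible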
 
  • #10
Thank you so much!
 

FAQ: A linear transformation is invertible if and only if

What is a linear transformation?

A linear transformation is a mathematical function that maps one vector space to another in a way that preserves the linear structure of the original space.
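Concretely, "preserves the linear structure" means that for all vectors $u, v$ and all scalars $c$:

$T(u + v) = T(u) + T(v) \quad \text{and} \quad T(cu) = cT(u).$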

What does it mean for a linear transformation to be invertible?

A linear transformation is invertible if there exists another linear transformation that can reverse its effects, effectively "undoing" the original transformation.

How can I determine if a linear transformation is invertible?

A linear transformation from a finite-dimensional space to itself is invertible if and only if the determinant of its matrix representation is non-zero. This means that the transformation does not collapse any dimension or lose any information.
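For example (a minimal numeric sketch, assuming Python with numpy):

import numpy as np

A = np.array([[2, 1],
              [1, 1]], dtype=float)  # det = 1, so this map is invertible
B = np.array([[1, 2],
              [2, 4]], dtype=float)  # rows are proportional, det = 0

print(np.linalg.det(A))  # 1.0 -> invertible
print(np.linalg.det(B))  # 0.0 -> not invertible (collapses a dimension)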

What is the importance of a linear transformation being invertible?

An invertible linear transformation allows for the manipulation and analysis of vector spaces in a more efficient and systematic manner, and is a key concept in many areas of mathematics and science, such as linear algebra, differential equations, and physics.

Can a linear transformation be invertible in some cases and not in others?

Yes, a linear transformation can be invertible in some cases and not in others. This depends on the properties of the transformation and the vector spaces involved. For example, a transformation may be invertible in one vector space but not in another with different dimensions or structures.
