Kernel on linear transformation proof

In summary, the author is trying to show that if an element \(\displaystyle v\in\bigcap_{i=1}^{n}\text{ker}(T_{i}),\) then \(\displaystyle v\in \text{ker}\left(\sum_{i=1}^{n}T_{i} \right).\) To do this, they show that every one of the linear transformations sends \(\displaystyle v\) to zero, and that \(\displaystyle v\) is therefore also sent to zero by the sum \(\displaystyle \sum_{i=1}^{n}T_{i}.\)
  • #1
Cristian1
hi guys :D

I'm having trouble with this proof; any hints?

Let V be a vector space over a field F, and let T1, T2: V → V be linear transformations.

prove that
 

Attachments

  • kernel.jpg
  • #2
Hi Cristian,

If \(\displaystyle v\in \text{ker}(T_{1})\cap\text{ker}(T_{2}),\)

then what can we say about \(\displaystyle T_{1}(v)\) and \(\displaystyle T_{2}(v)\)? With this question answered, we then look at

\(\displaystyle (T_{1}+T_{2})(v)=T_{1}(v)+T_{2}(v)=\ldots\)

I'm trying to give you a few hints so you can fill in the gaps on your own. Let me know if you're still confused/unclear on something.
 
  • #3
thank you!
I see the point, but I don't clearly see the difference between the zero of the intersection and the zero of the sum.
 
  • #4
I will try to address what I think may be the confusion; if I have misunderstood you, just let me know.

My understanding of your question is that there is a perceived difference in the zero of the intersection and the zero of the sum. This is not the case: there is a single vector space, \(\displaystyle V,\) and only one zero element of \(\displaystyle V.\)

What we are trying to show is ultimately set-theoretic in nature: we want to show that if an element \(\displaystyle v\in\bigcap_{i=1}^{n}\text{ker}(T_{i}),\) then \(\displaystyle v\in \text{ker}\left(\sum_{i=1}^{n}T_{i} \right).\) Let's break down what all this means:

To say that a vector belongs to the kernel of a linear transformation means that the linear transformation sends that vector to zero. So, by assuming \(\displaystyle v\in\bigcap_{i=1}^{n}\text{ker}(T_{i})\) we are saying that every one of the linear transformations sends \(\displaystyle v\) to zero; i.e.

\(\displaystyle T_{1}(v)=0,~ T_{2}(v)=0,\ldots, T_{n}(v)=0\)

In each case above the zero is the zero element of \(\displaystyle V,\) they are not different zeros. Now, to show that \(\displaystyle v\in \text{ker}\left(\sum_{i=1}^{n}T_{i} \right),\) we must show that when we plug \(\displaystyle v\) into the function/linear transformation \(\displaystyle \sum_{i=1}^{n}T_{i}\) we still get zero. For this we compute:

\(\displaystyle \left( \sum_{i=1}^{n}T_{i}\right)(v)=\sum_{i=1}^{n}T_{i}(v)=T_{1}(v)+T_{2}(v)+\ldots+T_{n}(v)=0+0+\ldots+0=0\)

Hence, \(\displaystyle v\in\text{ker}\left(\sum_{i=1}^{n}T_{i} \right).\) Since \(\displaystyle v\in\bigcap_{i=1}^{n}\text{ker}(T_{i})\) was arbitrary, the proof is finished.
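If it helps to see the same argument numerically, here is a minimal NumPy sketch. Everything in it is made up for illustration: the matrices stand in for the \(\displaystyle T_{i},\) and each one is constructed so that a chosen vector lies in its kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                                # number of transformations T_1, ..., T_n
v = np.array([1.0, 2.0, 3.0])        # the vector assumed to lie in every kernel

# Build n random 3x3 matrices A_i, each adjusted so that A_i @ v = 0,
# i.e. v is in ker(A_i) for every i.
mats = []
for _ in range(n):
    A = rng.standard_normal((3, 3))
    A = A - np.outer(A @ v, v) / (v @ v)   # remove the part of A's action along v
    mats.append(A)

assert all(np.allclose(A @ v, 0) for A in mats)   # v is in every kernel
assert np.allclose(sum(mats) @ v, 0)              # so v is in ker(A_1 + ... + A_n)
```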

Let's take an example to help us along. Let our vector space be

\(\displaystyle V=\mathbb{R}^{3}=\left\{\begin{bmatrix}x\\ y\\ z \end{bmatrix}: x, \, y, \, z\in\mathbb{R} \right\}\)

and our linear transformations \(\displaystyle T_{1}\) & \(\displaystyle T_{2}\) be given by

\(\displaystyle T_{1}\left(\begin{bmatrix}x\\y\\z \end{bmatrix} \right)=\begin{bmatrix}0\\y\\0 \end{bmatrix}\)

and

\(\displaystyle T_{2}\left(\begin{bmatrix}x\\y\\z \end{bmatrix} \right)=\begin{bmatrix}x\\0\\0 \end{bmatrix}\)

The zero element in this example is \(\displaystyle \begin{bmatrix}0\\0\\0\end{bmatrix}\)

Notice that \(\displaystyle T_{1}\left(\begin{bmatrix}x\\y\\z \end{bmatrix} \right)=\begin{bmatrix}0\\0\\0\end{bmatrix}\)

if and only if \(\displaystyle y=0.\) Hence, \(\displaystyle \text{ker}(T_{1})\) is the \(\displaystyle xz\) plane. Similarly, \(\displaystyle \text{ker}(T_{2})\) is the \(\displaystyle yz\) plane. The intersection of these two kernels is the entire \(\displaystyle z\) axis; i.e.

\(\displaystyle \bigcap_{i=1}^{2}\text{ker}(T_{i})=\left\{\begin{bmatrix}0\\0\\z \end{bmatrix}: z\in\mathbb{R} \right\}\)

(Drawing a picture of the two planes to see that their intersection is the \(\displaystyle z\) axis may be helpful)

Now, if we wish to demonstrate the general proof you're working on in this example, we take an element \(\displaystyle \begin{bmatrix}0\\0\\z\end{bmatrix}\in \bigcap_{i=1}^{2}\text{ker}(T_{i})\) and compute using the definitions of \(\displaystyle T_{1}\) & \(\displaystyle T_{2}\)

\(\displaystyle \left(T_{1}+T_{2}\right)\left(\begin{bmatrix}0\\0\\z\end{bmatrix} \right)=T_{1}\left(\begin{bmatrix}0\\0\\z\end{bmatrix} \right)+T_{2}\left(\begin{bmatrix}0\\0\\z\end{bmatrix} \right)=\begin{bmatrix}0\\0\\0 \end{bmatrix} + \begin{bmatrix}0\\0\\0 \end{bmatrix} = \begin{bmatrix}0\\0\\0 \end{bmatrix}\)

Hence, \(\displaystyle \begin{bmatrix}0\\0\\z\end{bmatrix}\in\text{ker}\left(\sum_{i=1}^{2}T_{i} \right)\) as well. Thus, for this specific example,

\(\displaystyle \bigcap_{i=1}^{2}\text{ker}(T_{i})\subseteq \text{ker}\left(\sum_{i=1}^{2}T_{i} \right),\)

as it should be from your general exercise.
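For a quick numerical check of this specific example, here is a minimal NumPy sketch; the matrices below are the standard matrices of \(\displaystyle T_{1}\) and \(\displaystyle T_{2}\) with respect to the standard basis of \(\displaystyle \mathbb{R}^{3}.\)

```python
import numpy as np

# T1 keeps only the y-coordinate, T2 keeps only the x-coordinate.
T1 = np.array([[0, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])
T2 = np.array([[1, 0, 0],
               [0, 0, 0],
               [0, 0, 0]])

v = np.array([0, 0, 5])      # any vector on the z-axis, i.e. in ker(T1) ∩ ker(T2)

print(T1 @ v)                # [0 0 0]
print(T2 @ v)                # [0 0 0]
print((T1 + T2) @ v)         # [0 0 0]  -> v is also in ker(T1 + T2)
```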

This is a long post, but I hope I have understood and addressed your concern. Let me know if anything is unclear/not quite right.
 
  • #5
thank you very much! :)
 

FAQ: Kernel on linear transformation proof

What is a kernel in the context of linear transformations?

A kernel, also known as the null space, is the set of all vectors in the domain of a linear transformation that are mapped to the zero vector of the codomain.

How is the kernel related to the range of a linear transformation?

The kernel and the range of a linear transformation are related by the rank-nullity theorem, which states that the dimension of the kernel plus the dimension of the range equals the dimension of the domain.
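As a small sketch, assuming NumPy, the theorem can be checked for the \(\displaystyle T_{1}\) used earlier in the thread:

```python
import numpy as np

# Rank-nullity for T1 from the thread: rank 1, nullity 2, and 1 + 2 = 3 = dim(R^3).
T1 = np.array([[0, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])

rank = np.linalg.matrix_rank(T1)
nullity = T1.shape[1] - rank
print(rank, nullity, rank + nullity)   # 1 2 3
```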

How do you prove that a vector is in the kernel of a linear transformation?

To prove that a vector is in the kernel of a linear transformation, you must show that the transformation sends that vector to the zero vector. When the transformation is represented by a matrix, this amounts to checking that multiplying the matrix by the vector gives the zero vector.
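In code, with a hypothetical matrix and vector chosen purely for illustration, the check looks like this:

```python
import numpy as np

# A hypothetical matrix T and candidate vector v, just for illustration.
T = np.array([[1, -1, 0],
              [0,  0, 0],
              [2, -2, 0]])
v = np.array([1, 1, 0])

# v is in ker(T) exactly when T @ v is the zero vector.
print(np.allclose(T @ v, 0))   # True
```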

Can the kernel of a linear transformation be empty?

No. The kernel of a linear transformation is never empty, because every linear transformation sends the zero vector to the zero vector, so the kernel always contains it. The kernel can, however, be trivial, meaning it contains only the zero vector; this happens exactly when every nonzero vector in the domain is mapped to a nonzero vector in the codomain.

How does the kernel of a linear transformation relate to its invertibility?

A linear transformation from a finite-dimensional vector space to itself is invertible if and only if its kernel consists only of the zero vector. A trivial kernel means the transformation maps distinct vectors to distinct vectors (it is injective), and for such a transformation injectivity is equivalent to invertibility.
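As a brief sketch, again using the \(\displaystyle T_{1}\) from the thread (whose kernel is a whole plane, so it cannot be invertible):

```python
import numpy as np

T1 = np.array([[0, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])

# Full rank means the kernel is {0}; T1 has rank 1, so it is not invertible.
full_rank = np.linalg.matrix_rank(T1) == T1.shape[0]
print(full_rank)   # False
```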
