Redundancy in Question about Linear Transformations

In summary, the thread asks whether the hypothesis \(\mbox{Im f}=\mbox{Im g}\) is redundant for rank-1 linear transformations. A counterexample shows that two rank-1 transformations need not have the same image, so the hypothesis is genuinely needed.
  • #1
Sudharaka
Hi everyone, :)

Take a look at this question.

Show that if two linear transformations \(f,\,g\) of rank 1 satisfy \(\mbox{Ker f}=\mbox{Ker g}\) and \(\mbox{Im f}=\mbox{Im g}\), then \(fg=gf\).

Now the problem is that I feel this question is not properly worded. If the linear transformations have rank 1, then it is obvious that \(\mbox{Im f}=\mbox{Im g}=\{0\}\), so restating that is not needed. Don't you think so? Correct me if I am wrong. If I am correct, the answer is also obvious: since the image space of each linear transformation contains only the zero vector, \(fg=gf\).
 
  • #2
A linear transformation of rank 1 maps onto a 1-dimensional subspace of the co-domain (which is isomorphic to the underlying field, at least as a vector space). The trivial subspace $\{0\}$ has the empty set as its basis (and thus dimension 0), since the set $\{0\}$ is ALWAYS linearly dependent.

Two such functions need not have the same image, consider $f,g:\Bbb R^2 \to \Bbb R^2$:

$f(x,y) = (x,0)$

$g(x,y) = (0,y)$.
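The counterexample above is easy to verify numerically. Here is a small sketch (my own check, not from the thread) using numpy, writing \(f\) and \(g\) as matrices:

```python
import numpy as np

# f(x, y) = (x, 0) and g(x, y) = (0, y) as 2x2 matrices.
F = np.array([[1.0, 0.0],
              [0.0, 0.0]])
G = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# Both maps have rank 1 ...
print(np.linalg.matrix_rank(F), np.linalg.matrix_rank(G))  # 1 1

# ... but their images differ: Im f = span{(1,0)}, Im g = span{(0,1)}.
# (1, 0) lies in Im f but not in Im g, since every vector in Im g has
# first coordinate 0.
print(F @ np.array([1.0, 0.0]))  # [1. 0.]
print(G @ np.array([0.0, 1.0]))  # [0. 1.]
```

So rank 1 alone says each image is a line through the origin, but nothing forces the two lines to coincide.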
 
  • #3
Deveno said:
A linear transformation of rank 1 maps onto a 1-dimensional subspace of the co-domain (which is isomorphic to the underlying field, at least as a vector space). The trivial subspace $\{0\}$ has the empty set as its basis (and thus dimension 0), since the set $\{0\}$ is ALWAYS linearly dependent.

Two such functions need not have the same image, consider $f,g:\Bbb R^2 \to \Bbb R^2$:

$f(x,y) = (x,0)$

$g(x,y) = (0,y)$.

Thanks very much. I don't know how, but somehow I had forgotten that the trivial subspace has the empty basis and zero dimension. :eek: Back to square one.
 
  • #4
Hi everyone, :)

Do you have any ideas how to use the fact that \(\mbox{Ker f}=\mbox{Ker g}\) in this question? I am finding it hard to see how this fact connects to the problem. My initial thought was to use the Homomorphism theorem for vector spaces, but I don't see how to link it. Let me write down what I have so far.

It is given that, \(\mbox{rank f} = \mbox{rank g} = 1\)

Then \(\mbox{Im f} = \mbox{Im g} = \langle u\rangle\), where each element of \(\mbox{Im f}\) and \(\mbox{Im g}\) can be written as a scalar multiple of \(u\). Now consider \(fg(v)\) and \(gf(v)\) for any \(v\in\mbox{Im f}=\mbox{Im g}\).

\[fg(v)=f[g(v)]=f(ku)=kf(u)=(kl)u\]

where \(k\mbox{ and }l\) are scalars. Similarly,

\[gf(v)=(mn)u\]

Therefore, \(fg(v)=\lambda \,gf(v)\), where the scalar \(\lambda = \lambda (v)\) may depend on \(v\).

What I feel is that if I could use the fact that \(\mbox{Ker f}=\mbox{Ker g}\) then I might be able to show that \(\lambda=1\). What do you think? :)
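A numerical sketch of what the kernel hypothesis buys (my own construction, not from the thread): a rank-1 map with image \(\langle u\rangle\) and kernel \(a^{\perp}\) can be written as the outer product \(v \mapsto (a\cdot v)\,u\). Forcing \(\mbox{Ker f}=\mbox{Ker g}\) on top of \(\mbox{Im f}=\mbox{Im g}\) then makes \(g\) a scalar multiple of \(f\), and scalar multiples of a map always commute under composition:

```python
import numpy as np

u = np.array([1.0, 2.0])   # shared image direction (arbitrary choice)
a = np.array([3.0, -1.0])  # shared kernel is the line a-perp

F = np.outer(u, a)          # f(v) = (a . v) u, a rank-1 map
G = np.outer(u, 2.5 * a)    # same image and kernel => g = 2.5 f

# Both are rank 1, and they commute.
print(np.linalg.matrix_rank(F), np.linalg.matrix_rank(G))  # 1 1
print(np.allclose(F @ G, G @ F))  # True: fg = gf
```

In this picture \(\lambda\) cannot actually vary with \(v\): the shared kernel forces the two "scalar functionals" \(v \mapsto a\cdot v\) to be proportional, which is where the constant relating \(f\) and \(g\) comes from.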
 
  • #5


Hello,

I would agree that the question may not be worded in the most efficient way, but I don't believe it is redundant. The rank-1 hypothesis only tells us that each image is a one-dimensional subspace; it does not force the two images to coincide, as the counterexample earlier in the thread shows. So \(\mbox{Im f}=\mbox{Im g}\) is a genuine assumption that must be stated explicitly for the proof to go through.

Additionally, while the answer may seem obvious to someone familiar with linear transformations and their properties, it is still important to provide a clear and concise proof for those who may not have the same level of understanding. This helps to ensure that the solution is understood by all and can be applied to similar problems in the future.

In summary, while the question could potentially be reworded to be more concise, I believe it is important to include all necessary details in order to provide a complete and accurate solution.
 

