Prove that T is a linear transformation

In summary, the conversation discussed two vectors and their sum, as well as the rotation of the resulting triangle by a fixed angle. It was mentioned that matrices were invented for exactly this type of transformation, and a second map was then examined; a counterexample showed that it was not linear. The conversation then shifted to the rank of a linear transformation on the space of differentiable functions, with a question about the dimension of that space.
  • #1
Hall
Homework Statement
T is transformation which rotates every vector in 2D by a fixed angle ##\phi##. Prove that T is a linear transformation.
Relevant Equations
##T(\mathbf{v_1} + \mathbf{v_2}) = T( \mathbf{v_1}) + T (\mathbf{v_2})##
##T(c\mathbf{v_1}) = c T (\mathbf{v_1})##
We've got two vectors ##\mathbf{v_1}## and ##\mathbf{v_2}##; their sum is, geometrically:

[Diagram: triangle OBC with ##OB = \mathbf{v_2}##, ##BC = \mathbf{v_1}## and ##OC = \mathbf{v_1} + \mathbf{v_2}##]


Now, let us rotate the whole triangle by the angle ##\phi## (is this kind of thing allowed in mathematics?):
[Diagram: the same triangle rotated by the angle ##\phi##, giving the points B' and C']

OC got rotated by the angle ##\phi##, therefore ##OC' = T( \mathbf{v_1} + \mathbf{v_2})##, and similarly ##OB' = T(\mathbf{v_2})##. The little pointer guy sitting alone at the right corner (the one marked ##\mathbf{v_1}## in the previous diagram) also got rotated by ##\phi##, so it now represents ##T(\mathbf{v_1})##, which gives us ##B'C' = T(\mathbf{v_1})##. So, by the geometrical law of vector addition:
$$
\mathbf{OB'} + \mathbf{B'C'} = \mathbf{OC'}$$
$$
T(\mathbf{v_2}) + T(\mathbf{v_1}) = T( \mathbf{v_1 + v_2})$$

AND

##T( c(r \cos \theta, r \sin \theta) ) = (rc \cos (\theta + \phi) , rc \sin (\theta + \phi) )##
## c T (r\cos \theta, r\sin \theta) = c(r \cos (\theta + \phi), r\sin (\theta + \phi)) = (rc \cos (\theta + \phi), rc \sin (\theta + \phi) )= T( c(r \cos \theta, r \sin \theta) ) ##

We're done, finally.

The big question: Am I right?
 
  • #2
For the first part of your proof, dealing with the sum of vectors, I think it would be better to do things algebraically rather than by invoking geometry. Any vector that starts at the origin can be identified by the coordinates of its endpoint.

For the second part, scalar multiples, the following doesn't look right to me. For a given vector that extends from the origin to the point (x, y), in polar form ##x = r \cos(\theta)## and ##y = r\sin(\theta)##.
Since the vector doesn't change length under a rotation, ##r## doesn't change between the unrotated and rotated versions.
Hall said:
##T( c(x \cos \theta, y \sin \theta) ) = (xc \cos (\theta + \phi) , yc \sin (\theta + \phi) )##
## c T (x\cos \theta, y\sin \theta) = c(x \cos (\theta + \phi), y \sin (\theta + \phi) = (xc, \cos (\theta + \phi), yc \sin (\theta + \phi) )= T( c(x \cos \theta, y \sin \theta) ) ##
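For completeness, here is a minimal sketch of the algebraic route suggested above, writing the rotation in Cartesian coordinates (the notation is mine, not from the thread): with ##T(x, y) = (x\cos\phi - y\sin\phi,\; x\sin\phi + y\cos\phi)##,
$$T(x_1 + x_2,\, y_1 + y_2) = \big((x_1 + x_2)\cos\phi - (y_1 + y_2)\sin\phi,\;\; (x_1 + x_2)\sin\phi + (y_1 + y_2)\cos\phi\big)$$
$$= (x_1\cos\phi - y_1\sin\phi,\; x_1\sin\phi + y_1\cos\phi) + (x_2\cos\phi - y_2\sin\phi,\; x_2\sin\phi + y_2\cos\phi) = T(x_1, y_1) + T(x_2, y_2),$$
and homogeneity works the same way, since the scalar ##c## just distributes through each component.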
 
  • #3
Isn't this why matrices were invented?
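For reference, a minimal sketch of that observation (my notation, not from the post): rotation by ##\phi## is multiplication by the matrix
$$R_\phi = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}, \qquad T(\mathbf{v}) = R_\phi \mathbf{v},$$
and linearity then follows from the distributive rules of matrix-vector multiplication: ##R_\phi(\mathbf{v_1} + \mathbf{v_2}) = R_\phi\mathbf{v_1} + R_\phi\mathbf{v_2}## and ##R_\phi(c\mathbf{v}) = c\,R_\phi\mathbf{v}##.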
 
  • #4
PeroK said:
Isn't this why matrices were invented?
I really don't know.
 
  • #7
@PeroK
Is T a Linear transformation given that ##T(r, \theta)= (r, 2 \theta)##?

I don't think this transformation satisfies the first condition of Linear transformations. Here is the proof:

## \mathbf{v_1}= (r_1, \theta_1)##
##\mathbf{v_2} = (r_2, \theta_2)##
##|\mathbf{v_1} +\mathbf{v_2} |^2 = |\mathbf{v_1}|^2 + |\mathbf{v_2}|^2 - 2|\mathbf{v_1}||\mathbf{v_2}| \cos \alpha##

Let's apply the transformation now. After applying it, the transformed ##\mathbf{v_1}## and ##\mathbf{v_2}## might not make the same angle as before (not the angle between them, but the one we get when we place the tail of ##\mathbf{v_1}## on the head of ##\mathbf{v_2}##, though that works too). Therefore,
##|T(\mathbf{v_1}) + T(\mathbf{v_2})|^2 = |T(\mathbf{v_1})|^2 + |T(\mathbf{v_2})|^2 - 2 |T(\mathbf{v_1})|\,|T(\mathbf{v_2})| \cos \beta##

But we know that the transformation T is not changing the magnitude of a vector, therefore,
$$
|T(\mathbf{v_1} + \mathbf{v_2} )|^2 =|\mathbf{v_1} +\mathbf{v_2} |^2 = |\mathbf{v_1}|^2 + |\mathbf{v_2}|^2 - 2|\mathbf{v_1}||\mathbf{v_2}| \cos \alpha$$
And we have
$$
|T(\mathbf{v_1}) + T (\mathbf{v_2}) |^2 = |\mathbf{v_1}|^2 + |\mathbf{v_2}|^2 - 2|\mathbf{v_1}||\mathbf{v_2}| \cos \beta$$

Therefore, whenever ##\alpha \neq \beta##, we have ##T(\mathbf{v_1} + \mathbf{v_2} ) \neq T(\mathbf{v_1}) + T(\mathbf{v_2})##.

But T does satisfy the second condition (at least for ##c \geq 0##, where scaling in polar form means ##c(r, \theta) = (cr, \theta)##):
$$
T ( c (r, \theta))= T (cr, \theta) = (cr, 2 \theta)$$
$$
c T (r, \theta)= c (r, 2\theta)= (cr, 2\theta)$$

So, like everybody else, I conclude that T is not a linear transformation. But am I right?
 
  • #8
Hall said:
@PeroK
Is T a Linear transformation given that ##T(r, \theta)= (r, 2 \theta)##?
Imagine ##T## operating on unit vectors in the ##x## and ##y## directions:
$$T\hat x = \hat x, \ \text{and} \ T\hat y = -\hat x$$ And we see that:
$$T(\hat x + \hat y) = \sqrt 2 \hat y \ne 0 = T \hat x + T\hat y$$
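A quick numerical check of this specific counterexample (a sketch of my own using NumPy; the function below converts to polar form, doubles the angle, and converts back):

```python
import numpy as np

def T(v):
    # T(r, theta) = (r, 2*theta), applied to a Cartesian vector v
    r = np.hypot(v[0], v[1])
    theta = np.arctan2(v[1], v[0])
    return np.array([r * np.cos(2 * theta), r * np.sin(2 * theta)])

x_hat = np.array([1.0, 0.0])
y_hat = np.array([0.0, 1.0])

lhs = T(x_hat + y_hat)       # ~ [0, 1.414], i.e. sqrt(2) * y_hat
rhs = T(x_hat) + T(y_hat)    # x_hat + (-x_hat) = [0, 0]

print(lhs, rhs, np.allclose(lhs, rhs))   # last value is False: additivity fails, so T is not linear
```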
 
  • #9
PS you should really be finding specific counterexamples, rather than general formulas that might look different but be equivalent. In any case, finding specific counterexamples is a useful skill, as demonstrated above. I think of it like ju jitsu: finding the weak point and applying a small but potent force where it has the most effect!
 
  • #10
PeroK said:
PS you should really be finding specific counterexamples, rather than general formulas that might look different but be equivalent. In any case, finding specific counterexamples is a useful skill, as demonstrated above. I think of it like ju jitsu: finding the weak point and applying a small but potent force where it has the most effect!
Okay! A nice analogy.
 
  • #11
@PeroK I've got a quick doubt, so I thought a new thread might not be suitable. It is: Let V be the linear space of all differentiable functions on (-1, 1), and ##T(f) = x f'(x)## for ##x \in (-1, 1)##. Find the rank of T.

I found it easier to subtract the nullity from the dimension of V, but the dimension of V itself is quite hard for me to find. What could be a basis for all differentiable functions on that interval?
 
  • #12
Hall said:
I found it easier to subtract the nullity from the dimension of V, but the dimension of V itself is quite hard for me to find. What could be a basis for all differentiable functions on that interval?
It will be infinite dimensional.
 
  • #13
PeroK said:
It will be infinite dimensional.
As I thought. And I thought that we have all the polynomials, trig functions and transcendental functions, which are differentiable on the interval (-1, 1); and so it is not possible to construct all of them from some finite basis.
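One way to make that precise (a sketch of my own, not from the thread): the monomials
$$1,\; x,\; x^2,\; x^3,\; \dots$$
all belong to V and are linearly independent, since a polynomial that vanishes everywhere on (-1, 1) must have all coefficients equal to zero; so no finite set can span V. Moreover
$$T(x^n) = x \cdot \frac{d}{dx}x^n = n\,x^n,$$
so the image of T contains the independent family ##x, x^2, x^3, \dots## and the rank of T is infinite as well.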
 
  • #14
PeroK said:
It will be infinite dimensional.
Can you please tell me if the space of all convergent sequences whose limit is zero is infinite-dimensional?
 
  • #15
Hall said:
Can you please tell me if the space of all convergent sequences whose limit is zero is infinite-dimensional?
Can you prove that?
 
  • #16
PeroK said:
Can you prove that?
No, sorry. I can only imagine that there will be a lot of sequences of different species whose limit is zero, and because they belong to different species we cannot get every one of them by interbreeding a few of them.
 
  • #17
Hall said:
No, sorry. I can only imagine that there will be a lot of sequences of different species whose limit is zero, and because they belong to different species we cannot get every one of them by interbreeding a few of them.
This is linear algebra, not zoology! :smile:
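For the record, one standard argument that makes the intuition above precise (a sketch of my own, not from the thread): for each n, let ##e_n## be the sequence with a 1 in the n-th place and 0 everywhere else. Each ##e_n## converges to 0, and a relation
$$c_1 e_{n_1} + c_2 e_{n_2} + \dots + c_k e_{n_k} = 0$$
forces every ##c_i = 0## (look at the entries in positions ##n_1, \dots, n_k##), so ##\{e_n\}## is an infinite linearly independent set and the space has no finite basis.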
 

FAQ: Prove that T is a linear transformation

What is a linear transformation?

A linear transformation is a mathematical function that maps one vector space to another while preserving the basic operations of vector addition and scalar multiplication.

How do you prove that T is a linear transformation?

To prove that T is a linear transformation, you must show that it satisfies two properties: additivity and homogeneity. This means that for any two vectors u and v, and any scalar c, T(u+v) = T(u) + T(v) and T(cu) = cT(u).
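For example (an illustrative map chosen here, not one discussed in the thread above), for ##T(x, y) = (2x + y,\, 3y)##:
$$T\big((x_1, y_1) + (x_2, y_2)\big) = \big(2(x_1 + x_2) + (y_1 + y_2),\; 3(y_1 + y_2)\big) = (2x_1 + y_1,\, 3y_1) + (2x_2 + y_2,\, 3y_2) = T(x_1, y_1) + T(x_2, y_2)$$
$$T\big(c(x, y)\big) = (2cx + cy,\; 3cy) = c\,(2x + y,\, 3y) = c\,T(x, y)$$
so this T is linear.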

What is the importance of proving that T is a linear transformation?

Proving that T is a linear transformation is important because it guarantees that T respects vector addition and scalar multiplication, which allows the powerful tools and techniques of linear algebra to be used in solving problems involving T.

Can a linear transformation be represented by a matrix?

Yes. For finite-dimensional spaces, once bases are chosen, a linear transformation can always be represented by a matrix. This is known as the standard matrix representation, and it allows for easier computation and visualization of the transformation.

How can you prove that T is a linear transformation using matrices?

To prove that T is a linear transformation using matrices, you show that T can be written as multiplication by a fixed matrix A, that is, T(u) = Au for every vector u. Additivity and homogeneity then follow from the rules of matrix-vector multiplication.
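As a sketch of why the matrix form settles both properties at once (my wording, not part of the original FAQ): if ##T(\mathbf{u}) = A\mathbf{u}## for some matrix ##A##, then componentwise
$$\big(A(\mathbf{u} + \mathbf{v})\big)_i = \sum_j A_{ij}(u_j + v_j) = \sum_j A_{ij}u_j + \sum_j A_{ij}v_j = (A\mathbf{u})_i + (A\mathbf{v})_i,$$
and similarly ##A(c\mathbf{u}) = c\,A\mathbf{u}##, so any map given by a matrix is automatically linear.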
