On Linear Transformations T^2 = T

In summary, T is diagonalisable because it has n linearly independent eigenvectors. Writing each vector as v = (v - T(v)) + T(v) shows that the space is the direct sum of ker(T), whose nonzero vectors are eigenvectors with eigenvalue 0, and im(T), whose nonzero vectors are eigenvectors with eigenvalue 1, so a basis of ker(T) together with a basis of im(T) is an eigenbasis.
  • #1
cjqsg
Diagonalisability of Linear Transformations T^2 = T

Let T be a linear transformation such that T^2 = T.

i. Show that if v is not 0, then either T(v) = 0 or T(v) is an eigenvector of eigenvalue 1. (easy)

ii. Show that T is diagonalisable.

...

Sorry, I misread the question just now. For part 2, I need n linearly independent eigenvectors. How can I get them?

(my thoughts: consider a standard basis and look at T(e_1), ..., T(e_n). Set aside all e_i such that T(e_i) = 0; these e_i are clearly linearly independent eigenvectors with eigenvalue 0. For each remaining e_j, T(e_j) is an eigenvector with eigenvalue 1. How do we show that these T(e_j) are linearly independent?)
 
  • #2
Answer: We can prove that T is diagonalisable by exhibiting n linearly independent eigenvectors.

First, a word of caution about the standard-basis idea: the nonzero images T(e_j) need not be linearly independent. For the orthogonal projection of R^2 onto the line spanned by (1, 1), for example, T(e_1) = T(e_2) = (1/2, 1/2). So instead of working with the images of a basis, decompose an arbitrary vector.

For any v, write v = (v - T(v)) + T(v). Applying T to the first piece gives T(v) - T^2(v) = T(v) - T(v) = 0, so v - T(v) lies in the kernel of T; it is either 0 or an eigenvector with eigenvalue 0. The second piece satisfies T(T(v)) = T^2(v) = T(v), so T(v) lies in the image of T and is either 0 or an eigenvector with eigenvalue 1 (this is exactly part i). Hence every vector is the sum of a vector in ker(T) and a vector in im(T).

Moreover, the two subspaces meet only at 0: if w lies in both, then T(w) = 0 because w is in the kernel, while w = T(u) for some u gives T(w) = T^2(u) = T(u) = w; together these force w = 0. So the space is the direct sum of ker(T) and im(T).

Now take any basis of ker(T) (eigenvectors with eigenvalue 0) together with any basis of im(T) (eigenvectors with eigenvalue 1). Because the sum is direct, the combined list is linearly independent, and by rank-nullity it contains dim ker(T) + dim im(T) = n vectors. That gives n linearly independent eigenvectors, so T is diagonalisable; in this eigenbasis the matrix of T is diagonal with only 0s and 1s on the diagonal.
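Not part of the original thread, but here is a quick numerical sanity check of the argument above in Python/NumPy; the matrix A below is just an arbitrary choice of subspace to project onto, and the script verifies that the resulting projection has eigenvalues 0 and 1 and a full set of independent eigenvectors.

```python
import numpy as np

# Build a concrete idempotent map: the orthogonal projection of R^3 onto col(A).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                # columns span a 2-dimensional subspace of R^3
P = A @ np.linalg.inv(A.T @ A) @ A.T      # P = A (A^T A)^{-1} A^T

assert np.allclose(P @ P, P)              # T^2 = T

eigvals, S = np.linalg.eig(P)             # columns of S are eigenvectors of P
assert np.allclose(sorted(eigvals.real), [0.0, 1.0, 1.0])  # only eigenvalues 0 and 1
assert np.linalg.matrix_rank(S) == 3      # 3 linearly independent eigenvectors

D = np.linalg.inv(S) @ P @ S              # the matrix of T in the eigenbasis
assert np.allclose(D, np.diag(eigvals))   # diagonal, with 0s and 1s on the diagonal
print(np.round(D.real, 10))
```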
 

FAQ: On Linear Transformations T^2 = T

What is a linear transformation?

A linear transformation is a function between vector spaces that preserves their linear structure, meaning T(u + v) = T(u) + T(v) and T(cv) = cT(v) for all vectors u, v and all scalars c. On a finite-dimensional space, once a basis is chosen, every linear transformation can be represented as a matrix multiplication.
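As a small illustration (my addition, not part of the original page), the following sketch checks the two defining properties for a map of the form T(x) = Ax, with the matrix and vectors chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))           # any matrix defines a linear map T(x) = A x
u, w = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

assert np.allclose(A @ (u + w), A @ u + A @ w)   # T(u + w) = T(u) + T(w)
assert np.allclose(A @ (c * u), c * (A @ u))     # T(c u) = c T(u)
```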

What does the equation T^2 = T mean in linear transformations?

It says that T is idempotent: applying the transformation twice has the same effect as applying it once, i.e. T(T(v)) = T(v) for every vector v. Linear transformations with this property are exactly the projections: they map the whole space onto a subspace (the image of T) and leave every vector of that subspace fixed.
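A minimal sketch of this, using the projection of R^2 onto the line spanned by (1, 2) as an example I've chosen (not one from the thread):

```python
import numpy as np

v = np.array([[1.0], [2.0]])
P = (v @ v.T) / (v.T @ v)                # P = v v^T / (v^T v), projection onto span{v}

x = np.array([[3.0], [-1.0]])
assert np.allclose(P @ (P @ x), P @ x)   # applying P twice = applying it once
```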

How is the equation T^2 = T related to eigenvalues and eigenvectors?

If T^2 = T, then T satisfies the polynomial equation x^2 - x = 0, whose roots are 0 and 1. Any eigenvalue of T must be a root of this polynomial, so the only possible eigenvalues of T are 0 and 1. The eigenvectors with eigenvalue 0 make up the kernel of T, the eigenvectors with eigenvalue 1 make up the image of T, and together they span the whole space, which is why such a transformation is diagonalisable.
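For concreteness, here is the short computation behind this claim, written out as a worked equation (my addition):

```latex
% If v \neq 0 and T(v) = \lambda v, then using T^2 = T:
\lambda v = T(v) = T^2(v) = T(T(v)) = T(\lambda v) = \lambda T(v) = \lambda^2 v
\quad\Longrightarrow\quad (\lambda^2 - \lambda) v = 0
\quad\Longrightarrow\quad \lambda \in \{0, 1\}.
```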

Why is the equation T^2 = T important in linear algebra?

Transformations satisfying T^2 = T are projections, and they split a vector space into the direct sum of their image and their kernel. This decomposition, and the eigenvalues and eigenvectors that come with it, have many applications in mathematics and science: they can be used to solve systems of linear equations, analyse the behaviour of dynamical systems, and more.

Can the equation T^2 = T be applied to any linear transformation?

The condition T^2 = T only makes sense for a transformation from a vector space to itself, and it can be checked for any such transformation, but most transformations do not satisfy it. The zero map, the identity map, and every projection onto a subspace satisfy it; a rotation of the plane by 90 degrees, for example, does not.
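A quick check of both cases, using a 90-degree rotation and the projection onto the x-axis as illustrative examples (my additions):

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by 90 degrees
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])                        # projection onto the x-axis

print(np.allclose(R @ R, R))   # False: a rotation is not idempotent
print(np.allclose(P @ P, P))   # True:  a projection is idempotent
```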
