If T is diagonalizable, is the restriction operator diagonalizable?

In summary, the discussion concerns the theorem that the restriction of a diagonalizable linear operator to an invariant subspace is itself diagonalizable. The proof is easy to follow, but the question is why the invariance assumption is needed. The explanation is that if the subspace is not invariant, the restriction is not a map from the subspace to itself: its matrix representation is not square, so it cannot be called diagonalizable. Diagonalizability (and the minimal polynomial) is defined only for linear maps whose domain equals their codomain, and a restriction to a subspace is such a map only when the subspace is invariant. The invariance of the subspace is therefore needed because otherwise it is undefined to speak of the minimal polynomial, or of the diagonalizability, of the restriction.
  • #1
CGandC
TL;DR Summary
Does the minimal polynomial zero out the linear operator restricted to any subspace?
The usual theorem is about the linear operator restricted to an invariant subspace:
Let ##T## be a diagonalizable linear operator on the ##n##-dimensional vector space ##V##, and let ##W## be a subspace of ##V## which is invariant under ##T##. Prove that the restriction operator ##T_W## is diagonalizable.​
I had no problem understanding its proof; it appears here, for example: https://math.stackexchange.com/ques...-t-w-is-diagonalizable-if-t-is-diagonalizable However, I had difficulty understanding why we need the assumption that ## W ## is ##T##-invariant. I mean: if ## m_T(x) ## is the minimal polynomial of ##T##, then ## m_T(T)=0 ##, and thus for any subspace ## W \subseteq V ## (not necessarily ##T##-invariant) ## m_T(T_W) = 0 ##; so why was it necessary in the above theorem for ## W \subseteq V ## to be ##T##-invariant?
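For concreteness, here is a minimal numerical sketch of the observation that ## m_T(T) ## annihilates every vector of ##V##, and hence every vector of any subspace; the operator ##T## and the subspace ##W## below are just illustrative choices.
[code]
import numpy as np

# Illustrative diagonalizable operator T on R^2 with m_T(x) = (x - 1)(x - 2)
T = np.array([[1.0, 0.0],
              [0.0, 2.0]])
I = np.eye(2)

# m_T(T) is the zero matrix, so m_T(T) w = 0 for every w in V,
# in particular for every w in any subspace W, invariant or not.
m_T_of_T = (T - 1.0 * I) @ (T - 2.0 * I)
print(m_T_of_T)              # [[0. 0.] [0. 0.]]

w = np.array([1.0, 1.0])     # spans a subspace W that is NOT T-invariant
print(m_T_of_T @ w)          # [0. 0.]
print(T @ w)                 # [1. 2.] -- not a multiple of w, so T(W) is not inside W
[/code]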
 
  • #2
If [itex]W[/itex] is not [itex]T[/itex]-invariant, then the matrix representation of [itex]T_W[/itex] is not square: it must include additional rows to account for the part of [itex]T_W(W)[/itex] which is not in [itex]W[/itex]. In what sense is this non-square matrix "diagonalizable"?

This theory is defined for linear maps [itex]T: V \to V[/itex] where the codomain is the same as the domain, rather than some different space. A restriction of [itex]T: V \to V[/itex] to a subspace [itex]W \subset V[/itex] will only qualify as such a map if we have [itex]T_W: W \to W[/itex], i.e. [itex]W[/itex] is [itex]T[/itex]-invariant.
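A minimal sketch of this point, with an illustrative choice of ##T## and a non-invariant ##W##: the only way to record where the basis vector of ##W## goes is in a basis of all of ##V##, so the resulting matrix is ##2 \times 1## rather than square.
[code]
import numpy as np

T = np.array([[1.0, 0.0],
              [0.0, 2.0]])
w1 = np.array([1.0, 1.0])    # basis vector of W; T(w1) = (1, 2) leaves W

# The image of the basis vector of W is outside W, so its coordinates must be
# taken with respect to a basis of all of V. The matrix of the restriction,
# viewed as a map W -> V, is therefore 2 x 1 rather than square.
restriction_as_map_to_V = (T @ w1).reshape(2, 1)
print(restriction_as_map_to_V)   # [[1.] [2.]] -- non-square, so "diagonalizable"
                                 # does not even apply
[/code]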
 
  • #3
pasmith said:
If [itex]W[/itex] is not [itex]T[/itex]-invariant, then the matrix representation of [itex]T_W[/itex] is not square: it must include additional rows to account for the part of [itex]T_W(W)[/itex] which is not in [itex]W[/itex]. In what sense is this non-square matrix "diagonalizable"?

This theory is defined for linear maps [itex]T: V \to V[/itex] where the codomain is the same as the domain, rather than some different space. A restriction of [itex]T: V \to V[/itex] to a subspace [itex]W \subset V[/itex] will only qualify as such a map if we have [itex]T_W: W \to W[/itex], i.e. [itex]W[/itex] is [itex]T[/itex]-invariant.
Although it makes sense that the matrix representation of a diagonalizable operator should be a square matrix, I still don't see why this is needed to prove the theorem, or why the argument fails for a non-invariant subspace. A linear operator is also diagonalizable iff its minimal polynomial decomposes into distinct linear factors, each of multiplicity 1; this characterization makes diagonalizability independent of any matrix representation. The minimal polynomial of ##T## restricted to an arbitrary subspace would then divide the minimal polynomial of ##T## itself, which is a product of distinct linear factors of multiplicity 1, so the minimal polynomial of the restriction would also be a product of distinct linear factors of multiplicity 1. Hence ##T## restricted to any subspace would still be diagonalizable, regardless of the subspace's invariance. So I still don't see why the subspace must be invariant in order to prove the theorem.
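For comparison, the divisibility argument does go through when ##W## is invariant; a minimal sketch, where the specific ##T## and invariant ##W## are illustrative choices:
[code]
import numpy as np

# Illustrative diagonalizable T on R^3 with m_T(x) = (x - 1)(x - 2);
# W = span{e2, e3} is T-invariant, so the restriction is a genuine 2 x 2 matrix.
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
T_W = T[1:, 1:]              # matrix of T_W in the basis {e2, e3} of W

# m_{T_W}(x) = (x - 2) already annihilates T_W and divides m_T(x):
# the restriction inherits distinct linear factors, hence is diagonalizable.
print(T_W - 2.0 * np.eye(2))     # zero matrix
[/code]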
 
  • #4
Ok, I think I now fully understand what you said.
The minimal polynomial of a square matrix is defined as the monic polynomial of least degree that zeros out that matrix.
The minimal polynomial is likewise defined for linear transformations whose domain equals the codomain (i.e. ## T: V \to V ##) as the monic polynomial of least degree that zeros out the transformation.

So although it is true that ## m_T(T)w = 0 ## for every ## w ## in an arbitrary subspace ## W \subseteq V ##, it is undefined to talk about a minimal polynomial of ## T_W ## if ## W ## is not ##T##-invariant, since the domain is not equal to the codomain (and powers of ## T_W ## are not even defined).
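A minimal sketch of why powers of ## T_W ##, and hence ## m_{T_W} ##, are not defined when ##W## is not invariant; the operator, the subspace, and the helper coords_in_W below are illustrative choices, not anything standard:
[code]
import numpy as np

T = np.array([[1.0, 0.0],
              [0.0, 2.0]])
w1 = np.array([1.0, 1.0])    # basis of a non-invariant subspace W

def coords_in_W(v, tol=1e-12):
    """Coordinate of v in the basis {w1} if v lies in W = span{w1}, else None."""
    c = float(v @ w1) / float(w1 @ w1)   # projection coefficient onto span{w1}
    return c if np.allclose(v, c * w1, atol=tol) else None

v = T @ w1                   # one application of the restriction: (1, 2)
print(coords_in_W(v))        # None -- the image is already outside W, so a second
                             # application of T_W, needed to form T_W^2 and hence to
                             # evaluate m_T(T_W), is not defined as a map W -> W.
[/code]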

Am I correct?
 