Hall
Actual statement:
Assume ##dim~V = n## and let ##T: V \to V## be Hermitian or skew-Hermitian. Then there exist ##n## eigenvectors ##u_1, \cdots, u_n## of ##T## which form an orthonormal basis for ##V##. Hence, the matrix of ##T## relative to this basis is the diagonal matrix ##\Lambda = \text{diag}(\lambda_1, \cdots, \lambda_n)##, where ##\lambda_k## is the eigenvalue belonging to ##u_k##.
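Before the proof, here is a quick numerical sketch of what the statement asserts (my own illustration with numpy, not from Apostol's book): for a Hermitian matrix ##A##, `numpy.linalg.eigh` returns an orthonormal eigenbasis ##U##, and ##U^H A U## is the diagonal matrix ##\Lambda##.

```python
# Sketch (not from the text): verify the theorem numerically for a random
# Hermitian matrix using numpy's eigh routine.
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2                 # force A to be Hermitian: A = A^H

eigvals, U = np.linalg.eigh(A)           # columns of U are the eigenvectors u_k

# The eigenvectors form an orthonormal basis: U^H U = I.
print(np.allclose(U.conj().T @ U, np.eye(4)))              # True

# The matrix of T relative to this basis is diagonal: U^H A U = diag(lambda_k).
print(np.allclose(U.conj().T @ A @ U, np.diag(eigvals)))   # True
```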
Proof (of Mr. Tom Apostol): We will do the proof by induction on ##n##.

Base Case: ##n=1##. When ##n=1##, the matrix of ##T## has just one entry, and therefore the characteristic equation ##\det(\lambda I - A) = 0## has only one solution. The eigenvector corresponding to this eigenvalue, normalized to unit length, acts as a basis for ##V##.
Hypothesis: Let us assume that if ##dim~V = n-1##, then there exist ##n-1## eigenvectors ##u_1, u_2, \cdots, u_{n-1}## of ##T: V \to V## which are orthonormal and form a basis for ##V##.
Induction step: ##dim~V = n##.
Take any eigenvalue of ##T## and call it ##\lambda_1##; the corresponding eigenvector, normalized to unit norm, will be called ##u_1##.
##S = \text{span} ( u_1)##
##S^{\perp}## = space of all elements of ##V## which are orthogonal to ##S##
As ##u_1## is a basis for ##S##, and ##S## is a subspace of ##V##, ##u_1## can be extended to a basis for ##V##. Let this basis be ##(u_1, v_2, v_3, \cdots, v_n)##; we can assume, without loss of generality, that it is orthonormal (otherwise we can convert it into one with the Gram-Schmidt process, keeping ##u_1## as the first basis element).
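As a side note, here is a minimal sketch of that Gram-Schmidt step (my own code; the function name is hypothetical): extend the unit eigenvector `u1` to an orthonormal basis of ##\mathbb{C}^n##, keeping `u1` untouched as the first element.

```python
# Sketch (my own): build an orthonormal basis (u1, v2, ..., vn) of C^n whose
# first vector is the given unit vector u1, via Gram-Schmidt over the
# standard basis vectors e_1, ..., e_n.
import numpy as np

def orthonormal_basis_starting_with(u1):
    n = u1.shape[0]
    basis = [u1]
    for k in range(n):                       # candidates: e_1, ..., e_n
        v = np.zeros(n, dtype=complex)
        v[k] = 1.0
        for b in basis:                      # subtract components along the
            v = v - np.vdot(b, v) * b        # vectors collected so far
        if np.linalg.norm(v) > 1e-12:        # drop dependent candidates
            basis.append(v / np.linalg.norm(v))
        if len(basis) == n:
            break
    return np.column_stack(basis)

u1 = np.array([1, 1j, 0, 0]) / np.sqrt(2)
Q = orthonormal_basis_starting_with(u1)
print(np.allclose(Q.conj().T @ Q, np.eye(4)))   # True: the basis is orthonormal
```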
Take any ##x## in ##S^{\perp}## and write
##x = x_1 u_1 + x_2 v_2 + \cdots + x_n v_n##
##\langle x, u_1 \rangle = \langle x_1 u_1 , u_1 \rangle +0##
##\langle x, u_1 \rangle = x_1##
As ##x \in S^{\perp}## and ##u_1 \in S##, their inner product is zero, so we must have ##x_1 = 0##.
That means ##x = \sum_{i=2}^{n} x_i v_i##, so ##S^{\perp} \subseteq \text{span}(v_2, \cdots, v_n)##; conversely, each ##v_i## is orthogonal to ##u_1## and hence lies in ##S^{\perp}##. Therefore ##S^{\perp} = \text{span}(v_2, \cdots, v_n)## and its dimension is ##n-1##.
Now we have to prove that ##T## maps ##S^{\perp}## into itself; only then shall we be able to apply our hypothesis to the restriction of ##T## to ##S^{\perp}##.
##\langle T(x), u_1 \rangle = \langle x, T(u_1) \rangle## as ##T## is Hermitian (for skew-Hermitian ##T## the right-hand side picks up a factor of ##-1##, and the same conclusion follows).
##\langle T(x) , u_1 \rangle= \langle x, \lambda_1 u_1 \rangle##
##\langle T(x), u_1 \rangle = \bar{\lambda_1} \langle x, u_1 \rangle = 0##
Thus ##T(x)## is orthogonal to ##u_1##, hence to all of ##S##, and so ##T(x) \in S^{\perp}##.
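A small numerical check of this invariance (again my own sketch, not from the book): for a Hermitian ##A## with unit eigenvector ##u_1##, applying ##A## to any ##x## orthogonal to ##u_1## gives a vector that is still orthogonal to ##u_1##.

```python
# Sketch (my own): T maps S-perp into S-perp, checked numerically.
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = (B + B.conj().T) / 2                   # Hermitian matrix

_, U = np.linalg.eigh(A)
u1 = U[:, 0]                               # a unit eigenvector of A

x = rng.normal(size=5) + 1j * rng.normal(size=5)
x = x - np.vdot(u1, x) * u1                # project x into S-perp, so x is orthogonal to u1

print(np.isclose(np.vdot(u1, x), 0))       # True: x is in S-perp
print(np.isclose(np.vdot(u1, A @ x), 0))   # True: A x stays in S-perp
```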
By the induction hypothesis applied to the restriction of ##T## to ##S^{\perp}##, there are ##n-1## eigenvectors ##u_2, \cdots, u_n## which form an orthonormal basis for ##S^{\perp}##; these are eigenvectors of ##T## itself. Since each of them lies in ##S^{\perp}##, adding ##u_1## to the set preserves orthonormality, and as any set of nonzero orthogonal vectors is linearly independent and we have ##n## of them, they form an orthonormal basis for ##V##.
This completes the proof.
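The induction is in fact an algorithm, and seeing it run may help with the question below. Here is a sketch (my own, with a hypothetical function name; numpy assumed) that mirrors the proof: find one eigenpair, restrict ##A## to an orthonormal basis of ##S^{\perp}##, recurse, and lift the eigenvectors back.

```python
# Sketch (my own): the inductive proof turned into a recursive algorithm.
import numpy as np

def hermitian_eigenbasis(A):
    """Orthonormal eigenbasis of a Hermitian matrix A, built as in the proof."""
    n = A.shape[0]
    if n == 1:                                   # base case: a 1x1 matrix
        return np.ones((1, 1), dtype=complex)
    _, vecs = np.linalg.eig(A)                   # grab any one eigenvector
    u1 = vecs[:, 0] / np.linalg.norm(vecs[:, 0])
    # Columns of V form an orthonormal basis (v_2, ..., v_n) of S-perp,
    # taken from a full QR factorization of u1 viewed as a column.
    Q, _ = np.linalg.qr(u1.reshape(-1, 1), mode="complete")
    V = Q[:, 1:]
    A_restricted = V.conj().T @ A @ V            # matrix of T restricted to S-perp
    W = hermitian_eigenbasis(A_restricted)       # induction hypothesis
    return np.column_stack([u1, V @ W])          # lift back and prepend u1

B = np.random.default_rng(2).normal(size=(4, 4))
A = (B + B.T) / 2                                # real symmetric, hence Hermitian
U = hermitian_eigenbasis(A)
D = U.conj().T @ A @ U
print(np.allclose(U.conj().T @ U, np.eye(4)))    # True: orthonormal basis
print(np.allclose(D, np.diag(np.diag(D))))       # True: T is diagonal in it
```

The step ##W \mapsto VW## works precisely because of the invariance proved above: eigenvectors of the restricted map, lifted back to ##V##, are eigenvectors of ##T## lying in ##S^{\perp}##.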
I'm not able to absorb this theorem, for two reasons. First, I don't see the motivation behind it: what exactly do we want to achieve by it? (I know this type of question can seem senseless in mathematics.) Second, the proof involves a few things which seem quite sour to me, like the involvement of ##S^{\perp}## and proving that ##T## maps it into itself.