Proof that if T is Hermitian, eigenvectors form an orthonormal basis

  • #1
Hall
Actual statement:
Assume ##\dim V = n## and let ##T: V \to V## be Hermitian or skew-Hermitian. Then there exist ##n## eigenvectors ##u_1, \cdots, u_n## of ##T## which form an orthonormal basis for ##V##. Hence, the matrix of ##T## relative to this basis is the diagonal matrix ##\Lambda = \operatorname{diag}(\lambda_1, \cdots, \lambda_n)##, where ##\lambda_k## is the eigenvalue belonging to ##u_k##.
Proof (following Tom Apostol): We proceed by induction on ##n##.

Base Case: ##n=1##. When ##n=1##, the matrix of ##T## has just one entry, and therefore the characteristic equation ##\det(\lambda I - A) = 0## has only one solution. The corresponding eigenvector, normalized, forms an orthonormal basis for ##V##.

Hypothesis: Let us assume that if ##\dim V = n-1##, then there exist ##n-1## eigenvectors ##u_1, u_2, \cdots, u_{n-1}## of ##T: V \to V## which form an orthonormal basis for ##V##.

Induction step: ##\dim V = n##.
Take any eigenvalue of ##T## and call it ##\lambda_1##; let ##u_1## be a corresponding eigenvector with norm ##1##.

##S = \text{span}(u_1)##
##S^{\perp} =## the space of all elements of ##V## which are orthogonal to ##S##

As ##u_1## is a basis for ##S##, and ##S## is a subspace of ##V##, ##u_1## can be extended to a basis for ##V##. Let that basis be ##(u_1, v_2, v_3, \cdots, v_n)##; we can assume, without loss of generality, that it is orthonormal (else we can convert it to one with the Gram-Schmidt process, keeping ##u_1## as the first basis element; see the sketch below).
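This Gram-Schmidt step is easy to make concrete. Below is a minimal NumPy sketch of extending a unit vector `u1` to an orthonormal basis of ##\mathbb{C}^n## with `u1` kept first; the function name and tolerance are my own choices, not from Apostol:

```python
import numpy as np

def extend_to_orthonormal_basis(u1):
    """Extend the unit vector u1 to an orthonormal basis of C^n,
    keeping u1 as the first basis element (Gram-Schmidt)."""
    n = u1.shape[0]
    basis = [u1]
    # Feed the standard basis vectors e_1, ..., e_n through Gram-Schmidt.
    for k in range(n):
        e = np.zeros(n, dtype=complex)
        e[k] = 1.0
        # Subtract the components along the vectors collected so far.
        v = e - sum(np.vdot(b, e) * b for b in basis)
        if np.linalg.norm(v) > 1e-10:   # skip candidates already in the span
            basis.append(v / np.linalg.norm(v))
        if len(basis) == n:
            break
    return np.column_stack(basis)       # columns are u1, v2, ..., vn
```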

Take any ##x## in ##S^{\perp}## and write
##x = x_1 u_1 + x_2 v_2 + \cdots + x_n v_n##
##\langle x, u_1 \rangle = \langle x_1 u_1, u_1 \rangle + 0##
##\langle x, u_1 \rangle = x_1##
As ##x \in S^{\perp}## and ##u_1 \in S##, their inner product is zero, so we must have ##x_1 = 0##.
That means ##x = \sum_{i=2}^{n} x_i v_i##. Conversely, each ##v_i## with ##i \geq 2## is orthogonal to ##u_1## and hence lies in ##S^{\perp}##, so ##S^{\perp} = \text{span}(v_2, \cdots, v_n)## and its dimension is ##n-1##.
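As a quick numerical sanity check of this coordinate argument (illustrative only, reusing the hypothetical helper from the sketch above): the coordinate of any ##x \in S^{\perp}## along ##u_1## does vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A random unit vector u1 in C^n (illustrative data).
u1 = rng.normal(size=n) + 1j * rng.normal(size=n)
u1 = u1 / np.linalg.norm(u1)
U = extend_to_orthonormal_basis(u1)   # from the sketch above

# Put a random x into S-perp by removing its u1-component.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
x = x - np.vdot(u1, x) * u1

coords = U.conj().T @ x               # coordinates in the basis (u1, v2, ..., vn)
print(abs(coords[0]))                 # ~1e-16: the u1-coordinate x_1 vanishes
```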

Now, we have to prove that if ##T## is applied to an element of ##S^{\perp}##, the result lands in ##S^{\perp}## itself, i.e. that ##T## maps ##S^{\perp}## into itself; only then shall we be able to apply our hypothesis to the restriction of ##T## to ##S^{\perp}##.

##\langle T(x), u_1 \rangle = \langle x, T(u_1) \rangle##, since ##T## is Hermitian.
##\langle T(x), u_1 \rangle = \langle x, \lambda_1 u_1 \rangle##
##\langle T(x), u_1 \rangle = \bar{\lambda_1} \langle x, u_1 \rangle = 0##
Thus ##T(x) \in S^{\perp}##, and ##T## maps ##S^{\perp}## into itself.
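This invariance step can also be checked numerically. A small sketch, assuming a randomly generated Hermitian ##T## as test data (my own, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Build a random Hermitian matrix: (B + B^H)/2 is always Hermitian.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T = (B + B.conj().T) / 2

# u1: a unit eigenvector of T.
_, eigvecs = np.linalg.eigh(T)
u1 = eigvecs[:, 0]

# x: a random vector with its u1-component removed, so x is in S-perp.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
x = x - np.vdot(u1, x) * u1

# <T(x), u1> vanishes, i.e. T(x) stays in S-perp.
print(abs(np.vdot(u1, T @ x)))        # ~1e-16
```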

By the induction hypothesis, applied to the restriction of ##T## to ##S^{\perp}## (which is again Hermitian), ##S^{\perp}## has an orthonormal basis of ##n-1## eigenvectors of ##T##. Adding ##u_1## to that set preserves orthonormality, and since any set of nonzero orthogonal elements is linearly independent and we now have ##n## of them, they form an orthonormal basis for ##V##.

This completes the proof.
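For what it's worth, the theorem itself is easy to sanity-check numerically: `np.linalg.eigh` returns precisely such an orthonormal eigenbasis, and conjugating by it produces the diagonal matrix ##\Lambda##. A minimal sketch with random test data of my own:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# A random Hermitian matrix A (illustrative test data).
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2

# eigh returns eigenvalues and an orthonormal set of eigenvectors (columns of U).
lam, U = np.linalg.eigh(A)

print(np.allclose(U.conj().T @ U, np.eye(n)))         # True: the u_k are orthonormal
print(np.allclose(U.conj().T @ A @ U, np.diag(lam)))  # True: Lambda = diag(lambda_k)
```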

I'm not able to absorb this theorem, for two reasons: first, I don't see the motivation behind it, I mean, what exactly do we want to achieve by it? (I know this type of question is senseless in mathematics.) Second, the proof involves a few things which seem quite sour to me, like the involvement of ##S^{\perp}## and proving that ##T## maps it into itself.
 
  • #2
Office_Shredder
Intuitively, the idea is: if we find an n-1 dimensional space ##W## such that ##T## maps ##W## to ##W##, then ##T## is a Hermitian map on ##W## and we get ##n-1## orthogonal eigenvectors by the inductive hypothesis. So we just need to find a ##W## where this works, and where the last orthogonal dimension to fill out ##V## is also an eigenvector. So a natural place to start is pick an eigenvector, and hope the orthogonal space works.
 
  • #3
Office_Shredder said:
Intuitively, the idea is: if we find an n-1 dimensional space ##W## such that ##T## maps ##W## to ##W##, then ##T## is a Hermitian map on ##W## and we get ##n-1## orthogonal eigenvectors by the inductive hypothesis. So we just need to find a ##W## where this works, and where the last orthogonal dimension to fill out ##V## is also an eigenvector. So a natural place to start is pick an eigenvector, and hope the orthogonal space works.
Actually, I was wondering when in history this theorem was proven for the first time, and whether it was really proven like this.

Some searches tell me that Cauchy was the first to embark on this, but I doubt his aim was really about diagonalizing the Hermitian matrix, because the lives of Cauchy and Hermite intersected for only a decade or a little more (Hermite was born in 1822, so no matter how intelligent he was, his contributions to maths couldn't have come before 1842, and Cauchy died in 1857).
 
  • #4
I doubt you can even point to something that would obviously be the first proof. I think it's highly likely, for example, that people figured out very early on that real symmetric ##2 \times 2## matrices are diagonalizable.
 
