Property related to Hermitian operators.

In summary, the discussion works through Shankar's proof that every Hermitian operator has an orthonormal eigenbasis, which rests on the fundamental theorem of algebra (a polynomial of degree n has n complex roots). The first row of the matrix turns out to be zero because of hermiticity, and the eigenvector is obtained after choosing a suitably adapted basis. The second characteristic equation, of degree n-1, is extracted from the matrix representation in the basis of "any n-1 vectors orthogonal to |w1>". Of its n-1 roots, one is chosen as the eigenvalue w2 with corresponding eigenvector |w2>.
  • #1
amjad-sh
Hello;
I'm reading "Principles of Quantum Mechanics" by R. Shankar.
I reached a theorem about Hermitian operators.
The theorem says: "To every Hermitian operator Ω, there exists (at least) a basis consisting of its orthonormal eigenvectors. It is diagonal in this eigenbasis and has its eigenvalues as its diagonal entries."
I will talk here about the first part of the proof: "Let us start with the characteristic equation. It must have at least one root; call it w1. Corresponding to w1 there must exist at least one nonzero eigenvector |w1>. Consider the subspace v1 (of dimension n-1) of all vectors orthogonal to |w1>. Let us choose as our basis the vector |w1>, normalized to unity, and any n-1 orthonormal vectors in v1. In this basis Ω has the following matrix: {picture1}
Column 1 is just the image of |w1> after Ω has acted on it. Given the first column, the first row follows from the hermiticity of Ω.
The characteristic equation now takes the form
$$(w_1-\omega)\cdot\det(\text{boxed submatrix})=0.$$
{picture3}
Now the polynomial P (of degree n-1) must also have at least one root; call it w2, with a corresponding normalized eigenvector |w2>.
Define the subspace v2 (of dimension n-2) of vectors in v1 orthogonal to |w2> (and automatically to |w1>), and repeat the procedure as before. Finally the matrix Ω becomes, in the basis |w1>, |w2>, …, |wn>:
{picture2}"
My question: since we can choose |w1> together with any n-1 orthonormal vectors in v1 as a basis, what guarantees that we will get the same eigenvalues if we change the chosen n-1 orthonormal vectors in v1?
Attachments: Capture1.GIF, Capture3.GIF, Capture2.GIF (picture1, picture3, picture2 respectively).
 
  • #2
If you did not get the same eigenvalues, it would not be the same operator. There is no guarantee that you will pick the eigenvectors themselves, but you see to that later.

Anyway, this seems like the long way around to me. It is essentially just the proof that the fundamental theorem of algebra is equivalent to a polynomial of degree n having n complex roots (although in the case of a Hermitian operator, all eigenvalues turn out to be real). You could just use that result to begin with.
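A minimal numpy sketch of this point (an illustration, not part of the original post): a Hermitian matrix has real eigenvalues, and rewriting it in any other orthonormal basis leaves the spectrum unchanged, because similar matrices share the same characteristic polynomial.

```python
# Sketch: eigenvalues of a Hermitian matrix are real and basis-independent.
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random Hermitian matrix: Omega = B + B^dagger.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Omega = B + B.conj().T

w = np.linalg.eigvalsh(Omega)  # real eigenvalues, sorted ascending

# Re-express Omega in a different orthonormal basis via a random unitary U.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Omega_prime = U.conj().T @ Omega @ U

print(np.allclose(w, np.linalg.eigvalsh(Omega_prime)))  # True: same spectrum
```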
 
  • #3
OK.
But there is another question: why did the first row turn out to be zero? He said it is because of hermiticity, but I didn't get it.
And there is also another question: for unitary operators we know that all eigenvectors are mutually orthogonal; is this true for Hermitian operators?
 
Last edited:
  • #4
amjad-sh said:
is this true for Hermitian operators?

As long as two eigenvectors have different eigenvalues, yes, they are orthogonal if the operator is Hermitian. I suggest attempting to prove this, because it is a fundamental result.
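For reference, the standard argument is only a couple of lines (a sketch in Dirac notation, using the fact that the eigenvalues of a Hermitian operator are real): if ##\Omega|w_1\rangle=w_1|w_1\rangle## and ##\Omega|w_2\rangle=w_2|w_2\rangle## with ##w_1\neq w_2##, then
$$w_2\langle w_1|w_2\rangle=\langle w_1|\Omega|w_2\rangle=\langle \Omega w_1|w_2\rangle=w_1\langle w_1|w_2\rangle,$$
so ##(w_2-w_1)\langle w_1|w_2\rangle=0##, and since ##w_1\neq w_2##, we must have ##\langle w_1|w_2\rangle=0##.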
 
  • #5
OK, I will.
But I think even if two eigenvectors have the same eigenvalue they will be orthogonal to each other; this is what is referred to as the eigenspace, I think.
You know the theorem above: "To every Hermitian operator Ω, there exists (at least) a basis consisting of its orthonormal eigenvectors. It is diagonal in this eigenbasis and has its eigenvalues as its diagonal entries."
I will talk here about the first part of the proof: "Let us start with the characteristic equation. It must have at least one root; call it w1."
The first characteristic equation is of degree n, so we get n roots (counted with multiplicity).
But he chose one root and then got its eigenvector: the first root is w1 and its corresponding eigenvector is |w1>.
Then he chose a basis consisting of |w1> and n-1 orthonormal vectors from the subspace v1 of all vectors orthogonal to |w1>.
Then we get the figure below.
We then solve the second characteristic equation, of degree n-1, which is extracted from the matrix representation in the basis of "any n-1 vectors orthogonal to |w1>".
We then get n-1 roots; we choose one, and it is the eigenvalue w2 with corresponding eigenvector |w2>.
|w2> is automatically orthogonal to |w1>, since |w2> is in the subspace v1.
And we repeat the same procedure until the end.
Is this explanation right?
Sorry for asking a lot.
 

Attachments

  • Capture1.GIF
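A numpy sketch of exactly this step (an illustration, not from the book): write Ω in the basis consisting of |w1> and n-1 orthonormal vectors in v1 and inspect the matrix. The first column is (w1, 0, …, 0)ᵀ because Ω|w1> = w1|w1> and the other basis vectors are orthogonal to |w1>; since the matrix must equal its own conjugate transpose, the first row vanishes as well, which also answers the question in #3.

```python
# Block structure of a Hermitian Omega in a basis adapted to |w1>.
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Random Hermitian Omega.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Omega = B + B.conj().T

# Pick one eigenpair (w1, |w1>).
w, V = np.linalg.eigh(Omega)
w1, v1 = w[0], V[:, 0]

# Complete |w1> to an orthonormal basis: QR of [v1 | random vectors].
M = np.column_stack([v1, rng.standard_normal((n, n - 1))])
Q, _ = np.linalg.qr(M)  # columns: |w1> (up to phase), then a basis of v1

# Matrix of Omega in this basis: first column is (w1, 0, ..., 0)^T,
# and hermiticity forces the first row to be (w1, 0, ..., 0) too.
Omega_new = Q.conj().T @ Omega @ Q
print(np.round(Omega_new, 10))

# The characteristic polynomial factors as (w1 - w) * det(submatrix - w I):
sub = Omega_new[1:, 1:]
print(np.allclose(np.linalg.eigvalsh(sub), w[1:]))  # True: remaining roots
```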
  • #6
amjad-sh said:
But I think even if two eigenvectors have the same eigenvalue they will be orthogonal to each other; this is what is referred to as the eigenspace, I think.

No, this is not true. If I have two orthogonal eigenvectors, I can easily create a linear combination of them that is linearly independent of either one (although not of both at the same time; it is, after all, a linear combination) and is not orthogonal to either. What you can do is find an orthogonal basis of the eigenspace, which may be what you are thinking of. You just apply any orthogonalisation algorithm (Gram-Schmidt comes to mind).
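A concrete sketch of this (illustration only): for the identity operator every vector is an eigenvector with eigenvalue 1, so it is easy to exhibit two non-orthogonal eigenvectors with the same eigenvalue and then orthogonalise them by Gram-Schmidt.

```python
# Two eigenvectors with the same eigenvalue need not be orthogonal;
# Gram-Schmidt produces an orthogonal pair that still spans the eigenspace.
import numpy as np

A = np.eye(3)                      # eigenvalue 1 with a 3-dim eigenspace

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])      # eigenvector, but <u|v> = 1 != 0

w = v - (np.dot(u, v) / np.dot(u, u)) * u   # remove the component along u
w /= np.linalg.norm(w)

print(np.dot(u, w))            # 0.0: orthogonal now
print(np.allclose(A @ w, w))   # True: still an eigenvector of A
```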
 
  • #7
But in the book I'm reading, "Principles of Quantum Mechanics" by R. Shankar, he states that two eigenvectors with the same eigenvalue are orthogonal. Here is a capture from the page:
Capture.GIF
 
  • #8
Orodruin said:
the fundamental theorem of algebra is equivalent to a polynomial of degree n having n complex roots (although in the case of a Hermitian operator, all eigenvalues turn out to be real). You could just use that result to begin with.
This way you will find that a linear operator on an n-dimensional complex vector space has between 1 and n distinct eigenvalues. This doesn't immediately imply that there's an orthonormal basis of eigenvectors.

Edit: I included an attempt to simplify the proof here, but I deleted it when I found that it was too good to be true. There was a flaw in my argument.
 
Last edited:
  • #9
amjad-sh said:
But in the book I'm reading, "Principles of Quantum Mechanics" by R. Shankar, he states that two eigenvectors with the same eigenvalue are orthogonal. Here is a capture from the page.
He's saying that if there are two eigenvectors with the same eigenvalue, then there exist two orthogonal eigenvectors with that eigenvalue. He's not saying that any two eigenvectors with the same eigenvalue are orthogonal.
 
  • #10
After reading this there might be some value in the original poster learning about the spectral theorem from linear algebra:
http://www.math.uchicago.edu/~may/VIGRE/VIGRE2008/REUPapers/Pouliot.pdf

It's only valid for finite-dimensional spaces, and extending it to the infinite-dimensional spaces of QM is a royal pain in the toosh, leading to all sorts of arcane areas like the Stieltjes integral or rigged Hilbert spaces. But I always felt that knowing the finite-dimensional case first really helps in understanding the rest.

Thanks
Bill
 
  • #11
I deduced from the replies that for a Hermitian operator, eigenvectors with distinct eigenvalues must be orthogonal to each other.
But if they have the same eigenvalue they are not necessarily orthogonal, although we can transform them into orthogonal ones by the Gram-Schmidt procedure (or orthogonal ones may already exist).
amjad-sh said:
I will talk here about the first part of the proof: "Let us start with the characteristic equation. It must have at least one root; call it w1."
The first characteristic equation is of degree n, so we get n roots (counted with multiplicity).
But he chose one root and then got its eigenvector: the first root is w1 and its corresponding eigenvector is |w1>.
Then he chose a basis consisting of |w1> and n-1 orthonormal vectors from the subspace v1 of all vectors orthogonal to |w1>.
Then we get the figure below.
We then solve the second characteristic equation, of degree n-1, which is extracted from the matrix representation in the basis of "any n-1 vectors orthogonal to |w1>".
We then get n-1 roots; we choose one, and it is the eigenvalue w2 with corresponding eigenvector |w2>.
|w2> is automatically orthogonal to |w1>, since |w2> is in the subspace v1.
And we repeat the same procedure until the end.
Is this explanation right?
Is what I said in the above quote logical?
Thanks.
 
  • #12
amjad-sh said:
I deduced from the replies that for a Hermitian operator, eigenvectors with distinct eigenvalues must be orthogonal to each other.
But if they have the same eigenvalue they are not necessarily orthogonal, although we can transform them into orthogonal ones by the Gram-Schmidt procedure (or orthogonal ones may already exist).
That's right.

amjad-sh said:
Is what I said logic in the above quote?
Thanks.
What you're describing in the quoted text is how to find an orthonormal ordered basis ##(w_1,\dots,w_n)## such that the matrix corresponding to a self-adjoint linear operator is diagonal. It's possible that for example ##w_2## and ##w_5## correspond to the same eigenvalue. In that case they're still orthogonal to each other, because of the way this ordered basis was constructed. Gram-Schmidt is a way to explain how this is possible. There's a theorem that says that eigenvectors corresponding to different eigenvalues are orthogonal. This ensures that all the eigenspaces are mutually orthogonal. If an eigenspace isn't 1-dimensional, then you can pick any basis for it, and use Gram-Schmidt to turn the basis into an orthonormal basis.
 
Last edited:
  • #13
bhobba said:
After reading this there might be some value in the original poster learning about the spectral theorem from linear algebra:
http://www.math.uchicago.edu/~may/VIGRE/VIGRE2008/REUPapers/Pouliot.pdf
The theorem discussed here is the spectral theorem for self-adjoint linear operators on a finite-dimensional complex inner product space.

I didn't look closely enough at the theorem before. I thought we were discussing the theorem that says that for all linear operators A, there's an orthonormal ordered basis E such that the matrix ##[A]_E## is upper triangular. The spectral theorem is a corollary of this one, because once we have proved this result, and that the matrix of ##A^*## is the conjugate transpose of the matrix of A, we will know that the matrix of ##A^*## (in the same ordered basis) is lower triangular. So if ##A^*=A##, the matrix is both upper triangular and lower triangular, and therefore diagonal.
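This can be checked numerically (a scipy sketch, not part of the original post): the complex Schur decomposition ##A=ZTZ^\dagger## produces an upper triangular ##T## for any ##A##, and when ##A## is Hermitian, ##T## comes out diagonal.

```python
# Schur form T is upper triangular in general, diagonal for Hermitian input.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

T, Z = schur(A, output='complex')
print(np.allclose(T, np.triu(T)))  # True: upper triangular

H = A + A.conj().T                 # Hermitian
T_h, Z_h = schur(H, output='complex')
print(np.allclose(T_h, np.diag(np.diagonal(T_h))))  # True: diagonal
```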
 
Last edited:
  • #14
OK, thanks all for helping. :smile:
 
  • #15
I worked out a way to fix the broken proof that I posted yesterday and then deleted, so I might as well post the improved version. (A comment to make it easier for me to find this post in the future: This post proves the spectral theorem for self-adjoint linear operators on a complex finite-dimensional inner product space).

Lemma: Every linear operator has an eigenvalue.

Proof: The eigenvalue equation ##Ax=\lambda x## has a non-zero solution if and only if ##\lambda## is a root of the degree-n polynomial ##\det(A-\lambda I)##. By the fundamental theorem of algebra, every non-constant complex polynomial has a root.
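(A quick numerical check of this lemma, as a numpy sketch: the characteristic polynomial of an ##n\times n## matrix has ##n## complex roots, and they are precisely the eigenvalues.)

```python
# Roots of det(A - lambda I) coincide with the eigenvalues of A.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

coeffs = np.poly(A)        # characteristic polynomial coefficients
roots = np.roots(coeffs)   # its 4 complex roots (fundamental theorem)
eigs = np.linalg.eigvals(A)

print(np.allclose(np.sort_complex(roots), np.sort_complex(eigs)))  # True
```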

Lemma: If A is self-adjoint and M is a subspace that has a basis of eigenvectors of A, then ##M^\perp## is invariant under A.

Comments: A subspace K is said to be invariant under A if Ax is in K for all x in K. We need this lemma because we will need to know that the restriction of A to ##M^\perp## is a self-adjoint linear operator on ##M^\perp##.

Proof: Let ##x\in M^\perp## and ##y\in M## be arbitrary. Let ##\{v_i\}_{i=1}^m## be a basis for M such that ##Av_i=\lambda_i v_i## for all i, and write ##y=\sum_{i=1}^m y_i v_i##. Since
$$\langle y,Ax\rangle =\langle A^*y,x\rangle =\langle Ay,x\rangle =\sum_{i=1}^m y_i^*\lambda_i^*\langle v_i,x\rangle=0,$$ we have ##Ax\in M^\perp##.

Theorem: If A is self-adjoint, there's an orthonormal ordered basis V such that ##[A]_V## is diagonal.

Proof: Let ##\lambda_1## be an eigenvalue of A. (The first lemma guarantees that one exists). Let ##v_1## be an eigenvector with eigenvalue ##\lambda_1## and norm 1. Define ##M_2=(\operatorname{span}\{v_1\})^\perp##. The second lemma ensures that the restriction of A to ##M_2## is a self-adjoint linear operator on ##M_2##. Let ##\lambda_2## be an eigenvalue of this operator. Let ##v_2## be an eigenvector of this operator with eigenvalue ##\lambda_2## and norm 1. Note that ##v_2## is an eigenvector of ##A## with eigenvalue ##\lambda_2## and that ##\{v_1,v_2\}## is an orthonormal set. Define ##M_3=(\operatorname{span}\{v_1,v_2\})^\perp##. Repeat this procedure until we have found an orthonormal ordered basis ##V=(v_1,\dots,v_n)## of eigenvectors of ##A##. For all ##i,j\in\{1,\dots,n\}##, we have
$$([A]_V)_{ij}=\langle v_i,Av_j\rangle =\lambda_j\langle v_i,v_j\rangle =\lambda_j\delta_{ij}.$$
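The proof is constructive, so it can be transcribed almost line by line (a numpy sketch, illustrative only; `eigh` is used merely to produce the one eigenpair whose existence the first lemma guarantees):

```python
# Deflation following the proof: pick an eigenpair, restrict A to the
# orthogonal complement, repeat. For self-adjoint A, [A]_V is diagonal.
import numpy as np

def eigenbasis(A):
    """Orthonormal eigenbasis of a Hermitian matrix A, by deflation."""
    n = A.shape[0]
    V = np.zeros((n, n), dtype=complex)
    Q = np.eye(n, dtype=complex)      # orthonormal basis of the current M_i
    for i in range(n):
        B = Q.conj().T @ A @ Q        # restriction of A to M_i (second lemma)
        w, U = np.linalg.eigh(B)      # one eigenpair exists (first lemma)
        V[:, i] = Q @ U[:, 0]         # v_i: unit eigenvector of A inside M_i
        Q = Q @ U[:, 1:]              # orthonormal basis of M_{i+1}
    return V

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B + B.conj().T                    # self-adjoint test matrix

V = eigenbasis(A)
D = V.conj().T @ A @ V                # [A]_V
print(np.allclose(D, np.diag(np.diagonal(D).real)))  # True: diagonal
print(np.allclose(V.conj().T @ V, np.eye(5)))        # True: orthonormal
```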
 
Last edited:
  • #16
I've been thinking about how to prove the more general theorem about upper triangular matrices in a similar way (i.e. without drawing pictures of matrices in the intermediate steps). The proof I came up with requires you to understand orthogonal projections. In particular, you need to know that they are self-adjoint and that if M,N are subspaces such that ##N\subseteq M##, then the corresponding orthogonal projections satisfy ##P_N=P_NP_M##.

Theorem: If A is a linear operator, there's an orthonormal basis V such that ##[A]_V## is upper triangular.

Proof: Let ##M_1## be the entire vector space and let ##P_1## be the corresponding projection operator (i.e. the identity map). Let ##\lambda_1##, ##v_1## and ##M_2## be as before. Let ##P_2## be the orthogonal projection onto ##M_2##. Let ##\lambda_2## be an eigenvalue of ##P_2A## (restricted to ##M_2##, which it maps into itself). Let ##v_2## be an eigenvector of ##P_2A## with eigenvalue ##\lambda_2## and norm 1. Define ##M_3=(\operatorname{span}\{v_1,v_2\})^\perp##. Repeat this procedure until we have found an orthonormal ordered basis ##V=(v_1,\dots,v_n)## such that ##P_iAv_i=\lambda_i v_i## for all ##i\in\{1,\dots,n\}##. Let ##i,j## be arbitrary elements of ##\{1,\dots,n\}## such that ##i>j##. We have ##M_i\subseteq M_j## and therefore ##P_i=P_iP_j##. This implies that
\begin{align}
&([A]_V)_{ij}=\langle v_i,Av_j\rangle =\langle P_iv_i,Av_j\rangle =\langle v_i,P_iA v_j\rangle =\langle v_i,P_iP_j Av_j\rangle =\lambda_j\langle v_i,P_iv_j\rangle\\
&=\lambda_j \langle P_iv_i,v_j\rangle =\lambda_j \langle v_i,v_j\rangle =0.
\end{align}
(The calculation can be made slightly shorter by observing that since ##P_i## is the orthogonal projection onto the orthogonal complement of a subspace that contains ##v_j##, we have ##P_iv_j=0##).
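The same loop, run on a general (non-Hermitian) matrix, builds exactly the basis this proof describes (a sketch; since the restricted matrix is no longer Hermitian, `eig` replaces `eigh`, and a QR step supplies an orthonormal basis of the complement):

```python
# Hand-built Schur decomposition: at each step take an eigenvector of
# P_i A restricted to M_i; the resulting [A]_V is upper triangular.
import numpy as np

rng = np.random.default_rng(5)

def triangular_basis(A):
    n = A.shape[0]
    V = np.zeros((n, n), dtype=complex)
    Q = np.eye(n, dtype=complex)             # orthonormal basis of M_i
    for i in range(n):
        m = n - i
        B = Q.conj().T @ A @ Q               # P_i A restricted to M_i
        _, U = np.linalg.eig(B)
        u = U[:, 0] / np.linalg.norm(U[:, 0])
        V[:, i] = Q @ u                      # v_i with P_i A v_i = lambda_i v_i
        if m > 1:
            # Orthonormal basis of the complement of u inside M_i.
            W, _ = np.linalg.qr(np.column_stack([u, rng.standard_normal((m, m - 1))]))
            Q = Q @ W[:, 1:]
    return V

A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
V = triangular_basis(A)
T = V.conj().T @ A @ V                       # [A]_V
print(np.allclose(T, np.triu(T)))            # True: upper triangular
```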
 

Related to Property related to Hermitian operators.

1. What is a Hermitian operator?

A Hermitian operator is a type of operator used in quantum mechanics to represent physical properties of a system. It is defined as an operator that is equal to its own adjoint, or conjugate transpose.

2. How is a Hermitian operator related to properties of a system?

A Hermitian operator represents an observable property of a quantum system, such as energy or momentum. The eigenvalues of a Hermitian operator correspond to the possible outcomes of a measurement of that property.
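For instance (a small added illustration): the Pauli matrix ##\sigma_x##, which represents the x-component of spin, is Hermitian, and its eigenvalues ±1 are the two possible outcomes of measuring that spin component.

```python
# sigma_x is Hermitian; its eigenvalues are the measurement outcomes -1, +1.
import numpy as np

sigma_x = np.array([[0, 1],
                    [1, 0]], dtype=complex)

print(np.allclose(sigma_x, sigma_x.conj().T))  # True: Hermitian
print(np.linalg.eigvalsh(sigma_x))             # [-1.  1.]
```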

3. Can any operator be Hermitian?

No, not all operators are Hermitian. For an operator to be Hermitian, its adjoint must be equal to the operator itself. Having real eigenvalues alone is not enough: a matrix such as [[1, 1], [0, 1]] has only real eigenvalues but is not Hermitian. An operator is Hermitian exactly when it is unitarily diagonalizable with all-real eigenvalues.

4. What is the significance of a Hermitian operator being self-adjoint?

Being self-adjoint means the operator is equal to its own adjoint, and this gives it special structure: its eigenvalues are real and it admits an orthonormal basis of eigenvectors (the spectral theorem discussed above). This allows for simpler mathematical calculations and has important implications for the measurement and interpretation of physical properties.

5. How are Hermitian operators used in quantum mechanics?

Hermitian operators are used in quantum mechanics to represent observable physical properties of a system. They play a crucial role in the mathematical formalism of quantum mechanics, allowing for predictions of the behavior and outcomes of measurements in a quantum system.
