Proving Linear Independence of Eigenfunctions of a Hermitian Operator

In summary: a set of vectors is linearly independent if the only solution to a vanishing linear combination is that all coefficients are zero; the thread uses this to show that eigenfunctions of a Hermitian operator corresponding to distinct eigenvalues are linearly independent.
  • #1
ognik
Given $ u_1, u_2 $ are eigenfunctions of the same Hermitian operator, with distinct eigenvalues $ \lambda_1, \lambda_2 $, show that $ u_1, u_2 $ are linearly independent.

If they are LI, then $ \alpha_1u_1+\alpha_2u_2=0 $ (1)
Now $ Hu_1=\lambda_1u_1, Hu_2=\lambda_2u_2, \therefore H(\alpha_1u_1+\alpha_2u_2)=0,$
$ \therefore \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_2 u_2=0$ (2)
$ (1)*\lambda_1 = \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2=0 $ (3)
$ (2) - (3): \alpha_2 u_2 (\lambda_2 - \lambda_1) = 0 $
Eigenfunctions cannot be 0, and $\lambda_1 \ne \lambda_2, \therefore \alpha_2 = 0 $

We can similarly show that for the linear combination to equal $0$, $\alpha_1 = 0 $.
I based this on "If linear combo of vectors = 0 and the only solution is all coefficients = 0, then the vectors are LI"
I just feel vaguely hesitant about this, as the logic seems slightly circular?
 
  • #2
You should assume your equation (1). That is, you're trying to show that $\alpha_1 u_1+\alpha_2 u_2=0$ forces or implies that $\alpha_1=\alpha_2=0$. So if you assume (1) holds, and show that the alphas must vanish, you're done. I'm not sure the rest of your logic holds, though. In particular, right around (1), it seems to be a bit shaky. I would recommend using inner products, and your knowledge of how Hermitian operators behave.
 
  • #3
Ackbach said:
I would recommend using inner products, and your knowledge of how Hermitian operators behave.
The only Hermitian property that seemed potentially useful was orthogonality - and the previous exercise had already asked us to prove that orthogonal eigenvectors are linearly independent (multiplying my (1) by each of the vectors in turn, using $u_1 \cdot u_2 = 0$ etc., also shows the coefficients are 0), so it seemed they wanted a different approach ...

Could you be more specific where you think the logic is shaky please?
 
  • #4
Try showing that the eigenvectors corresponding to distinct eigenvalues are orthogonal. Then you can use your previous result.

As for shaky logic, you wrote:

Given $ u_1, u_2 $ are eigenfunctions of the same Hermitian operator, with distinct eigenvalues $ \lambda_1, \lambda_2 $, show that $ u_1, u_2 $ are linearly independent.

If they are LI, then $ \alpha_1u_1+\alpha_2u_2=0 $ (1)
Now $ Hu_1=\lambda_1u_1, Hu_2=\lambda_2u_2, \therefore H(\alpha_1u_1+\alpha_2u_2)=0,$
$ \therefore \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_2 u_2=0$ (2)

So far, so good.

$ (1)*\lambda_1 = \alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2=0 $ (3)

It's not at all clear what you mean here. The eigenvalue $\lambda_1$ does not equal $\alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2$. Moreover, why the eigenvalue changed subscripts in the second term is unclear as well. What property are you using to change that subscript?

In general, your mindset should be this: every single symbol on one line must survive to the next line, unless you invoke a particular property to change it. In fact, there's got to be some sort of "Conservation of Symbols" law in algebra. I just need to formulate it correctly.

$ (2) - (3): \alpha_2 u_2 (\lambda_2 - \lambda_1) = 0 $
Eigenfunctions cannot be 0, and $\lambda_1 \ne \lambda_2, \therefore \alpha_2 = 0 $

This sort of idea is what you need, but I would go with inner products to show orthogonality of eigenfunctions. Then they must be linearly independent.
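
A minimal sketch of that inner-product argument, assuming the usual notation $\langle \cdot, \cdot \rangle$ for the inner product, the Hermitian property $\langle Hu, v \rangle = \langle u, Hv \rangle$, and the fact that Hermitian eigenvalues are real (so conjugates can be dropped):

$$\lambda_1 \langle u_1, u_2 \rangle = \langle Hu_1, u_2 \rangle = \langle u_1, Hu_2 \rangle = \lambda_2 \langle u_1, u_2 \rangle \;\Longrightarrow\; (\lambda_1 - \lambda_2)\langle u_1, u_2 \rangle = 0,$$

and since $\lambda_1 \ne \lambda_2$, this forces $\langle u_1, u_2 \rangle = 0$, i.e. the eigenfunctions are orthogonal; the earlier result then gives linear independence.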
 
  • #5
I think the idea of your proof is OK, the exposition is a bit off.

Suppose $\alpha_1u_1 + \alpha_2u_2 = 0$.

Since $H$ is LINEAR, it follows that $H(\alpha_1u_1 + \alpha_2u_2) = H(0) = 0$.

However, since $u_1,u_2$ are eigenfunctions with eigenvalues $\lambda_1,\lambda_2$ we have:

$H(\alpha_1u_1 + \alpha_2u_2) = \lambda_1\alpha_1u_1 + \lambda_2\alpha_2u_2 = 0$ ($\ast$)

(note we used the linearity of $H$ implicitly, without explanation).

Now $\lambda_1\alpha_1u_1 + \lambda_1\alpha_2u_2 = \lambda_1(\alpha_1u_1 + \alpha_2u_2) = \lambda_1 0 = 0$ ($\ast\ast$).

Subtracting ($\ast$) from ($\ast\ast$) yields:

$(\lambda_1 - \lambda_2)\alpha_2u_2 = 0$.

Since $u_2$ is an eigenfunction, $u_2$ is not the $0$-function, by definition.

Therefore $(\lambda_1 - \lambda_2)\alpha_2 = 0$ (this is a scalar). Since we are in a FIELD:

either $\lambda_1 - \lambda_2 = 0$, or $\alpha_2 = 0$.

But $\lambda_1 - \lambda_2 = 0 \implies \lambda_1 = \lambda_2$.

As these eigenvalues were assumed distinct, it must be that $\alpha_2 = 0$.

Now, with $\alpha_2 = 0$, the equation $\alpha_1u_1 + \alpha_2u_2 = 0$ becomes:

$\alpha_1u_1 = 0$, from which it is immediate that $\alpha_1 = 0$ as well (since $u_1$, being an eigenfunction, is not the $0$-function), and LI is established.

***********

It appears to me that this is what you MEANT to express, but taking care with the details is important (for CLARITY's sake).
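
As a minimal numerical sanity check of this result (the $2\times 2$ Hermitian matrix below is an arbitrary illustrative choice, and numpy is assumed to be available), the eigenvectors belonging to distinct eigenvalues come out orthogonal, and hence linearly independent:

```python
import numpy as np

# An arbitrary 2x2 Hermitian matrix, chosen only for illustration.
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(H, H.conj().T)  # Hermitian: equal to its own conjugate transpose

# np.linalg.eigh returns real eigenvalues (ascending) and orthonormal eigenvectors as columns.
eigvals, eigvecs = np.linalg.eigh(H)
u1, u2 = eigvecs[:, 0], eigvecs[:, 1]

print("eigenvalues:", eigvals)                              # real and distinct here: 1.0 and 4.0
print("<u1, u2> =", np.vdot(u1, u2))                        # ~0, so the eigenvectors are orthogonal
print("rank of [u1 u2]:", np.linalg.matrix_rank(eigvecs))   # 2, so they are linearly independent
```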
 
  • #6
Ackbach said:
It's not at all clear what you mean here. The eigenvalue $\lambda_1$ does not equal $\alpha_1 \lambda_1 u_1+\alpha_2 \lambda_1 u_2$. Moreover, why the eigenvalue changed subscripts in the second term is unclear as well. What property are you using to change that subscript?
Sorry, I multiplied my equation (1) by $\lambda_1$ precisely to get that change of subscript in the resulting equation (3); then (2) - (3) leads to one coefficient being 0 ...
 
  • #7
Deveno said:
Since $H$ is LINEAR, it follows that $H(\alpha_1u_1 + \alpha_2u_2) = H(0) = 0$.
Aren't all operators linear? Either way, I don't get why H(some linear combo) = 0?

Later you justify $\alpha_2 (\lambda_1 - \lambda_2) $ as a scalar because we're in a field - do we need that field justification? The $\lambda$'s are always scalars (real) for a Hermitian operator? And the constants are also real scalars - my book defines a linear combo of vectors as a real scalar times each vector.
 
  • #8
ognik said:
Aren't all operators linear? Either way, I don't get why H(some linear combo) = 0?

Later you justify $\alpha_2 (\lambda_1 - \lambda_2) $ as a scalar because we're in a field - do we need that field justification? The $\lambda$'s are always scalars (real) for a Hermitian operator? And the constants are also real scalars - my book defines a linear combo of vectors as a real scalar times each vector.

For a linear operator $L$, we have:

$L(u+v) = L(u) + L(v)$ for any vectors (functions), $u,v$.

In particular, $L(0) = L(0) + L(0)$, so subtracting $L(0)$ from both sides gives $0 = L(0)$.

Since our original linear combination is assumed to be $0$, it follows that $H$ of that linear combination is likewise $0$.

The property that $ab = 0 \implies a = 0$ or $b = 0$ actually holds for a wider class of algebraic structures called *integral domains*, which every field happens to be. There ARE structures for which this is NOT true (see the example below), and it is possible to have "vector-space-like" structures (called MODULES) with non-integral-domains as the "scalars".
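
A standard example of such a structure: in the ring $\mathbb{Z}/6\mathbb{Z}$ of integers modulo $6$, we have $2 \cdot 3 = 6 \equiv 0$ even though neither $2$ nor $3$ is $0$, so $\mathbb{Z}/6\mathbb{Z}$ is not an integral domain.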

Vector spaces occur "over" an underlying field of scalars. It is not a "given" that this field is the field of real numbers. While eigenvalues of a Hermitian operator are always real (a short derivation is sketched below), there is no reason to so restrict linear combinations a priori, and in quantum mechanics, for example, the wave-functions are often complex-valued, with only their MAGNITUDES being real.
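
A minimal sketch of why those eigenvalues are real, assuming an inner product that is conjugate-symmetric and conjugate-linear in its first argument: if $Hu = \lambda u$ with $u \ne 0$, then

$$\lambda \langle u, u \rangle = \langle u, Hu \rangle = \langle Hu, u \rangle = \overline{\lambda}\langle u, u \rangle,$$

and since $\langle u, u \rangle > 0$, this forces $\lambda = \overline{\lambda}$, i.e. $\lambda$ is real.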
 

