Eigenfunctions problem - I have the answer, explanation required

In summary, the conversation discusses why a lecturer's derivation uses two subscripts, m and n. The two indices distinguish the two independent sums that arise when an expansion and its complex conjugate are multiplied together, and they keep the derivation general. The conversation also explores the meaning of orthonormality and shows how the Kronecker delta eliminates every term of the double sum except those with m = n, leaving a final expression with a single summation.
  • #1
Brewer

Homework Statement


[Attachment: question.jpg]



Now in a revision lecture given a few weeks ago, the lecturer gave this as the answer.



The Attempt at a Solution


[Attachment: work2.jpg]


Now I think generally I'm fine with it (apart from the fact that it doesn't seem very obvious that this is what you should do with the maths!).

BUT
1) Where do the subscript m's come from? Why aren't they still n's, like the ones used for the conjugate of [tex]\psi[/tex]?

2) How come the delta function suddenly gets rid of the sum over m, and itself, whilst simultaneously changing the [itex]a_m[/itex] to [itex]a_n[/itex]?

And what's a good definition of orthogonality? A simple layman's definition!
 
  • #2
Brewer said:
1) Where do the subscript m's come from? Why aren't they still n's, like the ones used for the conjugate of [tex]\psi[/tex]?
There are two independent summation indices, n and m, because you are multiplying two sums and combining them into one double sum. For example, [itex](a_1+ a_2+ a_3)(b_1+ b_2+ b_3)= a_1b_1+ a_1b_2+ a_1b_3+ a_2b_1+ a_2b_2+ a_2b_3+ a_3b_1+ a_3b_2+ a_3b_3[/itex], where I have used "b" for the second sum for clarity. Notice that multiplying two sums of 3 terms each gives a 9-term sum as the product. In general, multiplying two sums of N terms each gives a sum of [itex]N^2[/itex] terms.
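To write this step out in the thread's notation (a sketch, assuming the expansion [tex]\psi = \sum_n a_n u_n[/tex] in the orthonormal eigenfunctions [tex]u_n[/tex] that post #4 refers to), multiplying the conjugated expansion by the original gives a double sum:

[tex]\int \psi^{*} \psi \, dx = \int \left( \sum_m a_m^{*} u_m^{*} \right) \left( \sum_n a_n u_n \right) dx = \sum_m \sum_n a_m^{*} a_n \int u_m^{*} u_n \, dx[/tex]

The m labels the terms coming from [tex]\psi^{*}[/tex] and the n labels the terms coming from [tex]\psi[/tex].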

Brewer said:
2) How come the delta function suddenly gets rid of the sum over m, and itself, whilst simultaneously changing the [itex]a_m[/itex] to [itex]a_n[/itex]?
If the eigenfunctions are orthonormal, so that [itex]\int u_m^{*} u_n \, dx= \delta_{mn}[/itex], where [itex]\delta_{mn}[/itex] is defined as 1 if m= n and 0 otherwise, then the nine integrals in a sum like the one above become 1+ 0+ 0+ 0+ 1+ 0+ 0+ 0+ 1, and the 9 terms reduce to three: 1+ 1+ 1 (of course, each "1" is multiplied by some coefficient term).
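Here is a quick numerical sketch of that collapse in Python (the coefficients are made up purely for illustration):

[code=python]
import numpy as np

# Made-up expansion coefficients, purely for illustration.
a = np.array([0.5 + 0.2j, -0.3 + 0.1j, 0.7 - 0.4j])
N = len(a)

# Double sum with the Kronecker delta: sum_m sum_n conj(a_m) a_n delta_mn.
double_sum = sum(np.conj(a[m]) * a[n] * (1 if m == n else 0)
                 for m in range(N) for n in range(N))

# Single sum left after the delta kills every m != n term: sum_n |a_n|^2.
single_sum = np.sum(np.abs(a) ** 2)

print(double_sum, single_sum)  # the two values agree
[/code]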

Brewer said:
And what's a good definition of orthogonality? A simple layman's definition!
The basic definition is that two vectors are orthogonal if and only if they are perpendicular. Of course, if your "vectors" are abstract functions then you have to think about what you mean by "orthogonal". We say that two vectors are orthogonal if and only if some inner product is 0. For real-valued functions this is [itex]\int f(x)g(x)dx[/itex], with the integral taken over some interval; for complex-valued functions such as wavefunctions it is [itex]\int f^{*}(x)g(x)dx[/itex].

A set of vectors is "orthonormal" if each has length 1 (the "normal" part) and any two different vectors are perpendicular (the "ortho" part). That is the same as saying that the inner product of a vector in the set with itself is 1 and with any other vector in the set is 0: exactly the Kronecker delta [itex]\delta_{mn}[/itex].
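As a concrete sketch in Python (using the standard particle-in-a-box eigenfunctions [itex]u_n(x) = \sqrt{2/L}\,\sin(n \pi x / L)[/itex] as an example; these are not taken from the attached problem), numerical integration reproduces exactly this pattern of 1s and 0s:

[code=python]
import numpy as np
from scipy.integrate import quad

L = 1.0  # box width, an arbitrary choice for the example

def u(n, x):
    """Normalized particle-in-a-box eigenfunction u_n(x) = sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Inner products <u_m|u_n> over [0, L]: approximately 1 when m == n, 0 otherwise.
for m in range(1, 4):
    for n in range(1, 4):
        val, _ = quad(lambda x: u(m, x) * u(n, x), 0.0, L)
        print(f"<u_{m}|u_{n}> = {val:.6f}")
[/code]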
 
  • #3
So:

[itex]a_m[/itex] is written as such just because it's used for [tex]\psi[/tex] rather than [tex]\psi^{*}[/tex], just as a way to differentiate between the two sets of eigenvalues that arise from using the conjugate?

If that is the case, why couldn't [itex]a_n^{*}[/itex] and [itex]a_n[/itex] be used from the very start?

And the delta function disappears because it is set to 1, and therefore [itex]a_n = a_m[/itex], so one can be written as the other? And as a result there is no need for the sigma over m?

Thanks for your help.
 
  • #4
Brewer said:
So:

[itex]a_m[/itex] is written as such just because it's used for [tex]\psi[/tex] rather than [tex]\psi^{*}[/tex], just as a way to differentiate between the two sets of eigenvalues that arise from using the conjugate?

If that is the case, why couldn't [itex]a_n^{*}[/itex] and [itex]a_n[/itex] be used from the very start?

And the delta function disappears because it is set to 1, and therefore [itex]a_n = a_m[/itex], so one can be written as the other? And as a result there is no need for the sigma over m?

Thanks for your help.

The reason for using the m's and n's is really to make the derivation more general. If, for instance, you have two different eigenfunctions so that you have [tex]\psi_{m}[/tex] and [tex]\psi_{n}[/tex], then taking their inner product would give you:

[tex]\int \psi^{*}_{m} \psi_{n} dx[/tex]

Now, because the set of all the [tex]\psi[/tex]s is orthonormal, this inner product is 0 unless m=n. That is what orthonormality really means (that, and the functions are normalized; i.e. integrating [itex]|\psi_{n}|^{2}[/itex] over all space gives you 1).

When you get to the summation over n and m, all you are doing is substituting in for what [tex]\psi^{*}_{m}[/tex] and [tex]\psi_{n}[/tex] are defined to be. The n and m are just two indices that you sum over. The idea here is that you sum over them independently of each other, not all at once. This is why there are two. Then you use the condition that the u's are also orthonormal, pull out the summations, and use the Kronecker delta. Essentially, the Kronecker delta kills all the terms in the summation over m except for one, when m=n. So you just drop the summation over m and change all the m subscripts to n's.
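Written out in full, the chain looks like this (again a sketch, assuming [tex]\psi = \sum_n a_n u_n[/tex] with orthonormal u's):

[tex]\int \psi^{*} \psi \, dx = \sum_m \sum_n a_m^{*} a_n \int u_m^{*} u_n \, dx = \sum_m \sum_n a_m^{*} a_n \delta_{mn} = \sum_n a_n^{*} a_n = \sum_n |a_n|^{2}[/tex]

The delta removes the sum over m (every m ≠ n term is 0), and in each surviving term m is forced equal to n, which is why the m subscripts turn into n's.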
 

FAQ: Eigenfunctions problem - I have the answer, explanation required

What is an eigenfunction?

An eigenfunction of a linear operator is a function that the operator sends to a scalar multiple of itself: applying the operator gives back the same function, multiplied by a constant called the eigenvalue. Eigenfunctions are important in the study of differential equations and linear algebra.
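In symbols, an eigenfunction f of a linear operator [tex]\hat{A}[/tex] satisfies [tex]\hat{A} f = \lambda f[/tex], where the constant [tex]\lambda[/tex] is the eigenvalue. A standard textbook example (not taken from the thread):

[tex]\frac{d^{2}}{dx^{2}} \sin(kx) = -k^{2} \sin(kx)[/tex]

so [itex]\sin(kx)[/itex] is an eigenfunction of the second-derivative operator with eigenvalue [itex]-k^{2}[/itex].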

What is the significance of eigenfunctions in science?

Eigenfunctions are used to solve many important problems in science, such as finding the energy levels of atoms and molecules, analyzing the behavior of quantum systems, and understanding the dynamics of physical systems. They also have applications in image and signal processing, data analysis, and machine learning.

What is the difference between eigenfunctions and eigenvectors?

They express the same idea in different settings. An eigenvector of a matrix is a vector that the matrix maps to a scalar multiple of itself, while an eigenfunction plays the same role when a linear operator acts on functions instead of finite-dimensional vectors. In both cases the scalar multiple is the eigenvalue, which represents the factor by which the eigenvector or eigenfunction is scaled.
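A minimal Python sketch of the matrix case (the matrix here is made up purely for illustration):

[code=python]
import numpy as np

# A made-up symmetric 2x2 matrix, purely for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check the defining property A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ v, lam * v))  # prints each eigenvalue and True
[/code]

For this matrix the eigenvalues come out as 3 and 1, with eigenvectors along (1, 1) and (1, -1).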

How are eigenfunctions calculated and solved?

To find the eigenfunctions of a given system, one must solve an eigenvalue problem: finding the values of the constants (the eigenvalues) and the corresponding functions (the eigenfunctions) that satisfy a given equation, typically a differential equation together with boundary conditions. Simple systems can be solved analytically; otherwise the operator can be discretized into a matrix and handled with numerical matrix eigenvalue methods such as the power method or the Jacobi method.
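A Python sketch of the numerical route (discretizing [itex]-d^{2}/dx^{2}[/itex] on (0, 1) with zero boundary conditions, chosen because its exact eigenvalues [itex](n\pi)^{2}[/itex] are known and the approximation can be checked):

[code=python]
import numpy as np

# Discretize -d^2/dx^2 on (0, 1) with u(0) = u(1) = 0 using finite differences.
N = 200                # number of interior grid points
h = 1.0 / (N + 1)      # grid spacing

# Standard tridiagonal finite-difference matrix for -d^2/dx^2.
main = 2.0 * np.ones(N) / h**2
off = -1.0 * np.ones(N - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The matrix is symmetric, so use the symmetric eigensolver.
eigenvalues = np.linalg.eigvalsh(A)  # returned in ascending order

# Compare the lowest few approximate eigenvalues with the exact (n*pi)^2.
for n in range(1, 4):
    print(f"approx: {eigenvalues[n - 1]:.4f}   exact: {(n * np.pi)**2:.4f}")
[/code]

The columns of the corresponding eigenvector matrix approximate the eigenfunctions sampled on the grid.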

Can eigenfunctions have real-life applications?

Yes, eigenfunctions have a wide range of real-life applications in various fields such as physics, chemistry, engineering, and computer science. They are used to model and understand complex systems, make predictions, and solve important problems. For example, eigenfunctions are used in quantum mechanics to describe the behavior of particles and in signal processing to analyze and manipulate data.
