How to find the expectation value for a combined state?

In summary: for a state expanded in orthonormal eigenstates of an observable ##Q##, the expectation value is ##\left<Q\right> = \sum_{n} \left|C_n\right|^2 q_n##. In the problem below, the coefficients of ##Y_1^1## and ##Y_1^{-1}## are equal, so their ##+\hbar## and ##-\hbar## contributions cancel and ##\left<L_z\right> = 0##; ##\left<L^2\right>## follows in the same way, with every term contributing ##\hbar^2 l(l+1) = 2\hbar^2##.
  • #1
anlon

Homework Statement


Given ##\psi = AR_{21}[BY_1^1 + BY_1^{-1} + CY_1^0]##, find ##\left<L_z\right>## and ##\left<L^2\right>##. (This is not the beginning of the homework problem, but I know my work is correct up to here. I am not looking for a solution, only an answer as to whether or not my method of finding expectation values is correct, thus why I have generalized the problem with these coefficients instead of using the actual coefficients from the problem.)
Here, ##A##, ##B##, and ##C## are constants, ##Y_l^m## is the angular wave function for an electron in a hydrogen atom in the state ##\left|nlm\right>##, and similarly ##R_{nl}## is the electron's radial wave function.

Homework Equations


##L_z f_l^m = \hbar m f_l^m##
##L^2 f_l^m = \hbar^2 l (l+1) f_l^m##
##\left<Q\right> = \left<g\right|Q\left|g\right>## for the expectation value of an observable ##Q## in a normalized state ##g##.

The Attempt at a Solution


Since the wave function is a linear combination of the angular wave functions, and the angular momentum operators act only on the angular part (so the normalized radial wave function can be ignored), let ##f_l^m## be the wave function without the radial component: $$f_l^m = A[BY_1^1 + BY_1^{-1} + CY_1^0].$$ The expectation value of ##L_z## would then be $$\left<L_z\right> = \left<f_l^m\right|\hbar m\left|f_l^m\right>$$
The problem I have is that there isn't one particular value of ##m##; we instead have three values. However, if we expand ##\left<f_l^m \right| L_z \left| f_l^m\right>##, then even though the initial math looks messy (creating nine terms where there previously were only three), these angular wave functions are mutually orthogonal, so any product of two different wave functions integrates to zero. Only the diagonal terms survive, each weighted by the squared modulus of its coefficient.
Essentially, $$\left<L_z\right> = \left<f_l^m\right|L_z\left|f_l^m\right> = |AB|^2\left<Y_1^1\right|\hbar\left|Y_1^1\right> + |AB|^2\left<Y_1^{-1}\right|-\hbar\left|Y_1^{-1}\right> + |AC|^2\left<Y_1^0\right|0\left|Y_1^0\right> = |A|^2\left[|B|^2\hbar - |B|^2\hbar + 0\right] = 0$$ using ##\left<Y_l^m | Y_l^m\right> = 1## for each term.
and ##\left<L^2\right>## would be found in a similar way. Is this an acceptable way to find the expectation value?
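For reference, the orthogonality step can be checked directly with SymPy's spherical harmonics (`Ynm`). The snippet below is a minimal illustrative sketch, with `overlap` being a helper name invented for this example; it confirms ##\left<Y_1^1 | Y_1^{-1}\right> = 0## and ##\left<Y_1^1 | Y_1^1\right> = 1##, which is why only the diagonal terms survive above.

```python
# Minimal check of spherical-harmonic orthonormality with SymPy.
from sympy import Ynm, conjugate, integrate, sin, pi, symbols, simplify

theta, phi = symbols('theta phi', real=True)

def overlap(l1, m1, l2, m2):
    """<Y_{l1}^{m1} | Y_{l2}^{m2}> = integral of conj(Y) * Y * sin(theta) over the sphere."""
    y1 = Ynm(l1, m1, theta, phi).expand(func=True)
    y2 = Ynm(l2, m2, theta, phi).expand(func=True)
    return simplify(integrate(conjugate(y1) * y2 * sin(theta),
                              (theta, 0, pi), (phi, 0, 2*pi)))

print(overlap(1, 1, 1, -1))  # cross term -> 0
print(overlap(1, 1, 1, 1))   # same harmonic -> 1 (normalized)
```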
 
  • #2
There's a much simpler way to get the answer.

What is the significance of these particular wave functions?
 
  • #3
PeroK said:
There's a much simpler way to get the answer.

What is the significance of these particular wave functions?
If you're asking whether or not I see a trend, they all have the same value of ##l##, with varying values of ##m##. They are also all spherical harmonics.
Does it have to do with the coefficients of ##Y_1^1## and ##Y_1^{-1}## being the same?
 
  • #4
anlon said:
If you're asking whether or not I see a trend, they all have the same value of ##l##, with varying values of ##m##. They are also all spherical harmonics.
Does it have to do with the coefficients of ##Y_1^1## and ##Y_1^{-1}## being the same?

Yes, those functions tell you the values of ##L^2## and ##L_z## that you will get. So, you can do a simple statistical process to get the expected value.
 
  • #5
Sometimes problems are easier if you generalise them. What about this:

Suppose you have a measurable ##Q## and two normalised eigenstates ##\psi_1, \psi_2##. Suppose you know the expected value of ##Q## in each of these states - let's say ##q_1, q_2## respectively.

If you have the state ##\psi = A\psi_1 + B\psi_2##, then what is the expected value of ##Q## in that state?

And, what do you get in the case that ##q_1 = q_2##?
 
Last edited:
  • #6
PeroK said:
Sometimes problems are easier if you generalise them. What about this:

Suppose you have a measurable ##Q## and two normalised states ##\psi_1, \psi_2##. Suppose you know the expected value of ##Q## in each of these states - let's say ##q_1, q_2## respectively.

If you have the state ##\psi = A\psi_1 + B\psi_2##, then what is the expected value of ##Q## in that state?

And, what do you get in the case that ##q_1 = q_2##?

I would think the expected value of ##Q## would be $$\sum_{n} \left|C_n \right|^2 q_n$$ where ##C_n## is the "amount" of the wave function ##\psi_n## contained in the total wave function. In this case, $$\left<Q\right> = \sum_{n=1}^{2} \left|C_n\right|^2 q_n = \left(\frac{A}{A+B}\right)^2 q_1 + \left(\frac{B}{A+B}\right)^2 q_2$$ and for the case ##q_1 = q_2 = q## you would get ##\left<Q\right> = q##.

Edit: Mathematically, I don't see how this works. I'm using Griffiths as a reference, and his textbook confirms my guess that the total expectation value is the sum of the squared coefficients times the corresponding values ##q_n##, but with these coefficients you would get $$q\left( \frac{A^2}{(A+B)^2} + \frac{B^2}{(A+B)^2} \right) = q\left( \frac{A^2 + B^2}{(A+B)^2} \right) \neq q$$

Edit 2: My next guess would be not to square the coefficients. This works out much more nicely: $$\left<Q\right> = \sum_{n} C_n q_n$$ which leads to $$\left<Q\right> = \frac{A}{A+B}q_1 + \frac{B}{A+B}q_2$$ which does give ##q## as the expectation value when ##q_1 = q_2##. However, I'm not sure how to reconcile that method with Griffiths.
 
Last edited:
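As a quick numeric sanity check of the Edit-1 arithmetic (an illustrative sketch with arbitrary values, not taken from the thread): with ##A/(A+B)## and ##B/(A+B)## as the coefficients, the squares do not sum to one, which is exactly why the squared-coefficient formula seems to fail; the resolution comes in the replies below.

```python
# Quick numeric check of the Edit-1 arithmetic, using arbitrary values A = 3, B = 1.
A, B, q = 3.0, 1.0, 5.0

c1, c2 = A / (A + B), B / (A + B)   # the guessed coefficients
print(c1**2 + c2**2)                # 0.625 -- the squares do not sum to 1
print(c1**2 * q + c2**2 * q)        # 3.125, i.e. q*(A**2 + B**2)/(A + B)**2, not q
```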
  • #7
anlon said:
I would think the expected value of ##Q## would be $$\sum_{n} C_nq_n$$ where ##C_n## is the "amount" of the wave function ##\psi_n## contained in the total wave function. In this case, $$\left<Q\right> = \sum_{n=1}^{2} C_nq_n = \frac{A}{A+B}q_1 + \frac{B}{A+B}q_2$$ and for the case ##q_1 = q_2 = q## you would get ##\left<Q\right> = q##.

It should be ##\langle Q \rangle = |A|^2q_1 + |B|^2q_2## but the point is that if you know the expected values in each eigenstate, you can compute the expected value for a linear combination of those eigenstates.

Note that we are assuming here that ##\psi_1, \psi_2## are orthogonal eigenstates of ##Q##. I forgot to say that, but you assumed it anyway!
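A minimal sketch of this rule (the function name and sample numbers are illustrative only; the ##\psi_n## are assumed to be orthonormal eigenstates of ##Q##, and dividing by the norm covers the case where ##\psi## is not normalised, which comes up a couple of posts below):

```python
# <Q> for psi = sum_n C_n * psi_n, where the psi_n are orthonormal eigenstates
# of Q with eigenvalues q_n. Illustrative sketch only.
def expectation(coeffs, eigenvalues):
    norm = sum(abs(c)**2 for c in coeffs)   # equals 1 if psi is normalised
    return sum(abs(c)**2 * q for c, q in zip(coeffs, eigenvalues)) / norm

print(expectation([0.6, 0.8], [5.0, 5.0]))   # q1 = q2 = 5 -> ~5.0, independent of A and B
print(expectation([0.6, 0.8], [1.0, -1.0]))  # |A|^2*q1 + |B|^2*q2 = 0.36 - 0.64 ~ -0.28
```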
 
  • #8
PeroK said:
It should be ##\langle Q \rangle = |A|^2q_1 + |B|^2q_2## but the point is that if you know the expected values in each eigenstate, you can compute the expected value for a linear combination of those eigenstates.

Note that we are assuming here that ##\psi_1, \psi_2## are orthogonal eigenstates of ##Q##. I forgot to say that, but you assumed it anyway!

That makes much more sense; I was overcomplicating the problem by dividing the coefficients by ##A+B##. So for ##q_1 = q_2 = q## the expectation value would be ##\left<Q\right> = q\left(\left|A\right|^2 + \left|B\right|^2 \right)##?
 
  • #9
anlon said:
That makes much more sense; I was overcomplicating the problem by dividing the coefficients by ##A+B##. So for ##q_1 = q_2 = q## the expectation value would be ##\left<Q\right> = q\left(\left|A\right|^2 + \left|B\right|^2 \right)##?

I was assuming that ##\psi## is normalised, so ##|A|^2 + |B|^2 = 1##.

Anyway, you should go back to your original problem now. You know the expected values in each eigenstate and you have already spotted some things about the coefficients, so ...
 
  • #10
PeroK said:
I was assuming that ##\psi## is normalised, so ##|A|^2 + |B|^2 = 1##.

Anyway, you should go back to your original problem now. You know the expected values in each eigenstate and you have already spotted some things about the coefficients, so ...

Back to the original problem, ##f_l^m = A \left[ B Y_1^1 + B Y_1^{-1} + C Y_1^0 \right]## so $$\left<L_z\right> = \sum_{m} D_m \hbar m Y_1^m = |AB|^2 \hbar Y_1^1 + |AB|^2(-\hbar) Y_1^{-1} + |AC|^2 (0) Y_1^0 = 0$$
(where ##D_m## is just the group of coefficients associated with each function ##Y_l^m##)
 
  • #11
anlon said:
Back to the original problem, ##f_l^m = A \left[ B Y_1^1 + B Y_1^{-1} + C Y_1^0 \right]## so $$\left<L_z\right> = \sum_{m} D_m \hbar m Y_1^m = |AB|^2 \hbar Y_1^1 + |AB|^2(-\hbar) Y_1^{-1} + |AC|^2 (0) Y_1^0 = 0$$
(where ##D_m## is just the group of coefficients associated with each function ##Y_l^m##)

Your notation is a bit confusing and you certainly shouldn't have those harmonic functions in the second equation.

##f = A \left[ B Y_1^1 + B Y_1^{-1} + C Y_1^0 \right]## so $$\left<L_z\right> = \left<f|L_zf\right> = |AB|^2 \hbar + |AB|^2(-\hbar) + |AC|^2 (0) = 0$$

Would make more sense to me.

(It's getting late for me, so I'm going off line now.)
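Applying this corrected bookkeeping to the original state gives both requested expectation values. The sketch below is illustrative only: the coefficient dictionary is a bookkeeping device invented for this example, SymPy just keeps ##A##, ##B##, ##C##, ##\hbar## symbolic, and dividing by the norm makes the result independent of how the state is normalised.

```python
# f = A*(B*Y_1^1 + B*Y_1^-1 + C*Y_1^0): expectation values from the eigenvalue
# relations L_z Y_l^m = hbar*m*Y_l^m and L^2 Y_l^m = hbar^2*l*(l+1)*Y_l^m.
from sympy import symbols, Abs, simplify

hbar = symbols('hbar', positive=True)
A, B, C = symbols('A B C')

coeffs = {(1, 1): A*B, (1, -1): A*B, (1, 0): A*C}   # (l, m) -> coefficient of Y_l^m

norm = sum(Abs(c)**2 for c in coeffs.values())
Lz = sum(Abs(c)**2 * hbar * m for (l, m), c in coeffs.items()) / norm
L2 = sum(Abs(c)**2 * hbar**2 * l * (l + 1) for (l, m), c in coeffs.items()) / norm

print(simplify(Lz))  # 0: the m = +1 and m = -1 terms cancel because their coefficients are equal
print(simplify(L2))  # 2*hbar**2: every term has l = 1, so l*(l+1) = 2 throughout
```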
 
  • #12
PeroK said:
Your notation is a bit confusing and you certainly shouldn't have those harmonic functions in the second equation.

##f = A \left[ B Y_1^1 + B Y_1^{-1} + C Y_1^0 \right]## so $$\left<L_z\right> = \left<f|L_zf\right> = |AB|^2 \hbar + |AB|^2(-\hbar) + |AC|^2 (0) = 0$$

Would make more sense to me.

(It's getting late for me, so I'm going off line now.)

I seem to keep making simple mistakes, but thank you for your patience. You've been very helpful, and I believe I can solve the rest of the problem now. Thank you for your help!
 

