- TL;DR Summary
- Can any observable experiment reveal the exact phase for a wavefunction in an energy eigenstate, or is the only thing that carries physical significance the *relative* phase *differences* between energy eigenstates?
Hello all,
So I've been working through the solutions to some simple introductory problems for the Schrodinger Equation like the infinite square well, and I'm trying to make sense of how to think about the phase component. For simplicity's sake, let's start off by assuming we've measured an electron in the infinite square well to have the ground-state energy ## E_1 = \frac {\pi^2\hbar^2} {2ma^2}##. The ground-state solution to the Time-Independent Schrodinger Equation is:
## \psi_1(x) = c_1 \sin(\frac {\pi} {a}x) ##
and to add in the time dependence to find ## \Psi_1(x,t)##, all we have to do is multiply by a factor of ##e^{-iE_1 t/\hbar}##.
To find the probability distribution, we'd multiply ## \Psi_1^* (x,t) \Psi_1 (x,t) ##, and we'd find that the complex exponential cancels out: the probability distribution does not evolve in time, which is what we'd expect for a "stationary state," since we started off by demanding that the electron we measure (or an ensemble of electrons, for that matter) be prepared with the ground-state energy - an energy eigenstate of the Hamiltonian. Furthermore, when we sandwich the position, momentum, or kinetic energy operators between ## \Psi_1^* (x,t)## and ## \Psi_1 (x,t) ## and integrate over ##x##, we find that the complex exponential part of the wavefunction still cancels out in the end: evidently, for a lone energy eigenstate, the phase factor ##e^{-iE_1 t/\hbar}## cannot be observed directly. Is this correct?
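The cancellation above is easy to confirm numerically. Here's a minimal sketch (my own, not from any textbook) in units where ##\hbar = m = a = 1##, so ##E_1 = \pi^2/2##: the probability density of the ground state comes out identical at two arbitrary times.

```python
import numpy as np

# Sanity check: for a single energy eigenstate, |Psi(x,t)|^2 is independent of t.
# Units chosen so hbar = m = a = 1, hence E_1 = pi^2 / 2.
hbar = m = a = 1.0
E1 = np.pi**2 * hbar**2 / (2 * m * a**2)

x = np.linspace(0, a, 500)
psi1 = np.sqrt(2 / a) * np.sin(np.pi * x / a)   # normalized ground state

def Psi1(t):
    """Full time-dependent ground-state wavefunction."""
    return psi1 * np.exp(-1j * E1 * t / hbar)

# Densities at two arbitrary times agree to machine precision.
rho_0 = np.abs(Psi1(0.0))**2
rho_t = np.abs(Psi1(7.3))**2
print(np.allclose(rho_0, rho_t))  # True
```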
I like to imagine the ##e^{-iE_1 t/\hbar}## part of the wavefunction as a unit vector rotating through the complex plane, so I can think of it like this: at a given spot along the x-axis, the wavefunction has a certain probability amplitude - for example, for our ground-state solution right in the middle of the square well, the amplitude is ## c_1 ##. Over the course of time, that probability amplitude "sloshes back and forth" between a real component and an imaginary component, like water going back and forth between two buckets: at time ##t = 0##, 100% of the squared amplitude at that point in space is in the "real bucket"; at time ## t = \frac {\pi\hbar} {4E_1}## (phase angle ##-\pi/4##), 50% of it is in the "real bucket" and 50% in the "imaginary bucket"; at time ## t = \frac {\pi\hbar} {2E_1}##, 100% of it is in the "imaginary bucket," and so on. When we make any physical observation of an electron in the ground energy eigenstate, we'll catch the wavefunction at some random time ##t## in its rotation through the complex plane - but exactly where won't matter, because all of our physical observables for an energy eigenstate only ever ask for the magnitude of that unit vector, which is always 1: no matter what angle the unit vector makes in the complex plane at a given moment, the square of its projection onto the real axis plus the square of its projection onto the imaginary axis always equals 1.
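The "two buckets" picture can also be checked directly. A small sketch (same units as above, ##\hbar = 1##, ##E_1 = \pi^2/2##) evaluating the phasor at the times mentioned:

```python
import numpy as np

# Evaluate the phase factor e^{-i E_1 t / hbar} at the "bucket" times.
hbar = 1.0
E1 = np.pi**2 / 2

phasor = lambda t: np.exp(-1j * E1 * t / hbar)

z = phasor(np.pi * hbar / (4 * E1))   # phase angle -pi/4
print(np.isclose(z.real**2, 0.5), np.isclose(z.imag**2, 0.5))  # 50/50 split

z = phasor(np.pi * hbar / (2 * E1))   # phase angle -pi/2: purely imaginary
print(np.isclose(z.real, 0.0), np.isclose(abs(z.imag), 1.0))

# The modulus is always 1, no matter what t is.
t = np.linspace(0.0, 10.0, 25)
print(np.allclose(np.abs(phasor(t)), 1.0))
```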
If we now look at the first excited state for the infinite square well, we'd get a full sine wave as our solution to the time-independent Schrodinger equation:
##\psi_2(x) = c_2 \sin(\frac {2\pi} {a}x) ##
and to add in the time dependence to find ## \Psi_2(x,t)##, all we have to do is multiply by a factor of ##e^{-i4E_1 t/\hbar}##, since the energies of the infinite square well scale as ##n^2##, so ##E_2 = 4E_1##. This time, our phasor rotates at four times the rate of the ground state's phasor: the probability amplitude sweeps through the complex plane four times as fast as before. Still, as with the ground state, since we're again looking at an energy eigenstate, no physical observable can tell us anything about where in the phase cycle we were when we took our measurement.
So far so good, except that a person who has looked at no examples other than energy eigenstates would be tempted to ask "why bother writing the phase component at all? It seems like it has no bearing on any physical measurement we make, anyway." The answer seems to present itself in cases where we're looking at something that's not in a single energy eigenstate: imagine for a second that we're looking at some ##\psi(x)## that is a linear combination of ##\psi_1(x)## and ##\psi_2(x)##. For the sake of simplicity, I'll absorb the relative strengths of the two components into the coefficients of ##\psi_1(x)## and ##\psi_2(x)## themselves, so we can simply write ##\psi(x) = \psi_1(x) + \psi_2(x)##. That gives us a full solution to the time-dependent Schrodinger equation of
##\Psi(x,t) = \psi_1(x)e^{-iE_1 t/\hbar} + \psi_2(x)e^{-i4E_1 t/\hbar}##
Now when we try to find the probability distribution ##\Psi^*(x,t) \Psi(x,t)##, we get:
##\Psi^*(x,t) \Psi(x,t) = \left( \psi_1(x)e^{iE_1 t/\hbar} + \psi_2(x)e^{i4E_1 t/\hbar} \right)\left( \psi_1(x)e^{-iE_1 t/\hbar} + \psi_2(x)e^{-i4E_1 t/\hbar} \right)##
##= \psi_1(x)^2 + \psi_2(x)^2 + \psi_1(x)\psi_2(x)\left( e^{i(4E_1 - E_1)t/\hbar} + e^{-i(4E_1 - E_1)t/\hbar} \right)##
##= \psi_1(x)^2 + \psi_2(x)^2 + 2\,\psi_1(x)\psi_2(x)\cos\left(\frac{3E_1 t}{\hbar}\right)##
(using the fact that ##\psi_1(x)## and ##\psi_2(x)## are real here).
Now the phase carries physical significance - or at least, the phase difference between two energy eigenstates does: it acts as the "driving agent" behind the time-varying portion of our solution. This time-varying portion oscillates at an angular frequency proportional to the difference between the two energy levels, ##\omega = (E_2 - E_1)/\hbar = 3E_1/\hbar##. (That makes sense: if the two energy levels were the same, the time-varying portion would disappear.)
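The beat frequency can be verified numerically too. Here's a sketch (again with ##\hbar = m = a = 1##; I've normalized the superposition with a factor of ##1/\sqrt{2}##, which the original discussion absorbs into the coefficients) showing the density repeats after one beat period ##T = 2\pi\hbar/(3E_1)## but not after half of it:

```python
import numpy as np

# The density of the two-state superposition oscillates with angular
# frequency (E_2 - E_1)/hbar = 3*E_1/hbar. Units: hbar = m = a = 1.
hbar = m = a = 1.0
E1 = np.pi**2 * hbar**2 / (2 * m * a**2)

x = np.linspace(0, a, 500)
psi1 = np.sqrt(2 / a) * np.sin(np.pi * x / a)
psi2 = np.sqrt(2 / a) * np.sin(2 * np.pi * x / a)

def density(t):
    """|Psi(x,t)|^2 for the equal-weight superposition of states 1 and 2."""
    Psi = (psi1 * np.exp(-1j * E1 * t / hbar)
           + psi2 * np.exp(-1j * 4 * E1 * t / hbar)) / np.sqrt(2)
    return np.abs(Psi)**2

T = 2 * np.pi * hbar / (3 * E1)          # beat period
print(np.allclose(density(0.3), density(0.3 + T)))    # True: full period
print(np.allclose(density(0.3), density(0.3 + T/2)))  # False: half period
```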
So if we imagine some arbitrary solution to the time-dependent Schrodinger equation as being "composed of" different proportions of energy eigenstates, we can imagine that each energy eigenstate that "makes up" the arbitrary solution carries its own phase, and that the differences in these phases are what's responsible for the component wavefunctions interfering constructively or destructively at a particular point ##x## at a particular time ##t##, producing the overall time dependence of the wavefunction. Is that a good way to think about phase? Is it a quantity which, for a single energy eigenstate, carries no physical significance, but whose existence can be inferred indirectly from the interference pattern produced by the relative phase differences between two or more energy eigenstates?