Negative variance of an observable quantity

In summary: to obtain the expectation value of the kinetic energy squared, apply the operator A twice in succession to the wave function psi, multiply by psi*, and integrate over the spatial variables. To obtain the variance, subtract the square of the expectation value from this result.
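The discrepancy at the heart of the thread can be reproduced in a few lines. Below is a minimal sympy sketch (added here for illustration, not from the thread itself), assuming Rydberg units (##H = -\nabla^2 - 2/r##, ##E = -1##) and the unnormalized 1s wave function psi = exp(-r); it computes <T>, (psi, T^2 psi) and (T psi, T psi) by naive pointwise differentiation for r > 0.

```python
# Sketch: compare the two inequivalent expressions for <T^2> discussed in this
# thread, for the hydrogen 1s state.  Assumes Rydberg units (H = -Lap - 2/r,
# E = -1) and the unnormalized wave function psi = exp(-r).
import sympy as sp

r = sp.symbols('r', positive=True)
psi = sp.exp(-r)
norm = sp.integrate(psi**2 * 4*sp.pi*r**2, (r, 0, sp.oo))    # normalization integral

def lap(f):
    # radial Laplacian of a spherically symmetric function (pointwise, r > 0;
    # any distributional term at r = 0 is dropped -- the boundary issue at stake)
    return sp.diff(f, r, 2) + 2*sp.diff(f, r)/r

T_psi  = -lap(psi)       # = (2/r - 1) exp(-r)
T2_psi = -lap(T_psi)

def bracket(f, g):
    # (f, g): integrate f*g over all space and normalize
    return sp.integrate(f*g * 4*sp.pi*r**2, (r, 0, sp.oo)) / norm

print(bracket(psi, T_psi))     # <T>, expect +1
print(bracket(psi, T2_psi))    # (psi, T^2 psi): comes out below <T>^2, hence the
                               # "negative variance" of the thread title
print(bracket(T_psi, T_psi))   # (T psi, T psi): the alternative form debated below
```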
  • #36
dextercioby said:
[Mandragonia's] statements in this thread make little to absolutely no sense, at least from a mathematical perspective.
I know what you mean. The only reason I've been able to follow him is because I've worked out the various integrals. The goal in my responses in this thread is (eventually) to explain in some detail why statements like the following are incorrect:

Mandragonia said:
The proper way to obtain an expectation value <AB> is apparently by evaluating the integral (A psi, B psi).

(But not right now.)
 
  • #37
George Jones said:
There is a new book that, at a glance, seems to have nice treatments of all this, "Quantum Theory for Mathematicians" by Brian Hall.

https://www.amazon.com/dp/146147115X/?tag=pfamazon01-20

This looks like a really wonderful book. It has "the basics of L^2 spaces and Hilbert spaces" as prerequisites. Most of the functional analysis presented in the book is in chapters 6 - 10, through which the author offers several paths: "I have tried to design this section of the book in such a way that a reader can take in as much or as little of the mathematical details as desired."

The Hellinger-Toeplitz theorem is not named, but it is presented and proved as Corollary 9.9.

Checked it out.

Nice book - must get a copy.

What I like is that, horror of horrors, it actually includes the Dirac delta function formalism most physicists use, as well as a bit of a review of distribution theory. It isn't just a rehash of von Neumann's classic.

As an aside, all physicists and mathematicians should, IMHO, have some knowledge of distribution theory. A book on it that I bought years ago has been one of the best I ever got:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

Thanks
Bill
 
  • #38
Mandragonia said:
Suppose one is interested in three observables with operators A, B and C. First of all there are 6 permutations ABC, ACB etc. Secondly, each of these can be represented in four forms: (psi, ABC psi); (A psi, BC psi); (BA psi, C psi) and (CBA psi, psi). A messy situation. How should one proceed?
[...]
The proper way to obtain an expectation value <AB> is apparently by evaluating the integral (A psi, B psi).
Let's take a temporary detour...

Forget about QM for a moment, and consider the following problem in ordinary (classical) statistics. Suppose that ##f(x)## is a probability distribution for the random variable ##x##. So, among other things, it's normalized as ##\int f(x) dx = 1##, and the mean of the distribution is calculated by ##\int xf(x) dx##.

Exercises: Consider the specific distribution: ##f(x) := \frac{1}{\pi(1+x^2)}##, where ##x## takes values on the whole real line.

1) Check whether the distribution ##f(x)## is already normalized to 1.

2) Calculate the mean of the distribution ##f(x)## .

3) Write down the definition of variance (in ordinary statistics), and calculate the variance of the distribution ##f(x)##.

:biggrin:
 
  • #39
strangerep said:
The boundary terms account for why the various integrals differ. But that does not answer deeper questions like: "how may one define variance sensibly when unbounded operators are involved?". In that sense, the "answer" so far is not yet satisfactory.
I agree. I can't help you though, because I am insufficiently familiar with operators etc. However, today I came up with an idea that might help to avoid the problem altogether. The energy relation in Rydberg units reads T - 2/r = -1. This is equivalent to r = 2/(T+1). Using this 1-1 relation between r and T, it is straightforward to convert the probability function for r into a probability function for T. The result is:

rho(T) = 32 * (T+1)^-4 * exp{-4/(T+1)} ... where T ranges from -1 to +infinity.

This function allows us to evaluate all properties of the kinetic energy distribution DIRECTLY. There is no longer any need for the Laplace operator! Thus we side-step all the problems associated with differentiation, commutation, boundary effects, etc. The results are UNAMBIGUOUS.

One may verify that the function is correctly normalized. The first two moments of T are easily computed, and found to be: <T> = +1 and <T^2> = +5. So the variance of T equals +4. It is also straightforward to derive properties for the potential energy V, since it can be expressed as V = -(T+1). One obtains <V> = -2 and <V^2> = +8. Therefore the variance of V equals +4. Note that these results are fully consistent with those I previously posted.
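As a quick numerical cross-check of these numbers, here is a minimal scipy sketch (an illustration, not part of the original post) that integrates the quoted rho(T) directly:

```python
# Sketch: numerically check the normalization and first two moments of
# rho(T) = 32 (T+1)^-4 exp(-4/(T+1)) quoted above (Rydberg units, T in (-1, oo)).
import numpy as np
from scipy.integrate import quad

rho = lambda T: 32.0 * (T + 1.0)**-4 * np.exp(-4.0/(T + 1.0))

norm, _ = quad(rho, -1, np.inf)                     # expect 1
mean, _ = quad(lambda T: T*rho(T), -1, np.inf)      # expect <T>   = +1
m2,   _ = quad(lambda T: T**2*rho(T), -1, np.inf)   # expect <T^2> = +5
print(norm, mean, m2, m2 - mean**2)                 # variance, expect +4
```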
 
  • #40
Mandragonia said:
The energy relation in Rydberg units reads T - 2/r = -1. This is equivalent to r = 2/(T+1). Using this 1-1 relation between r and T, it is straightforward to convert the probability function for r into a probability function for T. The result is:

rho(T) = 32 * (T+1)^-4 * exp{-4/(T+1)} ... where T ranges from -1 to +infinity.

What "probability function for r" are you talking about? I have no idea what you're doing.
 
  • #41
strangerep said:
What "probability function for r" are you talking about? I have no idea what you're doing.
I am referring to the probability density for the radial coordinate r (which represents the distance between the electron and the nucleus). In units of the Bohr radius, it is given by:

rho(r) = 4 * (r^2) * exp(-2*r) ... where r ranges from 0 to infinity.

It can be used to calculate properties of the orbital that depend only on r, such as the average distance <r> of the electron from the nucleus. Obviously it cannot be used to calculate expectation values of operators that involve differentiation! However, in the case of the kinetic energy one can side-step this problem by making clever use of the energy relation. This allows one to construct a similar probability density, but now for the kinetic energy T.
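For reference, a minimal sympy sketch (added for illustration, not from the post) checking the normalization of this density and the moments <r> and <1/r> that enter the discussion:

```python
# Sketch: sanity-check the radial density rho(r) = 4 r^2 exp(-2r) quoted above
# (r measured in Bohr radii).
import sympy as sp

r = sp.symbols('r', positive=True)
rho = 4*r**2*sp.exp(-2*r)

print(sp.integrate(rho, (r, 0, sp.oo)))       # 1: correctly normalized
print(sp.integrate(r*rho, (r, 0, sp.oo)))     # <r> = 3/2 Bohr radii
print(sp.integrate(rho/r, (r, 0, sp.oo)))     # <1/r> = 1, the quantity that enters
                                              # the energy relation T = -1 + 2/r
```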
 
  • #42
strangerep said:
Consider the distribution: ##f(x) := \frac{1}{\pi(1+x^2)}##, where ##x## takes values on the whole real line.

Ha! I recognized the function immediately. It is a well-known example of a distribution with the property that its variance diverges. It is commonly presented to undergraduate math students, so that they learn that not every normalized bell-shaped distribution necessarily has a finite variance.
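For concreteness, here is a minimal sympy sketch (an illustrative check, not part of the original post) confirming the two properties in question: the distribution is normalized, but its second moment, and hence its variance, diverges.

```python
# Sketch: the distribution f(x) = 1/(pi (1 + x^2)) -- the Cauchy/Lorentzian
# distribution -- is normalized, but its second moment grows without bound.
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)
f = 1/(sp.pi*(1 + x**2))

print(sp.integrate(f, (x, -sp.oo, sp.oo)))       # 1: normalized
m2_truncated = sp.integrate(x**2*f, (x, -a, a))  # = 2*(a - atan(a))/pi
print(sp.limit(m2_truncated, a, sp.oo))          # oo: the variance diverges
```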
 
  • #43
Mandragonia said:
Ha! I recognized the function immediately. It is a well-known example of a distribution with the property that its variance diverges. It is commonly presented to undergraduate math students, so that they learn that not every normalized bell-shaped distribution necessarily has a finite variance.
OK, so... I guess there's no need for me to elaborate on how this is relevant to your example.
 
  • #44
Strangerep -- I acknowledge that one has to be careful when doing calculations on the hydrogen orbital, because its behaviour at r=0 is problematic.
I think Dauto has formulated it well:

dauto said:
This hydrogen wavefunction is a solution to the Schrodinger equation everywhere except at the origin, where the solution forms a cusp. The cusp means that the function is not differentiable at the origin, so it can't be a solution to a differential equation. That may, as you found out, cause boundary problems at the origin. The source of that problem is, of course, the fact that the Coulomb potential diverges at the origin. One possible way to attempt to solve that problem would be to replace the point charge with a Gaussian-like distribution of charge at the origin.

The equation I derived for the probability density of the kinetic energy is certainly not immune to this problem. However, since it only involves straightforward integration, one can pinpoint the problem more clearly and take appropriate measures.

My question to you: have you reached a conclusion on my formula for the probability density of the kinetic energy?
 
  • #45
Mandragonia said:
[...] have you reached a conclusion on my formula for the probability density of the kinetic energy?
You have still not explained yourself clearly and thoroughly. I'm not going to try anymore to guess what you mean.
 
  • #46
Multiply the wave function Psi(r) by its complex conjugate and by the spherical-shell factor 4*pi*r^2 and you get the probability density Rho(r). The latter has a straightforward physical meaning: it tells you that at any moment in time the electron has a probability Rho(r)*dr of being at a distance in the interval (r, r+dr). One can obtain the cumulative probability P(R) by integrating Rho(r) from 0 to R. One may also define Q(R) as 1-P(R). The results for the n=1 orbital of the hydrogen atom are:

P1(R) = 1 - (2*R^2 + 2*R + 1) * exp(-2*R)
Q1(R) = (2*R^2 + 2*R + 1) * exp(-2*R)

These functions have the following meaning.
[1a] P1(R) is the probability that the electron is at a distance r smaller than R from the nucleus.
[1b] Q1(R) is the probability that the electron is at a distance r larger than R from the nucleus.

These statements can equivalently be reformulated as follows:
[2a] There is a probability P2(1/R) that the electron is at a reciprocal distance 1/r larger than 1/R.
[2b] There is a probability Q2(1/R) that the electron is at a reciprocal distance 1/r smaller than 1/R.

Consistency demands that P2=P1 and Q2=Q1 for all distances r. It turns out to be very easy to derive the explicit formulas for P2 and Q2 in terms of P1 and Q1.

P2(S) = 1 - (2/S^2 + 2/S + 1) * exp(-2/S)
Q2(S) = (2/S^2 + 2/S + 1) * exp(-2/S)

Setting the reciprocal distance S equal to 1/R, one verifies that indeed P2=P1 and Q2=Q1.
The next step is to recognize that in reduced units the potential energy V(r) is equal to -2/r. This differs by just a constant factor (-2) from the reciprocal distance S = 1/r. Substitution of S = -V/2 in P2 and Q2 yields:

P3(V) = 1 - (8/V^2 - 4/V + 1) * exp(4/V)
Q3(V) = (8/V^2 - 4/V + 1) * exp(4/V)

The physical interpretation is again straightforward.
[3a] P3(V) is the probability that the potential energy is smaller than V = -2/R.
[3b] Q3(V) is the probability that the potential energy is larger than V = -2/R.

Next we use H = T+V = E. Since E equals -1 in reduced units, we can write the energy balance as V = -(T+1). Apply this to P3 and Q3.

P4(T) = 1 - {8/(T+1)^2 + 4/(T+1) + 1} * exp{-4/(T+1)}
Q4(T) = {8/(T+1)^2 + 4/(T+1) + 1} * exp{-4/(T+1)}

With the physical interpretation:
[4a] P4(T) is the probability that the kinetic energy is larger than T = -1 + 2/R.
[4b] Q4(T) is the probability that the kinetic energy is smaller than T = -1 + 2/R.

Taking the derivative of Q4 with respect to T, we obtain the probability density Rho(T) for the kinetic energy.

Rho(T) = 32 * (T+1)^-4 * exp{-4/(T+1)} for T ranging from -1 to +infinity.

The physical interpretation of this formula is that Rho(T)*dT is the probability that the kinetic energy is in the interval (T, T+dT). We can use the last formula to evaluate properties of the kinetic energy, without having to apply the Laplace operator to the wave function.
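A minimal sympy sketch (not from the post, same reduced units) can check two links in this chain: that Q1(R) is indeed the upper tail of Rho(r) = 4 r^2 exp(-2r), and that the derivative of Q4 reproduces Rho(T).

```python
# Sketch: verify that Q1(R) is the upper tail of Rho(r) = 4 r^2 exp(-2r), and
# that dQ4/dT reproduces Rho(T) = 32 (T+1)^-4 exp(-4/(T+1)).  Reduced units.
import sympy as sp

r, R = sp.symbols('r R', positive=True)
T = sp.symbols('T', real=True)

Q1 = sp.integrate(4*r**2*sp.exp(-2*r), (r, R, sp.oo))
print(sp.simplify(Q1 - (2*R**2 + 2*R + 1)*sp.exp(-2*R)))                  # expect 0

Q4 = (8/(T + 1)**2 + 4/(T + 1) + 1)*sp.exp(-4/(T + 1))
print(sp.simplify(sp.diff(Q4, T) - 32*(T + 1)**-4*sp.exp(-4/(T + 1))))    # expect 0
```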
 
  • #47
Mandragonia said:
[...] We can use the last formula to evaluate properties of the kinetic energy, without having to apply the Laplace operator to the wave function.
All that stuff boils down to nothing more than a change of variable in an integral, of the form: ##\rho(r) dr \to \rho(V) dV~## or ##\rho(r) dr \to \rho(T) dT##.

Such a change of variable relies on an equation like $$T = E + 2/r ~,$$ where one tacitly assumes we act only on the state ##\psi##. Indeed, for the state ##\psi##, which is an eigenstate of the Hamiltonian, i.e., ##H\psi = E\psi##, the ##T## operator acts like a multiplication operator:
$$T\psi ~=~ (E+2/r)\psi ~.$$
You extrapolate this to mean that the substitution ##T = E + 2/r## can be applied more generally. However, it is not generally true that a higher power of ##T##, such as ##T^2##, is equivalent to ##(E+2/r)^2## when acting on ##\psi##. The reason is that ##T\psi## is not an eigenstate of ##H## with eigenvalue ##E##.

This would invalidate your attempt to use probability distributions, in the way you describe, to calculate higher moments of the distribution, such as the variance of the kinetic energy.
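To make the point concrete, here is a minimal sympy sketch (an illustration, not from the post), assuming Rydberg units, the unnormalized ground state psi = exp(-r), ##T = -\nabla^2## and ##E = -1##: the substitution holds for the first power of T but fails for the second.

```python
# Sketch: T psi equals (E + 2/r) psi for the ground state, but T^2 psi does
# NOT equal (E + 2/r)^2 psi, because T psi is no longer an eigenstate of H.
import sympy as sp

r = sp.symbols('r', positive=True)
psi = sp.exp(-r)
E = -1

def lap(f):
    # radial Laplacian for a spherically symmetric function (r > 0)
    return sp.diff(f, r, 2) + 2*sp.diff(f, r)/r

T_psi  = -lap(psi)
T2_psi = -lap(T_psi)

print(sp.simplify(T_psi  - (E + 2/r)*psi))       # 0: first power agrees
print(sp.simplify(T2_psi - (E + 2/r)**2*psi))    # nonzero: the substitution fails for T^2
```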

[BTW, I might make an effort to reply sooner if you make an effort to learn and use basic Latex on this forum. No one wants to read all that ugly ascii math. Instructions for getting started with Latex can be found by following one of the pulldown menus near the top of the PF main page. I.e., SITEINFO->FAQ. There's really no excuse.]
 
  • #48
strangerep said:
It is not generally true that a higher power of ##T##, such as ##T^2##, is equivalent to ##(E+2/r)^2## when acting on ##\psi##. The reason is that ##T\psi## is not an eigenstate of ##H## with eigenvalue ##E##.
The key point of this thread is that the operator ##T^2## is not to be trusted, since it can lead to unphysical results, such as a negative variance. The right representation is apparently one with two operators ##T##, acting on the left and right wave functions ##\psi## respectively. In that representation it is immediately clear that both operators ##T## can indeed be replaced by their multiplicative cousin ##(E+2/r)##. So up to second order the concept works well.

It is possible that my idea fails to describe the third and higher order moments of the kinetic energy correctly. We don't know that. As yet it is not even clear how to represent these higher moments in operator formalism.
 
  • #49
Mandragonia said:
The key point of this thread is that the operator ##T^2## is not to be trusted, since it can lead to unphysical results, such as a negative variance.
In fact, kinetic energy is not a good observable for the dynamical system of the hydrogen atom. This occurs in other cases too -- in the sum ##H=T+V##, the 3 operators are not necessarily well-defined on a common domain. Although ##H## may be well-defined as an observable on the Hilbert space constructed from its eigenstates, this doesn't necessarily mean that the ##T## operator is also a good observable on that Hilbert space. That's the key lesson to be learned in this thread.

(Indeed, making the mistake of thinking that the kinetic, potential, and total energy operators are sensibly well-defined on the same Hilbert space is one of the underlying causes of infinities in QFT.)

One might also ponder (though not for too long) how, or whether, the kinetic energy of a bound electron could ever be measured directly.

The right representation is apparently with two operators ##T##, working on the left and right wavefunction ##\psi## respectively.
If you mean that this is the correct representation for variance of kinetic energy, then no. It does not correspond to any standard definition of "variance".

Herein lies the point of my suggested exercise involving the Cauchy distribution (which you dismissed so readily). Not all probability distributions have well-defined higher moments (and some even have ill-defined means). In such cases, one must adopt alternative measures of "average" and "dispersion".
 
  • #50
I have finally found a satisfactory solution to the problem that I posed in this thread !

Careful examination of the boundary terms that arise at r=0 indicates that the problem of non-self-adjointness of the kinetic energy operator T is associated with the presence of odd powers of r in the expansion of the wave function around r=0. This is an indication that the "true" wave function has only even terms in its expansion! In fact, this result makes a lot of sense, both mathematically and physically.

In order to get rid of the odd powers of r in Psi, I chose to adjust the wave function in a small region (0, epsilon) around the origin. One can then calculate the higher moments of the kinetic energy operator. Finally, taking the limit epsilon -> 0, the results indeed satisfy the self-adjointness criterion. The computation itself is far from trivial. It involves many terms with higher-order derivatives of the delta function, which are generated at r = epsilon, the boundary between the thin interface and the bulk region.

In conclusion I can say that the standard solution to the SE, Psi = exp(-r), is merely an approximation of the correct wave function. Sometimes this approximation works well, and sometimes it doesn't. In the latter case, one has to use a (mathematically) more appropriate version of the wave function to obtain the right results.

Perhaps I shall write a short article about my little project.
 
