Can you define quantum mechanics in just one sentence?

In summary, quantum mechanics is a theory that describes the behavior of systems that cannot be described by classical mechanics. It uses a mathematical formalism that generalizes classical mechanics, and allows for the consideration of systems in which the energy of a field is not continuous.
  • #36
Here's my try:

Quantum Mechanics is the theory of motion of matter whose domain of validity extends to cases where characteristic physical quantities have a value of the same order of magnitude as Planck's constant:
[tex]
\hbar = \frac{h}{2 \pi} = 1.054 \times 10^{-34} \, \mathrm{J} \cdot \mathrm{s}
[/tex]
 
  • #37
jimgraber said:
Are you familiar with Scott Aaronson's "definitions":
QM is physics with negative probabilities. or
QM is physics with a quadratic norm in place of a linear norm. ?

Here is a reference:
http://www.scottaaronson.com/democritus/lec9.html

Best
Jim Graber

Heh. Here's where I stopped reading and started to skim: "In the usual "hierarchy of sciences" -- with biology at the top, then chemistry, then physics, then math..."

And then I stopped skimming and just paged through the rest after this: "Now, what happens if you try to come up with a theory that's like probability theory, but based on the 2-norm instead of the 1-norm? I'm going to try to convince you that quantum mechanics is what inevitably results. "

This really doesn't strike me as a particularly enlightening way to look at quantum mechanics. Basically he seems to want to rename "amplitudes" as "probabilities". The arithmetic of amplitudes is not the hard part of quantum mechanics.

My favorite way to teach QM is laid out in Sakurai's book "Modern Quantum Mechanics". He starts with the Stern-Gerlach experiment as the clearest example of quantum behavior. The important thing, I think, is to get to the notion of state as represented by a ket.

BBB
 
  • #38
Any 'definition' must surely mention wave-particle duality.
 
  • #39
bbbeard said:
This really doesn't strike me as a particularly enlightening way to look at quantum mechanics. Basically he seems to want to rename "amplitudes" as "probabilities". The arithmetic of amplitudes is not the hard part of quantum mechanics.
Read it more closely and give it another shot. It's actually a lot more insightful than you give it credit for (I can see that his writing style can risk rubbing the wrong way sometimes). Note he is not talking about probability amplitudes, he is really talking about negative probabilities. Because at the end of the day, all probabilities are real, and so the main difference between QM and CM at that level is the fact that every different way that something can happen in CM adds to the probability it really will happen, but sometimes can subtract from that probability in QM. The key is, by framing this argument as a probability argument, rather than an amplitude argument, it reveals its fundamental "quantum" nature-- for amplitudes are common in classical wave mechanics, but it can only refer to a discrete behavior when it is framed as a probability rather than just an amplitude. So yes, that's a probability amplitude, but what makes it QM is the "probability" part, not the "amplitude" part-- and that's why it is key that probability contributions be able to be negative.
My favorite way to teach QM is laid out in Sakurai's book "Modern Quantum Mechanics". He starts with the Stern-Gerlach experiment as the clearest example of quantum behavior. The important thing, I think, is to get to the notion of state as represented by a ket.
I think Aaronson's point is to be able to see some of the key ideas of QM as being things we could have thought of, perhaps even fairly easily, before there was even any experimental verification of any of it. Perhaps the "ket" concept has some ways it could have been anticipated also, but I haven't seen that case made-- I think Aaronson makes a nice case for the aspects of QM that could conceivably have been guessed, at least as possibilities to look into, before we even had classical physics (had we been better mathematicians back then, and had wider imaginations).
 
  • #40
jambaugh said:

> To All: I think the "meta-question" should be hashed out here. What do we want in a definition?

> Should it be sufficient to reconstruct QM given sufficient empirical experimentation?

QM was constructed from experimentation without any definitions of QM, so yes. If you can construct it without a definition, then you can also do it from the starting point of a correct definition.

> Should it include foundational experimental results (e.g. Einstein's photo-electric effect)?

A definition that includes experimental results? Pshaw.

> Should it be sufficient to allow an educated layman to understand QM? [tall order!]

Maybe, if you define "sufficient" and "understand." I would say that is asking a bit much.

> Should it be axiomatic? Operational? Minimalistic?

Hilbert proposed the axiomatization of physics in 1900 or so and it hasn't happened yet. I think the problem is that completely different sets of axioms can lead to the same conclusion. It happens all the time.

Since the definition of time seems to be operational, then you would think the definition of QM would be at least partially operational.

Minimalistic? Sure, why not.

---------------------All this being said, I can't define QM. The first thought is "the physics of the extremely small," but QM effects show up macroscopically too.

The one thing I would say is that quanta aren't really fundamental. Energy can have any value. If you look at a particle in isolation -- that seems like the most basic situation to me -- then the quantum thing does not arise.

How about "the physics of the wave nature of mass?" That could be minimal.
 
  • #41
jimgraber said:
Are you familiar with Scott Aaronson's "definitions":
QM is physics with negative probabilities. or
QM is physics with a quadratic norm in place of a linear norm. ?

Here is a reference:
http://www.scottaaronson.com/democritus/lec9.html

Best
Jim Graber

That's quite interesting. I basically agree with what he says. Often QM education proceeds by teaching all the wrong ways first. Instead of sowing these seeds of confusion, wouldn't it make more sense to forgo the suspense and teach the right way first?

On the other hand, negative probabilities I don't like. I see what he is driving at, and it is OK, but I think he is falling into the same trap of teaching something wrong first. I mean, you could allow negative probabilities and forbid all subtraction, but why? Just stick with the canonical view. When I have a two slit experiment and I close one slit I could say that I have added a -0.5 probability to the now-closed slit, but that is kind of silly.

So, how would I do it that is better? I would start like this.

They say that our Universe is four-dimensional, but what they really mean is, that the math works out much more easily in four dimensions. You could declare that the Universe is one-dimensional and label every point to arbitrary precision with a space-filling curve, but the math would be awful so no one does that. * You could go the other way and declare that everything is a vibration and that the Universe is infinite dimensional. And indeed, that's what quantum physics has done, and fortunately the math is easy. Just not what most people are used to.

Our story starts in 1807 when a highly-placed French revolutionary -- he had been appointed Governor of Egypt for a while -- figured out that ANY vibration could be described as the sum of sines and cosines. It didn't matter what the function did, it could be infinitely long and as complicated as you like, and it still could be described in this way. Those sines and cosines are not just any old sines and cosines, they are a highly restricted set. Breaking down the original complicated wave form to a sum of this restricted set is called a transform. That's how JPEGs and MP3s are done. With music it is pretty easy to see that this is a vibration, and the transform is done to shrink the music file. Looking at it the other way, digitized music is just a sequence of bits and digitized images are also just a sequence of bits, so you can think of the image as a vibration as well and apply the transform to that too. That's how a JPEG is made. Then the information can be recovered by inverting the transform.

The math to do this is all complex numbers. Forget all that stuff about imaginary numbers and the square root of negative one: that just confuses things. The real point of complex numbers is that they make it very easy to describe sine waves. Each wave has a frequency, an amplitude, and a phase. That's all. The frequencies of the waves in that restricted set are always double the frequency of the next slowest wave and half of the next fastest wave. It is hard to believe that adding together such a restricted set could create any wave, but it is true. Take my word for it.
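The transform-and-invert round trip described above can be sketched in a few lines of numpy (a minimal illustration with a made-up waveform of my own choosing; real JPEG/MP3 codecs are of course more involved):

```python
import numpy as np

# Sample one period of an arbitrary, messy periodic signal.
N = 256
t = np.arange(N) / N                                   # one period, N samples
signal = np.sign(np.sin(2 * np.pi * t)) + 0.3 * t**2   # not a nice sinusoid

# The discrete Fourier transform writes it as a sum of sines and cosines
# (complex exponentials); each coefficient is one complex number carrying
# an amplitude and a phase, at an integer multiple of the fundamental.
coeffs = np.fft.fft(signal)
freqs = np.fft.fftfreq(N, d=1.0 / N)   # 0, 1, 2, ... cycles per period

# Inverting the transform recovers the original waveform.
reconstructed = np.fft.ifft(coeffs).real
assert np.allclose(reconstructed, signal)
```

Dropping the small coefficients before inverting is the essence of the file-shrinking step: you keep only the waves that matter.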

Now in this world we have matter and energy. It turns out that what they have in common is that they can be looked at as vibrations. In fact, if we DON'T look at them as vibrations then nothing works. So as practical people we are more or less required to look at things this way.
------

So we start from there. Talk about the wave function and getting position probabilities and momentum probabilities, etc. from it, then go on to the two slit experiment and the sum-over-histories.

That's the way I would do it.

All this being said, he makes quite a good case that "QM is physics with a quadratic norm not a linear norm." So the first thing to do is teach 'em what a quadratic norm is. But negative probabilities? Nah.
 
  • #42
PatrickPowers said:
The one thing I would say is that quanta aren't really fundamental. Energy can have any value. If you look at a particle in isolation -- that seems like the most basic situation to me -- then the quantum thing does not arise.
Actually it does, the key point is that it is not energy that is the thing that is quantized, it is action. This is an important point that I think gets obscured a lot when people focus on the discrete energy levels of hydrogen, or the discrete bundles of energy we call photons, for example. It really leads us to think that quantum mechanics is about the quantization of energy. But it isn't-- it's about the quantization of action in little bundles of h. That's why many of the definitions offered above included the physics of characteristic actions of order h, or multiples of h.

So why is energy quantized in atoms and in photons if energy is not the thing that is quantized? It's because both atoms and photons provide a timescale, and thus the combination of energy and time turns into a quantization of energy when the time is constrained. For atoms, the time is essentially the orbital period of the electron, and for photons, it is the period of the wave mode. Thus the time constraints are borrowed directly from classical physics, and quantum mechanics just quantizes the action, and hence the energy ends up quantized as well. But free particles don't come with an intrinsic timescale, so there is no quantization of energy for them-- only quantization of the action. That's the deBroglie result-- that the product of the momentum and the wavelength (which is the action of the free particle) is h. Both the momentum and the wavelength are continuous variables, but their product is discrete because it is a constant for each particle.
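The deBroglie point above can be restated as a trivial numeric check (electron and proton values are my own illustrative choices; constants are rounded CODATA values):

```python
import numpy as np

h = 6.62607015e-34  # Planck's constant, J*s

# de Broglie: for a free particle, momentum * wavelength = h.
# Momentum and wavelength each vary continuously, but their
# product -- the action of the free particle -- is pinned to h.
particles = [
    (9.1093837015e-31, 1.0e6),   # electron mass (kg), speed (m/s)
    (1.67262192e-27, 1.0e3),     # proton mass (kg), speed (m/s)
]
for m, v in particles:
    p = m * v                    # momentum
    lam = h / p                  # de Broglie wavelength
    assert np.isclose(p * lam, h)
```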
 
  • #43
Ken G said:
Actually it does, the key point is that it is not energy that is the thing that is quantized, it is action. This is an important point that I think gets obscured a lot when people focus on the discrete energy levels of hydrogen, or the discrete bundles of energy we call photons, for example. It really leads us to think that quantum mechanics is about the quantization of energy. But it isn't-- it's about the quantization of action in little bundles of h. That's why many of the definitions offered above included the physics of characteristic actions of order h, or multiples of h.

So why is energy quantized in atoms and in photons if energy is not the thing that is quantized? It's because both atoms and photons provide a timescale, and thus the combination of energy and time turns into a quantization of energy when the time is constrained. For atoms, the time is essentially the orbital period of the electron, and for photons, it is the period of the wave mode. Thus the time constraints are borrowed directly from classical physics, and quantum mechanics just quantizes the action, and hence the energy ends up quantized as well. But free particles don't come with an intrinsic timescale, so there is no quantization of energy for them-- only quantization of the action. That's the deBroglie result-- that the product of the momentum and the wavelength (which is the action of the free particle) is h. Both the momentum and the wavelength are continuous variables, but their product is discrete because it is a constant for each particle.

Wow! Looks like I've got some more larning to do.
 
  • #44
Ken G said:
Read it more closely and give it another shot. It's actually a lot more insightful than you give it credit for (I can see that his writing style can risk rubbing the wrong way sometimes). Note he is not talking about probability amplitudes, he is really talking about negative probabilities. Because at the end of the day, all probabilities are real, and so the main difference between QM and CM at that level is the fact that every different way that something can happen in CM adds to the probability it really will happen, but sometimes can subtract from that probability in QM. The key is, by framing this argument as a probability argument, rather than an amplitude argument, it reveals its fundamental "quantum" nature-- for amplitudes are common in classical wave mechanics, but it can only refer to a discrete behavior when it is framed as a probability rather than just an amplitude. So yes, that's a probability amplitude, but what makes it QM is the "probability" part, not the "amplitude" part-- and that's why it is key that probability contributions be able to be negative.

Okay, I read it carefully. I think this is my "major quibble" with Aaronson's pedagogy: I see no reason to confuse the issue by calling amplitudes "probabilities". The actual probabilities are not negative. The actual amplitudes are not real. One is tempted to say that most of what he writes is either false or trivial -- starting with the first sentence. About 3/4 of the way through he finally gets around to pointing out that amplitudes are complex, after wasting a lot of time with "negative probabilities" [i.e. negative amplitudes] -- which as you imply show up in classical interference phenomena as well.

And FWIW his general approach seems backwards to me. His approach, in essence, is "Complex amplitudes have neat properties, and this neatness is why God chose them as the basis for quantum mechanics". Instead it seems more convincing to say we have a large set of phenomena that are well-described with a certain kind of mathematics, including complex numbers and group theory, as well as other tools. It seems to me that Aaronson is falling for a Platonist fallacy but I'd have to think more carefully about that.

And I'm really not convinced that interference per se is the defining "conceptual core" of quantum mechanics. We've known about wave interference for hundreds of years. Quantization of energy levels is not the defining feature of quantum mechanics, either. We've known about quantized systems for thousands of years; quantization is the basis of many musical instruments. It's why a guitar string has a fundamental frequency that dominates when the string is plucked, and it's why you can hear the harmonic when you then touch the string at the 12th fret. We've known about the mathematics of Sturm-Liouville eigensystems for over 150 years. And in many cases the energy eigenbasis is continuous, as it is for free particles, and yet the system remains quantum mechanical.

What Aaronson's treatment is missing is the connection to actual physics. And the actual physics of quantum systems is tied intimately to Planck's constant. An example is a Stern-Gerlach apparatus (which is what Sakurai uses in his foundational chapter). Shooting a thermalized beam of magnetic dipoles through an inhomogeneous magnetic field sorts them by the z-component of the magnetic moment. If we could imagine a Stern-Gerlach apparatus operating on classical dipoles, all we would see coming out would be a big Gaussian blob, since classical dipoles can have any value of z component. Only when the dipoles are "small" -- small in precisely the sense that their magnetic moment is proportional to hbar -- do the dipoles sort into smaller blobs reflecting the quantization of Sz. Without hbar, there is no quantum mechanics. Again I recommend Sakurai's discussion of the classical limit, where he shows that the phase of the wave function is equal to Hamilton's principal function (http://en.wikipedia.org/wiki/Hamilton%E2%80%93Jacobi_equation), divided by hbar.

Ken G said:
I think Aaronson's point is to be able to see some of the key ideas of QM as being things we could have thought of, perhaps even fairly easily, before there was even any experimental verification of any of it. Perhaps the "ket" concept has some ways it could have been anticipated also, but I haven't seen that case made-- I think Aaronson makes a nice case for the aspects of QM that could conceivably have been guessed, at least as possibilities to look into, before we even had classical physics (had we been better mathematicians back then, and had wider imaginations).

I think most of the ideas underlying quantum mechanics were thought of and invented before -- sometimes long before -- the contributions of Schrodinger and Heisenberg. Complex numbers, eigenvalue systems, wave equations, interference, vector spaces, Hilbert spaces, Lebesgue spaces, Lie groups, group representations, etc. were all invented before quantum mechanics. And I would add that much of classical optics is directly translated to quantum mechanics.

BBB
 
  • #45
jambaugh said:
Given it has a continuous energy spectrum which in principle can be measured (classically) to arbitrary precision it has infinite information content.

I don't see how that answers the question. A mole of gas at STP (using the NIST standard) has a temperature of 293.15 K and a pressure of 101325 Pa. That is, 293.1500000... and 101325.00000... Where is the infinite information content?

On the other hand, consider a quantum system: an electron has a magnetic dipole moment. This is usually quantified in terms of a "g factor", which has a recommended value of g = 2.002 319 304 361 53(53) for electrons (http://physics.nist.gov/cgi-bin/cuu/Value?gem|search_for=g+factor). In principle this value can be measured to arbitrary precision, and of course there is no reason to think the true value is rational, or even an algebraic irrational. And yet we use the techniques of quantum field theory to approximate g. This would seem to show a quantum mechanical system with infinite information content.


jambaugh said:
The discrete symmetry of lattice gauge theory is an approximation of the continuous symmetries one is modeling. LGT is an abstraction of e.g. finite elements methods for solving PDE's. But note that even in LGT the gauge symmetry is still continuous e.g. SU(3).

It's not the continuity of the gauge group that makes the system quantum mechanical. We can study lattice gauge theory for the discrete Z2 gauge group (http://thy.phy.bnl.gov/~creutz/z2/z2.ps), for example.

jambaugh said:
I think the "Quantum" in Quantum Mechanics is best expressed in simple terms as the finiteness of the amount of information which can be encoded in the system. Of course since "all is quantum" we will see the same limits to information content in classical systems once pragmatic considerations are taken into account. That's just QM peeking its head out from under the floorboards.

I think the "quantum" in quantum mechanics is there for historical reasons. Not every quantum mechanical system has discrete states. I suppose you could argue that quantum field theory is "beyond" mere quantum mechanics, but certainly by the time you get to quantum field theory you have to deal with the infinite variety of contributions to even the most fundamental processes.

BBB
 
  • #46
bbbeard said:
And I'm really not convinced that interference per se is the defining "conceptual core" of quantum mechanics. We've known about wave interference for hundreds of years.
But what he is talking about is actually quite a bit different from just wave interference, or quantization of guitar frequencies, he is really talking about negative probabilities (not negative probability amplitudes or even negative contributions to some classical wave energy flux). This is a key element of QM that is not in classical wave physics-- amplitudes of classical waves have to do with energy fluxes, but they are not probabilities because there are no quanta there to care about a probability concept.

What he is really doing is outlining a real-number analog to how complex probability amplitudes (not complex wave amplitudes) work. He is noting that the set of linear transformations on a unit circle (under the 2-norm) is the orthogonal transformations, which is true. Then he points out that if the coordinates can be complex, the orthogonal transformations generalize to the unitary transformations, which are the transformations of time advance in quantum mechanics. So the key point is the evolutionary mapping that is both linear and preserving of the unit 2-norm. There is nothing like the preservation of the 2-norm in classical wave mechanics, this is purely a probability concept and it would only have meaning for quanta. For classical waves we get a concept of flux conservation, but there are no discrete outcomes there to associate with probabilities, negative or otherwise.
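The claim that the time-advance maps are exactly the linear, 2-norm-preserving ones can be checked numerically (a minimal sketch; the random matrix and the dimension 3 are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary matrix U via QR decomposition of a complex matrix.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)
assert np.allclose(U.conj().T @ U, np.eye(3))    # U is unitary

# Any state vector keeps its 2-norm under U: the sum of |amplitude|^2
# stays 1, which is what lets us read those squares as probabilities.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = psi / np.linalg.norm(psi)                  # normalize: sum |psi_i|^2 = 1
assert np.isclose(np.linalg.norm(U @ psi), 1.0)
```

Restricting to real entries gives exactly the orthogonal transformations, which is the real-number analog described above.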

The guts of what he is saying is that quantum mechanics borrows the classical notion of destructive interference of wave amplitudes, and marries it with the classical notion of a sum over probabilities of ways that something discrete can happen, to create a totally new animal: the negative contribution to a probability. There is no such thing in classical probability theory and no such thing in classical wave theory, because those two never appear together anywhere in classical physics. The key example he describes is the tossing of the "quantum coin", where if you start with a "heads" and flip it (unitarily), you get some kind of new combination (superposition) of heads and tails that looks on the surface like a 50% chance of heads and 50% chance of tails, but it isn't-- it isn't because if you do the same (unitary) flip again, it doesn't stay at 50% heads and tails like classical coins do, it goes to 100% tails. That is just not something that classical probabilities are capable of doing, because they are never married with wave interference concepts.
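The quantum coin can be sketched with a 2x2 matrix (a minimal illustration; the 45-degree rotation is one concrete choice of unitary "flip" that reproduces the behavior described above):

```python
import numpy as np

c = 1 / np.sqrt(2)
U = np.array([[c, -c],
              [c,  c]])              # a unitary "coin flip": 45-degree rotation

heads = np.array([1.0, 0.0])         # start in the "heads" state

once = U @ heads                     # one flip
probs_once = np.abs(once)**2
assert np.allclose(probs_once, [0.5, 0.5])   # looks like a fair 50/50 coin...

twice = U @ once                     # ...but "flipping" the 50/50 state again
probs_twice = np.abs(twice)**2
assert np.allclose(probs_twice, [0.0, 1.0])  # gives tails with certainty
```

A classical 50/50 coin stays 50/50 under repeated flips; here the heads contributions cancel, which is the negative probability contribution at work.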

His way to summarize this strange behavior is "negative probabilities"-- not in the net, but in the summing of the contributions. I agree that such sums over cancelling contributions are routine in classical wave mechanics, but they are never probabilities there. That's why it has to be "quantum" mechanics to get this, and that is where "quantum weirdness" comes from as well-- the probability aspect, not the interference aspect. No one has any problem with two-slit diffraction of a wave, the problem is two-slit diffraction of a quantum, because quanta are supposed to be ruled by probabilities not waves. Aaronson is saying that when probabilities can also be negative, and so can exhibit interference, the weirdness goes away. So he is saying the only reason we think diffraction of a quantum is weird is because we had some prejudice against negative probabilities, and removing that prejudice makes quantum phenomena much easier to understand.
 
  • #47
PatrickPowers said:
The frequencies of the waves in that restricted set are always double the frequency of the next slowest wave and half of the next fastest wave. It is hard to believe that adding together such a restricted set could create any wave, but it is true. Take my word for it.
I'm not sure I should take your word for that-- I don't think it's right. When are Fourier transforms limited to a set of frequencies like 2^n? Usually, the frequencies are either continuous (as for nonperiodic functions on an infinite spatial domain) or fixed multiples of the integers, so n, not 2^n (as for periodic functions, or what is equivalent, functions on a finite spatial domain).
All this being said, he makes quite a good case that "QM is physics with a quadratic norm not a linear norm." So the first thing to do is teach 'em what a quadratic norm is. But negative probabilities? Nah.
I also like his stress on the quadratic norm, but if you look over my last post, you'll see why I think his stress on negative probabilities is essential.
 
  • #48
Ken G said:
...he is really talking about negative probabilities (not negative probability amplitudes or even negative contributions to some classical wave energy flux).

Perhaps I am being too concrete in trying to understand what you are saying. But the amplitude is not a probability. It lacks the essential characteristics of a probability. It is not real and non-negative. It does not sum to 1. So why call it a probability? A probability is a fraction of a sample space. A squared amplitude is a probability. You might as well say that the amplitude of a classical wave is an energy.

Ken G said:
This is a key element of QM that is not in classical wave physics-- amplitudes of classical waves have to do with energy fluxes, but they are not probabilities because there are no quanta there to care about a probability concept.

The wave function in quantum mechanics is directly linked to a mechanical flux as well. If you write down the current

[tex]
\mathbf{j} = -\frac{i\hbar}{2m}\left(\psi^* \nabla \psi - (\nabla \psi^*)\,\psi\right)
[/tex]

then it follows

[tex]
\int d^3x \, \mathbf{j}(\mathbf{x},t) = \frac{\langle \mathbf{p} \rangle_t}{m}
[/tex]

where ⟨p⟩_t is the expectation value of the momentum operator at time t.
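The current above can be checked numerically for a plane wave, where it should reduce to the classical flux p/m (a sketch in natural units, with ħ = m = 1 as illustrative choices):

```python
import numpy as np

hbar, m = 1.0, 1.0               # natural units (illustration only)
k = 2.0                          # wavenumber of the plane wave
x = np.linspace(0, 10, 100001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x)         # psi = e^{ikx}, a momentum eigenstate

# j = -(i hbar / 2m) (psi* grad psi - (grad psi*) psi), via finite differences
grad = np.gradient(psi, dx)
j = (-1j * hbar / (2 * m)) * (psi.conj() * grad
                              - np.gradient(psi.conj(), dx) * psi)

# For a plane wave the current equals the classical flux p/m = hbar*k/m
# (interior points only; np.gradient is less accurate at the endpoints).
assert np.allclose(j.real[1:-1], hbar * k / m, rtol=1e-4)
```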

Ken G said:
What he is really doing is outlining a real-number analog to how complex probability amplitudes (not complex wave amplitudes) work.

Except the real-number analog doesn't work. In the standard notation, even if you implement real-valued Sx and Sz, you wind up having to introduce complex numbers as soon as you try to formulate Sy.

Ken G said:
... So the key point is the evolutionary mapping that is both linear and preserving of the unit 2-norm. There is nothing like the preservation of the 2-norm in classical wave mechanics, this is purely a probability concept and it would only have meaning for quanta.

The "preservation of the 2-norm" in classical wave theory is equivalent to the conservation of energy, isn't it? Maybe I'm misunderstanding something here.

Ken G said:
For classical waves we get a concept of flux conservation, but there are no discrete outcomes there to associate with probabilities, negative or otherwise.

Well, first, "flux" is not a conserved quantity. Second, there are "discrete outcomes" in classical wave theory. Consider the eigenmodes of a plucked string, or the oscillations of gas in an organ pipe. Third, a system doesn't have to have a discrete eigenspectrum to be quantum mechanical.

Ken G said:
The guts of what he is saying is that quantum mechanics borrows the classical notion of destructive interference of wave amplitudes, and marries it with the classical notion of a sum over probabilities of ways that something discrete can happen, to create a totally new animal: the negative contribution to a probability. There is no such thing in classical probability theory and no such thing in classical wave theory, because those two never appear together anywhere in classical physics.

Of course there are "negative contributions to probability" in classical probability theory:

P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

Or consider a general bivariate normal distribution. The joint probability of P(x,y) can be greater or less than the P(x)P(y) if x and y have a non-zero correlation.
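The inclusion-exclusion identity can be verified by brute force on a toy sample space (the dice example is my own illustration):

```python
from itertools import product
from math import isclose

# Roll two dice; A = "first die is even", B = "sum is at least 9".
outcomes = list(product(range(1, 7), repeat=2))
A = {o for o in outcomes if o[0] % 2 == 0}
B = {o for o in outcomes if sum(o) >= 9}

def p(s):
    return len(s) / len(outcomes)

# The overlap term enters with a minus sign, yet every probability in
# sight stays non-negative -- unlike a quantum interference term, which
# can exceed the individual probabilities in magnitude.
assert isclose(p(A | B), p(A) + p(B) - p(A & B))
assert p(A & B) > 0
```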

Ken G said:
The key example he describes is the tossing of the "quantum coin", where if you start with a "heads" and flip it (unitarily), you get some kind of new combination (superposition) of heads and tails that looks on the surface like a 50% chance of heads and 50% chance of tails, but it isn't-- it isn't because if you do the same (unitary) flip again, it doesn't stay at 50% heads and tails like classical coins do, it goes to 100% tails. That is just not something that classical probabilities are capable of doing, because they are never married with wave interference concepts.

Again, it's not a probability until you marry the amplitude to itself. I.e. the amplitude can be rotated in state space but it's not a probability until you square the amplitude. The wave function of an electron is not a probability distribution; only the squared amplitude is.

And this is one place where I think Aaronson is missing the boat on "quantum weirdness". The wave function is much more than a "probability wave". Yes, quantum mechanics is interpreted in terms of probability. But if you apply an operator to a ket, i.e. if you make a measurement on a quantum state, you can get all sorts of strange outcomes. If the ket happens to be an eigenket of the operator, then you get a multiple of the ket, and the factor multiplying the ket turns out to be the expectation value of the operator for that state. But if the ket is not an eigenket, you get a linear combination of all the eigenkets of the operator. This is not a true statement for the probabilities.

Ken G said:
His way to summarize this strange behavior is "negative probabilities"-- not in the net, but in the summing of the contributions. I agree that such sums over cancelling contributions are routine in classical wave mechanics, but they are never probabilities there. That's why it has to be "quantum" mechanics to get this, and that is where "quantum weirdness" comes from as well-- the probability aspect, not the interference aspect. No one has any problem with two-slit diffraction of a wave, the problem is two-slit diffraction of a quantum, because quanta are supposed to be ruled by probabilities not waves. Aaronson is saying that when probabilities can also be negative, so can exhibit interference, the weirdness goes away. So he is saying the only reason we think diffraction of a quantum is weird is because we had some prejudice against negative probabilities, and removing that prejudice makes quantum phenomena much easier to understand.

I suppose I am jaded enough, or maybe just came to the game late enough, that I don't find quantum mechanics all that weird. But if "weird" is the word you want to use, then I'd say a lot of weirdness has not so much to do with the probabilistic aspect and more to do with how wave functions represent particles. If you write down Schrodinger's equation for a single electron, you get six degrees of freedom (three position and three momentum) (not counting spin). But if you write down the Schrodinger equation for N electrons, you find that there are 6N degrees of freedom that are antisymmetrized. All of a sudden you have to deal with a Slater determinant. I find that a lot "weirder" than the "probability wave"...
 
  • #49
bbbeard said:
But the amplitude is not a probability. It lacks the essential characteristics of a probability. It is not real and non-negative. It does not sum to 1.
But it does sum to 1, that's why it is a probability. And all the net probabilities he is talking about are non-negative. So he is talking about probabilities, it is just that the contributions to the net probabilities can be negative. That is what he sees as the crucial innovation of quantum mechanics, and that really is a lot of it.
The wave function in quantum mechanics is directly linked to a mechanical flux as well. If you write down the current

[tex]
\mathbf{j} = -\frac{i\hbar}{2m}\left(\psi^* \nabla \psi - (\nabla \psi^*)\,\psi\right)
[/tex]

then it follows

[tex]
\int d^3x \, \mathbf{j}(\mathbf{x},t) = \frac{\langle \mathbf{p} \rangle_t}{m}
[/tex]

where [itex]\langle \mathbf{p} \rangle_t[/itex] is the expectation value of the momentum operator at time t.
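To make the flux formula concrete, here is a small Python check (my own example, not from the thread) that the current of a plane wave exp(ikx) comes out to hbar*k/m, i.e. momentum per unit mass; the units hbar = m = 1 and the value of k are assumptions for the demo.

```python
import cmath

# units with hbar = m = 1 (an assumption for this demo); k is arbitrary
hbar, m, k = 1.0, 1.0, 2.5

def psi(x):
    # a plane wave exp(i k x); its current should be hbar*k/m * |psi|^2
    return cmath.exp(1j * k * x)

def dpsi(x, h=1e-6):
    # central-difference derivative of the wave function
    return (psi(x + h) - psi(x - h)) / (2 * h)

def current(x):
    p, dp = psi(x), dpsi(x)
    return (-1j * hbar / (2 * m)) * (p.conjugate() * dp - dp.conjugate() * p)

j = current(0.7)
print(j.real)  # ~2.5 = hbar*k/m
```

The combination psi* dpsi - (dpsi)* psi is 2i times an imaginary part, so the prefactor -i hbar/2m makes the current real, as the code confirms.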
But that isn't what sets quantum mechanics apart from classical physics, that much is quite similar to classical wave mechanics. What sets quantum mechanics apart is the "quantum" part, and that is also where the probability comes in-- you have to be able to talk about probabilities if you need to talk about what quanta do. You just don't need that with classical waves, so with waves it's all just amplitude interference, never negative probabilities. So the key difference is that quantum mechanics is invoking negative probabilities. Why don't we find it strange that opening a second slit can reduce the light intensity at a point, but we do find it strange that opening a second slit can make it less likely for a particle to show up at some point? It's because we think that particles should obey probabilities rather than amplitudes. Aaronson is saying we can still think that way-- we just overlooked negative probability contributions.
Except the real-number analog doesn't work. In the standard notation, even if you implement real-valued Sx and Sz, you wind up having to introduce complex numbers as soon as you try to formulate Sy.
That's why it is just an analogy. It extends to the complex numbers in a fairly straightforward way that he didn't want to get into in that simple description. All you have to do is say that if you have amplitudes a and b, and probability |a+b|^2, you can think of that as:
[tex]
|a+b|^2 = |a|^2 + |b|^2 + 2\,\mathrm{Re}(ab^*).
[/tex]
The first two contributions are the way independent probabilities add classically, but it is the third term that can be negative. Aaronson interprets that third term as a negative independent probability contribution, because it can be larger in magnitude than either of the first two terms (a feature that classical probabilities never have). The chances of a and b can be less than the chances of either a or b by themselves, so that's what he means by negative probability contributions.
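A three-line numeric check of that decomposition, with amplitudes I picked (arbitrarily) to interfere destructively:

```python
# two path amplitudes chosen so they cancel exactly
a, b = 0.6 + 0j, -0.6 + 0j

direct = abs(a) ** 2 + abs(b) ** 2    # how independent probabilities would add
cross = 2 * (a * b.conjugate()).real  # the interference term
total = abs(a + b) ** 2               # the actual net probability

print(direct, cross, total)  # ~0.72, ~-0.72, 0.0
```

Note the cross term (-0.72) is larger in magnitude than either |a|^2 = |b|^2 = 0.36, which is exactly the feature a classical overlap correction can never have.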
The "preservation of the 2-norm" in classical wave theory is equivalent to the conservation of energy, isn't it? Maybe I'm misunderstanding something here.
It is not quite equivalent, because it's a conservation of probability, not a conservation of energy. Conservation of classical probabilities doesn't involve interference, though conservation of wave energy does. There is a very important marriage of these two effects that is very much the distinguishing character of quantum dynamics.
Well, first, "flux" is not a conserved quantity.
Yes it is, you are referring to "flux density", which is not conserved. Those terms get used rather loosely.
Second, there are "discrete outcomes" in classical wave theory. Consider the eigenmodes of a plucked string, or the oscillations of gas in an organ pipe.
That is not what I mean about a discrete outcome. I'm not talking about the quantization of the frequencies of a guitar, which is not an example of a quantized action or a quantized energy, it is just an example of a quantized frequency, that's all. What I mean by a discrete outcome is a discrete set of possible outcomes for a quantum of some kind-- be it a particle or a quantum of action (i.e., whether first or second quantization).
Third, a system doesn't have to have a discrete eigenspectrum to be quantum mechanical.
Again, the discreteness I refer to is not the eigenvalue, it is the particle or the action being tracked. In short, it is the "quantum" in quantum mechanics.
Of course there are "negative contributions to probability" in classical probability theory:

P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
But note, that third term can never be larger than either of the first two. Thus, it is not really its own independent effect, it is merely the overlap of the first two effects. Aaronson is implying a kind of independence between the probabilities of a particle going through either of two slits, for example, so it is something much more than just subtracting the probability of going through both (if that were possible). For example, you could draw a green line and red line on the floor, which overlap slightly, and ask what is the probability of a classical particle crossing either one. That would be a result like yours above. But that isn't interference, it's just subtracting off the double-counting. Aaronson is saying that a quantum particle can have a probability of crossing the green line that actually cancels out part of the probability of crossing the red line, in a way that has nothing to do with double counting and indeed can give a zero result for the whole business. That's fundamentally quantum mechanical, even though it certainly borrows from the classical concept (as quantum mechanics generally does).
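The distinction can be checked directly: classically the subtracted overlap is bounded by each marginal, so the union is never less likely than either alternative alone, while amplitudes can drive the total below both. A quick Python sketch with my own toy stripes on [0, 1):

```python
import random

random.seed(0)

# classical particle: position uniform on [0, 1)
# A = crossing the "red" stripe, B = crossing the "green" stripe (they overlap)
in_A = lambda x: 0.2 <= x < 0.5
in_B = lambda x: 0.4 <= x < 0.7

n = 100_000
xs = [random.random() for _ in range(n)]
pA = sum(map(in_A, xs)) / n
pB = sum(map(in_B, xs)) / n
pAB = sum(in_A(x) and in_B(x) for x in xs) / n
pAorB = sum(in_A(x) or in_B(x) for x in xs) / n

assert pAB <= min(pA, pB)    # the overlap never exceeds either marginal...
assert pAorB >= max(pA, pB)  # ...so the union never drops below either one

# quantum amplitudes: the "union" CAN drop below both individual probabilities
a, b = 0.5 + 0j, -0.5 + 0j
assert abs(a + b) ** 2 < min(abs(a) ** 2, abs(b) ** 2)
```

The two classical assertions hold for any sample because A ∩ B is a subset of each event and A ∪ B is a superset; only the amplitude rule can violate the second bound.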
Again, it's not a probability until you marry the amplitude to itself. I.e. the amplitude can be rotated in state space but it's not a probability until you square the amplitude. The wave function of an electron is not a probability distribution; only the squared amplitude is.
Both Aaronson and I agree with that, it's not a problem.
And this is one place where I think Aaronson is missing the boat on "quantum weirdness". The wave function is much more than a "probability wave".
Ah, but that's his whole point. He is saying it is not much more than that, as long as you allow for a type of negative probability contribution. The probability of going through one slit and hitting a box on the wall, and going through the other slit and hitting a box on the wall, are indeed probabilities even in quantum mechanics-- they just don't have to add, they can subtract (what really happens is some of the probability contributions are complex, like ab* and a*b, but they add to something real, so their combination can be thought of as a single probability term; or one can go whole hog and talk about complex probabilities).
Yes, quantum mechanics is interpreted in terms of probability. But if you apply an operator to a ket, i.e. if you make a measurement on a quantum state, you can get all sorts of strange outcomes.
But he covers the completely general case when he talks about density matrices and unitary operations. The only bit he is sweeping under the rug is he tends to explain the situation only for real coordinates, rather than the full complex-coordinate description. But I see that as just a streamlining trick, what he is saying generalizes like orthogonal --> unitary for the evolution and symmetric --> Hermitian for the operators.
But if the ket is not an eigenket, you get a linear combination of all the eigenkets of the operator. This is not a true statement for the probabilities.
Yet what you are saying here is just what he was describing with that "quantum coin" analogy. It can all be said at the probability level, if one sees a machinery going on behind those probabilities. In other words, he's not really saying you get quantum mechanics from negative probabilities, he is saying you get quantum mechanics from a fairly natural behind-the-scenes mechanism just by not requiring it to deal only in positive contributing probabilities (with subtractions due only to double counting).
I suppose I am jaded enough, or maybe just came to the game late enough, that I don't find quantum mechanics all that weird. But if "weird" is the word you want to use, then I'd say a lot of the weirdness has not so much to do with the probabilistic aspect and more to do with how wave functions represent particles. If you write down Schrodinger's equation for a single electron, the wave function lives on three position coordinates (not counting spin). But if you write down the Schrodinger equation for N electrons, you find that the wave function lives on 3N coordinates and must be antisymmetrized. All of a sudden you have to deal with a Slater determinant. I find that a lot "weirder" than the "probability wave"...
I agree that multiple identical particles bring in a whole new level of quantum weirdness (as does entanglement of all kinds), but if we stick to single-particle descriptions, it is still a bit weird that wave mechanics is applicable. But I would say this type of "weirdness" is very much a holdover from classical thinking. I view quantum mechanics as the unification of classical particle and wave mechanics, and unification is usually not considered weird; what's weird is that no one was looking for a unification of those things (any more than they were looking for a unification of Maxwell and Newton when relativity came along, which is why relativity is often also considered weird). I think Aaronson is arguing QM is not that weird, what makes it weird is simply our classical prejudices like positive probabilities.
 
  • #50
Ken G said:
But it [A PROBABILITY AMPLITUDE] does sum to 1, that's why it is a probability.

Ken, is your statement correct? I think not:

1. Does a PA (being complex) sum to 1?

2. AFAIK, a PA is not a 'probability' in any sensible sense.

Ken G said:
I think Aaronson is arguing QM is not that weird, what makes it weird is simply our classical prejudices like positive probabilities.

AFAIK, positive probabilities (and never negative ones) occur in classical and quantum theory alike; no prejudice required?
 
  • #51
Gordon Watson said:
Ken, is your statement correct? I think not:

1. Does a PA (being complex) sum to 1?

2. AFAIK, a PA is not a 'probability' in any sensible sense.
That's right, that's why I am talking about probabilities not amplitudes. This is indeed the whole point-- amplitudes appear in mundane classical wave mechanics, what is different about quantum mechanics is:
1) it deals in probabilities
2) they can be negative (or even complex, though Aaronson tries hard to be able to talk about the phenomenon without ever mentioning complex numbers at all because he views it as an unnecessary complication to the basic issue).
Did you read his lecture?
 
  • #52
Ken G said:
That's right, that's why I am talking about probabilities not amplitudes. This is indeed the whole point-- amplitudes appear in mundane classical wave mechanics, what is different about quantum mechanics is:
1) it deals in probabilities
2) they can be negative (or even complex, though Aaronson tries hard to be able to talk about the phenomenon without ever mentioning complex numbers at all because he views it as an unnecessary complication to the basic issue).
Did you read his lecture?


In my opinion complex numbers are an unnecessary complication, but what they are representing is essential. I like the approach Feynman took in QED, where as far as I recall he never mentions complex numbers but instead uses the simple geometrical idea of a clock with one hand. One may model amplitudes and their cancellations in this simple and intuitive geometric way.
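The clock-hand picture is easy to play with in code. Below is a minimal sketch (mine, not Feynman's): each amplitude is a unit arrow at some angle, arrows add tip-to-tail, and the probability is the squared length of the resultant.

```python
import math

def hand(angle):
    # one "clock hand": a unit arrow at the given angle
    return (math.cos(angle), math.sin(angle))

def add(u, v):
    # arrows add tip-to-tail, i.e. componentwise
    return (u[0] + v[0], u[1] + v[1])

def prob(u):
    # probability = squared length of the resultant arrow
    return u[0] ** 2 + u[1] ** 2

aligned = prob(add(hand(0.0), hand(0.0)))       # hands agree: constructive
opposed = prob(add(hand(0.0), hand(math.pi)))   # hands oppose: cancellation
print(aligned, opposed)  # 4.0, ~0.0
```

Of course (cos θ, sin θ) is just e^{iθ} written as a pair of reals, which is the sense in which the arrows are complex numbers in disguise.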
 
  • #53
Ken G said:
That's right, that's why I am talking about probabilities not amplitudes. This is indeed the whole point-- amplitudes appear in mundane classical wave mechanics, what is different about quantum mechanics is:
1) it deals in probabilities
2) they can be negative (or even complex, though Aaronson tries hard to be able to talk about the phenomenon without ever mentioning complex numbers at all because he views it as an unnecessary complication to the basic issue).
Did you read his lecture?

Thanks Ken, but sorry: I don't get your point!

You say "That's right" -- which I take to mean that you agree that your statement is wrong -- but you then continue with another confusion, your 1) + 2) above: implying that PROBABILITIES can be negative?

In my view, QM deals with probabilities derived from probability amplitudes. Thus, for me, QM is an important extension of ordinary probability theory: Especially when you realize that any probability density may be represented as the absolute square of a complex Fourier polynomial.

Then, seeing no point in belaboring our differences, but to be clear about what I mean by a probability: The probability P(A|C) is the expected proportion of long-run experimental outcomes, under condition C, in which A occurs. It is an estimate of the relative frequency of A that will be revealed by measurements under C.

Since I never expect a negative proportion, and have never heard of one being measured, I'm confident that probability needs no confusing negativities.
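That operational definition is easy to simulate; in a made-up fair-coin example, the long-run proportion estimating P(heads) always lands in [0, 1]:

```python
import random

random.seed(1)

def relative_frequency(n):
    # proportion of heads in n fair-coin trials
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    f = relative_frequency(n)
    assert 0.0 <= f <= 1.0  # a measured proportion is never negative
    print(n, f)
```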

And Yes; I read and re-read the lecture with interest and fondness and pleasure and delight: seeing it compatible with the view that I've just expressed here.

With best regards,

Gordon
 
  • #54
bbbeard said:
I don't see how that answers the question. A mole of gas at STP (using the NIST standard) has a temperature of 293.15 K and a pressure of 101325 Pa. That is, 293.1500000... and 101325.00000... Where is the infinite information content?
Take an infinite signal, transcribe it to binary, and use the string of bits to represent a binary number 0.bit bit bit...; you then have a real value between 0 and 1. Given the classical assumption of continuous energy (or temperature, or pressure, or ...), it takes infinite information to represent an exact state. Of course, pragmatically we measure only up to some level of precision and generally assume some bounds on values. But it's the infinite number of classical states between any two values of a continuous observable that, in the idealization, lets us encode infinite information in a classical system.
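The bit-packing construction looks like this in Python (a sketch; a float of course truncates at 53 bits, which mirrors the finite-precision caveat):

```python
def encode(bits):
    # pack a bit string into a real number in [0, 1): binary 0.b1 b2 b3 ...
    x = 0.0
    for i, b in enumerate(bits, start=1):
        x += b * 2.0 ** -i
    return x

print(encode([1, 0, 1]))  # 0.625
```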
On the other hand, consider a quantum system: an electron has a magnetic dipole moment. This is usually quantified in terms of a "g factor", which has a recommended value (http://physics.nist.gov/cgi-bin/cuu/Value?gem|search_for=g+factor) of g = 2.002 319 304 361 53(53) for electrons. In principle this value can be measured to arbitrary precision, and of course there is no reason to think the true value is rational, or even an algebraic irrational. And yet we use the techniques of quantum field theory to approximate g. This would seem to show a quantum mechanical system with infinite information content.
I can express it with one symbol, g. The fact that it is a fixed constant precludes you using it to actually encode a signal. There's no information content to its value since it is not a variable.
It's not the continuity of the gauge group that makes the system quantum mechanical. We can study lattice gauge theory for the Z2 gauge group (http://thy.phy.bnl.gov/~creutz/z2/z2.ps), for example.
I'm not sure I see the quantum mechanics in this model. It describes a discrete computational model, not a theory of a physical system. If Z2 is a symmetry of some quantum system and doesn't act trivially, then there are at least two modes in the Z2 orbit, and by quantum theory there is thus at least an SU(2) symmetry incorporating all the superpositions of those two modes.

Mind you there are discrete symmetries e.g. CPT. But they occur as symmetries of the theory not of the system. Like Born duality (x <-->p).
I think the "quantum" in quantum mechanics is there for historical reasons. Not every quantum mechanical system has discrete states. I suppose you could argue that quantum field theory is "beyond" mere quantum mechanics, but certainly by the time you get to quantum field theory you have to deal with the infinite variety of contributions to even the most fundamental processes.
BBB
QFT is still within QM. The "2nd quantization" is better described as quantification (going from one to many). Now while one may argue for infinite information due to, e.g., the infinite bosonic Fock space and the infinite momentum spectrum of field quanta, you'll note these are sources of divergence in the theory. When the theory is regularized one typically has finite information content. E.g. a system of photons in a box with reasonable upper limits to energy (no black-hole formation, e.g.) is a finite-dimensional system. The infinities in QFT are there for pragmatic reasons within the calculus, not as a necessary fundamental assumption. (And I'm working on a paper suggesting bosonic fields should be quasi-bosonic, with a finite upper bound on particle number.)

But I did make one mistake in my attempt. I shouldn't have said "symmetry" I should have said "transformation group" or "relativity group". One can have a perfectly valid quantum system with absolutely no symmetries. But of course one always has at the very least the one parameter group of time translations generated by the Hamiltonian. I.e. the dynamics.

JB:DEF3=Quantum mechanics is the physics of systems with continuous transformation groups and finite information content.
 
Last edited by a moderator:
  • #55
I rather like the two axioms found in Ballentine's "Quantum Mechanics: A Modern Development"

1. To each dynamical variable there is a Hermitian operator whose eigenvalues are the possible values of the dynamical variable.

2. To each state there corresponds a unique state operator P that must be Hermitian, non-negative and of unit trace. The average <A> for a dynamical variable with operator A in the virtual ensemble of events that may result from the preparation procedure for that state is <A> = Tr(PA).

Of course that pins it to the ensemble interpretation which I am sure not everyone agrees with.

Stuff like Schrodinger's Equation is derived from Galilean Invariance.
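For what it's worth, axiom 2 is easy to check numerically. Here is a toy qubit example (my own numbers): the state operator is a diagonal mixture and the observable has eigenvalues +1 and -1.

```python
def matmul(X, Y):
    # 2x2 matrix product, enough for this toy example
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

P = [[0.75, 0.0], [0.0, 0.25]]   # Hermitian, non-negative, unit trace
A = [[1.0, 0.0], [0.0, -1.0]]    # observable with eigenvalues +1 and -1

avg = trace(matmul(P, A))
print(avg)  # 0.5 = (+1)(0.75) + (-1)(0.25)
```

Tr(PA) here just reweights each eigenvalue by its probability, which is the ensemble average the axiom describes.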

Thanks
Bill
 
  • #56
Gordon Watson said:
In my view, QM deals with probabilities derived from probability amplitudes. Thus, for me, QM is an important extension of ordinary probability theory: Especially when you realize that any probability density may be represented as the absolute square of a complex Fourier polynomial.

Bingo - for me as well.

Consider a system with N possible outcomes, each with probability Pi. Write them as a vector and expand in bra-ket notation: u = P1 |b1> + ... + PN |bN>. This is the set of vectors with non-negative entries that add up to one -- a perfectly good norm, but in physics you generally use the inner-product norm, so map each entry to sqrt(Pi) instead. You end up with the unit-norm vectors with non-negative entries in this basis, and Pi = |<u|bi>|^2/<u|u>. The first problem is that in physics no basis is special, so if we shift to another basis, legitimate vectors will have a different representation and we could tell what basis we are using. To get around that we extend the map to the entire vector space, which is easily done since Pi can be calculated from that formula for any vector. Secondly, notice the formula is invariant under the transformation u -> cu, where c is any complex number, which suggests it should be a complex vector space. And lastly, since no basis is special, we have to assume any orthonormal basis is a possible list of outcomes of some observation. That immediately leads to the superposition principle, and for any vectors a and b with b of unit norm, |<a|b>|^2/<a|a> gives the probability that observing a system in state a yields the outcome b.

It can be developed further by assigning values ai to the outcomes and defining a linear operator A by A|bi> = ai|bi>; then <A> = sum(ai Pi) = sum(ai <u|bi><bi|u>) = sum(<u|ai|bi><bi|u>) = sum(<u|A|bi><bi|u>) = <u|A|u>. But I think this is enough to get the idea across: it is simply a novel extension of probability theory, encoding probabilities in a vector space in a way that is independent of the basis.
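The sqrt(Pi) encoding can be sketched in a few lines (my example distribution and values, chosen arbitrarily):

```python
import math

probs = [0.1, 0.2, 0.3, 0.4]   # Pi: a classical distribution over four outcomes
values = [1.0, 2.0, 3.0, 4.0]  # ai: the value attached to each outcome

u = [math.sqrt(p) for p in probs]  # the unit-norm encoding of the distribution

recovered = [c ** 2 for c in u]    # Pi = |<u|bi>|^2 in the encoding basis
expectation = sum(a * c ** 2 for a, c in zip(values, u))  # <u|A|u>, A diagonal

print(expectation)  # ~3.0 = sum(ai * Pi)
```

The diagonal case is trivial by construction; the point of the post is that the same formulas keep working after rotating to any other orthonormal basis.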

Thanks
Bill
 
Last edited:
  • #57
Gordon Watson said:
You say "That's right" -- which I take to mean that you agree that your statement is wrong -- but you then continue with another confusion, your 1) + 2) above: implying that PROBABILITIES can be negative?
The "that's right" was in response to your unarguably true statement that probability amplitudes are different from probabilities. I then went on to say that both I and Aaronson are talking about probabilities, not probability amplitudes, although we are also talking about contributions to a net probability result. The amplitude comes prior to the 2-norm that Aaronson is talking about, but what is not interesting is that the amplitudes can be negative (which has a simple classical wave analog); what's interesting is that the probability contributions can be negative (which has no classical wave analog). And not just negative like a correction for double-counting of correlated outcomes, but fundamentally negative like the way interference works-- except it is interference in the probability contributions, not just fluxlike contributions as in classical wave interference.
In my view, QM deals with probabilities derived from probability amplitudes. Thus, for me, QM is an important extension of ordinary probability theory: Especially when you realize that any probability density may be represented as the absolute square of a complex Fourier polynomial.
I believe that is just exactly what Aaronson is saying when he talks about negative probabilities-- they are the kind of probabilities you can get from probability amplitudes. That is saying something very important about quantum mechanics, it is one of the key innovations, that probabilities can come from complex probability amplitudes (and so can be negative). If you don't want to mention the complex amplitudes, you just do the whole thing at the level of the negative probabilities. It's just a streamlined way to say the same thing as what you are saying is key about quantum mechanics.
Then, seeing no point in belaboring our differences, but to be clear about what I mean by a probability: The probability P(A|C) is the expected proportion of long-run experimental outcomes, under condition C, in which A occurs. It is an estimate of the relative frequency of A that will be revealed by measurements under C.

Since I never expect a negative proportion, and have never heard of one being measured, I'm confident that probability needs no confusing negativities.
But you are missing that Aaronson is not talking about negative net probabilities, he is talking about negative probability contributions in any sum of ways that A can occur under condition C. So it is just exactly the kind of probability that you mean, he is merely pointing out the key innovation that we are going to allow the contributing terms to be negative. In other words, if A can occur under condition C in way X, and in way Y, then A might be less likely to occur than in way X by itself-- if way Y corresponds to a negative probability. That's all he's saying, that's the central crux. He is trying to make it simple, on purpose.
And Yes; I read and re-read the lecture with interest and fondness and pleasure and delight: seeing it compatible with the view that I've just expressed here.
No one ever said it was incompatible with the view you just expressed, indeed that is the whole point. Probabilities that come from complex amplitudes can be negative, and the constraints on those probabilities demand that any net probability be real and non-negative, but the contributions can be negative-- and that is what never happened before quantum mechanics.
 
  • #58
bhobba said:
1. To each dynamical variable there is a Hermitian operator whose eigenvalues are the possible values of the dynamical variable.

2. To each state there corresponds a unique state operator P that must be Hermitian, non-negative and of unit trace. The average <A> for a dynamical variable with operator A in the virtual ensemble of events that may result from the preparation procedure for that state is <A> = Tr(PA).

Those are nice and economical. But even they seem to entangle the mathematical tools to some extent with the motivation and perspective of research and practice of QM. The tools may change with time or the situation, but what about the motivation of QM is worthy of definition?
 
  • #59
PhilDSP said:
but what about the motivation of QM is worthy of definition

QM is extensively used in applications such as for example understanding how transistors work.

Thanks
Bill
 
  • #60
PatrickPowers said:
In my opinion complex numbers are an unnecessary complication, but what they are representing is essential. I like the approach Feynman took in QED, where as far as I recall he never mentions complex numbers but instead uses the simple geometrical idea of a clock with one hand. One may model amplitudes and their cancellations in this simple and intuitive geometric way.

Yes it does - that turning arrow he talks about is complex numbers in disguise. But you are correct - it is not explicitly stated and gives a very nice intuitive picture of what is going on independent of the math and explains why complex numbers are used.

Thanks
Bill
 
