Exploring the Fundamental Postulates of QM: Are They Truly Ad-Hoc and Strange?

In summary, the fundamental postulates of quantum mechanics appear ad-hoc and strange compared to the postulates of other physical theories like special relativity and classical mechanics. However, these postulates are motivated by experiments and provide a consistent explanation for observed phenomena. While quantum mechanics may seem different from other theories, it has not been proven wrong and continues to accurately describe the behavior of matter.
  • #36
From The Age of Entanglement, Louisa Gilder, ISBN 978-1-4000-9526-1:

"Thus started the central debate of the conference, in which, as Mermin remembered, "[[John]] Bell claimed that in some deep way quantum mechanics lacked the naturalness that all classical theories possessed." There was no problem with the interpretation of classical physics. For example, as [[Kurt]] Gottfried granted Bell, "Einstein's equations tell you" their own interpretation. "You do not need him whispering in your ear." In quantum mechanics, on the other hand, Gottfried admitted, even the greatest classical physicist "would need help: 'Oh, I forgot to tell you that according to Rabbi Born, a great thinker in the yeshiva that flourished in Gottingen in the early part of the 20th century, [the amplitude-squared of the Schrodinger equation] is' " to be interpreted as a probability. despite all indications to the contrary. Bell felt it was obvious that something profound was missing from quantum mechanics; "Kurt [[Gottfried]]", Mermin said, "never felt this in his bones."

FWIW, I see two confusions. Bohr is the author of one: he attempted to force an interpretation - a philosophical orientation - onto the scientific community and beyond. We are still paying that price. Bohr lays down an epistemological Kantianism that divides "das Noumena" from "Phenomena" by stating that you can never get to "das Noumena" - the quantum particle - without the Phenomena of the equipment, AND that our language can never get to a scientific treatment of "What is really there" because language cannot in principle be used to describe "What goes on".

Hegelianism awaits at this point: "If the scientist cannot get to 'das Noumena', then there does not appear to be a 'das Noumena'." Science will consist of arguments over sentences describing classical world objects - ONLY!

The second confusion is more important. It is known as "Born's Interpretation" of the Schrodinger equation for a reason. Schrodinger himself appeared at times to be at a loss for what his own equation meant. I again quote from the John Clauser interview I cited in another post: "We have no idea how we got from Schrodinger's waves to Born's dots on the screen."

If we squeeze all "non-positive" space out of our analyses, do we lose something? I think so:
Consider a "Toy Universe" where a "Positive Vacuum Value X^Y" is matched by a "Negative Vacuum Value X^ -Y". We might begin an analysis with a statement: "X^Y times X^ -Y = 1". OK. Fine. All upfront.

Suppose that, without knowing it, we made use of a mathematical definition that took the absolute value of an exponent; we would then have "X^Y times X^|-Y| = X^(2Y)". We might eventually get to a statement such as, "This implies that the magnitude of the cosmological constant must be smaller than 1/(10^23 kilometers)^2. Our theoretical estimate suggesting a magnitude greater than 1/(1 kilometer)^2 is incorrect by, at the very least, an astonishing factor of 10^46." Larry Abbott, "The Mystery of the Cosmological Constant," Scientific American, May 1988.
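A toy numerical version of that slip (X and Y are arbitrary illustrative values I picked; this is just to show the size of the error such a sign-flip produces):

[code=python]
# Toy bookkeeping slip: X^Y * X^(-Y) vs. X^Y * X^|-Y|  (values are arbitrary).
X, Y = 10.0, 3.0

correct = X**Y * X**(-Y)       # X^0 = 1
slipped = X**Y * X**abs(-Y)    # X^(2Y): the "negative" factor got flipped

print(correct)   # 1.0
print(slipped)   # 1000000.0, i.e. off by a factor of X^(2Y)
[/code]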

If Guth's cosmic inflation is true, does it work in reverse? Could there be a Guthian "Big Crunch"? It would mean that in the last moments of the previous universe, spacetime collapsed in volume down to the unification scale, by a factor of roughly 10^50. Did matter follow?

Philosophy has taken a few hits on this site recently and I think the response should be, "Not Guilty!" Bohr is overbearing at times, a person you would eventually steer away from at a party. Born was doing great things with the tools that were available to him. At the same time, others were chafing at making "an interpretive use" into the "only way to see it".

We are still living with this today. I defer to ZapperZ and Dr. Chinese, who have an understanding greater than mine. THEIR use of language and math has not been hampered by The Restricted Categories of the Understanding.

Good for them!

CW
 
  • #37
HomogenousCow said:
so just as I said then
Yes, close enough.

HomogenousCow said:
does the schroedinger equation also arise from this?
It does if we include the requirement that the C*-algebras we consider must have a subalgebra that corresponds to translations in time. (Or something like that; I don't know how to make the statement exactly right).
 
  • #38
micromass said:
Apparently, in QM we need both pure states and other states. But in Classical Mechanics, we can get by with just the pure states. The pure states already determine the entire phase space ##X##. The other states naturally induce probability distributions on ##X##, but they don't seem necessary. Is there some interpretation in CM for states that are not pure states? Are they somehow needed in CM?
I really don't get what you mean here. In QM we don't need mixed ensembles. We use them for the same reason we use mixed ensembles in CM. The only special thing about CM is that a pure ensemble is pretty boring: it will always give you the same measurement value, while a pure QM ensemble will generally give a different value each time.

Edit: I probably didn't explain myself enough, and I know it is annoying when people do that. In QM, I would think that we don't need mixed ensembles for the same reason we don't need mixed ensembles in CM. In both cases, the idea of a mixed ensemble doesn't contain any extra physics (any more than the pure states involved). We are just assigning probability (in the conventional sense) to certain states. You can interpret this as our lack of information, or, in the frequentist philosophy, as the fraction of systems out of a large bunch of systems, in the limit of a very large number of systems.
 
  • #39
micromass said:
Apparently, in QM we need both pure states and other states. But in Classical Mechanics, we can get by with just the pure states. The pure states already determine the entire phase space ##X##. The other states naturally induce probability distributions on ##X##, but they don't seem necessary. Is there some interpretation in CM for states that are not pure states? Are they somehow needed in CM?
I'm not sure if they're needed, but it's not hard to think of a situation where a mixed state can be used. Consider two arbitrary pure states ##s_1,s_2##. Flip a coin to decide whether to prepare the system in state ##s_1## or state ##s_2##, and then tell your experimentalist friend that you did that, but don't tell him the result of the flip.

The experimentalist would now be correct to think of the system as being in a mixed state. Of course, he doesn't have to think about it in those terms. He can just think about it in terms of pure states and basic probability theory. So I don't think I can say that the concept of mixed states is really needed here. But maybe there are better examples.
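To make the example concrete, here's a toy version (classical phase-space points and an invented observable; purely illustrative):

[code=python]
import random

# Two arbitrary "pure states": here, classical phase-space points (x, p).
s1 = (0.0, 1.0)
s2 = (2.0, -3.0)

def prepare():
    """Flip a fair coin and prepare s1 or s2; the experimenter isn't told which."""
    return s1 if random.random() < 0.5 else s2

def observable(state):
    """Any function of the pure state will do; take p squared."""
    x, p = state
    return p**2

# The experimenter's "mixed state" is just the weights {s1: 1/2, s2: 1/2};
# its expectation value is the weighted average over the mixture.
expected = 0.5 * observable(s1) + 0.5 * observable(s2)
sampled = sum(observable(prepare()) for _ in range(100_000)) / 100_000
print(expected, sampled)  # agree up to sampling noise
[/code]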
 
  • #40
HomogenousCow said:
I find the fundamental postulates of QM very ad-hoc and strange.
Compare them to the fundamental postulates of special relativity: special relativity naturally arises out of classical electromagnetism and the equivalence of all inertial frames, while QM seems to come out of nowhere.

Everything in life, including science, is a matter of subjectivity. You like that, you don't like that, what's not intuitive to you is for someone else. The more subjective you are, the less you know about a certain topic. GR and Quantum Mechanics are both equally valid generalizations of classical mechanics. Formulate classical mechanics in such a way that both GR and QM are the natural extensions of it. If you think that CM is F=ma + forces add together like vectors and the force of 1 on 2 is the opposite of the force of 2 on 1, then you're not ready for GR, not ready for QM and seeing the (diluted or not) axiomatization of QM would make you say: <This is weird. Where did it come from?>.

Advice: read more and don't be afraid of mathematics. The more math you know, the more logical advanced physics will look to you.
 
  • #41
dextercioby said:
Advice: read more and don't be afraid of mathematics. The more math you know, the more logical advanced physics will look to you.

I don't think that's completely true. It's partly a personality difference, but there are physicists who know quite a bit of the mathematics behind quantum mechanics who still think that it's strange and a bit mysterious. Feynman famously said "I think I can safely say that nobody understands quantum mechanics", and I don't think that can be attributed to his unfamiliarity with the advanced mathematics involved.

There is very difficult mathematics involved in General Relativity, too, but I personally don't find it very mysterious, in spite of its difficulty. It's not the difficult mathematics.

Some people have also suggested that it's the intrinsic nondeterminism in quantum mechanics that's bothersome. I don't think that's true, either. It's true that classical mechanics is deterministic, but I don't think that there's anything conceptually difficult about assuming the existence of random processes--perfect coin flips--whose outcomes are not determined.

No, I think that what strikes some people as weird about quantum mechanics is that it gives such a prominent role to the concept of an "observable". The C* algebras in this regard succeed in making classical mechanics sound as weird as quantum mechanics, rather than making quantum mechanics as intuitive as classical mechanics.

Now, of course science is concerned with observation, but I don't think that's the same thing as being about observables. The weird thing about making an observable a separate kind of object in quantum mechanics (or classical mechanics in the C* algebra approach) is that observers are themselves physical systems, and observations are themselves physical interactions between systems. In my opinion, a satisfying formulation of quantum mechanics would not postulate the existence of observables. It would describe how physical systems interact, and the properties of measuring devices would be special cases derivable from the general case.
 
  • #42
dextercioby said:
If you think that CM is F=ma + forces add together like vectors and the force of 1 on 2 is the opposite of the force of 2 on 1, then you're not ready for GR...
you mean it is important to learn stuff like Hamiltonian vector fields, canonical transformations, action integrals, that kind of stuff?
 
  • #43
Another related point. In the C* algebra formalism, the difference between quantum mechanics and classical mechanics is noncommutativity of observables. Now, classical mechanics has its share of noncommutativity; the matrices used to describe rotations and boosts are noncommutative. Noncommutativity is to be expected for nontrivial operators. But the weird thing about quantum mechanics is the association between operators and observables. Why should an observation have anything to do with operators? Once again, if an observation is just a physical interaction between one system (a scientist, or one of his measuring devices) and another system (an electron, say), why isn't it just described by the evolution of the two systems? Why do operators come into play?
 
  • #44
BruceW said:
you mean it is important to learn stuff like Hamiltonian vector fields, canonical transformations, action integrals, that kind of stuff?

Learning these things before learning QM makes the mathematics more familiar, but the physical situation is still very strange.
 
  • #45
stevendaryl said:
[...] No, I think that what strikes some people as weird about quantum mechanics is that it gives such a prominent role to the concept of an "observable". The C* algebras in this regard succeed in making classical mechanics sound as weird as quantum mechanics, rather than making quantum mechanics as intuitive as classical mechanics.

Now, of course science is concerned with observation, but I don't think that's the same thing as being about observables. The weird thing about making an observable a separate kind of object in quantum mechanics (or classical mechanics in the C* algebra approach) is that observers are themselves physical systems, and observations are themselves physical interactions between systems. In my opinion, a satisfying formulation of quantum mechanics would not postulate the existence of observables. It would describe how physical systems interact, and the properties of measuring devices would be special cases derivable from the general case.

I agree with this.
The fact that the universe is nondeterministic at the basic level is something I can accept, but the way this is implemented in QM strikes me as odd and bothersome.
I think the question that captures my problem with the whole thing is: where does the indeterminism stop? Observers in QM seem to have to be classical and macroscopic in order for everything to make sense; the probabilities have to stop somewhere, or else the whole concept of a frame of reference seems to be ill-defined.
It seems that the more I think about it, the more QM seems like an effective theory which acts as a bridge between the classical world and the quantum world.
 
  • #46
stevendaryl said:
But the weird thing about quantum mechanics is the association between operators and observables. Why should an observation have anything to do with operators? Once again, if an observation is just a physical interaction between one system (a scientist, or one of his measuring devices) and another system (an electron, say), why isn't it just described by the evolution of the two systems? Why do operators come into play?
I can't answer the part about why it's not just "evolution of the two systems", other than by saying that what these approaches have in common is that they can be justified by arguments based on the idea that a theory of physics assigns probabilities to measurement results. They're not based on requirements about how the theory is supposed to be a description of what's happening, or anything like that. They're just based on the idea of falsifiability.

Note also that the algebraic approach doesn't assume that observables are operators. They are introduced as equivalence classes of measuring devices, and then you define some algebraic operations on the set of observables that turn it into a C*-algebra. Some of them are very natural. For example, for each real number r and each observable A, there should be an observable rA that corresponds to a measuring device like this: Take any measuring device from the equivalence class A, and add a component that multiplies the result by r (so that the result of the measurement will be r times what it would be without the modification). Let rA be the equivalence class that contains the modified device.

Some of the other operations are probably just technical assumptions meant to give us something we can work with at all. But my point is that the observables are not defined as operators. The association with operators comes from a theorem about *-homomorphisms from C*-algebras into the set of bounded linear operators on a Hilbert space.

I think that even if we don't use the algebraic approach, there's a case to be made for why observables (equivalence classes of measuring devices) should be represented by operators. Unfortunately it's not clear enough in my head that I can put it together right now. I think that I would try to argue that observables should correspond to projection-valued measures, and then refer to the spectral theorem to associate them with self-adjoint operators. But I'm definitely in over my head here, so don't take this too seriously.
 
  • #47
Fredrik said:
I can't answer the part about why it's not just "evolution of the two systems", other than by saying that what these approaches have in common is that they can be justified by arguments based on the idea that a theory of physics assigns probabilities to measurement results.

It seems a little weird to me that this is what a theory of physics should be about, because "measurement" is not fundamentally different from any other kind of interaction. We just interpret certain interactions as being "measurements" when they result in a strong correlation between a persistent record (photograph, bits on a hard drive, marks on paper, etc.) and some aspect of the universe that we are interested in measuring. The fact that it is a measurement is the significance that WE put on the interaction, but it seems weird to me to think that physics cares whether something is a measurement or not.

Fredrik said:
Note also that the algebraic approach doesn't assume that observables are operators. They are introduced as equivalence classes of measuring devices, and then you define some algebraic operations on the set of observables that turn it into a C*-algebra.

Well, if you don't think of them as operators, then it seems to me that turning them into a C* algebra is a strange thing to want to do. We have algebraic operations on observables such as:
If [itex]f[/itex] is an observable and [itex]g[/itex] is an observable, then [itex]f g[/itex] is an observable. But what does it mean to multiply observables? It makes sense to multiply the results of two observations (interpreted as giving real numbers), but what is the meaning of multiplying the observables themselves?

Fredrik said:
Some of them are very natural. For example, for each real number r and each observable A, there should be an observable rA that corresponds to a measuring device like this: Take any measuring device from the equivalence class A, and add a component that multiplies the result by r (so that the result of the measurement will be r times what it would be without the modification). Let rA be the equivalence class that contains the modified device.

Some of the other operations are probably just technical assumptions meant to give us something we can work with at all. But my point is that the observables are not defined as operators. The association with operators comes from a theorem about *-homomorphisms from C*-algebras into the set of bounded linear operators on a Hilbert space.

Well, I think the one that is hard to interpret is multiplication of two observables. What does it mean? People informally say that [itex]A B[/itex] means "first measure B, then measure A", but it's a little strange to interpret that as multiplication, because those two measurements take place at slightly different times.

Fredrik said:
I think that even if we don't use the algebraic approach, there's a case to be made for why observables (equivalence classes of measuring devices) should be represented by operators.

What makes a device a "measuring device"? It seems to me that it is the theory itself that tells us that under certain circumstances a microscopic fact (that an electron has spin-up along a certain axis) results in a persistent macroscopic fact (that a dot on a photographic plate appears on the left-hand side, rather than the right-hand side, or whatever).

Also, what is the notion of "equivalence" here? Two devices are equivalent if they ... what? Measure the same observable? That's a little circular, but what notion of equivalence are we supposed to be using?

Fredrik said:
Unfortunately it's not clear enough in my head that I can put it together right now. I think that I would try to argue that observables should correspond to projection-valued measures, and then refer to the spectral theorem to associate them with self-adjoint operators. But I'm definitely in over my head here, so don't take this too seriously.
 
  • #48
Fredrik said:
They [observables] are introduced as equivalence classes of measuring devices, and then you define some algebraic operations on the set of observables that turn it into a C*-algebra.

In my opinion, calling an observable of a C*-algebra an "equivalence class of measuring devices" is more suggestive than rigorous. I would think that if one really wanted to seriously talk about equivalence classes, then one would have to

  1. Define what a "measuring device" is.
  2. Define an equivalence relation on measuring devices.
  3. Define the operations on measuring devices (addition, multiplication, scaling, or whatever).
  4. Prove that the equivalence relation is a congruence with respect to those operations.

I don't think you can really do that in a noncircular way, because to make sense of the claim that a particular device is a measuring device for the z-component of spin angular momentum of some particle, you would need to assume some kind of dynamics whereby the device interacts with the particle so that its state evolves to a persistent record of the z-component of the spin angular momentum. You need to have a theory of interactions before you can ever know that something is a measuring device. So it's a bit weird to put in equivalence classes of measuring devices at the beginning, as opposed to having them come out of the theory.
 
  • #49
stevendaryl said:
It seems a little weird to me that this is what a theory of physics should be about,
A set of statements about the real world must be falsifiable in order to be considered a theory of physics, and to be falsifiable, it must (at least) assign probabilities to possible results of experiments. This appears to be the absolute minimum requirement. This is why all theories involve probability assignments to results of measurements. It's not that measurements are fundamentally different from other interactions. It's just that this is part of what we mean by the word "theory".

stevendaryl said:
Well, if you don't think of them as operators, then it seems to me that turning them into a C* algebra is a strange thing to want to do. We have algebraic operations on observables such as:
If [itex]f[/itex] is an observable and [itex]g[/itex] is an observable, then [itex]f g[/itex] is an observable. But what does it mean to multiply observables?
I checked my copy of Strocchi, and the argument is quite complicated and ends with a comment that it shouldn't be considered a proof that we must use C*-algebras.

It certainly looks like multiplication is by far the algebraic operation that's the hardest to justify. All the others are fairly easy to justify.

stevendaryl said:
Also, what is the notion of "equivalence" here? Two devices are equivalent if they ... what? Measure the same observable? That's a little circular, but what notion of equivalence are we supposed to be using?
Something like this:

Let E(A|s) denote the theory's prediction for the average result of a long series of measurements using measuring device A on objects of the type that the theory is about (e.g. electrons) that have all been subjected to the same preparation procedure s just before the measurement.

Two measuring devices A and B are said to be equivalent if E(A|s)=E(B|s) for all preparation procedures s.
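In toy form (all numbers invented for illustration), that relation looks like this:

[code=python]
# Toy version: a "measuring device" is summarized by the average result E(A|s)
# it gives for each preparation procedure s.
preparations = ["s1", "s2", "s3"]

device_A = {"s1": 0.0, "s2": 0.5, "s3": 1.0}
device_B = {"s1": 0.0, "s2": 0.5, "s3": 1.0}  # agrees with A everywhere
device_C = {"s1": 0.0, "s2": 0.7, "s3": 1.0}  # disagrees on s2

def equivalent(d1, d2):
    """d1 ~ d2 iff E(d1|s) = E(d2|s) for all preparations s."""
    return all(d1[s] == d2[s] for s in preparations)

print(equivalent(device_A, device_B))  # True: same observable
print(equivalent(device_A, device_C))  # False
[/code]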
 
  • #50
Fredrik said:
A set of statements about the real world must be falsifiable in order to be considered a theory of physics, and to be falsifiable, it must (at least) assign probabilities to possible results of experiments. This appears to be the absolute minimum requirement. This is why all theories involve probability assignments to results of measurements. It's not that measurements are fundamentally different from other interactions. It's just that this is part of what we mean by the word "theory".

But the C*-algebra approach sure seems to single out measurements (or observables) as being something different. As I said, the fact that some interaction is a measurement of some quantity is not what you start with, it's a conclusion. There's a long chain of deductions involved in reaching that conclusion. It seems weird that measurements, which are complicated, macroscopic phenomena very indirectly connected with the microscopic phenomena being described by theory, should appear in the theory as the fundamental objects of interest. That just seems like a bizarre mismatch between the elegance and simplicity of the observables in C*-algebra and the complexity and messiness of actual measuring devices.

Clearly, there's abstraction and/or idealization going on, but what is the nature of this idealization?
 
  • #51
stevendaryl said:
In my opinion, calling an observable of a C*-algebra an "equivalence class of measuring devices" is more suggestive than rigorous. I would think that if one really wanted to seriously talk about equivalence classes, then one would have to

  1. Define what a "measuring device" is.
  2. Define an equivalence relation on measuring devices.
  3. Define the operations on measuring devices (addition, multiplication, scaling, or whatever).
  4. Prove that the equivalence relation is a congruence with respect to those operations.

I don't think you can really do that in a noncircular way, because to make sense of the claim that a particular device is a measuring device for the z-component of spin angular momentum of some particle, you would need to assume some kind of dynamics whereby the device interacts with the particle so that its state evolves to a persistent record of the z-component of the spin angular momentum. You need to have a theory of interactions before you can ever know that something is a measuring device. So it's a bit weird to put in equivalence classes of measuring devices at the beginning, as opposed to having them come out of the theory.


A full definition of a theory must include statements that tell us how to interpret the mathematics as predictions about measurement results. These statements, called "correspondence rules", must tell us what sort of devices we're supposed to use. This is where things get complicated.

Let's say that we want to write down the correspondence rules for (say) the theory of classical point particles in Minkowski spacetime. One of the rules must specify what a clock is. This is a problem. We can't just say that a clock is a device that measures time, because "time" is defined by the theory we're trying to define. The solution is to define a clock by explicit instructions on how to build one.

In principle those instructions can be written so that they can be followed by people who don't know any physics at all, but I can't even imagine what they would look like written that way.

This is still pretty weird, because the best clocks are designed using SR, QM and a lot more. I'm not sure we absolutely need to address that issue, but I see one way that it can be addressed: We define a hierarchy of theories. In the level-0 theories, we use very simple descriptions of measuring devices. Then for each positive integer n, when we define the level-n theories, we make sure that the instructions in the correspondence rules can be understood by people who understand level-(n-1) theories and have access to level-(n-1) measuring devices.

As you can see, this is all really complicated, and this is just a discussion of what it takes to completely write down the definition of a good theory (something that certainly has never been done). But I think it's clear that we can at least avoid circularity in the definition of the theory.

I have to go, so I don't have time to address the issue of circularity in the algebraic approach. Maybe later. (I don't think there is any circularity there).
 
  • #52
When Schrodinger posited his wavefunction, it came at a time when physical theory was transitioning from physical Newtonian space into "metaphysical" Hilbert space. Heisenberg was only interested in developing a matrix algebra of observed quantities. One of the uses of the wavefunction is to take its solutions in three dimensions as the "orbitals" of electrons. This is the exact same concept as the Bohr model, except that the "orbitals" of the wavefunction are quite a bit more convoluted.

Both the Bohr and Schrodinger models can be viewed as classical in that they are both (at least in theory) represented as existing in classical space. This is what it means for something to be a "model". Heisenberg was always adamant that his ideas should never have any connection to the classical spacetime models of pre-20th-century physical theory.

I think what makes things so strange is simply that connections were eventually made between the Heisenberg and the Schrodinger "ontologies". As Charles mentioned above, Bohr was the great philosopher who made it his mission to get everyone to put aside their personal pride for the sake of the larger goal of getting a cohesive vision on the table. Born was able to get rid of the classical model by squaring the wavefunction. And Dirac finally gave everything a formal language with his new bra-ket notation.

This is all quite a lot for mere mortals to put into proper perspective...
 
  • #53
stevendaryl said:
In my opinion, calling an observable of a C*-algebra an "equivalence class of measuring devices" is more suggestive than rigorous.
More suggestive than rigorous...yes, I suppose so. If we want to do it rigorously, we must start by stating a definition of "theory" that's general enough to include all the classical and all the quantum theories. We can then define terms like "state" and "observable" in a way that's both rigorous and theory-independent (in the sense that the same definition applies to all the classical theories, all the quantum theories, and more).

I spent some time thinking about how to do these things a couple of years ago. I didn't keep at it long enough to work everything out, but I feel very strongly that it can be done. The first step is to provide some motivation for a general definition of "theory". This is of course impossible to do rigorously, but the main ideas are very simple and natural. (Actually, what we want to define here isn't a theory of physics in the sense of my previous posts in this thread. It's just the purely mathematical part of such a theory, not including any correspondence rules. So maybe we should use some other term for it, but "theory" will have to do in this thread).

The idea that I used as the starting point is that a theory must be able to assign probabilities to statements of the form
"If you use the measuring device [itex]\delta[/itex] on the object [itex]\pi[/itex], the result will be in the set [itex]E[/itex]".​
These statements can be identified by the triples ##(\delta,\pi,E)##. This means that associated with each theory, there are sets ##\Delta,\Pi,\Sigma## and a function
$$P:\Pi\times\Delta\times\Sigma\rightarrow[0,1],$$ such that the maps ##E\mapsto P(\delta,\pi,E)## are probability measures. Note that this implies that ##\Sigma## is a σ-algebra. I call elements of the set ##\Delta## "measuring devices" and elements of the set ##\Pi## "preparations".

After these simple observations and conjectures, we are already very close to being able to write down a definition that we can use as the starting point for rigorous proofs. There are some subtleties that we have to figure out how to deal with before we write down a definition, like what happens if the measured object ##\pi## is too big to fit in the measuring device ##\delta##? I'm not going to try to work out all such issues here, I'm just saying that they look like minor obstacles that are unlikely to prevent us from finding a satisfactory definition.

Now let's jump ahead a bit and suppose that we have already written down a satisfactory definition, and that the sets and functions I've mentioned are a part of it. Then we can use the function P to define equivalence classes and terms like "state" and "observable". This function implicitly defines several others, like the maps [itex]E\mapsto P(\pi,\delta,E)[/itex] already mentioned above. We will be interested in the functions that are suggested by the following notations:
\begin{align}
P(\pi,\delta,E)=P_\pi(\delta,E)=P^\delta(\pi,E)=P_\pi^\delta(E)
\end{align} We use the [itex]P_\pi[/itex] and [itex]P^\delta[/itex] functions to define equivalence relations on [itex]\Pi[/itex] and [itex]\Delta[/itex]: [tex]\begin{align*}
&\forall \pi,\rho\in\Pi\qquad &\pi \sim \rho\quad &\text{if}\quad P_\pi=P_\rho\\
&\forall \delta,\epsilon\in\Delta\qquad &\delta \sim \epsilon\quad &\text{if}\quad P^\delta=P^\epsilon
\end{align*}[/tex]
The sets of equivalence classes are denoted by [itex]\mathcal S[/itex] and [itex]\mathcal O[/itex] respectively. The members of [itex]\mathcal S=\Pi/\sim[/itex] are called states, and the members of [itex]\mathcal O =\Delta/\sim[/itex] are called observables. The idea behind these definitions is that if two members of the same set can't be distinguished by experiments, the theory shouldn't distinguish between them either.
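To make this concrete, here is a finite toy theory (all probabilities invented) in which the states fall out of the function P alone, exactly as in the definitions above:

[code=python]
# Finite toy theory, everything invented: results are {0, 1}, so the whole
# function P is fixed by p = P(pi, delta, {1}).
preparations = ["pi1", "pi2", "pi3"]
devices = ["d1", "d2"]

P = {
    ("pi1", "d1"): 0.2, ("pi1", "d2"): 0.9,
    ("pi2", "d1"): 0.2, ("pi2", "d2"): 0.9,  # pi2 indistinguishable from pi1
    ("pi3", "d1"): 0.7, ("pi3", "d2"): 0.9,
}

def same_state(p1, p2):
    """pi ~ rho iff P_pi = P_rho, i.e. no device can tell them apart."""
    return all(P[(p1, d)] == P[(p2, d)] for d in devices)

states = []  # equivalence classes of preparations
for p in preparations:
    for cls in states:
        if same_state(p, cls[0]):
            cls.append(p)
            break
    else:
        states.append([p])

print(states)  # [['pi1', 'pi2'], ['pi3']]: two states from three preparations
[/code]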

stevendaryl said:
I would think that if one really wanted to seriously talk about equivalence classes, then one would have to

  1. Define what a "measuring device" is.
As you can see, I don't agree with this point. We only have to define the term "theory" in such a way that every theory is associated with a set whose elements we can call "measuring devices".

If we continue along the lines I've started above, we will not automatically end up with C*-algebras. What I'm describing can (probably) be thought of as a common starting point for both the algebraic approach and the quantum logic approach. So we can proceed in more than one way from here. We can define operations on the set of observables that turn it into a normed vector space, and then think "wouldn't it be awesome if this is a C*-algebra?", or we can keep messing around with equivalence classes and stuff until we find a lattice, and then think "wouldn't it be awesome if this is orthocomplemented, orthomodular, and whatever else we need it to be?".

It seems to me that the reason why we don't get the most convenient possibility to appear automatically is that we started with a definition that's "too" general. It doesn't just include all the classical theories and all the quantum theories; it includes a lot more. So if we want to consider only classical and quantum theories, we need to impose additional conditions on the structure (a normed vector space or a lattice) that get rid of the unwanted theories.

stevendaryl said:
I don't think you can really do that in a noncircular way,
I think the approach I have described doesn't have any circularity problems.

stevendaryl said:
But the C*-algebra approach sure seems to single out measurements (or observables) as being something different. As I said, the fact that some interaction is a measurement of some quantity is not what you start with, it's a conclusion. There's a long chain of deductions involved in reaching that conclusion.
The way I see it, the chain of deductions that lead to this conclusion is based only on the concept of "falsifiability". And the conclusion provides the motivation for a definition of the term "theory of physics".
 
  • #54
Fredrik, this is possibly one of the most interesting posts I've read on this forum. It's too bad you never completely worked everything out. I would love to read something like that.
 
  • #55
micromass said:
Fredrik, this is possibly one of the most interesting posts I've read on this forum. It's too bad you never completely worked everything out. I would love to read something like that.
Thanks. I'm glad you liked it. Maybe I'll have another go at completing it soon.
 
  • #56
stevendaryl said:
People informally say that [itex]A B[/itex] means "first measure B, then measure A", [...]
As Ballentine points out somewhere in his textbook, such an interpretation of a product of operators is also clearly wrong.

Consider the case ##A = \sigma_x##, ##B=\sigma_y## (where the ##\sigma##'s are the usual Pauli matrices). Then we have ##AB = i\sigma_z##, but "a measurement of spin along the x-axis followed by a measurement of spin along the y-axis" is in no sense relatable to a single measurement of spin along the z-axis.

Moreover, even if ##A,B## are both hermitian, we could have ##(AB)^* = B^* A^* = BA \ne AB## in general. Hence ##AB## does not necessarily qualify as an observable in the ordinary sense of an hermitian operator.
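Both claims are easy to verify numerically; a quick numpy sketch:

[code=python]
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

AB = sx @ sy
print(np.allclose(AB, 1j * sz))           # True:  sigma_x sigma_y = i sigma_z
print(np.allclose(AB, AB.conj().T))       # False: AB is not hermitian
print(np.allclose(AB.conj().T, sy @ sx))  # True:  (AB)* = BA != AB
[/code]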

Products of operators are better understood in the context of the full dynamical group of the system under study. E.g., in terms of the universal enveloping algebra (or maybe Poisson algebra) associated with the generators of that group. One constructs unitary representations of the group.
 
  • #57
HomogenousCow said:
I agree, but it baffles me why this model works; none of the postulates are directly motivated by experimental evidence, and only after some deep digging do we find that they agree with interference and other observations.

I would like to suggest you get a hold of Ballentine - Quantum Mechanics - A Modern Development. There you will find the correct basis of QM - it really rests on two axioms. Stuff like Schrodinger's equation etc follows from the POR (principle of relativity) exactly the same as in SR. The second axiom he uses, which is basically Born's rule, follows from the first axiom if you accept non-contextuality (which is highly intuitive mathematically) via Gleason's theorem, so one can argue it is really based on one axiom with a bit of other stuff added.

The issue is: can the two axioms be presented in an intuitive way? I believe they can - check out:
http://arxiv.org/pdf/0911.0695v1.pdf

It would seem some fairly general and intuitive considerations lead either to bog-standard probability theory or to QM - with QM being singled out if you want continuous transformations between pure states or entanglement - either one is enough to uniquely determine QM as the correct model.
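To illustrate the kind of statement Gleason's theorem secures (the example numbers are mine): any density operator rho assigns probabilities Tr(rho P_i) to the projectors of a measurement, automatically non-negative and summing to 1.

[code=python]
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary density operator in 3 dimensions: positive, trace 1.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Born-rule probabilities for the projectors onto an orthonormal basis.
probs = [np.trace(rho @ np.outer(e, e.conj())).real
         for e in np.eye(3, dtype=complex)]

print(probs)       # non-negative...
print(sum(probs))  # ...and summing to 1.0
[/code]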

Thanks
Bill
 
  • #58
dextercioby said:
Everything in life, including science, is a matter of subjectivity. You like that, you don't like that, what's not intuitive to you is for someone else. [...]

Exactly. What is reasonable to one person is crazy to another. I have found approaches to QM that for me make it seem quite reasonable.

dextercioby said:
Advice: read more and don't be afraid of mathematics. The more math you know, the more logical advanced physics will look to you.

Yea - that seems to be a big problem for some. Those 'reasonable' approaches often use advanced math, which can be a turn-off - which of course it shouldn't be - physics is not math, but it is written in the language of math, so it's hardly surprising it takes its most elegant and transparent form within that framework.

Thanks
Bill
 
  • #59
bhobba said:
Yea - that seems to be a big problem for some. Those 'reasonable' approaches often use advanced math, which can be a turn-off - which of course it shouldn't be - physics is not math, but it is written in the language of math, so it's hardly surprising it takes its most elegant and transparent form within that framework.

As I said already, I don't think that the strangeness of quantum mechanics has anything to do with the difficulty of the math. To give some counterexamples, I think that General Relativity or statistical mechanics can be just as difficult, mathematically. I really think that it is the singling out of "observables" as a fundamental, irreducible aspect of the world that is strange.
 
  • #60
bhobba said:
The issue is can the two axioms be presented in an intuitive way? I believe it can - check out:
http://arxiv.org/pdf/0911.0695v1.pdf

Thanks for that reference. To me, the strangeness is already put in at the very beginning, when a "measurement" is given fundamental status in the axioms. As I said, a "measurement" is not a fundamental, atomic entity, but is a special kind of interaction whereby the state of one system (the observer, or measuring device) becomes correlated, in a persistent way, with the state of another system (the thing being observed or measured). The discussion, where one talks about "reliably distinguishing" states, is already, it seems to me, making a division between the world and the thing that is studying the world. Of course, that distinction is certainly there when you have a scientist doing experiments, but I always felt that that was a matter of how we interpreted what was going on--at the level of the laws of physics, there's no fundamental distinction between scientist and experiment.
 
  • #61
Fredrik said:
The idea that I used as the starting point is that a theory must be able to assign probabilities to statements of the form
"If you use the measuring device [itex]\delta[/itex] on the object [itex]\pi[/itex], the result will be in the set [itex]E[/itex]".​

So your approach is to let "measuring device" be an abstract term. But what is supposed to be the interpretation? Suppose we have as a simple case a universe consisting of nothing but a single spin-1/2 particle fixed in place (so the only degrees of freedom are from spin). I assume that the "observables" in this case are associated with the set of 2x2 hermitian matrices, which means, in terms of the Pauli spin matrices [itex]\sigma_i[/itex], that they are of the form
[itex]A + B_i \sigma_i[/itex], where [itex]A, B_x, B_y, B_z[/itex] are 4 real numbers. So for this toy theory, each such matrix is a measuring device?
 
  • #62
stevendaryl said:
So your approach is to let "measuring device" be an abstract term. But what is supposed to be the interpretation? Suppose we have as a simple case a universe consisting of nothing but a single spin-1/2 particle fixed in place (so the only degrees of freedom are from spin). I assume that the "observables" in this case are associated with the set of 2x2 hermitian matrices, which means, in terms of the Pauli spin matrices [itex]\sigma_i[/itex], that they are of the form
[itex]A + B_i \sigma_i[/itex], where [itex]A, B_x, B_y, B_z[/itex] are 4 real numbers. So for this toy theory, each such matrix is a measuring device?
The set Δ whose members I call "measuring devices" contains elements that correspond to the actual measuring devices mentioned by the theory's correspondence rules. But I do not require that every element of Δ corresponds to an actual measuring device. We can take Δ to be a larger set, if that's convenient.

In a quantum theory defined by a 2-dimensional Hilbert space, the set of self-adjoint operators is our Δ/~ (i.e. the set of all equivalence classes of measuring devices). This is a 4-dimensional vector space over ℝ that's spanned by ##\{\sigma_1,\sigma_2,\sigma_3,I\}##. The sigmas correspond to measuring devices that measure spin in one of three orthogonal directions. The identity matrix corresponds to a measuring device that always gives us the result 1, no matter how the system was prepared before the measurement. Since every self-adjoint operator is a linear combination of these four, every self-adjoint operator corresponds to an actual measuring device (assuming that linear combinations of observables make sense).

Regarding the meaning of linear combinations... I defined scalar multiplication earlier. I haven't really thought addition through. Strocchi appears to be doing something like this: If we denote the expectation value of an observable X by E(X|s) when the system is in state s, then addition can be defined by saying that A+B is the observable such that E(A+B|s)=E(A|s)+E(B|s) for all states s. (I haven't verified that this definition works).
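A quick numerical sanity check of that proposed definition (random matrices, invented for the example): if observables are represented by operators and E(X|s) = Tr(ρ_s X), then the operator sum A+B automatically satisfies E(A+B|s) = E(A|s) + E(B|s), by linearity of the trace.

[code=python]
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n=2):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def random_pure_state(n=2):
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())  # rho = |psi><psi|

A, B, rho = random_hermitian(), random_hermitian(), random_pure_state()

def E(X):
    """E(X|s) = Tr(rho_s X)."""
    return np.trace(rho @ X).real

print(np.isclose(E(A + B), E(A) + E(B)))  # True, by linearity of the trace
[/code]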
 
  • #63
stevendaryl said:
To me, the strangeness is already put in at the very beginning, when a "measurement" is given fundamental status in the axioms. As I said, a "measurement" is not a fundamental, atomic entity, but is a special kind of interaction whereby the state of one system (the observer, or measuring device) becomes correlated, in a persistent way, with the state of another system (the thing being observed or measured).
As you know, physics obtains its knowledge by measurements. So first of all, the resulting theories are theories about measurements. In QM, it is not straightforward how to extrapolate the theory about measurements to a theory about what "really happens". From the viewpoint of common forms of realism, this is a problem of course. But why should we expect such a straightforward extrapolation in the first place?

Also conceptually, the Many Worlds interpretation is quite straightforward and tells us what really happens. However, it's still hard to accept from the viewpoint of naive realism.
 
  • #64
kith said:
As you know, physics obtains its knowledge by measurements.

Sure.

kith said:
So first of all, the resulting theories are theories about measurements.

I don't think that follows at all. That's like saying: "Nowadays, many people learn about physics over the internet. So for them, a theory of physics is a theory of web browsers."

We learn about physics by measurements, but measurements are not the subject of physics. (Well, there can certainly be a subfield of physics, the theory of measurement, but that's not all of physics.) We use measurements to figure out how the world works, and then we apply that knowledge in situations where there are no measurements around--such as the Earth prior to the formation of life, or inside a star, or whatever.

I absolutely reject the assumption that a theory of physics is a theory of measurement.
 
  • #65
stevendaryl said:
To me, the strangeness is already put in at the very beginning, when a "measurement" is given fundamental status in the axioms. As I said, a "measurement" is not a fundamental, atomic entity, but is a special kind of interaction whereby the state of one system (the observer, or measuring device) becomes correlated, in a persistent way, with the state of another system (the thing being observed or measured).

Then, I suspect, the fact that entanglement basically leads to QM would be a very pertinent point for you.

My view is that a few approaches with reasonable foundations lead to QM, but unfortunately, like the paper I linked, they require a certain amount of mathematical sophistication, such as the Schur-Auerbach lemma from group theory. Some people don't like this mathematical aspect of physical theories, and in some quarters there is resistance to it, with, for example, claims that SR is simply math and can't represent physical reality. I had long discussions (if that's what you would call them) with people of that bent when I posted a lot on sci.physics.relativity - they just simply can't get the idea that physics is not about easily visualizable pictures they carry around in their heads.

Thanks
Bill
 
  • #66
stevendaryl said:
That's like saying: "Nowadays, many people learn about physics over the internet. So for them, a theory of physics is a theory of web browsers."
This analogy is valid if you make the assumption that measurements uncover an independent reality. But this is exactly the assumption I questioned in my previous post.
 
  • #67
kith said:
As you know, physics obtains its knowledge by measurements. So first of all, the resulting theories are theories about measurements.

Yes of course. But that in itself raises a fundamental issue - measurement apparatus are classical objects, and QM is a fundamental theory about the constituents of those classical objects, so we have a 'cut' in how we view nature right at the foundations of QM. This leads to stuff like the von Neumann regress and the introduction of 'consciousness causes collapse', which most would think a bit too far out to be taken seriously. My view is that a fully quantum theory of measurement is required, and indeed much progress in that area has been made, but a few issues still remain, such as proving that the basis singled out by decoherence does not depend on how the system is decomposed. I believe we are not far away from a full resolution, but until all the i's are dotted and t's crossed I for one still think some mystery remains. And who knows - dotting the i's and crossing the t's may show up something truly surprising.

Thanks
Bill
 
  • #68
kith said:
This analogy is valid if you make the assumption that measurements uncover an independent reality. But this is exactly the assumption I questioned in my previous post.

Bingo - we have a winner. That is the rock bottom foundational issue with QM IMHO.

Thanks
Bill
 
  • #69
Bill, similar to your thoughts, I think that the universal wavefunction and decoherence give a quite complete realistic picture. It is just that I also see the appeal of the C* approach, which is much closer to scientific practice than unobservable entities like the universal wavefunction.
 
  • #70
bhobba said:
Yes of course. But that in itself raises a fundamental issue - measurement apparatus are classical objects, and QM is a fundamental theory about the constituents of those classical objects, so we have a 'cut' in how we view nature right at the foundations of QM. This leads to stuff like the von Neumann regress and the introduction of 'consciousness causes collapse', which most would think a bit too far out to be taken seriously. My view is that a fully quantum theory of measurement is required, and indeed much progress in that area has been made, but a few issues still remain, such as proving that the basis singled out by decoherence does not depend on how the system is decomposed. I believe we are not far away from a full resolution, but until all the i's are dotted and t's crossed I for one still think some mystery remains. And who knows - dotting the i's and crossing the t's may show up something truly surprising.
What do you mean by a fully quantum theory of measurement, if QM isn't one already? (Keep in mind that QM includes decoherence). And what is it required for?

The von Neumann regress, if you mean what I think you mean, doesn't have anything to do with the consciousness-causes-collapse idea. The former is just an observation about what a theory is, and the latter is at best a wild speculation about reality.

I think the idea that the basis is independent of the decomposition is as likely to be true as the idea that 2x is independent of x.
 
