Questions About Quantum Theory: What's Wrong?

In summary, after 75 years of success, some people still have issues with Quantum Theory, even though it is widely considered the most successful and best-tested theory in physics. The problem lies in confusion between interpretation and formalism, as well as in misconceptions about the randomness of QM events. QM was developed through experiment, and it is necessary for understanding many aspects of physics.
  • #106
Locrian said:
Actually, can you give any instance in physics in which it is important? Its total lack of importance is part of the point other posters are trying to make.

Well, it's unlikely to make new predictions. It's mainly about resolving the long-standing issues.

Just understanding how macroscopic quasiclassical physics emerges from microscopic quantum physics is one such issue. Removing the need for an external observer of the universe in quantum cosmology would be another. And then there is taming paradoxes like delayed-choice, EPR, GHZ and so on.

This thread originally asked what's wrong with quantum mechanics. I'm relating that nothing appears to be really wrong with it, and that it seems to have fewer problems than a lot of people think.

How important anyone thinks that is would be up to them. :smile:
 
  • #107
We ditch Copenhagen only if we find another interpretation which can predict physical phenomena as well as, if not better than, Copenhagen and is (hopefully) consistent with other theories (GR, for example).

Glad to hear there is a debate going on somewhere; a pity it is not on PF. But I disagree with your last paragraph: theories predict, interpretations explain. My case is that quantum theory does predict with great accuracy, but the Standard Model interpretation does not explain at all well; therefore we should concentrate on interpretation before trying to improve our ability to predict further. Put another way, it could be that a better explanation would show that quantum theory is far more complete than is realized at present, and therefore the search for alternative theories (strings etc.) is a waste of time and effort.
 
  • #108
reply to post #76

reilly said:
Of course, many measurements, particularly on large systems, induce small changes in the system. But that's not the issue. The issue is "before and after", whether a coin toss, winning in poker, or ascertaining the temperature of bath water -- no one wants to injure their child with water that is too hot. While you may have some notions about the temperature of the bath water -- a Bayesian situation -- you don't know until you measure -- with your hand or foot, or with a thermometer. The water temperature does not change, but your head does -- you go from "I don't know" to "I know". That's collapse.
That is only 'half' of the story of "collapse". It is the 'half' which concerns the knower – i.e. the subject.

The other 'half' of the story concerns the system which has been measured – i.e. the object. What did it go from? ... and what did it get to?

And this is my point. If one asserts that the "collapse" phenomenon is the same in a quantum scenario as it is in a classical scenario – (not merely with respect to the subject, but) also with respect to the object – then one has to (at least implicitly) assume the physical existence of "hidden variables" in the quantum case.
_______________
reilly said:
QM state vectors, based on the famous complete set of measurements, apart from a phase factor, by definition give a complete description of the system at hand. That's as basic as it gets.
It is unclear to me why you have brought up the subject of a "CSCO". Is it because you are interpreting a statement such as the following in terms of CSCO's?

The quantum-mechanical state-vector description is "complete".

This is a statement about "physical reality". It purports that the "real factual situation" pertaining to the system at hand is completely characterized by the state vector; i.e. "hidden variables" have no physical existence.

So, perhaps then this is the point which you have been trying to make all along:

Do not attempt to interpret the state vector in terms of the "object" (i.e. the system at hand), but do so only in terms of the "subject" (i.e. the knower). Then, a quantum "collapse" scenario is no different from a classical one.

If that is your point, then I reply:

Of course, "collapse" will then be the same! You have chosen to disregard the one respect in which it can be different. But when that respect is taken under consideration, it turns out that "collapse" can then be said to be the same only if the state-vector description is not "complete"; i.e. "hidden variables" exist.
_______________
reilly said:
I still don't get the need for hidden variables -- in my dissertation I used QED to compute radiative corrections for various electron-nucleon scattering experiments, which helped map out the electromagnetic structure of nucleons. Should I be worried that I didn't use hidden variables?
In order to perform calculations, it suffices to use a minimal interpretation of the "shut-up-and-calculate" genre. Therefore, there would be no need whatsoever to invoke the notion of "hidden variables". Invoking such notions may, however, become relevant in the context of statements made regarding the nature of "reality".
 
  • #109
Reality is subjective, not objective. We are incapable of accessing 'objective' reality. The only reality we can observe is necessarily subjective. The mere act of making an observation perturbs the nature of a system in a very fundamental way. And we can only observe systemic effects, because we cannot observationally isolate the fundamental elements of systems. We are limited to observing their interactions - i.e., their relationships to one another. General relativity and QT are fundamentally connected in that respect - all interactions are relative.
 
  • #110
Chronos said:
Reality is subjective, not objective.
What about the viewpoint that our subjective reality is an illusion created by an objective universe?

Perhaps you only meant that, by our very nature, escaping a subjective viewpoint is impossible.
 
  • #111
Eye_in_the_Sky said:
It is unclear to me why you have brought up the subject of a "CSCO". Is it because you are interpreting a statement such as the following in terms of CSCO's?

The quantum-mechanical state-vector description is "complete".

This is a statement about "physical reality". It purports that the "real factual situation" pertaining to the system at hand is completely characterized by the state vector; i.e. "hidden variables" have no physical existence.

That is a logical interpretation of an axiom, the first axiom. If you reject it, by claiming that the "hidden variables" (which are obviously excluded by the first principle and by the claim that all one needs is a CSCO and to solve the SE) have "physical existence", then you don't have QM anymore. You'd have to reject the whole theory. Can you do that...? Only if you have a viable alternative; for 80 years no one has had one, and my opinion is no one will...

You can't fight against a postulate (e.g. von Neumann's, about the state vector collapsing on measurement) without refuting the whole theory. As I said, as in other physical theories, the postulates form not only a logical structure, but also a unitary structure... It's like objecting to Einstein's second postulate of SR just because it asserts the constancy of "c" without taking into account "hidden variables", or who knows what else...

Daniel.
 
  • #112
binarybob0001 said:
The problem we are discussing is written in Aristotles meditations.

I guess you mean Descartes :-p

Patrick.
 
  • #113
Andrew Mason said:
What is the essential postulate from which one can derive all of QM?

The superposition principle: if A and B are two possible physical states, then
a |A> + b |B> is also an existing physical state.

And then you need some embellishment (Hilbert spaces, operators etc...) to make this meaningful but this is the essential idea of quantum theory.
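(To make this concrete, here is a minimal numerical sketch -- just a toy two-state example in numpy, with arbitrary coefficients, nothing taken from any particular experiment: any complex combination a |A> + b |B>, once normalized, is again an admissible state, and the Born rule assigns it probabilities |a|^2 and |b|^2.)

Code:
import numpy as np

# Toy two-state system: |A> and |B> as an orthonormal basis.
A = np.array([1.0, 0.0], dtype=complex)
B = np.array([0.0, 1.0], dtype=complex)

# Any complex pair (a, b), up to an overall factor, defines a new state.
a, b = 1.0, 1.0j
psi = a * A + b * B
psi = psi / np.linalg.norm(psi)        # normalize

# Born-rule probabilities for finding the system in |A> or |B>.
p_A = abs(np.vdot(A, psi))**2
p_B = abs(np.vdot(B, psi))**2
print(p_A, p_B)                        # 0.5 0.5 for this choice of (a, b)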

cheers,
Patrick.
 
  • #114
Stingray said:
To put it most succinctly, it is logically inconsistent to claim that GR and QFT (+standard model) together form a fundamental description of our universe. On a purely mathematical level, either one of these theories works just fine without the other. But experiment has shown that we can't throw out either of them (at the appropriate scales). We also can't combine them in any consistent way (so far). It is therefore obvious that fundamental physics needs to be extended.

I am calling this a "logical inconsistency" because we are arriving at the problem without any direct experimental evidence. The only sense in which experiment is involved is that both GR and QM are essentially our favorite (minimal) extrapolations of all known experiments (at least we think that they are consistent with everything).

But isn't this putting the cart before the horse? The FACT that this is still a highly active research area means that you are already set on the idea that GR and QM/QFT cannot be merged. And if they are successfully merged, does it then become logically consistent? I find that highly logically inconsistent!

Again, even when GR and QM/QFT cannot be made into a unified theory, I do not see why they are both logically inconsistent UNTIL there is experimental evidence to point to such a notion. Until we get to a scale where they both should work equally well and we can see where they both deviate, we can't say anything. This is playing by your rule of requiring "direct experimental evidence". You too cannot claim logical inconsistencies without direct experimental evidence. Last time I checked, we have no such evidence where QM/QFT and GR can be tested on equal grounds.

BCS theory has never been claimed to be fundamental physics. So no, there is no logical inconsistency implied by it being difficult to apply to high-Tc superconductors. It is just a limit to the model (apparently - I'm far from an expert).

What does being "fundamental" (another area we can debate on) has anything to do with what you are applying? You are using an example of an ongoing evolution of an idea and pointing out that just because it STILL cannot be merged into another, it is logically inconsistent. If we apply that principle, almost everything that we have in physics are logically inconsistent. There's nothing "fundamental" about this.

I find it highly puzzling that when there are still issues regarding the merging of QM with GR, it is QM that is pointed out to be "logically inconsistent". If you look at the degree of certainty in terms of experimental observations, QM outstrips GR by orders and orders of magnitude. The validity of QM can be found in all of your modern electronics. We can manipulate, engineer, and change various parameters to test many parts of QM EASILY. The body of evidence for QM is astounding. Now look at GR. I'm not claiming that it is wrong, but c'mon people. Look at the nature of the evidence and how much of it there is! Not a single piece of evidence from GR can come even close to the degree of certainty of, let's say, the evidence for an energy gap in the superconducting state of a superconductor!

Yet, what do we get? QM cannot agree with GR, so QM must be logically inconsistent. I find that conclusion to be highly illogical based on the wealth of experimental evidence alone.

Zz.
 
  • #115
caribou said:
I see. :smile:

What about this problem of "interaction-free measurement": if a particle has a wave function which means it could be detected at A at time 1 or at B at time 2, we know by simple logic that if there was no interaction at A at time 1, then the particle will later be detected at B at time 2.

The wave function collapsed at time 1 because nothing happened. Or it collapsed at time 1 because we observed that nothing happened.

Either way, the wave function collapsed without any physical interaction.

Now that seems a bit strange to me.

The physicists whose work I was describing have found that wave function collapse is simply a mathematical shortcut and not a physical effect.

Now that makes more sense to me.

What do you think? :smile:

Say what?!

Isn't "interaction-free measurement" an oxymoron? Can you please construct a QM state that fits into your description above?

Zz.
 
  • #116
vanesch said:
The superposition principle: if A and B are two possible physical states, then
a |A> + b |B> is also an existing physical state.

And then you need some embellishment (Hilbert spaces, operators etc...) to make this meaningful but this is the essential idea of quantum theory.

cheers,
Patrick.

I think your post can be summed up in one word only

LINEARITY...

Daniel.
 
  • #117
dextercioby said:
I think your post can be summed up in one word only

LINEARITY...


Well, from a mathematical point of view, of course, the "superposition principle" and "linearity" are about the same. But there is something physical to the "superposition principle" which is maybe not captured by the term "linearity".
"Linearity" seems to be a requirement on the kinds of equations or so of a theory. For instance, one is tempted to say that Maxwell's equations are "linear". But people who say "linear" usually think of "first order approximation". You can potentially think of small non-linearities "correcting" the linear theory.
But the superposition principle in QM is not so much about the equations. It is about the possible states a system can be in. And here, the strange, bold and weird properties of QM all come together: if you postulate that configuration "A" is a possible state of your system (be it a particle, a field, a solar system, an atom, whatever) and if "B" is also a possible, different, state of your system, then there exists a DIFFERENT state for each complex couple (a,b) modulo a common factor, which is described by a |A> + b |B>.
This contains the essence, and all the weirdness, of QM. If "sitting on your chair" is one of your states, and "lying on your bed" is another one, then there are, by fundamental postulate, a myriad of different states you can be in, namely a x "lying on your bed" + b x "sitting on your chair", for each couple of complex numbers (a,b) modulo a complex factor.
This at first totally absurd idea is the very foundation of quantum theory. If you tweak it, you don't have a quantum theory anymore.

The other aspect of the superposition principle is less fundamental, but nevertheless important, namely the hypothesis that the time evolution operator U(t,t') is a linear operator over the state space. This one comes closer to your "linearity" requirement. It is conceivable that one could modify this (small non-linear corrections) and still talk about a kind of quantum theory. But you cannot do away with the first "state space" superposition.
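(A small sketch of that second point, using an arbitrarily chosen two-level Hamiltonian purely for illustration: the time-evolution operator U = exp(-iHt/hbar) acts linearly, so evolving a superposition gives the same result as superposing the evolved states.)

Code:
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # units where hbar = 1
H = np.array([[0.0, 0.5],
              [0.5, 1.0]], dtype=complex)    # arbitrary Hermitian Hamiltonian
t = 2.0
U = expm(-1j * H * t / hbar)                 # unitary time-evolution operator

A = np.array([1.0, 0.0], dtype=complex)
B = np.array([0.0, 1.0], dtype=complex)
a, b = 0.6, 0.8j

# Linearity: U(a|A> + b|B>) equals a U|A> + b U|B>.
lhs = U @ (a * A + b * B)
rhs = a * (U @ A) + b * (U @ B)
print(np.allclose(lhs, rhs))                 # True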

cheers,
Patrick.
 
  • #118
ZapperZ said:
Again, even when GR and QM/QFT cannot be made into a unified theory, I do not see why they are both logically inconsistent UNTIL there are experimental evidence to point to such a notion. Until we get to a scale where they both should work equally well and we can see where they both deviate, then we can't say anything. This is playing by your rule of requiring "direct experimental evidence". You too cannot claim of logical inconsistencies without directly experimental evidence. Last time I checked, we have no such evidence where QM/QFT and GR can be tested on equal grounds.

You are right of course that the clash between GR and QM does not mean that QM has to be logically inconsistent. In fact, as far as I understand (which is not much), the most successful candidates to resolve the issue (superstrings and loop quantum gravity) stick in fact to QM and modify the gravity part. However, they are still far from achieving their goals, even just on paper, not even talking about experiments, so I'd say that the jury is still out (and will be - for a long time!).

Nevertheless, quantum theory (in the Copenhagen version) does have a serious inconsistency, or at least an issue that should be resolved one day, and that is the projection postulate. For all its practical value (and no, you don't have to rewrite your PhD because you used it), the issue remains: what sets apart an interaction we label "measurement" from an interaction we consider "part of the system, in the Hamiltonian", so that it does TWO TOTALLY INCOMPATIBLE THINGS to the wavefunction?
Now, as I said, this is, for the foreseeable future, NO PRACTICAL ISSUE, because we are still doing quantum experiments which are so remote from our macroscopic, "classical" world that we can put a "Heisenberg cut" anywhere between the system and us, with identical results (thanks to decoherence theory). But it is an issue of principle, no? And it might be touched upon by the eventual modifications needed to deal with gravity. Also, maybe one day, when our quantum experiments WILL have reached a level of sophistication that is unheard of today, the issue will have practical consequences.

The second point is that, even if we have no direct experimental access, we know that in the very early universe, as well as in the vicinity of black holes, quantum theory AND GR must both play a role. So the clash between GR and QM is real, because real situations exist where both should be important. It is not that they deal with non-overlapping domains, even if we have no direct experimental access to their domain of overlap yet.

So although as of today, and the near (and even not-so-near) future, quantum theory as we know it, using Born's rule, gives satisfying results, and leads to many interesting applications and fine science, its limits are "in view": collapse or no collapse should be resolved one day, and QM/GR should be resolved one day. And I wouldn't make any bet that QM will come out of it without any modification. It might be. It might not.

cheers,
Patrick.
 
  • #119
vanesch said:
Nevertheless, quantum theory (in the Copenhagen version) does have a serious inconsistency, or at least, an issue that should be resolved one day, and that is the projection postulate. For all its practical value (and no, you don't have to rewrite your PhD because you used it), the issue remains: what sets apart an interaction we label "measurement" from an interaction we consider "part of the system, in the hamiltonian" so that it does TWO TOTALLY INCOMPATIBLE THINGS to the wavefunction ?

Well, here's where we differ. I can't tell, even if we buy into CI, whether we have a "logical inconsistency" or whether it simply offends our "tastes"! We find it uncomfortable to say that an electron occupies BOTH H atoms simultaneously in an H2 molecule, or that the superconducting current flows in BOTH directions at the same time in the Delft and Stony Brook SQUID experiments. But nature owes us nothing to make us comfortable. To have something flowing in BOTH directions at the same time can be argued to be "logically inconsistent", but this assumes a priori that our common sense about how things should behave is valid. And we all know that our "common sense" is built on classical underpinnings. Our concepts of "time", "position", "momentum", "energy", etc. are all classical ideas. When we force them in where they don't fit, OF COURSE we will get strange answers (square objects through round holes).

My point is: what is there to distinguish CI having an inconsistent interpretation from us forcing something to conform to our tastes? We all agree that ALL experiments so far have agreed with QM's predictions. I find it less "offensive" to have QM make predictions that offend and contradict my common sense. Usually, when that happens, it signifies new physics. An electron can fractionalize separately into its spin and charge components? Bring it on!

The second point is, that even if we have no direct experimental access, we know that the very early universe, as well as in the vincinity of black holes, quantum theory AND GR must play a role. So the clash between GR and QM is real, because real situations exist where both should be important. It is not that they deal with non-overlapping domains, even if we have no direct experimental access to their domain of overlap yet.

But that's what I said earlier. Till we get to THAT scale, we have no direct experimental evidence. Since the evidence for black holes is still indirect, performing QM vs. GR experiments there is still a long way off. We have no experiments as of yet, or in the near future, to test such things. Thus, using such a scenario to imply "logical inconsistency" of QM is premature and certainly, at least in my book, illogical.

Take note that, at the very simplest level, QM HAS incorporated a "quantization" of the gravitational potential. This is seen in the recent experiment on neutrons falling in gravitational fields.[1] While this isn't the GR effect we are looking for, it is at least another indication that QM has more gravitational consideration in it than GR has for QM.

Zz.

[1] V.V. Nesvizhevsky et al. Nature v.415, p.297 (2002).
 
  • #120
ZapperZ, you almost seem to be intentionally misinterpreting my posts. My last one was stated more precisely than the others. I said, for example, that

Stingray said:
On a purely mathematical level, either one of these theories [QFT or GR] works just fine without the other.

But we are not mathematicians. Experiment demands that the two theories cannot be considered as independent axiomatic systems (except for the very useful purpose of approximation :smile:). There must be a single underlying theory which at least reproduces both of these ideas in the regimes that they have been tested. My statement was that naive forms of this combined theory are logically inconsistent. So I implied that BOTH GR and QFT are inconsistent in this sense. You don't seem to like that terminology, so feel free to come up with a different word.

ZapperZ said:
The FACT that this is still a highly active research area means that you are already set in your ways that GR and QM/QFT cannot be merged.

I am completely confident that GR and QM/QFT CAN be merged in the sense I mentioned above (reproducing known experiments). I am almost as confident that this will require significant modification of one or probably both of these theories. As they stand today in textbook form, they are incompatible even at their most basic level. I don't think this can be resolved in any trivial way. Something has to give. Unfortunately, I think that we are arguing semantics again...

ZapperZ said:
Again, even when GR and QM/QFT cannot be made into a unified theory, I do not see why they are both logically inconsistent UNTIL there are experimental evidence to point to such a notion. Until we get to a scale where they both should work equally well and we can see where they both deviate, then we can't say anything.

Huh? We can perform imaginary experiments using the theory that we know, and they give us nonsense. We don't need a (real) experiment to tell us that something has to change.

Now, it may very well be true that we will never find the "correct" theory of quantum gravity without experimental help, but that's a separate issue.

Also, what was your point in quoting Nesvizhevsky's paper? It is a nice experiment, but I don't think anyone really expected QM to fail at that level. It's even common to assign something like that as an undergraduate homework problem (the theoretical portion, obviously).
 
  • #121
ZapperZ said:
Isn't "interaction-free measurement" an oxymoron? Can you please construct a QM state that fits into your description above?

Okay, I'm going by the start of Chapter 18 of Robert Griffiths' book Consistent Quantum Theory. :smile:

What we have is a particle which goes through a beam-splitter and into two output channels with detectors in each, and the detectors are at different distances along each channel from the beam splitter.

After the beam splitter but before any detector, this particle is in a delocalized state. It's just the usual "particle state" = ("state A" plus "state B") over "the square root of 2".

And then, the more interesting of the two possible series of events: if the detector which is closer to the beam splitter isn't triggered by a certain time, we know the detector which is farther from the beam splitter will be triggered at a later time.

So if we don't detect the particle by a certain time in one channel, it must be detected at a later time in the other channel. But that means we went from having a delocalized state to a localized state even though there was no detection and no interaction. So we learned where something is because we "measured" it not by interaction but with simple reasoning from a lack of interaction. The wave function collapsed because nothing happened. Which is strange.
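(Here's a tiny numerical caricature of that -- an idealized two-channel setup with perfectly efficient detectors, nothing more: start with the delocalized state (|A> + |B>)/sqrt(2); if detector A's time window passes with no click, the state is projected onto channel B and renormalized, and a later click at B becomes certain, even though nothing registered at A.)

Code:
import numpy as np

# Idealized beam-splitter output: amplitude 1/sqrt(2) in each channel.
A = np.array([1.0, 0.0], dtype=complex)     # channel with the nearer detector
B = np.array([0.0, 1.0], dtype=complex)     # channel with the farther detector
psi = (A + B) / np.sqrt(2)

print(abs(np.vdot(A, psi))**2)              # 0.5: chance of an early click at A

# Detector A's window passes with no click (detector assumed perfect).
# Project out the |A> component and renormalize -- the "collapse on nothing".
P_notA = np.outer(B, B.conj())              # projector onto channel B
psi_after = P_notA @ psi
psi_after = psi_after / np.linalg.norm(psi_after)

print(abs(np.vdot(B, psi_after))**2)        # 1.0: a later click at B is now certain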

Griffiths says:

While it might seem plausible that an interaction sufficient to trigger a measuring apparatus could somehow localize a particle wave packet somewhere in the vicinity of the apparatus, it is much harder to understand how the same apparatus by not detecting the particle manages to localize it in some region which is very far away.

This second, nonlocal aspect of the collapse picture is particularly troublesome, and has given rise to an extensive discussion on "interaction-free measurements" in which some property of a quantum system can be inferred from the fact that it did not interact with a measuring device.

Griffiths also says it would be difficult but not out of the question to do such an experiment. His explanation for the whole strange situation is that the collapse is a useful mathematical shortcut and not a physical effect.

This is not to say, however, that strange concepts all disappear in the conclusions being reached by the physicists whose work I've been relating in this thread. :wink:

Decoherent/consistent histories essentially follows Everett's approach but doesn't assume the other "worlds" are real. However, it still has its own peculiarities.

Roland Omnes' book Understanding Quantum Mechanics details a simple "ideal von Neumann experiment" at the start of Chapter 19 which has me going "What?!" myself. It shows, in an experiment quite similar to the "interaction-free" one above, how you can basically measure which channel the particle is in but then later recombine the wave packets from both channels, and this recombination will destroy the result of the earlier measurement! And this is even with the recombination of the particle's states occurring at any distance from the measuring device! :bugeye:

So we can in principle, using the wave function collapse viewpoint, then go and "uncollapse" the collapsed wave function. From any distance.

In an ideal experiment. In theory. :wink:

Omnes says:

This shows the most problematic aspect of an ideal measurement: the data it yields are not obtained once and for all. Apparently lost interferences can be regenerated later in the measuring device by an action on a distant system (the particle). There is no possibility for considering facts as being firmly established. One may see the result as a particularly vicious consequence of EPR correlations or express it by saying that Schrodinger's cat cannot be dead once and for all, because evidence for his survival can always be retrieved.

Thankfully, however, decoherence comes to the rescue in the real world and obliterates this alarming possibility, so it has no meaningful chance of occurring.

So my understanding of all this is that, in theory, the particle state which didn't occur can come back and haunt the particle state which did occur. We no longer have wave function collapse and interaction-free measurements, but what we do have is all the unrealized states smashed up and hidden all over the place.

Maybe I'm wrong, but that's what they very much seem to be saying. I can well understand if people want to stick to Copenhagen. It works and works well; it just has a few relatively unimportant conceptual hiccups that are quite understandably ignored by most. :smile:

I decided, though, that I wanted to read the latest and best research, and you see the strange places it's led me. :rolleyes:
 
  • #122
I'm afraid that neither I, nor Zz, nor Reilly share your enthusiasm and concern regarding the fate of the Copenhagen Interpretation of Nonrelativistic Quantum Mechanics.

That doesn't mean your post is not interesting...It is...

Daniel.
 
  • #123
caribou said:
Okay, I'm going by the start of Chapter 18 of Robert Griffiths' book Consistent Quantum Theory. :smile:

What we have is a particle which goes through a beam-splitter and into two output channels with detectors in each, and the detectors are at different distances along each channel from the beam splitter.

After the beam splitter but before any detector, this particle is in a delocalized state. It's just the usual "particle state" = ("state A" plus "state B") over "the square root of 2".

And then, the more interesting of the two possible series of events is if the channel with the detector which is closer to the beam splitter doesn't have its detector triggered by a certain time, we know the other channel with the detector which is farther from the beam splitter will have its detector triggered at a later time.

So if we don't detect the particle by a certain time in one channel, it must be detected at a later time in the other channel. But that means we went from having a delocalized state to a localized state even though there was no detection and no interaction. So we learned where something is because we "measured" it not by interaction but with simple reasoning from a lack of interaction. The wave function collapsed because nothing happened. Which is strange.

Er... no. I disagree that you made no detection just because you didn't detect anything. It's like saying I have to make a measurement of BOTH members of an entangled pair to know the state of both particles. I don't. I need to make a measurement of only one. This is because both particles are part of a "macro particle" in which one measurement gives me information about both. It is the same with your beam splitter. The issue here isn't the photon. It is the system in which the two detectors are "entangled", via the knowledge of one determining the state of the other. I could have easily done this with a double-slit experiment and put a detector at just one slit. If I know a particle passed through one slit, I do not need to make a determination that it didn't pass through the other.

But then does this mean that QM simply reflects our "state of knowledge" rather than an inherent property of the universe? If that were the case, then a superposition of two states would really be a system which is in one state OR the other, and not a mixture of BOTH states simultaneously. This would be no different from tossing a coin. I will then reinvoke the Schrodinger Cat-type experiments on H2 molecules and those damn SQUID experiments. Via these experiments, I will say that we DO have evidence to point out that QM does in fact reflect an intrinsic property of nature and NOT just our state of knowledge. So it is not just a mathematical artifact.

Note that QM makes no mention of the mechanism that occurs upon a particular measurement. The "collapsing" wavefunction is purely interpretation, thanks to CI. I have always maintained that one needs to understand and separate out the formalism and the interpretation. This allows the possibility of the "shut up and calculate" school of thought that bypasses, for the most part, the tediousness of "interpretation".

Zz.
 
  • #124
Stingray said:
Huh? We can perform imaginary experiments using the theory that we know, which gives us nonsense. We don't need a (real) experiment to tell us that something that has to change.

I disagree. You are discounting emergent phenomena that are entirely possible given the complexity of the situation. That's a distinct possibility that almost everyone ignores. I could tell you precisely the equations of motion of a bunch of gas particles, but there's nothing in those equations that will predict a phase transition and where it will occur. Such an observation should caution anyone who thinks all of what we know can be extrapolated without any discontinuities. What if there is such a discontinuity between QM and GR, the equivalent of such a phase transition? Aren't there already people working on such ideas?

Again, using your criterion of "direct" experimental evidence, we have none. And if you are convinced that QM and GR can be merged, then I do not see the issue of either of them being "logically inconsistent" in the first place. Or did I "purposely" misread your argument again?

Zz.
 
  • #125
ZapperZ said:
I disagree. You are discounting emergent phenomena that are entirely possible given the complexity of the situation. That's a distinct possibility that almost everyone ignores. I could tell you precisely the equation of motion of a bunch of gas particles, but there's nothing in that equation that will predict a phase transition and where it will occur. Such observation should caution anyone who thinks all of what we know can be extrapolated without any discontinuities. What if there is such a discontinuity between QM and GR equivalent of such a phase transition? Aren't there already people working on such ideas?

I completely agree with you. My entire point has been that we can't extrapolate GR and QM into regimes where both "should" be important.

There are actually people working on emergent spacetime and such things. It is certainly a reasonable possibility, although I don't have very much confidence that the theorists working on it right now are likely to succeed without experimental help (some of them disagree).

Again, using your criteria of a "direct" experimental evidence, we have none. And if you are convinced that QM and GR can be merged, then I do not see the issue of either of them being "logically inconsistent" in the first place. Or did I "purposely" misread your argument again?

I said that they can be merged in the sense that there will be a single theory which is everywhere self-consistent and reproduces all experiments that have been attributed to both GR and QM. I was so confident in this statement because it is extremely weak. It basically just says that the universe obeys knowable laws.

In contrast, I said that GR and QM IN THEIR CURRENTLY ACCEPTED FORMS do not go together. Again, I don't think this is a controversial statement.
 
  • #126
reply to post #111

dextercioby said:
That is a logical interpretation of an axiom, the first axiom. If you reject it, by claiming that the "hidden variables" (which are obviously excluded by the first principle and by the claim that all one needs is a CSCO and to solve the SE) have "physical existence", then you don't have QM anymore.
I cannot say I understand what you have meant in the above.

Let me nevertheless try to clarify further what I have meant in connection with CSCO's.

Upon completing a ("filtering"-type of) measurement of a CSCO upon a system, the output quantum state is necessarily pure. ... Correct?
... Yes, of course.

Given that, then, the question can still be asked: Does this pure state provide a "complete" characterization of the "real factual situation" pertaining to the system?

If one answers this question in the affirmative, then one is forced to say that "collapse" as it relates to the object (i.e. the system in question) is not the same in a quantum context as it is in a classical context.

On the other hand, if one answers the question in the negative, then one leaves open the possibility that "collapse" with regard to the object can be the same in a quantum context as it is in a classical one. (Note: In such a case there would still have to be more to the state vector than just a "giver of probabilities".)

Now, I must emphasize that what I have said above is no more than part of an attempt to convince Reilly that "collapse" – as it relates to the object – can be said to be the same in quantum mechanics as it is in classical mechanics only if one purports the physical existence of "hidden variables".

... And in the end (assuming we have reached it), my suspicion is that Reilly already knew this, but that his intention was to think of "collapse" only as it relates to the subject (i.e. the "knower"). In that case one is free to think of "collapse" as being the same in quantum and classical contexts. However, it would be misleading to say so without making the terms explicit.

-------------------
dextercioby said:
Only if you have a viable alternative; for 80 years no one has had one, and my opinion is no one will...
Nowhere have I purported that the correct answer is "NO" to a question such as the following:

Does this pure state provide a "complete" characterization of the "real factual situation" pertaining to the system?

(Neither, however, have I anywhere purported that the correct answer is "YES".)
 
  • #127
ZapperZ said:
The issue here isn't the photon. It is the system in which the two detectors are "entangled" via the knowledge of one determing the state of the other.

I think I know what you mean. When we include the particle and detectors together we arrive sooner or later at a superposition of detection states, collapsing into one or the other possible correlated results as in EPR.

I believe this leads to questions such as what causes the collapse and when does it happen. We could have assigned it to have happened as far back as just after the beam splitter. This possibility seems to agree with wave function collapse being a mathematical rather than physical event and would certainly agree with your suggestion that "the 'collapsing' wavefunction is purely interpretation".

Also, the EPR-like correlation leads to questions about faster-than-light effects.

Of course, the decoherent histories theory I've been relating still has its own question: if we always have a superposition of detector states, why do we observe one result to have happened and not another? Indeterminism is the escape from that question, I believe.

But then does this mean that QM simply reflects our "state of knowledge" rather than an inherent property of the universe? If this is the case, then a superposition of two states is really a system in which it is in one state OR the other, and not a mixture of BOTH states simultaneously. This will be no different than tossing a coin. I will then reinvoke the Schrodinger Cat-type experiments of H2 molecules and those damn SQUID experiments.

I'd add that Robert Griffiths has pointed out that a macroscopic superposition state can be decomposed into more than just the states we think of as making it up, much as a vector in elementary geometry can be written as a sum over more than just the one pair of perpendicular vectors we happened to choose.

He then says that whatever we think of making up a Schrodinger cat state, it's not at all obvious that it's an alive cat and a dead cat. It's hard to tell what it means.
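(A trivial two-component illustration of Griffiths' point -- the labels "alive" and "dead" here are of course just stand-ins for two orthogonal macroscopic states: the very same vector can be expanded in the {|alive>, |dead>} basis or in the rotated basis |+> and |->, and nothing in the state itself singles out one decomposition.)

Code:
import numpy as np

alive = np.array([1.0, 0.0])
dead = np.array([0.0, 1.0])

cat = (alive + dead) / np.sqrt(2)              # the usual "cat" superposition

# A rotated basis: |+> and |->, an equally good pair of perpendicular directions.
plus = (alive + dead) / np.sqrt(2)
minus = (alive - dead) / np.sqrt(2)

# Components of the same vector in the two bases.
print(np.dot(alive, cat), np.dot(dead, cat))   # 0.707..., 0.707...
print(np.dot(plus, cat), np.dot(minus, cat))   # 1.0, 0.0  -- just |+> in that basis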

Looking into these other states is something I really want to have a look at soon to better understand the "fuzziness" of the basic physics. :smile:
 
  • #128
caribou said:
And then, the more interesting of the two possible series of events is if the channel with the detector which is closer to the beam splitter doesn't have its detector triggered by a certain time, we know the other channel with the detector which is farther from the beam splitter will have its detector triggered at a later time.

Indeed, this sort of "wave function collapsing" goes on all of the time, not only at the moment of detection but also at every "non-detection".

Most of the wave function "gets lost", for example, when it hits the screen with the slits. Say it has a 10% chance to make it through the slits. When the particle doesn't hit the screen we assume that it has gone through the slits, and the wave function goes back to 100% again behind the screen due to unitarity.

We know from molecular modeling that we must assume the particle's charge to be continuously distributed over the wave function. Would this mean, then, that 90% of the charge instantaneously "collapses" to the paths through the slits? Preferably not, of course...


This is why my personal picture (interpretation) of a single particle in the (not so empty) vacuum is that of a "cloud" of:

N+1 particles plus N (virtual) anti-particles, rather than the interpretation where the particle follows N+1 paths at the same time or (worse) is in N+1 different worlds at the same time.

It would now be unclear which particle in the cloud is the (N+1)th "real" particle and which are the N virtual particles. Unitarity is guaranteed because there is only one more particle than there are virtual anti-particles.

The continuous distribution of charge could easily be attributed to the tiny remaining dipole fields of the virtual particle pairs. The same can be said for the other attributes which are continuously distributed over the wave function.

If 90% hits the screen and the (N+1)th (real) particle gets through, we presume that the virtual pairs go where "they usually go" and take their energy with them unused, like virtual pairs typically do. And thus there would be no need for a "collapse of the wave function" type of event.

Again, it's just a personal picture, but it helps me, more than most of the others.

Regards, Hans
 
  • #129
Good discussion indeed. The more I think about it, particularly in regard to the "no measurement" collapse, I think that the wave function/state vector is a way to represent our knowledge. Because Prob(A) = 1 - Prob(not A), and with a properly executed experiment a la caribou, we know beforehand that if, during the measurement window, nothing happens in the B channel, then the particle is necessarily in the other channel. As ZapperZ has noted, there is entanglement, admittedly of a rather peculiar sort. I would tend to term it cognitive entanglement rather than apparatus-originated entanglement, because this entanglement is due to the fundamental logic of our brain. Whether you agree with me or not, it's clear that there is a collapse (change of knowledge) in the brain if the particle does not show up at B.

One could argue, I suppose, that once the particle does not show up in B, the initial wave function with the superposed states is no longer correct. The lack of result provides a new initial condition for the wave function.
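(As a purely classical-probability illustration of that "new initial condition" -- the numbers are made up, and the detector-efficiency parameter below is an invented embellishment, included only to show the Bayesian character of the update: a null result at B is itself data, and conditioning on it shifts the odds toward A exactly as in a textbook Bayes update.)

Code:
# Classical Bayes update on a null result: prior 50/50 between channels A and B.
p_A, p_B = 0.5, 0.5

# If the particle is in channel B, the B detector clicks during the window with
# probability eta. eta = 1.0 is the ideal case described above; the eta < 1 call
# below is purely an invented illustration of an imperfect detector.
def posterior_A_given_no_click(eta):
    p_noclick_given_A = 1.0                  # B can't click if the particle is in A
    p_noclick_given_B = 1.0 - eta
    p_noclick = p_A * p_noclick_given_A + p_B * p_noclick_given_B
    return p_A * p_noclick_given_A / p_noclick

print(posterior_A_given_no_click(1.0))       # 1.0 : "I don't know" -> "I know"
print(posterior_A_given_no_click(0.8))       # ~0.83: imperfect detector, partial update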

I like this approach because it is consistent with the way we use probability in business, market research in particular. It's highly pragmatic: you don't know until you measure, and null results are allowed. Probability is probability, and ultimately it's in your head. And don't forget, there are systems described by superpositions of states, as in control theory for example.

As I've mentioned before, this approach is championed by Sir Rudolf Peierls. I feel I'm in good company.

What I'm less sure of are the issues of H2 and SQUIDS. So, I'm off to Google-land. God forbid I should have to change my mind.

And to Eye In The Sky -- collapse in the subject indeed. Funny and funky changes in the object leave me feeling very uncomfortable.

And, by the way, we are now having the type of discussion I had hoped we would. Thank you.

Regards, Reilly
 
  • #130
caribou said:
Also, the EPR-like correlation leads to questions about faster-than-light effects.

But is this really the case? There's nothing that "travels" from one location to another, so how could this be "faster" than light? Furthermore, at no point do people like Zeilinger claim that such a scheme can send information faster than light. You STILL need to create pairs of entangled particles and then send them to far-away locations. This can never be faster than c.

I'd add that Robert Griffiths has pointed out that a macroscopic superposition state can be made up from more than just the states we think of making it up, much like a vector in elementary geometry can be written as more than just the sum of one pair of perpendicular vectors we have chosen.

He then says that whatever we think of making up a Schrodinger cat state, it's not at all obvious that it's an alive cat and a dead cat. It's hard to tell what it means.

Looking into these other states is something I really want to have a look at soon to better understand the "fuzziness" of the basic physics. :smile:

But we can make an observation that does not disturb a particular superposition if there is a non-commuting observable. That's the whole point of the energy gap in the bonding-antibonding bands of the H2 molecule and the energy gap in the SQUID experiments. By measuring the energy state (which does not commute with the position observable of the electron in an H2 molecule, and also does not commute with the current state of a superfluid across a Josephson junction), we can maintain the superposition and measure the CONSEQUENCES of that superposition. And a consequence of such a superposition is just such an energy gap! Without the superposition, this energy gap would not be present. This is the clearest indication, at least to me, that such a concept isn't just mumbo-jumbo. It has no classical counterpart, meaning it isn't this OR that, but rather this AND that, and in varying proportions.

So if there is an observable that does not commute with the dead-alive observable of the cat, that's the thing we should measure to detect such superposition. Of course, you have to maintain quantum coherence throughout the whole source+cat+box+etc. for such effects to be measured.
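(The standard toy model behind that energy-gap statement -- a generic two-level Hamiltonian with a tunnelling matrix element Delta; the numbers here are arbitrary illustration values, not the actual H2 or SQUID parameters: the energy eigenstates come out as the symmetric and antisymmetric superpositions of the two "classical" configurations, split by a gap of 2*Delta, and it is that gap the spectroscopy sees.)

Code:
import numpy as np

E0 = 0.0          # energy of either "classical" configuration (left/right site,
Delta = 0.1       # or clockwise/counter-clockwise current); Delta is the
                  # tunnelling matrix element. Arbitrary illustration numbers.

H = np.array([[E0,     -Delta],
              [-Delta,  E0   ]])

energies, states = np.linalg.eigh(H)
print(energies)                # [E0 - Delta, E0 + Delta]: gap = 2*Delta
print(states[:, 0])            # symmetric superposition (1,1)/sqrt(2), up to sign
print(states[:, 1])            # antisymmetric superposition (1,-1)/sqrt(2), up to sign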

Zz.
 
  • #131
ZapperZ said:
Well, here's where we differ. I can't tell if, even if we buy into CI, that we have a "logical inconsistency" or simply it offends our "tastes"!

No, the way CI is formulated, it is a genuine inconsistency, in the sense that Jack and Joe can apply the rules of the game in equally accepted ways, and arrive at different conclusions.
For instance, Joe can claim that a "click in a photodetector" is a measurement, and apply "collapse of the wavefunction", while Jack, slightly more sophisticated, working in solid-state physics, works out the Hamiltonian of the photocathode and EM field and evolves the wavefunction with the Hamiltonian of his photodetector.

Jack and Joe now have DIFFERENT wavefunctions: Joe has ONE (randomly selected, following Born's rule) component of a wavefunction Jack has calculated completely and deterministically. Although it would be difficult, Jack could think of interference experiments between the different components of his wavefunction, while Joe couldn't: his wavefunction "collapsed".
As long as CI leaves in the dark WHAT is a "measurement" and when we can (in principle) write down a Hamiltonian, we have an inconsistent theory in principle (according to the meaning of the word in logic). This is not a matter of taste.

But I know of course (thanks to decoherence) that this doesn't matter, for the time being and the near future, in practice, because obtaining these interferences Jack could in principle obtain, is damn hard.
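(A stripped-down version of Jack vs. Joe in two qubits -- system plus detector, with the "measurement" modelled as a CNOT-style interaction, which is a cartoon of my own choosing, not anybody's real photodetector: Jack's unitary story keeps the off-diagonal coherences, Joe's collapse story throws them away, yet the two density matrices agree on every probability in the pointer basis, which is exactly why the disagreement only shows up in those very hard interference experiments.)

Code:
import numpy as np

# System qubit starts in (|0> + |1>)/sqrt(2); detector qubit starts in |0>.
psi_sys = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
det0 = np.array([1.0, 0.0], dtype=complex)
psi_in = np.kron(psi_sys, det0)

# Jack: the "measurement" is just another interaction, modelled here as a CNOT.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi_jack = CNOT @ psi_in
rho_jack = np.outer(psi_jack, psi_jack.conj())      # pure, entangled state

# Joe: applies Born's rule and collapse -- a 50/50 classical mixture of outcomes.
out0 = np.kron([1, 0], [1, 0]).astype(complex)      # system 0, detector reads 0
out1 = np.kron([0, 1], [0, 1]).astype(complex)      # system 1, detector reads 1
rho_joe = 0.5 * np.outer(out0, out0) + 0.5 * np.outer(out1, out1)

print(np.allclose(np.diag(rho_jack), np.diag(rho_joe)))   # True: same statistics
print(np.allclose(rho_jack, rho_joe))                     # False: coherences differ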

That's why you can happily work with collapsing (or not) wavefunctions, use Born's rule at will, in the large majority of cases it won't make a bloody difference, and in those cases where it could, the experiments are too difficult... except that progress is made and maybe one day we can do interference experiments with cats, and even with humans :-)

cheers,
Patrick
 
  • #132
vanesch said:
That's why you can happily work with collapsing (or not) wavefunctions, use Born's rule at will, in the large majority of cases it won't make a bloody difference, and in those cases where it could, the experiments are too difficult... except that progress is made and maybe one day we can do interference experiments with cats, and even with humans :-)

Hmm, I guess it's the extreme level of abstraction that leads to such
preposterous interference extrapolations. All physics and geometry gets
abstracted out.

A buckyball goes through 10^15 phase changes when it travels its own length during the experiments (according to E = hf). It still goes through 10^9 phase changes when it travels the distance of a single nucleon (10^-15 m). That is, "two" buckyballs have to overlap with an accuracy of 10^-9 of the size of a nucleus to be interfered out. And this with 70 atoms all vibrating at 900 K, coming out of an oven...

It's often forgotten that the de Broglie phase "travels" at c^2/v, which goes to infinity as the speed goes to zero. To interfere, it is not only necessary to be at the right place; it also has to be at exactly the right time.



Regards, Hans.


(c^2/v is easily shown: if the speed goes to zero then λ goes to infinity. The frequency, however, continues to be f = E_0/h = m_0c^2/h. The phase speed is fλ = c^2/v.)
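(For what it's worth, the 10^15 and 10^9 figures above can be checked with a few lines of arithmetic. The C70 mass follows from its composition, but the ~1 nm size and the ~200 m/s beam speed are rough assumed values for this kind of experiment, so treat the output as order-of-magnitude only.)

Code:
import numpy as np

h = 6.626e-34                   # Planck constant, J s
c = 2.998e8                     # speed of light, m/s
u = 1.661e-27                   # atomic mass unit, kg

m = 70 * 12 * u                 # C70 fullerene mass (70 carbon atoms)
v = 200.0                       # assumed beam speed, m/s (rough value)
L = 1e-9                        # assumed buckyball size, ~1 nm
L_nucleon = 1e-15               # size of a nucleon, m

f = m * c**2 / h                # "phase" frequency from E = hf with E = mc^2
print(f * L / v)                # ~1e15 cycles while travelling its own length
print(f * L_nucleon / v)        # ~1e9 cycles while travelling a nucleon width
print(c**2 / v)                 # de Broglie phase velocity, ~4.5e14 m/s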
 
  • #133
vanesch said:
For instance, Joe can claim that a "click in a photodetector" is a measurement, and apply "collapse of the wavefunction", while Jack, slightly more sophisticated, working in solid-state physics, works out the Hamiltonian of the photocathode and EM field and evolves the wavefunction with his hamiltonian of his photodetector.

The CI deals only with logical statements, and no logical inconsistency is embedded in this formulation (only additional interpretations may lead to inconsistencies).
In your example, if Joe is right (the result of his measurement is A for the associated observable), then Jack ends up with the same collapsed, updated wave function. And vice versa. That's all.
The fact that Joe does not really measure the observable he assumes is outside the scope of the problem. It is equivalent to saying that Jack is wrong when he claims he has the measurement result A for his supposed observable (either the result or the observable is wrong).
The CI just states that when a measurement gives a result A (the result A is true), the wavefunction is collapsed into |A>. The collapse itself is not explained by the CI (as I "interpret" it, it is not very different from "shut up and calculate").

Seratend.
 
  • #134
vanesch said:
No, the way CI is formulated, it is a genuine inconsistency, in the sense that Jack and Joe can apply the rules of the game in equally accepted ways, and arrive at different conclusions.
For instance, Joe can claim that a "click in a photodetector" is a measurement, and apply "collapse of the wavefunction", while Jack, slightly more sophisticated, working in solid-state physics, works out the Hamiltonian of the photocathode and EM field and evolves the wavefunction with his hamiltonian of his photodetector.

Jack and Joe now have DIFFERENT wavefunctions: Joe has ONE (randomly selected, following Born's rule) component of a wavefunction Jack has calculated completely and deterministically. Although it will be difficult, Jack could think of interference experiments between the different components of his wavefunction while Joe doesn't: his wavefunction "collapsed".
As long as CI leaves in the dark WHAT is a "measurement" and when can we (in principle) write down a hamiltonian, we have an inconsistent theory in principle (according to the meaning of the word in logic). This is not a matter of taste.

But I know of course (thanks to decoherence) that this doesn't matter, for the time being and the near future, in practice, because obtaining these interferences Jack could in principle obtain, is damn hard.

That's why you can happily work with collapsing (or not) wavefunctions, use Born's rule at will, in the large majority of cases it won't make a bloody difference, and in those cases where it could, the experiments are too difficult... except that progress is made and maybe one day we can do interference experiments with cats, and even with humans :-)

cheers,
Patrick

Well then, you should expect what's coming next from 10 miles away... show me an experimental observation that differentiates what Jack and Joe get. If they both end up with different and incompatible realities, then you should be able to predict different results depending on how you approach things.

Zz.
 
  • #135
ZapperZ said:
Well then, you should expect what's coming next from 10 miles away... show me an experimental observation that differentiate what Jack and Joe get. If they both end up with different and incompatible reality, then you should be able to predict different results depending on how you approach things.
Zz.

Yes, very easy :-p .
Just take 2 voltmeters: an old one (analogue, bought in a supermarket) with a low impedance (Joe), and the newest one (10-digit digital, HP) with a huge internal impedance (Jack), plus a third observer (John) who notes the results of Joe and Jack.
Joe and Jack both measure a non-ideal current source at the same time. They get different results, as noticed by John. So where is the reality, if there is one?

Seratend.
 
  • #136
seratend said:
Yes, very easy :-p .
Just take 2 voltmeters: an old one (analogic, bought in a super market) (Joe) with a low impedance and the last new one (10 digits digital, HP) (jack) with a huge internal impedance and a third observer (john) that notes the results of joe and jack.
Both (joe and jack) they measure at the same time a non ideal current source. They get different results as noticed by john. So, where is the reality, if there is one?

Seratend.

So you're expecting that two instruments with different levels of accuracy (and function) should give the same identical answer? How is this identical to what vanesch described?

Zz.
 
  • #137
ZapperZ said:
So you're expecting that two instruments with different level of accuracy (and function) should give the same identical answer? How is this identical to what vanesch described?

Zz.

No, almost surely different answers. As vanesch says:

vanesch said:
No, the way CI is formulated, it is a genuine inconsistency, in the sense that Jack and Joe can apply the rules of the game in equally accepted ways, and arrive at different conclusions.

We can build an experiment (no need to go to an expensive QM experiment) where we get different and incompatible answers depending on the [possibly false] assumptions we make about the real measurements.

But surely I do not understand well what you want to say (to what part of vanesch's post does your previous post apply)?
(Anyway, if you want the same answer, there is always the possibility - a small probability - that the voltmeters give the same answer, if Joe is lucky with the current source and the offset error of his bad voltmeter.)

However, this does not change the fact that CI is logically consistent. It just tries to underline the possibly false interpretations we can make.

Seratend.
 
  • #138
seratend said:
No almost surely different answers. As vanesh says:



We can build an experiment (no need to go to an expensive QM experiment) where we get different and incompatible answers depending on the [possibily false] assumptions we have on the real measurements.

But surely I do not understand well what you want to say (to what part of the vanesh's post your previous post applies)?
(anyway, if you want the same answer, you always have the possibility -small probability - that the voltmeters may give the same answer, if joe is lucky with the current source and the offset error of its bad voltmeter).

However, this does not change the fact that CI is logically consistent. It just tries to underline the possibly false interpretations we can do.

Seratend.

Oh, I get it.

I thought you were trying to use your example to illustrate vanesch's point about "genuine inconsistency". In your example, there isn't one, since one CAN explain why the results are different. This is very much like measuring different times in SR: we know why they should be different, since we can explain them. This isn't a "genuine inconsistency".

To me, a genuine or logical inconsistency is when we shift our coordinate system and the outcome gives a completely different description. Nature shouldn't care when we do something that superficial.

Zz.
 
  • #139
Seratend -- I'm a bit confused by Jack and Joe. What is it exactly that they are doing, what are they measuring? Apparently they are getting different answers when they should not, and I don't get it.

ZapperZ -- The SQUID experiment is, to say the least, disarming, and, like, WOW. I've got a bunch of questions, and have not found a good reference on GOOGLE. Any suggestions?

How "strong" is the barrier? That is, a bound state wave function, I presume, will be non-zero in both current channels. Is there anything like optical pumping and inverted levels? I'm not quite sure about the sequence of events - is one channel empty at the start?

Thanks, and regards,
Reilly
 
  • #140
reilly said:
ZapperZ -- The SQUID experiment is, to say the least, disarming, and, like WOW. I've got a bunch of quesions, and have not found a good reference on GOOGLE. Any suggestions?

How "strong" is the barrier? That is, a bound state wave function, I presume, will be non-zero in both current channels. Is there anything like optical pumping and inverted levels? I'm not quite sure about the sequence of events - is one channel empty at the start?

Thanks, and regards,
Reilly

OK, why don't I give you the exact citations of all the relevant papers and see if they can answer your questions? I'm thinking that I should also put this up in my Journal, since I have had to refer to them quite often on here.

The two experiments from Delft and Stony Brook using SQUIDs are:

C.H. van der Wal et al., Science v.290, p.773 (2000).
J.R. Friedman et al., Nature v.406, p.43 (2000).

Don't miss out the two review articles on these:

G. Blatter, Nature v.406, p.25 (2000).
J. Clarke, Science v.299, p.1850 (2003).

However, what I think is more relevant is the paper by Leggett (who, by the way, started it all by proposing the SQUIDs experiment in the first place):

A.J. Leggett "Testing the limits of quantum mechanics: motivation, state of play, prospects", J. Phys. Condens. Matt., v.14, p.415 (2002).

This paper clearly outlines the so-called "measurement problem" with regards to the Schrodinger Cat-type measurements.

Zz.
 
