OK Corral: Local versus non-local QM

  • Thread starter wm
  • Start date
  • Tags
    Local QM
In summary: the conversation discusses the issue of local versus non-local interpretations of quantum mechanics, specifically in relation to the EPRB experiment with spin-half particles. The participants hope to resolve the issue using mathematics. The Many-Worlds Interpretation (MWI) is introduced and explained as a way to understand the distribution of information in the universe and how it relates to Alice's and Bob's worlds.
  • #211
DrChinese said:
But ueit's idea is STILL absurd because it is not really science. There is not the slightest bit of evidence that the polarizer settings have any causal connection to the creation of the entangled particles - nor does that of any other prior event (or set of events) whatsoever.

Please specify what evidence supports the idea of a non-realistic universe.
Please specify what evidence supports the idea of a non-local universe.

The fact is, QM specifies the cos^2 theta relationship while ueit's "hypothesis" actually does not. The same hypothesis also fails to accurately predict a single physical law or any known phenomena whatsoever. Additionally, there is no known mechanism by which such causality can be transmitted. Ergo, I personally do not believe it qualifies for discussion on this board.

You shift the burden of proof. Again.
 
  • #212
ueit said:
A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.

Another exaggeration. This is not science! If you are such a demon with amazing god-like powers, please show how you might do this.

Otherwise, you should stick to accepted theory & experiment. Obviously, your hypothesis is FAR from accepted as there are no known field effects capable of providing this amount of information. And the Heisenberg Uncertainty Principle flat out excludes this hypothesis for even ONE particle.
 
  • #213
ueit said:
You shift the burden of proof. Again.

There is a big difference between your position and mine, and that allows me to do this successfully.

Your ad hoc opinion belongs in Theory Development, where you can attempt to develop it into a testable scientific hypothesis. On the other hand, my position is orthodox science with the backing of both theory and experiment.

Please quit telling readers here that entangled particle wave functions were predetermined from the initial pre-inflationary state of the universe unless you have some specific evidence of that.
 
  • #214
Just as a reminder, the old "Theory Development" forum was superseded some time ago by the "Independent Research" forum.
 
  • #215
jtbell said:
Just as a reminder, the old "Theory Development" forum was superseded some time ago by the "Independent Research" forum.

I stand corrected. :smile:

I would say that ueit has had a run with this idea; it has been fairly discussed; forum members (besides myself) have addressed the substantial weaknesses in the concept; and ueit has failed to provide any substantiation for claims that are getting bolder and bolder.

Given PF guidelines, I think it is time for ueit to move on regarding this ad hoc and untestable hypothesis.
 
  • #216
DrChinese said:
Please quit telling readers here that entangled particle wave functions were predetermined from the initial pre-inflationary state of the universe unless you have some specific evidence of that.

We are not so easily influenced. But I believe that if wavefunctions exist, the big bang happened (and other ifs hold too), then ALL wavefunctions are entangled - but not in a conspiratorial way. So that kind of entanglement is irrelevant to the Bell scenario.
 
  • #217
DrChinese said:
ueit said:
You shift the burden of proof. Again.

There is a big difference between your position and mine, and that allows me to do this successfully.

Your ad hoc opinion belongs in Theory Development, where you can attempt to develop it into a testable scientific hypothesis. On the other hand, my position is orthodox science with the backing of both theory and experiment.

Please quit telling readers here that entangled particle wave functions were predetermined from the initial pre-inflationary state of the universe unless you have some specific evidence of that.

This is a discussion about interpretations of QM. They're called interpretations, rather than theories, because they do not make testable predictions.

It's unreasonable to require experimental evidence to validate an interpretation. As a theory, QM is agnostic regarding determinism, so QM makes no predictions that support or contradict the notion that the current state of the universe is completely determined by a prior state. (Technically, one could say that QM is a deterministic theory in the sense that it can model a deterministic reality.)

There are prediction-identical strongly deterministic interpretations of QM. Many worlds and Bohmian mechanics are two well-known examples - neither of which is at odds with orthodox science. Any experimental result that falsifies the notion of strong determinism would also invalidate both of those interpretations.

There are (as previously mentioned) philosophical and aesthetic reasons for refusing to accept arbitrary strong determinism. Arbitrary strong determinism is not predictive, and could easily be described as a 'conspiring universe'. However, since strong determinism is not falsifiable, it cannot be contradicted by scientific experiments.

To be clear, it is categorically impossible to experimentally contradict the notion that wave functions are predetermined.

N.B.: Talking (or posting) about things in terms of 'your position' or 'my position' can be useful and convenient, but can also foster confusion or lead to people taking things personally.
 
  • #218
JesseM said:
NateTG said:
Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.
What does the assumption of statistical independence between spacelike separated events have to do with strong determinism? This statistical independence would be expected even in a completely deterministic universe, unless some past event that influenced both later events caused the correlation (but I explained in my last post to ueit why this doesn't seem to work if one event is that of a human brain making a choice, or any other event with sensitive dependence on initial conditions).

As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state [itex]\vec{h}[/itex] which corresponds to a complete list of the results of 'wavefunction collapses'.

What strong determinism has to do with statistical independence is that statistical independence may not be possible in a deterministic universe. As you have alluded to by your reference to the human brain, the notion of statistical independence has also been called 'free will'.
 
  • #219
ueit said:
A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.

NateTG said:
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state [itex]\vec{h}[/itex] which corresponds to a complete list of the results of 'wavefunction collapses'.

Listen to what is being said! It is absolutely as extreme as arguing that "Jesus said so" is an interpretation of physics that we need to discuss. Yes, there would need to be a local copy of the history (past and present, and presumably future as well) of the entire universe present in every particle. Yet ueit even acknowledged this as absurd earlier (but has apparently returned to it).

I like my interpretations with at least a modicum of science included. :smile: As you can tell, I am no fan of ad hoc theories that make NO specific predictions (testable or otherwise). Clearly there is not one IOTA of connection between this "interpretation" and the results we actually observe since "superdeterminism" makes no predictive results whatsoever other than "anything goes".

As regards this thread specifically: Where does the cos^2 theta relationship come from? Bohmian Mechanics has a framework, as does QM. So please, don't elevate ueit's ideas from ad hoc personal philosophy to the level of legitimate science.
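For readers wondering what the cos^2 theta dispute is about, here is a minimal numerical sketch. All details of the toy local hidden-variable (LHV) model below are my own illustrative assumptions, not ueit's actual proposal: QM predicts a cos^2 match probability for paired polarization measurements, while a simple deterministic shared-angle LHV model yields a linear relationship, and the two disagree at intermediate angles.

```python
import math
import random

def qm_match_probability(theta):
    """QM prediction: probability both polarizers give the same result,
    for polarizer angle difference theta (photon pairs)."""
    return math.cos(theta) ** 2

def lhv_match_probability(theta, trials=200_000, seed=0):
    """Toy LHV model (illustrative assumption): each pair carries one
    shared hidden polarization angle lam; a polarizer at angle a passes
    its photon iff the angular distance from a to lam (mod pi) is at
    most pi/4. Purely local and deterministic. This yields the classic
    linear 'sawtooth' match probability 1 - 2*theta/pi on [0, pi/2]."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(trials):
        lam = rng.uniform(0.0, math.pi)

        def passes(angle):
            d = abs(angle - lam) % math.pi
            return min(d, math.pi - d) <= math.pi / 4

        if passes(0.0) == passes(theta):
            matches += 1
    return matches / trials

for deg in (0.0, 22.5, 45.0, 67.5, 90.0):
    th = math.radians(deg)
    print(f"{deg:5.1f} deg  QM={qm_match_probability(th):.3f}  "
          f"LHV={lhv_match_probability(th):.3f}")
```

At 22.5 degrees the QM value is about 0.854 while this toy LHV model gives 0.75; that gap between the cosine curve and any straight-line local model is exactly the kind of discrepancy Bell-type arguments exploit.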
 
  • #220
NateTG said:
It's unreasonable to require experimental evidence to validate an interpretation.

NateTG, I'll disagree with you on this one. We should expect a physical interpretation to pass muster with an existing substantial body of evidence.

1. If there is a map of the entire history of the universe embedded in every particle (as ueit says), why have we never noticed this in any experiment? There is not a single shred of evidence, direct or indirect, that this is so.

2. And there is substantial evidence - in the form of random results to any desired level - that there is NO connection between quantum states of many collections of particles that are in extremely close causal contact. An example being a radioactive sample of uranium, or polarization of photons emitted from ordinary light bulbs. It is amazing that ONLY the entangled photons from a PDC source (or similar) display these fascinating correlations, while all other particles are purely random. Yet ueit says all share a "common" knowledge of the evolution of the universe.

So while these blatant gaps may not bother you, they point out to me that this is a purely ad hoc personal theory and one which does not qualify as a legitimate interpretation of particle physics. Or perhaps there are papers that could be cited to at least add some air of science to this?

I think that discussions of interpretations are themselves completely legitimate, but ueit is well beyond that point. I recognize that you may have a different opinion of where that point is.
 
  • #221
DrChinese said:
Listen to what is being said! It is absolutely as extreme as arguing that "Jesus said so" is an interpretation of physics that we need to discuss.

What I'm arguing is that it's incorrect to say that 'Jesus said so' can be (scientifically) falsified. 'Jesus said so' is an awful interpretation of QM, but the salient argument for that is philosophical and not scientific in nature. However this claim:
On the other hand, my position is orthodox science with the backing of both theory and experiment.
represents itself as a scientific one.

Yes, there would need to be a local copy of the history (past and present, and presumably future as well) of the entire universe present in every particle. Yet ueit even acknowledged this as absurd earlier (but has apparently returned to it).

Philosophically speaking, nobody seems to have any trouble with every particle having a local copy of its own past. It's not difficult to conceive of a universe where every particle's past includes sufficient information to predict the entire universe's space-time.

As regards this thread specifically: Where does the cos^2 theta relationship come from? Bohmian Mechanics has a framework, as does QM. So please, don't elevate ueit's ideas from ad hoc personal philosophy to the level of legitimate science.

I don't want to put words into ueit's mouth, but what he seems to be trying to describe is very similar to Bohmian Mechanics in a universe with a singular origin, which is local, realistic, and prediction equivalent to QM.
 
  • #222
1. If there is a map of the entire history of the universe embedded in every particle (as ueit says), why have we never noticed this in any experiment? There is not a single shred of evidence, direct or indirect, that this is so.

If there is a wavefunction and it collapses, why have we never noticed wavefunction collapse in an experiment? There is not a single shred of evidence, direct or indirect, that a wavefunction physically exists or wavefunction collapse occurs.

2. And there is substantial evidence - in the form of random results to any desired level - that there is NO connection between quantum states of many collections of particles that are in extremely close causal contact. An example being a radioactive sample of uranium, or polarization of photons emitted from ordinary light bulbs. It is amazing that ONLY the entangled photons from a PDC source (or similar) display these fascinating correlations, while all other particles are purely random. Yet ueit says all share a "common" knowledge of the evolution of the universe.

This is a fallacious argument. Just because something is unpredictable does not make it non-deterministic. This is true even in the classical example of a ball at the top of a hill.
 
  • #223
NateTG said:
Just to be clear, this an attempt to illustrate an 'unstated assumption' of Bell's Theorem and not an attempt to refute it, QM, or any experimental results.

Although the presence of wormholes can cause some problems with causality, even Einstein knew that they were consistent with GR (and are thus local). AFAICT It's a bit ambiguous whether this qualifies as Bell-local because people generally assume that pairs of entangled particles can be space-like separated.
[emphasis added]

Sorry, I've totally missed any 'unstated assumption' by Bell that should be included in his theorem. Do you have a clear definition of what you're referring to here?

Also, I don't see what you think is "a bit ambiguous": are you trying to say it could be fair to call a pair of wormholes Bell-local? If you understand Bell-local, there is nothing ambiguous at all; the idea is no more Bell-local than QM is Bell-local!

There is nothing wrong with being non-Bell-local; that is how QM defines itself as a "complete" theory (one no other theory can improve on), while a Local Realist theory cannot be complete. Remember, Bell-local means BOTH Einstein-local AND Einstein-realistic. That means local variables, including hidden ones, created as part of the two separate photons at the moment they were created. But that is not all: those variables must not change through any "Weird Action at a Distance" (WAAD) of any kind! Any such requirement is NOT Bell-local.

All the Bell experiments have said is: "Sorry, Bell, we still see weird action at a distance that cannot be explained by any local realistic variable, not even a hidden but unknown one."

Wormhole theories and "common histories able to manipulate and correlate outcomes" theories can solve Bell because they are non-Bell-local.

Just like MWI and Bohmian Mechanics, they are not Bell-local; they all allow WAAD to appear and just have different ways to account for it. But it is still WAAD that a Local Realist cannot understand without extra dimensions or invisible guide waves. They provide predictions equivalent to QM because they are using non-Bell-local solutions.

If they were Bell-local theories, they could provide a definition of Einstein's unknown hidden variable, not a solution to WAAD equivalent to QM's non-local solution.

A theory cannot just change the rules for what Bell-local means and expect to gain more respect than QM; if it wants to claim to be local, it must define and describe the variable that replaces the uncertainty principle.
 
  • #224
NateTG said:
Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.

Strong determinism is nothing but plain old determinism followed to its logical conclusion. If one doesn't like the conclusion, too bad for him. However, it should be clearly stated that the reason for rejecting such theories has nothing to do with Bell.
 
  • #225
DrChinese, please answer my two questions:

Please specify what evidence supports the idea of a non-realistic universe.
Please specify what evidence supports the idea of a non-local universe.

Thanks!
 
  • #226
NateTG said:
...There is not a single shred of evidence, direct, or indirect, that a wavefunction physically exists...

Have I missed something here? Aren't there solutions of the Schroedinger equation that accurately predict the orbitals of the hydrogen atom and the electron density of H2? Didn't solutions of the SE allow development of the scanning tunnelling microscope? Aren't de Broglie waves wavefunctions? Don't they accurately predict the various interference patterns of electrons? I think the evidence is almost overwhelming.

Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that. :biggrin: If it walks like a duck and quacks like a duck...
 
  • #227
ueit said:
João Magueijo's article "Plan B for the cosmos" (Scientific American, Jan. 2001, p. 47) reads:
Inflationary theory postulates that the early universe expanded so fast that the range of light was phenomenally large. Seemingly disjointed regions could thus have communicated with one another and reached a common temperature and density. When the inflationary expansion ended, these regions began to fall out of touch.

It does not take much thought to realize that the same thing could have been achieved if light simply had traveled faster in the early universe than it does today. Fast light could have stitched together a patchwork of otherwise disconnected regions. These regions could have homogenized themselves. As the speed of light slowed, those regions would have fallen out of contact
It is clear from the above quote that the early universe was in thermal equilibrium. That means that there was enough time for the EM field of each particle to reach all other particles (it takes light only one second to travel between two opposite points on a sphere with a diameter of 3 x 10^8 m, but this time is hardly enough to bring such a sphere of gas to almost perfect thermal equilibrium). A Laplacian demon "riding on a particle" could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.
Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogenous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogenous.

So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself.

The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me this would imply that at the event of the joining-up, photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR; it just suggested that if the universe had reached equilibrium, then knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense.
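The SR part of this argument can be checked with trivial arithmetic. A hedged sketch in units where c = 1 (the event coordinates below are arbitrary illustrative numbers): the past light cone of an event (t, x), sliced at an earlier time t', is the interval [x - (t - t'), x + (t - t')], so for two simultaneous events the edges of the two cones stay exactly as far apart as the events themselves, no matter how far back you go.

```python
def past_cone(event, t_earlier):
    """Spatial extent of the past light cone of `event` = (t, x),
    sliced at time t_earlier < t, in 1+1D flat spacetime with c = 1."""
    t, x = event
    r = t - t_earlier  # distance light can travel in the elapsed time
    return (x - r, x + r)

A = (10.0, 0.0)  # event A at t = 10, x = 0
B = (10.0, 5.0)  # event B, simultaneous with A, 5 units away

for t_earlier in (9.0, 5.0, 0.0, -100.0):
    cone_a = past_cone(A, t_earlier)
    cone_b = past_cone(B, t_earlier)
    # The cones overlap more and more, but each always contains a
    # region the other does not: the left edges stay 5 units apart.
    print(t_earlier, cone_a, cone_b, cone_b[0] - cone_a[0])
```

The last column is 5.0 at every earlier time, which is the claim in the post: the two past light cones never become identical, however far back you extend them.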
ueit said:
I also disagree that "the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it". We don't have a theory to describe the big bang, so I don't see why we should assume that it was a non-deterministic phenomenon rather than a deterministic one. If QM is deterministic after all, I don't see where a stochastic big bang could come from.
I wasn't saying anything about the big bang being stochastic, just about the initial singularity in GR being fairly "featureless": you can't extrapolate the later state of the universe from some sort of description of the singularity itself. This doesn't really mean GR is non-deterministic; you could just consider the singularity not to be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.
JesseM said:
I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process.
ueit said:
Yes, because this is a case where Newtonian theory applies well (small mass density). I'm not accustomed to the GR formalism, but I bet that the difference between the predictions of the two theories is very small.
Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth. If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there?
ueit said:
In Newtonian gravity the force is instantaneous. So, yes, in any system for which Newtonian gravity is a good approximation the objects are “pulled towards other object's present positions”.
You're talking as though the only reason Newtonian gravity could fail to be a good approximation is because of the retarded vs. current position issue! But there are all kinds of ways in which GR departs wildly from Newtonian gravity which have nothing to do with this issue, like the prediction that sufficiently massive objects can form black holes, or the prediction of gravitational time dilation. And the fact is that the orbit of a given planet can be approximated well by ignoring the other planets altogether (or only including Jupiter), so obviously the issue of the Earth being attracted to the current vs. retarded position of Mars is going to have little effect on our predictions.
ueit said:
The article you linked from John Baez’s site claims that uniform accelerated motion is extrapolated by GR as well.
Well, the wikipedia article says:
In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or contracting sphere) or cylindrically symmetric (like a spinning disk).
So either one is wrong or we're misunderstanding what "uniform acceleration" means...is it possible that Baez was only talking about uniform acceleration caused by gravity as opposed to other forces, and that gravity only causes uniform acceleration in an orbit situation which also has spherical/cylindrical symmetry? I don't know the answer, this might be another question to ask on the relativity forum...in any case, I'm pretty sure that the situation you envisioned where Mars is deflected from its orbit by a passing body would not qualify as either "uniform acceleration" or "spherically/cylindrically symmetric".
ueit said:
EM extrapolates uniform motion, GR uniform accelerated motion. I’m not a mathematician so I have no idea if a mechanism able to extrapolate a generic accelerated motion should necessarily be as complex or so difficult to simulate on a computer as you imply. You are, of course, free to express an opinion but at this point I don’t think you’ve put forward a compelling argument.
You're right that I don't have a rigorous argument, but I'm just using the following intuition: if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption it continued to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past? Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two.
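That intuition can be made concrete with a small sketch (the functions below are my own illustrative examples, not anyone's proposed mechanism): extrapolating uniform motion or uniform acceleration is a one-line closed form, while a many-body system with mutual interactions has no such shortcut and must be stepped forward numerically, recomputing all pairwise forces at every step.

```python
def extrapolate_uniform(x0, v, t):
    """Constant velocity: exact closed form, O(1) work."""
    return x0 + v * t

def extrapolate_uniform_accel(x0, v0, a, t):
    """Uniform acceleration: exact closed form, O(1) work."""
    return x0 + v0 * t + 0.5 * a * t * t

def step_many_body(xs, vs, dt, force):
    """One Euler step of an N-body system in 1D: O(N^2) pairwise
    force evaluations, repeated every timestep, with errors that
    compound over time (the hallmark of chaotic dynamics)."""
    n = len(xs)
    accs = [sum(force(xs[i], xs[j]) for j in range(n) if j != i)
            for i in range(n)]
    new_xs = [x + v * dt for x, v in zip(xs, vs)]
    new_vs = [v + a * dt for v, a in zip(vs, accs)]
    return new_xs, new_vs
```

For the first two cases a "demon" needs only the current state and a formula; for the third, prediction cost grows with both the number of particles and the length of the time horizon, which is the asymmetry the post is pointing at.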
ueit said:
If what you are saying is true then we should expect Newtonian gravity to miserably fail when dealing with a non-uniform accelerated motion, like a planet in an elliptical orbit, right?
No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue.
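The "22 minutes" figure is easy to sanity-check. A hedged back-of-envelope calculation, using roughly 4.0 x 10^8 km for the maximum Earth-Mars separation (an approximate value; the true distance varies with both orbits):

```python
# Light travel time from Mars to Earth at roughly maximum separation.
max_distance_km = 4.01e8   # approximate maximum Earth-Mars distance
c_km_per_s = 2.998e5       # speed of light in km/s
minutes = max_distance_km / c_km_per_s / 60
print(round(minutes, 1))   # roughly 22 minutes
```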
 
Last edited:
  • #228
NateTG said:
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe.
But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions. As I argued in my last post, ueit's argument about thermal equilibrium in the early universe establishing that all past light cones merge and become identical at some point doesn't make sense.
 
  • #229
NateTG said:
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state [itex]\vec{h}[/itex] which corresponds to a complete list of the results of 'wavefunction collapses'.

It's not exactly what I propose. Take the case of gravity in a Newtonian framework. Each object "knows" where all other objects are, instantaneously. It then acts as if it's doing all the calculations, applying the inverse square law. General relativity explains this apparently non-local behavior through a local mechanism where the instantaneous position of each body in the system is extrapolated from the past state. That past state is "inferred" from the space curvature around the object.
By analogy, we might think that the EPR source "infers" the past state of the detectors from the EM field around it, extrapolates the future detector orientation and generates a pair of entangled particles with a suitable spin.
 
  • #230
NateTG said:
DrC said: If there is a wavefunction and it collapses, why have we never noticed wavefunction collapse in an experiment? There is not a single shred of evidence, direct or indirect, that a wavefunction physically exists or wavefunction collapse occurs.

This is a fallacious argument. Just because something is unpredictable does not make it non-deterministic. This is true even in the classical example of a ball at the top of a hill.

Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't; we see randomness everywhere else. So the ONLY time we see these correlations is with entangled particles. Hmmm. Gee, is this a strained explanation or what? And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that?

It is sort of like invoking the phases of the moon to explain why there are more murders during a full moon, and not being willing to accept that there are no fewer murders at other times. Or do we use this as an explanation only when it suits us?

If this isn't ad hoc science, what is?
 
  • #231
DrChinese said:
Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't; we see randomness everywhere else. So the ONLY time we see these correlations is with entangled particles. Hmmm. Gee, is this a strained explanation or what?

That's simple. Even if each particle "looks" for a suitable detector orientation before emission, only for entangled particles do we have a set of supplementary conditions (conservation of angular momentum, same emission time) that enable us to observe the correlations. In order to release a pair of entangled particles, both detectors must be in a suitable state; that's not the case for a "normal" particle.

And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that?

I put Malus' law in by hand, for no particular reason other than to reproduce QM's prediction. I'm getting tired of pointing out that the burden of proof is on you. You make the strong claim that no local-realistic mechanism can reproduce QM's prediction. On the other hand, I don't claim that my hypothesis is true or even likely. I only claim that it is possible.

To give you an example, von Neumann's proof against the existence of hidden-variable theories is wrong even if no such theory is provided. It was wrong even before Bohm published his interpretation and will remain wrong even if BM is falsified. So, asking me to provide evidence for the local-realistic mechanism I propose is a red herring.

If this isn't ad hoc science, what is?

It certainly is ad-hoc but so what? Your bold claim regarding Bell's theorem is still proven false.
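[Editor's aside: the claim under dispute here is usually stated via the CHSH inequality. A minimal sketch of the arithmetic, using the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard measurement angles (my own illustration, not taken from this thread):]

```python
import math

# Singlet-state correlation for spin-half particles: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angles (radians) that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable model obeys |S| <= 2,
# while the quantum correlation reaches 2*sqrt(2) ~ 2.828.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))
```

The point of Bell's theorem is that no assignment of pre-existing local values can push |S| above 2, whereas the cos-based correlation does; this is why simply "putting in Malus' law by hand" is the step a local mechanism would have to justify.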
 
  • #232
ueit said:
1. In order to release a pair of entangled particles, both detectors must be in a suitable state; that's not the case for a "normal" particle.

2. I put in Malus' law by hand, for no reason other than to reproduce QM's prediction.

3. Your bold claim regarding Bell's theorem is still proven false.

1. :smile:

2. :smile:

3. Still agreed to by virtually every scientist in the field.
 
  • #233
The question of what is or is not a valid loophole in Bell's theorem should not be a matter of opinion, and it also should not be affected by how ridiculous or implausible a theory based on the loophole would have to be. For example, everyone agrees the "conspiracy in initial conditions" is a logically valid loophole, even though virtually everyone also agrees that it's not worth taking seriously as a real possibility.

If it weren't for the light cone objection, I'd say that ueit had pointed out another valid loophole, even though I personally wouldn't take it seriously because of the separate objection of the need for ridiculously complicated laws of physics to "extrapolate" the future states of nonlinear systems with a huge number of interacting parts, like the human brain. But I do think the light cone objection shows that ueit's idea doesn't work even as a logical possibility.

If he wanted to argue that each particle has, in effect, not just a record of everything in its past light cone, but a record of the state of the entire universe immediately after the Big Bang (or at the singularity, if you imagine the singularity itself has 'hidden variables' which determine future states of the universe), then this would be a logically valid loophole, although I would see it as just a version of the "conspiracy" loophole (since each particle's 'record' of the entire universe's past state can't really be explained dynamically, it would seem to be part of the initial conditions).
 
  • #234
JesseM said:
Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogeneous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region, so this will not have been long enough for the whole thing to reach equilibrium; perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogeneous.

So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself.

The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me this would imply that at the event of the joining-up, photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR; it just suggested that if the universe had reached equilibrium this would mean that knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense.

OK, I think I understand your point. The CMB isotropy does not require the whole early universe to be in thermal equilibrium. But, does the evidence we have require the opposite, that the whole universe was not in equilibrium? If not, my hypothesis is still consistent with extant data.

I wasn't saying anything about the big bang being stochastic, just about the initial singularity in GR being fairly "featureless"--you can't extrapolate the later state of the universe from some sort of description of the singularity itself. This doesn't really mean GR is non-deterministic; you could just consider the singularity not to be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.

I don't think your argument applies in this case. For example, the pre-big bang universe might have been a Planck-sized "molecule" of an exotic type, that produced all particles in a deterministic manner.

Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth.

Forget about that example. Take Pluto's orbit or the Earth-Moon-Sun system. In both cases the acceleration felt by each object is non-uniform (the distance between Pluto and the Sun ranges from 4.3 to 7.3 billion km, and during a solar eclipse the force acting on the Moon differs significantly from the case of a lunar eclipse). However, both systems are well described by Newtonian gravity, hence the retardation effect is almost null. I think the main reason is that, quantitatively, the gravitational radiation is extremely small. The Wikipedia article you've linked says that Earth loses about 300 joules as gravitational radiation out of a total of 2.7 x 10^33 joules.

If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there?

I'll do that.

You're right that I don't have a rigorous argument, but I'm just using the following intuition: if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption that it continues to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past? Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two.

If my analogy with gravity stands (all kinds of motions are well extrapolated in the small mass density regime), the difference in complexity should be about the same as between the Newtonian inverse square law and GR.

No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue.

See my other examples above.
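[Editor's aside: JesseM's flat-spacetime point can be checked with a few lines of arithmetic (hypothetical numbers, units where c = 1): for two spacelike-separated events at the same coordinate time, the past light cones at any earlier time are intervals whose corresponding edges stay exactly the event separation apart, so the cones never become identical.]

```python
# Units with c = 1. Two spacelike-separated events at time t, positions x1 < x2.
t, x1, x2 = 10.0, 0.0, 3.0

def past_cone(t_event, x_event, t_earlier):
    """Spatial extent of the past light cone of (t_event, x_event) at time t_earlier."""
    r = t_event - t_earlier
    return (x_event - r, x_event + r)

for t_earlier in [9.0, 5.0, 0.0, -100.0]:
    c1 = past_cone(t, x1, t_earlier)
    c2 = past_cone(t, x2, t_earlier)
    # Corresponding edges always differ by x2 - x1, so the cones never coincide,
    # no matter how far back in time we go.
    assert c2[0] - c1[0] == x2 - x1 and c2[1] - c1[1] == x2 - x1
    print(t_earlier, c1, c2)
```

Each cone always contains a strip of events the other cone misses, which is the core of the objection to purely local extrapolation.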
 
  • #235
JesseM,

I've started a new thread on the "Special & General Relativity" forum named "General Relativity vs Newtonian Mechanics".
 
  • #236
paw said:
Have I missed something here? Aren't there solutions of the Schroedinger equation that accurately predict the orbitals of the hydrogen atom and the electron density of H2? Didn't solutions of the SE allow development of the scanning tunnelling microscope? Aren't de Broglie waves wavefunctions? Don't they accurately predict the various interference patterns of electrons? I think the evidence is almost overwhelming.

Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that. :biggrin: If it walks like a duck and quacks like a duck...

Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable and instead simply predicts probabilities of experimental results. In other words, as long as there are predictively equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one.
 
  • #237
JesseM said:
But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions.

In a sense it's a 'small conspiracy', since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that ensures correlation.
 
  • #238
NateTG said:
In a sense it's a 'small conspiracy', since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that ensures correlation.
Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory.
 
  • #239
JesseM said:
Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory.

BM is explicitly non-local because it requires synchronization. The 'instantaneous communication' aspect can be handled by anticipation since BM is deterministic.

Now, let's suppose that we can assign a synchronization value to each space-time event in the universe, with the properties that:
(1) the synchronization value is a continuous function over space-time
(2) if event a is in the causal history of event b, then the synchronization value of a is less than the synchronization value of b.

Now, we should be able to apply BM to 'sheets' of space-time with a fixed synchronization value rather than to instants. Moreover, for flat regions of space-time these sheets should correspond to 'instants', so the predictions should align with experimental results.

Of course, it's not necessarily possible to have a suitable synchronization value. For example, in time-travel scenarios it's clearly impossible because of condition (2), but EPR isn't really a paradox in those scenarios either.
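[Editor's aside: the two conditions above amount to asking for a "time function" on the causal structure. A toy sketch with a finite set of hypothetical events (my own illustration; a real spacetime is a continuum): the length of the longest chain of causal ancestors gives a discrete synchronization value satisfying condition (2).]

```python
from functools import lru_cache

# Toy causal structure: event -> events in its immediate causal past.
# (Hypothetical events; just enough to show the ordering property.)
past = {
    "a": [],
    "b": ["a"],
    "c": ["a"],
    "d": ["b", "c"],
}

@lru_cache(maxsize=None)
def sync(event):
    """Longest chain of causal ancestors: a discrete 'synchronization value'."""
    return 1 + max((sync(p) for p in past[event]), default=0)

# Condition (2): if a is in the history of b, then sync(a) < sync(b).
for b_ev, parents in past.items():
    for a_ev in parents:
        assert sync(a_ev) < sync(b_ev)

print({e: sync(e) for e in past})
```

If the causal relation contained a loop, the recursion would never bottom out, mirroring the point that condition (2) cannot be satisfied in time-travel scenarios.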
 
  • #240
JesseM,

It seems I was wrong about GR being capable of extrapolating generic accelerations. I thought that the very small energy lost by radiation would not have a detectable effect on planetary orbits. This is true, but there are other effects, like Mercury's perihelion advance.

I'm still interested in your opinion regarding the big bang, though.

You said:

Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.

We know that GR alone cannot describe the big bang, as it doesn't provide a mechanism by which the different particles are created. So, if the pre-big-bang universe was not a null-sized object but one with a structure of some sort, and if it existed long enough in that state for a light signal to travel across the whole thing, would this ensure identical past light cones?
 
  • #241
NateTG said:
Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable, and, instead simply predicts probabilities of experimental results. In other words, as long as there are prediction equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one.
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.
 
  • #242
Ontoplankton said:
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.

Well, ultimately, it comes down to what 'simplest' means. And that requires some sort of arbitrary notions.
 
  • #243
I don't know... it's easy to specify a wavefunction: you just write down some equations, and then, without any complex further assumptions, you can talk about decoherence and so on to show that humans and their experiments are structures in the wavefunction. But how do you specify the collection of humans and their experiments without deriving it from something more basic? I think any theory that's anthropocentric like that is bound to violate Occam.
 
  • #244
We have one theory which says that:
1. We can predict experimental results using some method X
2. There are things that are not observable used in X.
3. These unobservable things have physical reality.
And another theory that says:
1. We can predict experimental results using the same method X.
2. There are things that are not observable used in X

Even considering that 'physical reality' is a poorly defined notion, it seems like the latter theory is simpler.
 
  • #245
The crucial difference here being that in the former theory, 1 is explained by 2 and 3, whereas in the latter theory, 1 is an assumption that comes from nowhere. Occam is bothered by complex assumptions, not complex conclusions. Once you've explained something, you can cross it off your list of baggage.

Also, the latter theory isn't complete; either the unobservable things exist or they don't, and you have to pick one.
 