Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #351
RUTA said:
I spent hours yesterday with my philosopher of science colleague reading van Fraassen and Rovelli on RQM. We think we have it figured out (it's a challenge, thus the van Fraassen's paper). Crudely, it's information theory plus the light cone structure. Overall, physics is about information and special relativity and QM are rules for the exchange of information. RQM says information exchange is local per SR with correlations per QM. RQM does not provide an underlying mechanism for those QM correlations, so we were frustrated until we figured that out and quit looking for his ontology.

The last sentence contains what it appears I need to respond to. No, it doesn't specify a mechanism any more than you specify a mechanism by which you know a heads-up coin has tails down. Now, you're obviously not going to get a one-to-one correspondence like a coin when dealing with ensembles and relational variables. Recall, as you noted, this casts not just RQM but QM itself as an "information theory". Thus the content of the wavefunction is a specification of what is known from prior measurements, not the actual physical content of the system. Read the quote from the paper I provided DrC again:
Relational EPR (http://arxiv.org/abs/quant-ph/0604064) said:
From the relational perspective the Heisenberg picture appears far more natural: ψ codes the information that can be extracted from past interactions and has no explicit dependence on time; it is adjusted only as a result of an interaction, namely as a result of a new quantum event relative to the observer. If physical reality is the set of these bipartite interactions, and nothing else, our description of dynamics by means of relative states should better mirror this fact: discrete changes of the relative state, when information is updated, and nothing else. What evolves with time are the operators, whose expectation values code the time-dependent probabilities that can be computed on the basis of past quantum events.
The lack of time dependence mentioned is because the information defined is only that information available from past interactions. The only things that evolve with time are the empirical time-dependent probabilities computed from past quantum events. If this were a physical theory, rather than an information theory, then you could properly talk about the evolution of relative variables. It is important to realize that the claim is that both RQM and QM are information theories. Talking about how two particles correlate spins in this situation is pointless, because all the measurement really did was fill in information we couldn't obtain from past measurements. It's like wondering how the other side of the heads-up coin knew to be tails, only we are dealing with ensembles of relational variables here (quantum events).

RUTA said:
Given what I (mis?)understand about RQM, I would say it does not accomplish local realism, weakly or otherwise. He's in the nonseparable (not realism) class, clearly, but exactly how he doesn't say. I'm inclined to think he's saying QM is fundamental, so there is no "why" for its correlations. This is like SR postulating the constancy of c. It's a postulate, so there is no explanation for "why" everyone measures the same speed for light. It's just a brute fact about information and its exchange.
Nonseparable, in the sense used in RQM and claimed for QM, is the same sense in which 10 red and 10 blue marbles are randomly mixed and placed equally in each of 2 boxes. Now, without looking in those boxes, the "information" you have about the number of red and blue marbles in each box is nonseparable. Yet opening one box instantly provides information about what's in the other box, and requires no FTL mechanism regardless of separation.
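
To make the bookkeeping of that analogy explicit, here is a minimal sketch (purely my own illustration of the marble analogy, not anything taken from RQM's formalism): the split is fixed at preparation, so opening one box updates what you know about the other instantly, with nothing travelling between them.

Code:
import random

# Toy bookkeeping for the marble analogy: 10 red and 10 blue marbles are
# shuffled and split 10/10 between two boxes at preparation time.
def one_trial():
    marbles = ['red'] * 10 + ['blue'] * 10
    random.shuffle(marbles)
    box_a, box_b = marbles[:10], marbles[10:]
    return box_a.count('red'), box_b.count('red')

for _ in range(5):
    red_a, red_b = one_trial()
    # Opening box A fixes what box B must contain, with no signal sent anywhere.
    print(f"box A holds {red_a} red, so box B must hold {10 - red_a} red (it does: {red_b})")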

Now RQM also justifies this "information theory" (RQM and QM) as a complete description of what can be known, and they're absolutely right in a 'purely' empirical sense. I still prefer a wider range of empirically equivalent model constructs.

RUTA said:
BTW, we think RBW can be used to provide a "why" for information theory and RQM, but that's another story.
RBW is an impressive construct. How difficult would it be to recognize if its ontology were remodeled into an exactly equivalent logical construct that reversed the concept of motion again? The issues raised wrt RQM make me question this. Ontologies are mostly more akin to coordinate systems than truth statements in my view, with some caveats.
 
  • #352
zonde said:
Scientific means that there is something testable about the idea apart from hand waving.
Where is that part in RUTA's or unusualname's views that lets you call them scientific?

My idea however can be tested easily any time (assuming of course you have the equipment for a basic Bell inequality test).
Another thing is that my idea is based on things that you observe in experiments and not on what's in other people's heads. So you can call my idea crazy, but it's much more tightly bound to what you can observe out there compared to those plausible other ideas.

OK, your idea is crazy.
 
  • #353
my_wan said:
Nonseparable, in the sense used in RQM and claimed for QM, is the same sense in which 10 red and 10 blue marbles are randomly mixed and placed equally in each of 2 boxes. Now, without looking in those boxes, the "information" you have about the number of red and blue marbles in each box is nonseparable. Yet opening one box instantly provides information about what's in the other box, and requires no FTL mechanism regardless of separation.

Thanks my_wan for the marble analogy, it’s simple and beautiful.

I want to add a 'function' to this story, to make it 'compatible' with my understanding of what happens at Bell test experiments:
  • 10 white and 10 white marbles are randomly mixed and placed equally in each of 2 boxes (i.e. entangled photons are spinless before measurement).

  • Now, without looking in those boxes, the "information" you have about the number of marbles is that there are 10 white marbles in every box.

  • We separate the boxes by 18 km, so they cannot influence each other.

  • For the 'measurement' we have arranged a sloping bridge that will randomly make the marble roll off to the right or to the left. If the marble goes left, it will land in a red dyeing bath, and come out as a red marble for inspection. If the marble goes right, it will land in a blue dyeing bath, and come out as a blue marble for inspection.

  • When we run this experiment thousands of times, with different angles on the sloping bridge (thus changing the probability for red vs. blue), it turns out that the marbles in the two boxes are perfectly correlated with each other. And not only that, the actual correlation corresponds exactly to the predictions of QM probabilities for marbles on a sloping bridge!

Now, did I poke a hole in the 'RQM Box', or not...!?

(:wink:)
 
  • #354
DevilsAvocado said:
Thanks my_wan for the marble analogy, it’s simple and beautiful.

I want to add a 'function' to this story, to make it 'compatible' with my understanding of what happens at Bell test experiments:
  • 10 white and 10 white marbles are randomly mixed and placed equally in each of 2 boxes (i.e. entangled photons are spinless before measurement).
  • Now, without looking in those boxes, the "information" you have about the number of marbles is that there are 10 white marbles in every box.
  • We separate the boxes by 18 km, so they cannot influence each other.
  • For the 'measurement' we have arranged a sloping bridge that will randomly make the marble roll off to the right or to the left. If the marble goes left, it will land in a red dyeing bath, and come out as a red marble for inspection. If the marble goes right, it will land in a blue dyeing bath, and come out as a blue marble for inspection.
  • When we run this experiment thousands of times, with different angles on the sloping bridge (thus changing the probability for red vs. blue), it turns out that the marbles in the two boxes are perfectly correlated with each other. And not only that, the actual correlation corresponds exactly to the predictions of QM probabilities for marbles on a sloping bridge!

Now, did I poke a hole in the 'RQM Box', or not...!?

(:wink:)
Cool, but I don't see this working, because as an information theory it is our information that lacks spin, not necessarily the photon. But let's assume they are all white, and design our thought experiment (detectors) to better represent the conservation laws we do have information about. The same conservation laws that require EPR correlations to begin with.

We have 20 white marbles. We collide (interact) pairs of marbles such that they leave this collision/interaction in opposite directions on the X axis. 18 km away, in each direction, we have a pair of side by side paint buckets such that one is on each side of the X axis in each direction. I'll also show this is still valid even if the marbles are never painted (relational interpretation).

Now, when one marble lands in the (-X,Y) quadrant bucket, conservation laws demand that it is more probable that the other marble will land in the (X,-Y) bucket. Furthermore, we don't even have to paint the marbles when they land in our buckets; they remain white. We merely define marbles that land in the (-X,Y) and (X,Y) buckets as red, and (-X,-Y) and (X,-Y) as blue (opposite properties per correlation). Red and blue then become purely relational concepts, relative to the configuration of the bucket detectors and their overall geometry. In that case red and blue are no more real after the measurement than before, except relative to the measuring device.

Note: If QM is an information theory, as RQM posits, it can't be claimed a priori that this is a reasonable analogy of how it works, as it provides 'information' that is by definition not available. RQM does explicitly posit relational properties analogous to the relational red and blue properties above. So, like Bell's theorem, even if we assume RQM is perfectly valid in principle, it does not prove that no FTL mechanism exists or is involved, only that none is required.
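
As a purely illustrative sketch of the bucket thought experiment above (my own toy, not something RQM itself supplies), the anti-correlation comes entirely from the conservation law at the source, while "red" and "blue" exist only as labels fixed by the detector geometry:

Code:
import random

def run_pairs(n=100_000, p_conserve=0.9):
    """Pairs leave the interaction with (mostly) opposite transverse kicks."""
    label = lambda y: 'red' if y > 0 else 'blue'   # relational label set by bucket geometry
    opposite = 0
    for _ in range(n):
        left_y = random.choice([+1, -1])                               # kick of the left-going marble
        right_y = -left_y if random.random() < p_conserve else left_y  # conservation, most of the time
        opposite += (label(left_y) != label(right_y))
    return opposite / n

print("fraction of pairs with opposite labels:", run_pairs())
# Flip the bucket convention on one side and the very same landings get the other
# label: the "color" is relative to the measuring arrangement, not carried by the marble.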
 
  • #355
ThomasT said:
If one of your electrical appliances doesn't turn on, what's the FIRST thing you do? Check to see that it's plugged in, right? Unfortunately the equivalent isn't happening wrt speculations surrounding the incompatibility between lhv models and entanglement.

You got the analogy backwards. This is exactly what happened in the 30 years from 1935 to 1965. A common view was that local realism was compatible with the predictions of QM. Bell put that to an end. At least mostly. :smile:

Anyway, I have personally spent plenty of time looking for cracks in Bell. Thousands of others have too. So you are really selling the physics community short, as well as repeating the same unsubstantiated claims.
 
  • #356
DrChinese said:
You got the analogy backwards. This is exactly what happened in the 30 years from 1935 to 1965. A common view was that local realism was compatible with the predictions of QM. Bell put that to an end. At least mostly. :smile:

Anyway, I have personally spent plenty of time looking for cracks in Bell. Thousands of others have too. So you are really selling the physics community short, as well as repeating the same unsubstantiated claims.
Here I agree with you, though I'm not so sure about "mostly". :smile:

EPR correlations do in fact rule out Einstein realism in the sense of absolute observables that can be completely passively observed empirically. Foundationally I don't see this as a problem, because something that exists but doesn't interact with anything isn't observable. Thus what we observe empirically are interactions, not things. However, with Bell's theorem we are provided with two choices: 1) Take EPR correlations as evidence to look beyond the standard local effects, which they are evidence for, though not proof. 2) Use the constraints imposed by EPR correlations to abandon the set of locally realistic models they contradict, and try to ferret out the subset capable of maintaining consistency. Which to date hasn't provided any new physics, only interpretations with varying levels of cogency.

Both these possibilities need to be investigated, and the first one to succeed, if either can, wins. Debate points notwithstanding, only physics. Thus there are no winners as of today. I therefore object to overstating claims on both ends. :-p

ThomasT,
Yes, DrC is correct that your analogy unduly shortchanges a lot of awe-inspiring work by many, many brilliant people. Many things I once assumed were extremely unlikely have in fact come to pass.
 
  • #357
my_wan said:
Cool, but I don't see this working, because as an information theory it is our information that lacks spin, not necessarily the photon. But let's assume they are all white, and design our thought experiment (detectors) to better represent the conservation laws we do have information about. The same conservation laws that require EPR correlations to begin with.

Okay, you know these things better than me, and I’m 'digesting' RQM. But still it seems to me that there is one thing missing in the story about the marbles, and I’ll highlight the 'weak' part in red:
my_wan said:
Nonseparable, in the sense used in RQM and claimed for QM, is the same sense in which 10 red and 10 blue marbles are randomly mixed and placed equally in each of 2 boxes.

To my understanding, the elegance of Bell in investigating "spooky action at a distance" was to implement "randomness at a distance", setting the final parameters for the experiment in separate light cones and thus prohibiting any local influence.

To me it looks like you implement the randomness at the "local source", and then put the objects in the "closed information box". Is this really consistent with Bell test experiments (BTE)...?

my_wan said:
Note: If QM is an information theory, as RQM posits, it can't be claimed a priori that this is a reasonable analogy of how it works, as it provides 'information' that is by definition not available. RQM does explicitly posit relational properties analogous to the relational red and blue properties above. So, like Bell's theorem, even if we assume RQM is perfectly valid in principle, it does not prove that no FTL mechanism exists or is involved, only that none is required.
I’m clearly missing something in RQM. I agree that QM is an information theory, and we really can’t know what’s in the "box". BUT, we can measure differences in different BTE setups. If we use non-entangled photons in a BTE, we get a result that doesn’t correspond to the predictions of QM, and if we use entangled photons – we get correspondence with QM predictions. This must mean that something happens that cannot be explained with 'every-day-local-reality', even if we all agree that there’s no 'usable information' sent FTL.
 
  • #358
IcedEcliptic said:
I have been quietly reading this for some time, and I agree. I don't subscribe to the RUTA or unusualname view, but I recognize them as scientific and plausible.
You have a strange interpretation of scientific and plausible.

IcedEcliptic said:
Zonde, ThomasT, you are selling your own crazed theories.
Which indicates that you don't understand what either of us is saying. We're advocating certain approaches to different aspects of the EPR-Bell stuff.

Here's a link that explains what I'm saying better than I could:

http://arxiv.org/PS_cache/quant-ph/pdf/0001/0001112v3.pdf
 
  • #359
DevilsAvocado said:
To my understanding, the elegance of Bell in investigating "spooky action at a distance" was to implement "randomness at a distance", setting the final parameters for the experiment in separate light cones and thus prohibiting any local influence.
You have to consider it in historical context (short version). This is a very, very important historical piece to understand. If you get this, everything should conceptually click into place.

When the mechanics was introduced by Born in 1926, the probabilities were to be understood as fundamental, without cause. A year later at the Solvay Conference, Heisenberg and Born declared the quantum revolution was over, that the physics was essentially complete and final. This stoked the famous Bohr–Einstein debates in earnest. Einstein used the conservation of energy to obtain information about the interference process, now called the EPR paradox, which he said contradicted the principle of indeterminacy. Einstein never suggested that the EPR correlations didn't exist. Ironically, people now often think EPR correlations prove Einstein wrong, when in fact his argument depended on the correlations being real. If EPR correlations weren't real, then his argument that they violate indeterminacy would be invalid, i.e., ridiculous.

It became the accepted wisdom that indeterminacy was real, and the randomness was fundamental, without cause. Jump ahead 40-odd years, and we get Bell, Aspect, etc., with unambiguous experimental confirmation of EPR. Now, since indeterminacy is by definition true and fundamental, it means Einstein was wrong, even though he correctly predicted EPR correlations to argue against indeterminacy. Thus there must be some kind of new effect (FTL) to keep both indeterminacy and EPR correlations, and maintain that Einstein was wrong with his correct prediction. It is a paradox only because we still maintain that indeterminacy is true at the most fundamental level of nature.

Meanwhile, the search for a local EPR mechanism became the standard by which a search for hvt's was conducted, only with indeterminacy as a fundamental property in spite of EPR being correctly predicted by Einstein to undermine indeterminacy as a "fundamental" property.

Now here's the caveat: If there exists a causal mechanism for indeterminacy, such that it is real but not fundamental, then EPR correlations prove exactly what Einstein said they prove. It is only through the acceptance of indeterminacy as a "fundamental" property that extra FTL mechanisms are needed to save the fundamental character of indeterminacy.

DevilsAvocado said:
To me it looks like you implement the randomness at the "local source", and then put the objects in the "closed information box". Is this really consistent with Bell test experiments (BTE)...?
Yes I did implement randomness at the "local source", and you rightly want to know if this really is consistent with EPR correlations. Well that depends. I'll lay out exactly what that depends on in the most general case.

1) If indeterminacy is a truly fundamental property of 'actual reality': then "local source" is not consistent.
2) If indeterminacy has a causal mechanism in 'actual reality': then "local source" is consistent.
(And we don't know what 'actual reality' is at this time, or even if it's a meaningful claim.)

Ironically (again to make this very important point clear), people often think that for Einstein to be right the correlation experiments must fail, when in fact Einstein correctly predicted them with the expectation that they were real, not to prove EPR correlations didn't exist, but to prove that because they did exist indeterminacy had a more fundamental cause.

Your next paragraph is perfect to finish this argument with.

DevilsAvocado said:
I’m clearly missing something in RQM. I agree that QM is an information theory, and we really can’t know what’s in the "box". BUT, we can measure differences in different BTE setups. If we use non-entangled photons in a BTE, we get a result that doesn’t correspond to the predictions of QM, and if we use entangled photons – we get correspondence with QM predictions. This must mean that something happens that cannot be explained with 'every-day-local-reality', even if we all agree that there’s no 'usable information' sent FTL.
Note the "local source" legitimacy requirements above call for "actual reality". Note that RQM turns QM into an "information theory". If it's a theory about the information we have about the reality, rather than a theory about the reality of nature it claims to be, then by definition we can't claim any part of it as a fundamental property of "actual reality", including indeterminacy.

There are further constraints EPR correlations place on such hvt's. That is, indeterminacy is very real, just not a fundamental causeless property of nature. The only realistic way I know to pull that off is with ensembles of many properties, like the thermodynamic model I referenced. Ironically, this means that the causal mechanism required to explain indeterminacy must itself appear indeterminate, at least on empirical, not fundamental, grounds. More or less analogous to classical thermodynamic properties.

What I find strange here is people searching for realistic causal mechanisms (FTL or not) when the very notion of a causal mechanism subverts the justification for needing a causal mechanism to explain EPR. Of course, for FTL I guess it could be assumed that EPR needs a causal mechanism but indeterminacy doesn't. But what about the question of whether the wavefunction itself is real? That would directly imply a causal mechanism for indeterminacy, unambiguously subverting the need for an FTL causal mechanism for EPR correlations.
 
  • #360
DrChinese said:
You got the analogy backwards. This is exactly what happened in the 30 years from 1935 to 1965. A common view was that local realism was compatible with the predictions of QM. Bell put that to an end. At least mostly. :smile:
My understanding was that QM and hidden variables were assumed to be incompatible due to von Neumann's influence. Bell pointed out the flaw in von Neumann's proof 30 years after Grete Hermann did (but nobody paid any attention to her). This is in one of the Mermin papers I referenced.

DrChinese said:
Anyway, I have personally spent plenty of time looking for cracks in Bell. Thousands of others have too. So you are really selling the physics community short, as well as repeating the same unsubstantiated claims.
What I'm talking about isn't a 'crack in Bell'. In my opinion, he really has ruled out lhv theories. You just don't get yet how that can be and still not need nonlocality or ftl propagations. The paper I linked to by Unnikrishnan in an earlier post should help clarify where I'm coming from.
 
  • #361
ThomasT said:
My understanding was that QM and hidden variables were assumed to be incompatible due to von Neumann's influence.

That was not completely accepted, although it was certainly influential. Einstein would - in my opinion - have accepted the Bell proof as conclusive had he lived to see it. But he did not accept von Neumann's.
 
  • #362
ThomasT said:
You have a strange interpretation of scientific and plausible.

Which indicates that you don't understand what either of us is saying. We're advocating certain approaches to different aspects of the EPR-Bell stuff.

Here's a link that explains what I'm saying better than I could:

http://arxiv.org/PS_cache/quant-ph/pdf/0001/0001112v3.pdf

I understand that you are both borderline crackpots.
 
  • #363
ThomasT said:
Here's a link that explains what I'm saying better than I could:

http://arxiv.org/PS_cache/quant-ph/pdf/0001/0001112v3.pdf

And this is almost like referencing yourself. Basically, the article says: Entanglement is local realistic, proving that entanglement is local realistic.

It isn't that simple. At least De Raedt offered up a formula that leads to a local realistic dataset. Where is the same for this? As I keep pointing out, there is a simple test for any candidate LHV and this one FAILS miserably. Again.
 
  • #364
I thought crackpots were dealt with on this site. I am satisfied that Zonde and ThomasT are that, and are offering personal theories.
 
  • #365
my_wan said:
If you get this, everything should conceptually click into place.

WOW+WOW+WOW+WOW+WOW!

This is the most intelligent and interesting post I’ve ever read on PF! ABSOLUTELY MIND-BLOWING!

Not only a "click", I almost broke my neck, jumping up & down in my chair! :biggrin:

...I feel dizzy, exhausted...

my_wan said:
Now here's the caveat: If there exists a causal mechanism for indeterminacy, such that it is real but not fundamental, then EPR correlations prove exactly what Einstein said they prove. It is only through the acceptance of indeterminacy as a "fundamental" property that extra FTL mechanisms are needed to save the fundamental character of indeterminacy.

I get this, and it’s absolutely fantastic and beautiful! There might be a causal mechanism to create the non-causal mechanism we (so far) believe is "ground zero" in QM. And that causal mechanism must look like it’s indeterminate, even if it’s not on fundamental grounds. WOW!

I always loved Einstein, but at the same time I always liked the randomness of the universe – and my free will. It looks like you’ve given me a wonderful solution to this dilemma. Thanks!

(I guess all this is strongly related to "you-know-what"...? :wink:)

my_wan said:
What I find strange here is people searching for realistic causal mechanisms (FTL or not) when the very notion of a causal mechanism subverts the justification for needing a causal mechanism to explain EPR. Of course, for FTL I guess it could be assumed that EPR needs a causal mechanism but indeterminacy doesn't. But what about the question of whether the wavefunction itself is real? That would directly imply a causal mechanism for indeterminacy, unambiguously subverting the need for an FTL causal mechanism for EPR correlations.

This is very good: We are looking for a causal "FTL" mechanism to explain the non-causal mechanism (QM probabilities) in EPR/BTE – where probabilities are the actual proof for EPR/BTE being a true paradox!? That’s a REAL paradox!

To sum up: If "FTL" is not true, then there must be a local causal explanation for EPR/BTE, and QM is incomplete (anyway).

(But... wait a minute... if QM and HUP are incomplete? Wouldn’t that mean we could send FTL messages with Quantum teleportation!? Thus meaning FTL is true?? :rolleyes:)

I have shown this video before, but it’s a very nice wind-up for this wonderful news, where Alain Aspect talks about EPR, Albert Einstein & Niels Bohr and the incompleteness of QM. And when Alain Aspect, at the end of the movie (7:50), says – "You cannot get the wool information of system!" – I suspect that one word could be slightly 'wrong'... :biggrin:

https://www.youtube.com/watch?v=m8P--jFe3vM

THANKS!
 
  • #366
IcedEcliptic said:
I understand that you are both borderline crackpots.

IcedEcliptic said:
I thought crackpots were dealt with on this site. I am satisfied that Zonde and ThomasT are that, and are offering personal theories.
Zonde is arguing that the fair sampling loophole has not been sufficiently closed, which has been an accepted argument, based on accepted scientific methodology, for years. So, he's certainly not being a crackpot. Even though I think that what I'm focusing on sort of moots any loophole argument.

If you don't understand what Zonde's saying, then I wouldn't expect that you would understand what I'm saying either. Read Unnikrishnan's paper that I linked to. Pay attention to the part about an internal nondynamical phase variable imparted at emission.

If you think it's crackpotty to think that correlations between counter-propagating photons emitted during the same atomic transition could possibly be due to their being emitted during the same atomic transition, then, as I mentioned in a previous post, you have a strange interpretation of scientific and plausible.
 
  • #367
DrChinese said:
And this is almost like referencing yourself. Basically, the article says: Entanglement is local realistic, proving that entanglement is local realistic.

It isn't that simple. At least De Raedt offered up a formula that leads to a local realistic dataset. Where is the same for this? As I keep pointing out, there is a simple test for any candidate LHV and this one FAILS miserably. Again.
I think you might have missed the point. Unnikrishnan reproduces the QM predictions with an explicitly local model. AFAIK, his result hasn't been refuted.
 
  • #368
ThomasT said:
I think you might have missed the point. Unnikrishnan reproduces the QM predictions with an explicitly local model. AFAIK, his result hasn't been refuted.

I just did. You cannot simply SAY it is local realistic. You must demonstrate such. Obviously, this one fails. Otherwise, we would be treated to the solution. You MUST be able to generate a dataset which is realistic.

I am not going to waste my time trying to figure out this gibberish. If you want to use his formula to present a valid set of data, I will look at it. BUT QUIT SAYING IT WITHOUT SHOWING IT!

The entire point of Bell is that he cannot follow through on his claims. I can claim 1=2 too. So what? I can even "prove" it! So please take a minute to understand, we don't want an absurd claim when a dataset can do the trick. Remember: it must have answers for 0/120/240!
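
To make the 0/120/240 requirement concrete, here is a minimal sketch of that kind of check (my own illustration of the standard three-angle argument, under the usual convention that equal settings give identical results; not the particular test referred to above):

Code:
from itertools import product

# Suppose each pair carries predetermined answers (+1/-1) for the three settings
# 0, 120 and 240 degrees, identical on both sides (that is what reproduces the
# perfect correlations when Alice and Bob happen to choose the same angle).
def match_rate(assignment):
    # average chance of identical outcomes when the two sides pick *different* settings
    pairs = [(0, 1), (0, 2), (1, 2)]
    return sum(assignment[i] == assignment[j] for i, j in pairs) / len(pairs)

rates = [match_rate(a) for a in product([+1, -1], repeat=3)]
print("minimum match rate over all 8 predetermined assignments:", min(rates))     # 1/3
print("QM prediction for polarization-entangled pairs 120 degrees apart:", 0.25)  # cos^2(120)
# No mixture of such assignments can average below 1/3, so no dataset of this
# kind can reach the 0.25 that QM predicts and experiments confirm.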
 
  • #370
DevilsAvocado said:
(But... wait a minute... if QM and HUP are incomplete? Wouldn’t that mean we could send FTL messages with Quantum teleportation!? Thus meaning FTL is true?? :rolleyes:)

No, because if indeterminacy is not fundamental, then the information producing the correlation can simply be carried by the particles from the original interaction, like the marble analogy. Only the mechanism that carries it must be indeterminate, but not fundamentally so, i.e., a relational ensemble of real (though observer characterized) states. A rabbit is defined by an ensemble of states, such that no two rabbits are precisely the same, or even precisely the same moment to moment, yet they all carry the property "rabbit" wherever they go. Unless they interact with an ensemble with the property called "wolf".

The issue of completeness also has some caveats. If you have a basic computer, and a theory that provides you with the output for every possible input, would that be a complete theory of that computer? In fact it would, even if you had no idea what that computer did or how, or even whether there was a computer there at all. Your "complete" I/O theory would then be an "information theory", not a physical theory. That is what RQM defines QM to be, and it justifiably labels it complete in that context. The term "complete" carries an entirely different status in that situation than what is assumed by Einstein realism, which needs to know every part and operation of the physical computer to be labeled complete. The search for hvt's entails finding and dissecting that computer, whether FTL mechanisms are involved or not.
 
  • #371
ThomasT said:
Zonde is arguing that the fair sampling loophole has not been sufficiently closed, which has been an accepted argument, based on accepted scientific methodology, for years. So, he's certainly not being a crackpot. Even though I think that what I'm focusing on sort of moots any loophole argument.

If you don't understand what Zonde's saying, then I wouldn't expect that you would understand what I'm saying either. Read Unnikrishnan's paper that I linked to. Pay attention to the part about an internal nondynamical phase variable imparted at emission.

If you think it's crackpotty to think that correlations between counter-propagating photons emitted during the same atomic transition could possibly be due to their being emitted during the same atomic transition, then, as I mentioned in a previous post, you have a strange interpretation of scientific and plausible.

I've read that paper, and I believe that Dr. Chinese summed up that tidbit quite well. The issue is not whether the formalism of QM is untenable in some ways, but rather that you are arguing for a loophole that I, along with many others, believe has been utterly disproven as a factor. You both argue against something, and grasp at straws to do so, but really your pages of arguments boil down to not liking SQM, and not having a superior theory in its place.

The paper is like your arguments, which Dr. Chinese continues to rip to confetti; it is scattered, distracting, and fundamentally lacking in substance. Its only strength is that you have yet to state enough of your position to refute it on purely scientific grounds, but that is yet another weakness. You go on and on about a non-existent loophole, and Zonde has ideas about Malus' Law that are somewhere between laughable and upsetting.

Your papers are crocks, you have pages of nonsense and rhetoric, but you continue your circular retreat, and in theory you will keep this up until a mentor finally accepts that you are a genuine crackpot and not just misguided.
 
  • #372
DrChinese said:
That was not completely accepted, although it was certainly influential. Einstein would - in my opinion - have accepted the Bell proof as conclusive had he lived to see it. But he did not accept von Neumann's.
Not sure how you can support that when in fact Einstein's argument required EPR correlations to be real (Bell's inequalities to be violated) as his justification (now empirical) to claim indeterminacy wasn't "fundamental". If indeterminacy has a cause, not fundamental, then that causal mechanism can in principle carry the relevant correlation information with the particle from the time the correlation was created. If indeterminacy is fundamental, without cause, then this is not possible and a FTL mechanism is required for EPR correlations. But if indeterminacy is acausal, why can't correlations be acausal?

Once you allow quantum randomness to have a causal mechanism of any sort, then this same mechanism, in principle, allows correlation information to be carried by the particle from the initial interaction. It's ONLY the lack of a causal mechanism of indeterminacy that makes EPR weird. I don't get why that's so difficult.

No, Einstein most certainly would not have accepted Bell's proof as an FTL mechanism; he would have considered it proof of his original claim that indeterminacy has a "causal" mechanism, which carries the correlation information from the initial interaction.
 
  • #373
Here is a link to the original EPR paper:
http://www.phys.uu.nl/~stiefelh/epr_latex.pdf

Take special note of the definition of reality. It repeatedly states that a comprehensive definition is unnecessary, and that what is provided is merely sufficient for the purposes at hand. Sounds like my writing, and it notes that many other ways exist to recognize reality. The point here is that narrowing in on the one definition provided isn't a valid rebuttal (hence the von Neumann rejection), because it was merely chosen as "sufficient", as one way of recognizing reality relevant to the EPR paper's argument.

Now the key sentence:
http://www.phys.uu.nl/~stiefelh/epr_latex.pdf said:
[...] we arrive at the conclusion that two physical quantities, with non-commuting operators, can have simultaneous reality.
Why did they come to this conclusion?
Because EPR correlations are REAL.
Because Bell's inequalities are VIOLATED.
I don't think, in their wildest imagination, they considered that future generations would actually call an empirically verified prediction a failure.

Now look at the definition given for a complete theory:
http://www.phys.uu.nl/~stiefelh/epr_latex.pdf said:
Every element of the physical reality must have a counterpart in the physical theory.
Thus the very act of postulating an FTL "mechanism" justifies the conclusion of the original EPR paper, due to the above correct prediction:
http://www.phys.uu.nl/~stiefelh/epr_latex.pdf said:
We are thus forced to conclude that the quantum-mechanical description of physical reality given by the wavefunction is not complete.
Thus either FTL mechanisms or local hidden variables fully justify the claims of the paper; only a rejection of the paper itself is required to justify FTL mechanisms.

Now to reiterate "completeness". RQM defines QM as an "information theory". For an information theory to be "complete" does not require that every physical element that defines that information be defined. The definition provided by the EPR paper specifically extended "completeness" to include not only complete information, but also a physical specification of what defines that information. Thus both sides are arguing "completeness" while rejecting that there are two sets of definitions in use.

In no way, shape, or form did Einstein ever reject the validity of any prediction of QM whatsoever. He was an integral part of its development till it was declared complete.
 
  • #374
my_wan said:
Not sure how you can support that when in fact Einstein's argument required EPR correlations to be real (Bell's inequalities to be violated) as his justification (now empirical) to claim indeterminacy wasn't "fundamental". If indeterminacy has a cause, not fundamental, then that causal mechanism can in principle carry the relevant correlation information with the particle from the time the correlation was created. If indeterminacy is fundamental, without cause, then this is not possible and a FTL mechanism is required for EPR correlations. But if indeterminacy is acausal, why can't correlations be acausal?

Once you allow quantum randomness to have a causal mechanism of any sort, then this same mechanism, in principle, allows correlation information to be carried by the particle from the initial interaction. It's ONLY the lack of a causal mechanism of indeterminacy that makes EPR weird. I don't get why that's so difficult.

No, Einstein most certainly would not have accepted Bell's proof as an FTL mechanism; he would have considered it proof of his original claim that indeterminacy has a "causal" mechanism, which carries the correlation information from the initial interaction.

Well, there are some differences in our views of the historical record. But that is really not surprising, it sort of depends on how you read them and in which order.

When you say "EPR correlations", I assume you mean the so called perfect correlations. Yes, EPR assumes those. There is an element of reality for all angle settings where Alice and Bob pick the same settings. So we agree about that. Einstein accepted this and assumed - reasonably for the time - that a locally causal theory could eventually replace/augment QM at some point in the future.

But Einstein obviously could never have known about Bell inequalities... they weren't discovered until almost 10 years after his death. That changed things dramatically, as Einstein's hopes were no longer feasible. I easily believe that Einstein would have accepted the Bell result as irrefutable. But would he have abandoned locality over realism (or vice versa)? I can't say.
 
  • #375
my_wan said:
... No, Einstein most certainly would not have accepted Bell's proof as an FTL mechanism; he would have considered it proof of his original claim that indeterminacy has a "causal" mechanism, which carries the correlation information from the initial interaction.

I agree. The whole EPR question was whether QM could be considered incomplete, in need of LHV or FTL. And if we today have proven that Einstein was right (QM needs LHV or FTL), then Einstein would of course be an advocate of finding a local mechanism to explain the paradox. I don’t think he would have started the research claiming – Well, I was right about EPR and the incompleteness of QM, and that also sadly proved GR/SR totally wrong, via the proven FTL mechanism... Let’s start from scratch! :biggrin:
 
  • #376
my_wan said:
No, because if indeterminacy is not fundamental, then the information producing the correlation can simply be carried by the particles from the original interaction, like the marble analogy. Only the mechanism that carries it must be indeterminate, but not fundamentally so, i.e., a relational ensemble of real (though observer characterized) states. ...

I understand this, almost... the 'thing' that looks to us like an FTL mechanism cannot be 'exposed' in its underlying determinism...?

The reason I asked is that in the video in https://www.physicsforums.com/showpost.php?p=2721494&postcount=365 Anton Zeilinger talks about quantum teleportation, and the reason we can’t "beam" Alice over to Bob is that the HUP makes it impossible to measure Alice exactly without destroying her (and then the joke about the "Heisenberg compensator" that "works very well, thank you").

But I have another (hopefully) tricky question for you:

Suppose we send 100 entangled photons towards the polarizers, and there is no FTL mechanism; all is handled by an 'underlying relational local ensemble' at the source.

What happens, and how is this handled, if we for example have one year between every entangled pair? Is there a "Global RAM" that 'memorizes' the statistics to be consistent with the predictions of QM??
 
  • #377
my_wan said:
1. Here is a link to the original EPR paper:
http://www.phys.uu.nl/~stiefelh/epr_latex.pdf

2. ... I don't think, in their wildest imagination, they considered that future generations would actually call an empirically verified prediction a failure.

3. In no way, shape, or form did Einstein ever reject the validity of any prediction of QM whatsoever. He was an integral part of its development till it was declared complete.

1. Waa, my own site is still down, which also contains this paper. So thanks for posting a link.


2. Oh, but they were quite wrong on this key point. Per their conclusion:

"One could object to this conclusion on the
grounds that our criterion of reality is not suf-
ficiently restrictive. Indeed, one would not ar-
rive at our conclusion if one insisted that two
or more physical quantities can be regarded
as simultaneous elements of reality only when
they can be simultaneously measured or pre-
dicted. On this point of view, since either one
or the other, but not both simultaneously, of
the quantities P and Q can be predicted, they
are not simultaneously real. This makes the
reality of P and Q depend upon the process
of measurement carried out on the first system
in any way. No reasonable definition of reality
could be expected to permit this.

While we have thus shown that the wave
function does not provide a complete descrip-
tion of the physical reality, we left open the
question of whether or not such a description
exists. We believe, however, that such a theory
is possible."

Note how they say: no reasonable definition of reality will allow this! They absolutely did not see that P & Q are NOT simultaneously real (unless of course, there are FTL effects). Further, they believed that a more complete description of the system is possible! We now know that is not true. At least, not as EPR envisioned.


3. I would agree that Einstein did not reject the predictions of QM.
 
  • #378
?:bugeye:? This is confusing... even if it never is mentioned in words in the paper, when EPR talks about "P and Q" they are talking about Spin(p,q), right?? Or to be precise, simultaneously Vertical + Horizontal Spin in the same particle, right??

If this is correct, then what they are saying is:

1) The quantum-mechanical description of reality given by the wave function is not complete *OR*

2) When the operators corresponding to two physical quantities do not commute, the two quantities cannot have simultaneous reality. Starting then with the assumption that the wave function does give a complete description of the physical reality, we arrived at the conclusion that two physical quantities, with non-commuting operators, can have simultaneous reality.

Meaning: If we measure V spin at Alice we also make V spin at Bob real (entanglement). But then we can also measure H spin at Bob, yet quantities that do not commute cannot be simultaneously real according to QM = the QM description of reality given by the wave function is not complete!

Or did I miss something (again)??

Anyone interested in commuting, sorry, communicating, is welcome.
 
  • #379
DevilsAvocado said:
?:bugeye:? This is confusing... even if it never is mentioned in words in the paper, when EPR talks about "P and Q" they are talking about Spin(p,q), right?? Or to be precise, simultaneously Vertical + Horizontal Spin in the same particle, right??

If this is correct, then what they are saying is:

1) The quantum-mechanical description of reality given by the wave function is not complete *OR*

2) When the operators corresponding to two physical quantities do not commute, the two quantities cannot have simultaneous reality. Starting then with the assumption that the wave function does give a complete description of the physical reality, we arrived at the conclusion that two physical quantities, with non-commuting operators, can have simultaneous reality.

Could be P and Q (momentum and position), or could be non-commuting spins. I think it is a lot easier to talk about spin because then there is no confusion of the purity of their states (i.e. they are on a completely equal basis with each other, and each can take on a random value of either + or -).

An electron has spin components along three axes: x, y and z.

Now, keep in mind that they acknowledged that if QM is complete, there cannot be simultaneous independent reality of P & Q. On that I agree with EPR. The part I disagree with (as do most) is that it is unreasonable to deny the simultaneous reality of P & Q. It is that last leap which led to their error.
 
  • #380
Nice, you called me out on the right issues, but made bad assumptions about my position.

DrChinese said:
Well, there are some differences in our views of the historical record. But that is really not surprising, it sort of depends on how you read them and in which order.
Yes, it was a nutshell history, and I did shortchange the significant contributions Bell's inequalities made with respect to realism. In my latest post I made it sound 'as if' the EPR correlations of the original paper are equivalent to Bell's inequalities, and this is very far from true. The fact is Bell's inequalities make fine distinctions in the character of some classes of causal mechanisms undreamed of in the original EPR paper, which merely chose a simple definition as "sufficient" for the simple case of existential correlations. The case for Einstein being wrong is actually predicated on a stronger argument made by Bell's theorem, which in fact rules out the simple operational definition of reality used in the paper. The EPR paper did repeatedly note this, and referred to this definition this way: "Regarded not as a necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality." That "merely as sufficient" qualification left a door open here that Bell's theorem has yet to close. The door that I started this debate with.

DrChinese said:
When you say "EPR correlations", I assume you mean the so called perfect correlations. Yes, EPR assumes those. There is an element of reality for all angle settings where Alice and Bob pick the same settings. So we agree about that. Einstein accepted this and assumed - reasonably for the time - that a locally causal theory could eventually replace/augment QM at some point in the future.

Yes, I spoke in terms of perfect correlations only in the last few posts. I can't specifically object to the term "element of reality", as it doesn't define itself. So up next I'll reiterate exactly what Bell's theorem assumes and the issue with that.

DrChinese said:
But Einstein obviously could never have known about Bell inequalities... they weren't discovered until almost 10 years after his death. That changed things dramatically, as Einstein's hopes were no longer feasible. I easily believe that Einstein would have accepted the Bell result as irrefutable. But would he have abandoned locality over realism (or vice versa)? I can't say.
True, Einstein couldn't know, but clearly the equivocation on the definition of reality in EPR, and later the rejection of von Neumann's proof, showed an understanding of the issues in drawing a linear relation between ontic (perhaps unobservable) elements of nature and the variables they define. Von Neumann considered an ensemble K such that the subsets ki, kj, ..., should hold the properties of K, which they clearly don't. This is essentially the "preexisting property assumption" required by Bell's notion of reality, the same simplistic notion used in EPR, with equivocation. Bell was quite clear that the inequality derivation itself didn't require any notion of realism at all. The question is, is this a "sufficient" case for realism in the general case, as it was in the more restricted argument in EPR? The answer is absolutely no. Bell merely ruled out this simplistic class of intrinsic properties. Emergence is a ubiquitous phenomenon in nature, making such linear assumptions unreasonable.

This paper goes over in some detail the limits of what Bell's theorem can say wrt realism:
http://arxiv.org/abs/0811.2862
There simply is no a priori reason to assume ontic entities have preexisting properties independent of measurement in the classical or empirical sense. The fact is that models that take advantage of contextual/relational variables can successfully model correlation statistics. Within thermodynamics, variance between intrinsic and extrinsic variables is a normal feature, which means non-commuting variables are to be expected. Conjugate variables have a very tight analog to these QM properties, which are only untenable when you assume measurables are preexisting properties. Preexisting properties also lead to the so-called vacuum catastrophe, deepen the mystery of why the total energy of the Universe is apparently zero, virtual particles, etc.

None of this proves a realistic model is valid, local or otherwise. But to say that Bell's inequalities rule out locally realistic theories is simply not tenable. Yes, Bell's results are irrefutable; it is simply overreaching to say that ruling out naive linear relations between properties and things says much about realism at all. I see no reason to give up either locality or realism without a much better reason than a grossly overstated interpretation of Bell's theorem.
 
  • #381
DrChinese said:
1. Waa, my own site is still down, which also contains this paper. So thanks for posting a link.


2. Oh, but they were quite wrong on this key point. Per their conclusion:

"One could object to this conclusion on the
grounds that our criterion of reality is not suf-
ficiently restrictive. Indeed, one would not ar-
rive at our conclusion if one insisted that two
or more physical quantities can be regarded
as simultaneous elements of reality only when
they can be simultaneously measured or pre-
dicted. On this point of view, since either one
or the other, but not both simultaneously, of
the quantities P and Q can be predicted, they
are not simultaneously real. This makes the
reality of P and Q depend upon the process
of measurement carried out on the first system
in any way. No reasonable definition of reality
could be expected to permit this.

While we have thus shown that the wave
function does not provide a complete descrip-
tion of the physical reality, we left open the
question of whether or not such a description
exists. We believe, however, that such a theory
is possible."

Note how they say: no reasonable definition of reality will allow this! They absolutely did not see that P & Q are NOT simultaneously real (unless of course, there are FTL effects). Further, they believed that a more complete description of the system is possible! We now know that is not true. At least, not as EPR envisioned.


3. I would agree that Einstein did not reject the predictions of QM.
Yes, but this analysis is predicated on the 'not real' as defined by indeterminacy, which is what the paper set out to reject. Thus, to demonstrate what was defined there as "no reasonable definition of reality" requires the variables, which I suppose are relational, not to be contained in a local ensemble, such that communication between the correlated pair is required for nonlocal realism. Yet to demonstrate that this is the case via Bell's theorem requires the assumption that the measured properties of a single ensemble (particle) are innate properties of the subsets of that ensemble. It is the rejection of this assumption that allows local realistic models in accordance with Bell's inequalities.

Bell was a genius in providing us with these constraints, and I prefer to use them as a map, rather than overgeneralize the impossibilities they provide. Thus we are at the same impasse, with properties of ensembles not innate to the subsets of that ensemble versus properties of particle pairs not innate to either particle alone, regardless of separation, as the defining difference of perspective.

So we have choices A and B:
A: Entangled particle pairs with individual properties not innate to either particle alone.
B: Particles are ensembles with properties not innate to the subsets of that ensemble.
Personally I think B is the more reasonable choice.
 
  • #382
my_wan said:
True, Einstein couldn't know, but clearly the equivocation on the definition of reality in EPR, and later the rejection of von Neumann's proof, showed an understanding of the issues in drawing a linear relation between ontic (perhaps unobservable) elements of nature and the variables they define. Von Neumann considered an ensemble K such that the subsets ki, kj, ..., should hold the properties of K, which they clearly don't. This is essentially the "preexisting property assumption" required by Bell's notion of reality, the same simplistic notion used in EPR, with equivocation. Bell was quite clear that the inequality derivation itself didn't require any notion of realism at all. The question is, is this a "sufficient" case for realism in the general case, as it was in the more restricted argument in EPR? The answer is absolutely no. Bell merely ruled out this simplistic class of intrinsic properties. Emergence is a ubiquitous phenomenon in nature, making such linear assumptions unreasonable.
I would like to agree with this point and provide something along this line from a more practical side.
There is something called "entanglement distillation", and its description on Wikipedia says:
"Entanglement distillation can in this way overcome the degenerative influence of noisy quantum channels by transforming previously shared less entangled pairs into a smaller number of maximally entangled pairs (Bell states)."
To me this seems very much like emergence.
 
  • #383
zonde said:
I would like to agree with this point and provide something along this line from a more practical side.
There is something called "entanglement distillation", and its description on Wikipedia says:
"Entanglement distillation can in this way overcome the degenerative influence of noisy quantum channels by transforming previously shared less entangled pairs into a smaller number of maximally entangled pairs (Bell states)."
To me this seems very much like emergence.

Only in the sense that the word used is the same.
 
  • #384
my_wan said:
This paper goes over in some detail the limits of what Bell's theorem can say wrt realism:
http://arxiv.org/abs/0811.2862
There simply is no a priori reason to assume ontic entities have preexisting properties independent of measurement in the classical or empirical sense. The fact is that models that take advantage of contextual/relational variables can successfully model correlation statistics. Within thermodynamics, variance between intrinsic and extrinsic variables is a normal feature, which means non-commuting variables are to be expected. Conjugate variables have a very tight analog to these QM properties, which are only untenable when you assume measurables are preexisting properties. Preexisting properties also lead to the so-called vacuum catastrophe, deepen the mystery of why the total energy of the Universe is apparently zero, virtual particles, etc.

None of this proves a realistic model is valid, local or otherwise. But to say that Bell's inequalities rule out locally realistic theories is simply not tenable. Yes, Bell's results are irrefutable; it is simply overreaching to say that ruling out naive linear relations between properties and things says much about realism at all. I see no reason to give up either locality or realism without a much better reason than a grossly overstated interpretation of Bell's theorem.

I am not impressed by Laudisa; I am loosely familiar with his work, as I scan almost every local realistic paper going onto the arXiv. And I must say I am rather surprised by your position; it does not seem to follow from your prior statements. But I admit I still may not follow your position, as there are some apparent contradictions (which I am sure are not actual contradictions). :smile:

1. Using Bell as a map (which I think is proper): do you think local realistic theories can yield predictions consistent with QM?

2. How is the generally accepted Bell conclusion "grossly overstated"? I mean, after decades of effort there is not ONE single local realistic candidate theory to consider. Every one can, thanks to Bell, be batted out of consideration. You must have seen how the work of Hess, Santos, and numerous others has been systematically dismantled. Not bad for being overstated: QM 100, LR 0.

3. You say "contextual/relational variables can successfully model correlation statistics". To me, a contextual/relational model is not observer independent. Therefore, it is not realistic. So these sound like the words of someone who in fact denies realism. So are you in that camp or not?
 
  • #385
DrChinese said:
I am not impressed by Laudisa; I am loosely familiar with his work, as I scan almost every local realistic paper going onto the arXiv. And I must say I am rather surprised by your position; it does not seem to follow from your prior statements. But I admit I still may not follow your position, as there are some apparent contradictions (which I am sure are not actual contradictions). :smile:
Laudisa does ramble a lot. :smile:
I also think he overstates the certainty of the validity of local models, which I object to on the same grounds that I object to certainty in ruling out local models in general. It seems your question #3 below contains the issue creating the apparent contradiction. I'll go through the questions.

DrChinese said:
1. Using Bell as a map (which I think is proper): do you think local realistic theories can yield predictions consistent with QM?
In principle yes, though whether cogency can actually pan out for the standard model I can't say. I was recently challenged by one of my favorite skeptics to write a computer program that mimicked EPR correlation statistics. I found this paper, which claims to have done it (I haven't looked that closely yet):
http://msc.phys.rug.nl/pdf/athens06-deraedt1.pdf
I was considering a variation of an encryption scheme I once wrote, based on some (now defunct) notions of cross-frame information embedding. Actually, with FTL models I might reconsider a limited version of that. It embedded an encrypted message inside a fake encrypted message. Anyway, I'm considering these quasirandom sequences and what rules might be needed to mimic detector setting choices. An interesting problem either way.
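For what it's worth, here is a minimal sketch of the kind of toy comparison being discussed (Python, purely my own illustration; it is not the De Raedt event-based model linked above). It runs a simple deterministic local hidden variable rule and the QM prediction through the CHSH combination of analyzer settings; the particular LHV rule, the angles, and the sample size are all assumptions chosen for illustration.

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # number of simulated pairs (arbitrary choice)

def correlation_lhv(a, b):
    # Toy local hidden variable rule: each pair shares a random
    # "polarization" angle lam; each side's +/-1 outcome depends only
    # on its own setting and lam (no communication between the sides).
    lam = rng.uniform(0.0, np.pi, N)
    A = np.where(np.cos(2 * (a - lam)) >= 0, 1, -1)
    B = np.where(np.cos(2 * (b - lam)) >= 0, 1, -1)
    return np.mean(A * B)

def correlation_qm(a, b):
    # Quantum prediction for polarization-entangled photon pairs
    return np.cos(2 * (a - b))

def chsh(E):
    # CHSH combination with the standard "optimal" analyzer angles
    a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print("toy LHV model:", chsh(correlation_lhv))  # ~2 (the Bell bound), up to sampling noise
print("QM prediction:", chsh(correlation_qm))   # 2*sqrt(2) ~ 2.83
[/code]

The point of the toy run is only that this particular class of local rules caps out at the Bell bound while the QM curve exceeds it; it says nothing about the more elaborate event-based simulations in the linked paper.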

DrChinese said:
2. How is the generally accepted Bell conclusion "grossly overstated"? I mean, after decades of effort there is not ONE single local realistic candidate theory to consider. Every one can, thanks to Bell, be batted out of consideration. You must have seen how the work of Hess, Santos, and numerous others has been systematically dismantled. Not bad for being overstated: QM 100, LR 0.
I would call anything that is stated as 'proof', when it fails to rule out an entire class of possible exceptions, grossly overstated. I'll get to that class in your next question. Making a 'proof' claim requires more than just invalidating the special cases on the table. Admittedly, it also rules out entire classes of LHVs. It also lends cogency to FTL considerations, but local toy models can and do mimic EPR statistics, including stochastic hidden variables. I can't object to a claim of 'relatively unlikely', but 'almost certainly' is an overstatement of what has been demonstrated by Bell's theorem.
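For concreteness, the bound at issue is the CHSH form of the inequality (a standard textbook statement, added here for reference; a and a' are one side's settings, b and b' the other's):

[tex]S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}, \qquad |S|_{\mathrm{QM}} \le 2\sqrt{2}[/tex]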

DrChinese said:
3. You say "contextual/relational variables can successfully model correlation statistics". To me, a contextual/relational model is not observer independent. Therefore, it is not realistic. So these sound like the words of someone who in fact denies realism. So are you in that camp or not?
I have a bit of confusion about how you are defining contextual variables myself. Earlier I saw it referred to as measuring separate realities in this thread. That was a bit ambiguous considering MWI. Here you say the relational model is not observer independent, but you don't specify what it should be independent of. There is a difference between a configuration space, and a variable which is dependent on the perspective in which that configuration space is measured. Thus the whole point of contextual variables is that they are not observer independent, but the reality of the configuration space is. Analogs to these types of variables exist everywhere, the most relevant of which are in GR. What follows is not a claim, but a demonstration of the issues involved in complaining that contextual variables are not observer independent.

Consider what a water wave means to a single water molecule. It's nothing more than a small momentary deflection, not even significant relative to the general random motion. Same thing for air molecules when I say "boo". What part of "boo" is contained in each air molecule? Is the sound "boo" a preexisting property of air molecules? Conjugate variables are common enough in classical physics. What properties are preexisting in this world is a good question, perhaps even the constants?

In GR we make a well-justified operational distinction between mass and rest mass. In the general case mass is a contextual variable, but the mass is real. So how relevant is that distinction? Consider a particle in QFT: a particular excitation of a field. Ask what happens if the entire field were uniformly excited by this magnitude. We could assume the total vacuum energy density increases accordingly, but this reasoning leads us to the vacuum catastrophe, and I'd say a prediction roughly 107 orders of magnitude off is trouble for that assumption. Then we have a zero total energy of the universe, [tex]GM_t^2/R = M_tc^2[/tex]. This is a pretty strong indication to me that the entire universe, and everything we empirically measure about it, are purely contextual variables. Could it be that local field variances fully define all empirical properties contextually, such that uniform absolute magnitudes of anything are meaningless, like gauge fields? This does not mean the configuration space that defined the variables isn't real, and it is almost certainly covariant. But trying to define reality solely in terms of the variables we measure wouldn't make much sense, in spite of the reality of covariant field variances.
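As a rough sanity check on that zero-total-energy relation (using commonly quoted round numbers, [tex]M_t \sim 10^{53}\,\mathrm{kg}[/tex] and [tex]R \sim 4\times 10^{26}\,\mathrm{m}[/tex], which are my assumptions rather than figures from the post), the dimensionless ratio does come out of order unity:

[tex]\frac{GM_t}{Rc^2} \approx \frac{(6.7\times 10^{-11})(10^{53})}{(4\times 10^{26})(9\times 10^{16})} \approx 0.2[/tex]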

As noted, I'm not trying to convince you that this is the way it is. Significant theoretical issues make this outline problematic. I'm merely trying to point out the issues in assuming that, because contextual variables are not observer independent, realism is out. Here I described a scenario where *all* variables are contextual, and realism is still maintained. Everything you measure gets its metric from you, or some instrument: self-referencing. You are a product of the very thing you are measuring, and not even space and time itself, the metric on which measurements are predicated, is non-contextual.
 
