Exploring the Paradoxical Einstein-Podolsky-Rosen Experiment

  • Thread starter guguma
In summary, the Einstein-Podolsky-Rosen paradox challenges the Copenhagen interpretation of quantum mechanics by appealing to the constancy of the speed of light. It suggests that if we observe a particle and wait long enough before measuring its spin, we would collapse the wavefunction and find definite spins for both the particle and its antiparticle. However, Bell's inequality and its experimental tests have shown such a picture to be inconsistent with quantum mechanics.
  • #36
vanesch said:
It is because nobody knows a *physical* process that gives rise to a *collapse* (all elementary physical processes - except gravity - are described by quantum mechanical unitary operators), that one ended up resorting to this kind of stories.
I think that's true, and the reason that nobody knows how the physical process of collapse works is that by definition it requires a virtually infinite degree of complexity. Nobody knows how the air gets into our lungs, in detail, when we breathe, yet we have a perfectly good theory for how that process will end up shaking out. So it is with measurement-- we know how classical systems behave, so we intentionally couple quantum systems to classical ones so that we can better understand the outcome, even though we don't know in detail how that outcome occurred. We choose what information we want to track, and what information we feel we can get away with "averaging over"-- the result has our fingerprints all over it. It is those fingerprints, not quantum behavior itself, that create the philosophical difficulties with associating all this with objective reality. If an electron could think, how would it construct a theory of quantum mechanics? I wager it would look totally different, because the electron would have no use for classical couplings.
 
  • #37
peter0302 said:
About wavefunction collapse - there should be no physical difference between a photon-electron interaction and the observation of a human eye other than complexity, assuming one doesn't buy into the "conscious observer" requirement. So, if it is complexity that gives rise to "collapse," whereas simple interactions give rise to "entanglement," "collapse" must be nothing more than entanglement that is too complex to be measurable, and thus the near-infinitely entangled wavefunction of the system becomes indistinguishable from a "collapsed" wavefunction. That's about the same as saying a "thermodynamically irreversible measurement" if I'm not mistaken, right?
Even a thermodynamically irreversible interaction between two systems can be modeled as just a giant entanglement; as I understand it, this is the approach taken in the analysis of decoherence--you'd actually have to model things this way if the systems were completely isolated from outside, like the Schroedinger's cat thought-experiment. But in terms of the double-slit experiment, even if the electron just becomes "entangled" with a photon as it passes through the slit, as long as the entanglement is such that a measurement of the photon could have told you which slit the electron went through at some point, that is enough to ensure that when the electron is measured at the detector, it will show no interference, regardless of what happens to that photon. For a similar example, see the delayed choice quantum eraser (which is interesting because it allows you to measure the entangled particle in such a way that the information about which slit the first one went through is 'erased'), and you might also take a look at the thread "Does a beam of entangled photons create interference fringes?" and the follow-up thread "entanglement and which-path".
 
  • #38
Unfortunately the "Does a beam of entangled photons create interference fringes?" thread doesn't seem to answer the question at all! No one can agree.
 
  • #39
peter0302 said:
Unfortunately the "Does a beam of entangled photons create interference fringes?" thread doesn't seem to answer the question at all! No one can agree.
It's really only RandallB who disagreed on that thread, despite the fact that he was given links to professional physicists saying they wouldn't. And if you look at the "entanglement and which-path" thread, he asked for links to actual experimental results showing this, and other people on the thread posted several.
 
  • #40
Well, wait a minute, the Dopfer thesis, which has been commented on positively by Zeilinger (who was her advisor), suggests that one member of a pair of entangled photons does indeed produce an interference pattern depending on how the other member of the pair is detected. So who's right?
 
  • #41
peter0302 said:
Well, wait a minute, the Dopfer thesis, which has been commented on positively by Zeilinger (who was her advisor), suggests that one member of a pair of entangled photons does indeed produce an interference pattern depending on how the other member of the pair is detected. So who's right?
Isn't the Dopfer experiment based on coincidence-counting? According to orthodox QM you can recover interference patterns in selected subsets of entangled photons, just not in their total pattern. Also, as noted by Cramer here (in the paragraph which begins with 'At the AQRTP Workshop ...'), something called "Eberhard's theorem" (which seems to have been proven here) proves definitively that according to orthodox QM, it is impossible for experimenters to communicate faster than light using the results of measurements on entangled particles, which would be the case if you could tell what happened to the entangled partners of a group of photons just by looking at what pattern they form in a double-slit experiment. Cramer's hope that a modified Dopfer experiment might actually allow FTL communication seems to be based on the idea that orthodox QM might be subtly incorrect, and require some additional nonlinear terms. But from what he writes in that article, it seems Cramer would agree that if one just wants to know what results are predicted by standard QM for the modified Dopfer experiment, the answer is that one cannot gain information about what happened at a distant detector by just looking at the total pattern at the detector near you.
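To make the content of Eberhard's theorem concrete, here is a minimal Python sketch (mine, not from Cramer's article or Eberhard's actual proof) of the no-signaling property for a spin singlet: whatever angle Bob chooses, Alice's local statistics stay 50/50, so no message can be read off her pattern alone.

[code]
import numpy as np

# No-signaling for the spin singlet (illustrative sketch): joint outcome
# probabilities for spin measurements at angles theta_a, theta_b.
def singlet_joint(theta_a, theta_b):
    d = (theta_a - theta_b) / 2.0
    p_same = 0.5 * np.sin(d) ** 2   # P(+,+) = P(-,-)
    p_diff = 0.5 * np.cos(d) ** 2   # P(+,-) = P(-,+)
    return {(+1, +1): p_same, (-1, -1): p_same,
            (+1, -1): p_diff, (-1, +1): p_diff}

for theta_b in [0.0, np.pi / 3, np.pi / 2]:
    joint = singlet_joint(0.0, theta_b)
    p_alice_plus = joint[(+1, +1)] + joint[(+1, -1)]
    print(f"Bob at {theta_b:.2f} rad -> P(Alice reads +1) = {p_alice_plus:.3f}")
# Prints 0.500 every time: Bob's setting never shows up in Alice's marginal.
[/code]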
 
  • #42
You're definitely right about what orthodox QM predicts. However, I have a hard time reconciling that with the Dopfer paper.

It's difficult because of the language issues (the paper is only in German). And you're right that she uses coincidence counting, but I'll be damned if I can figure out why:

Dopfer takes two entangled beams. She sends one through a double slit and sends the other to a converging lens. It all depends on where the detector behind the converging lens is placed. If the detector is placed at the imaging plane corresponding to the double slit, allowing you to know "which slit", then the photons actually detected behind the _real_ double slit show a Gaussian pattern. If the detector is placed at the focal plane corresponding to the origin of the beam, making it impossible to know "which slit," the photons detected behind the _real_ double slit show an interference pattern.

So it is totally unclear to me (as it is to Cramer!) why Dopfer needs coincidence counting at all in order to look at whether the photons behind the real double slit are creating an interference pattern or not.

[Edit]
The only reason I can see for the coincidence counter in the Dopfer experiment is to know which photon detected behind the lens corresponds to which photon detected behind the double slit. But, unlike experiments like the DCQE, the coincidence counter is not picking out photons to form an interference pattern. So, as Cramer asks, why can't we put a CCD behind the double slit and see a visible interference pattern? No one has an answer to this.
 
  • #43
peter0302 said:
Dopfer takes two entangled beams. She sends one through a double slit and sends the other to a converging lens. It all depends on where the detector behind the converging lens is placed. If the detector is placed at the imaging plane corresponding to the double slit, allowing you to know "which slit", then the photons actually detected behind the _real_ double slit show a Gaussian pattern. If the detector is placed at the focal plane corresponding to the origin of the beam, making it impossible to know "which slit," the photons detected behind the _real_ double slit show an interference pattern.
As no other entanglement experiment works that way, I'm confident this one doesn't either. The coincidence counter is always essential to see anything that depends on where the detector of the other beam is placed. There's a good reason for that-- all of quantum mechanics was developed for entangled particles where you only look at one "beam" (where can one find a source of "unentangled particles"? They're not available at the store.)
So, as Cramer asks, why can't we put a CCD behind the double slit and see a visible interference pattern? No one has an answer to this.
One would certainly expect a visible interference pattern if the amplitudes for those experimental outcomes are logically allowed to interfere in the proper information accounting of the full setup. The only way to eliminate that is to correlate the outcomes with some other results that are not consistent with interference from both slits. Thus coincidence counting must be an essential component of seeing entanglement effects of any kind, or quantum mechanics would never have worked from the outset. I'm confident that ideas to the contrary are just a mistake in how the outcomes of the experiment are being reported/interpreted. Unfortunately, I don't speak German.
 
  • #44
Found an interesting post on another forum about the Dopfer experiment and why the coincidence count is crucial to seeing an interference pattern:

http://www.docendi.org/re-t4876.html?

PostReplies wrote:
> That's what Cramer is doing. Here's another page I found explaining
> his experiment and a very interesting earlier experiment which was
> encouraging:
>
> http://www.paulfriedlander.com/text/...experiment.htm

There is no nonlocal communication here. It's critical to understand that
the four graphs (around halfway down the page) do not represent images
recorded on a photographic plate or CCD. Rather, they represent hit rates on
a yes-or-no detector (think Geiger counter) as it's physically moved across
the detection field while the other detector is held fixed, and only the
cases where both detectors register a particle are counted. This makes a big
difference! You will get completely different results this way than with
photographic film. To see why, consider a simplified experiment in which
each detector can be moved to four different locations (D1 in locations
11,12,13,14 and D2 in locations 21,22,23,24). Suppose our light source is
such that all the light beams it generates pass through locations whose sum
is even -- for example, it will generate beam pairs going through 11 and 21,
but never through 11 and 22. The possible combinations are marked with "X"
below.

[tex]
\begin{array}{l|c|c|c|c}
 & 21 & 22 & 23 & 24 \\
\hline
11 & X & & X & \\
\hline
12 & & X & & X \\
\hline
13 & X & & X & \\
\hline
14 & & X & & X \\
\hline
\end{array}
[/tex]

Now suppose D1 is held fixed (at any position) while D2 is moved, and
simultaneous clicks of D1 and D2 are recorded. Regardless of the fixed value
of D1, you will get a bright, dim, bright, dim pattern, which is our
simplified discrete version of an interference pattern. But if you consider
only the data from D2, without the coincidence counter, there will be no
interference pattern, just an equal distribution over all four locations.
Similarly, if you replace the detectors with photographic plates, there will
be no interference pattern on either plate.

Now in front of D1 insert a scrambling device that perturbs each incoming
photon so that, regardless of where it was originally headed, it's now
equally likely to go to any of the locations 11,12,13,14. Now, when you
again hold D1 fixed while varying D2 and counting coincidences, you will no
longer see an interference pattern. But the raw data from D2 has not changed
at all -- all that has changed is which raw detection events we subsequently
threw away at the coincidence counter.

This is what's going on in Dopfer's experiment (as both Dopfer and Zeilinger
realize).

-- Ben
And a subsequent post:
Gerry Quinn wrote:
> I'm not convinced your 'simplified' version of the experiment is
> actually the same experiment at all!

It's not intended to be the same, just to illustrate the importance of
the coincidence counter. It's pretty similar though, aside from being
discrete and classical and omitting the slits.

> The primary function of the coincidence counter, as described in the
> linked URL, is to separate valid pairs of entangled photons.

That's wrong; it's a misunderstanding by the guy who wrote that page,
and it's presumably the cause of all his other misunderstandings. The
two detectors have very narrow detection cross sections, and
deliberately miss most of the photons that pass them by. If the
coincidence counting had the purpose you suggest, it would make sense to
replace the fixed detector by one with a much wider cross section, since
this would give you a much larger data set. In reality this would
destroy the signal: the better the detector at D2, the less difference
there will be between the two graphs labeled "Measurement at D1". These
graphs do not show measurements at D1. They show a slice through the
parameter space of the nonseparable function f(x1,x2) that relates
detector position to coincidence count. With a wider detector at D2
you'd instead get integral f(x1,x2) dx2, which would look completely
different.

-- Ben
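Ben's four-position toy model is easy to check numerically. Here's a short Python sketch (my own, not from the quoted thread) that tallies raw hits and coincidence-gated hits at D2, with and without the scrambler in front of D1:

[code]
import random

POSITIONS = [1, 2, 3, 4]

def emit_pair():
    """Source emits pairs only toward position pairs with an even sum."""
    p1 = random.choice(POSITIONS)
    p2 = random.choice([p for p in POSITIONS if (p1 + p) % 2 == 0])
    return p1, p2

def run(trials=100_000, d1_fixed=1, scramble_d1=False):
    raw_d2 = {p: 0 for p in POSITIONS}    # every hit at D2, like film
    coinc_d2 = {p: 0 for p in POSITIONS}  # hits at D2 gated on D1 firing
    for _ in range(trials):
        p1, p2 = emit_pair()
        if scramble_d1:
            p1 = random.choice(POSITIONS)  # scrambler randomizes D1 side
        raw_d2[p2] += 1
        if p1 == d1_fixed:                 # D1 sits at one fixed location
            coinc_d2[p2] += 1
    return raw_d2, coinc_d2

raw, coinc = run()
print("raw D2:      ", raw)      # ~uniform: no pattern on "film"
print("coincidences:", coinc)    # bright, dim, bright, dim "fringes"
raw_s, coinc_s = run(scramble_d1=True)
print("raw D2 (scrambled):", raw_s)      # raw data unchanged...
print("coinc (scrambled): ", coinc_s)    # ...but the gated pattern is gone
[/code]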
 
  • #45
peter0302 said:
About wavefunction collapse - there should be no physical difference between a photon-electron interaction and the observation of a human eye other than complexity, assuming one doesn't buy into the "conscious observer" requirement. So, if it is complexity that gives rise to "collapse," whereas simple interactions give rise to "entanglement," "collapse" must be nothing more than entanglement that is too complex to be measurable, and thus the near-infinitely entangled wavefunction of the system becomes indistinguishable from a "collapsed" wavefunction. That's about the same as saying a "thermodynamically irreversible measurement" if I'm not mistaken, right?

The big difference between a "physical" observation (with irreversible entanglement and all that) and a "subjective" observation is that the first one doesn't require there to be ONE outcome, while the second one does, as this is what is experienced. Both agree on the KINDS of outcomes, which is fixed by the irreversible entanglement. But this entanglement doesn't give "preference" to any of the terms! It just fixes the KINDS of terms into classically-like states. That's why you don't see a ghostly superposition of "pointer to the left" and "pointer to the right" (or, more dramatically, live cat and dead cat). Such terms are not present. But there IS a term present which describes a coherent, classical environment+result+pointer_to_the_left+... AND there IS a term with the same thing, but "pointer_to_the_right". There is no term with a superposition of both.

THAT is what irreversible entanglement gives us. But it doesn't ERASE all terms but one. Now, for a physical observation device, as EACH of its states in this list is entirely compatible with a classical state, ALL of the terms will be "ok". If there is a computer, say, well, then in one term there will be a certain output on the screen, and it will be compatible with what a webcam saw, and what is printed on the paper and all that, and in ANOTHER term there will be another output on the screen, which will be compatible with another image on a webcam, and with another output of a printer. So ALL of these terms are "internally classically consistent". No "physical observer" will find anything odd, as in EACH of its states, everything will be consistent. So there is no NEED for collapse as long as you only require physical observers, which can only check for relative consistency. No "absurd state" is generated. At no point, there will be an internal conflict within a state of this physical observer system. At no point, there will be a conflict between what's on the webcam and on the printer.

The difference is that *subjectively* we have the impression to see only ONE of these terms. And THEN we need some or other form of "collapse", which PICKS one of the different, irreversibly entangled terms. Now, from the moment that a subjective observer observes a physical observer, as he will pick ONE branch in this list (being a subjective observer), he will only observe the physical observer in ONE single consistent observation state, but it would be an error to deduce that this physical observer can only be in one such state! You can't know! You don't know if physical observers *appear* to have only one result (which is NOT what unitary quantum theory tells us), or if they have several consistent results, of which YOU (as subjective observer) only observe one (which is consistent with all the rest within that branch).

This is the famous AND/OR problem: decoherence gives us a list of different consistent classically-looking states (and as such, eliminates the "spooky superpositions and inconsistencies" of the kind "half-dead" and "half-live" cat), which appear in the wavefunction as a result of decohering interactions. So we now have a quantum state which has "classical state 1" AND "classical state 2" AND ...

A physical observer doesn't meet any inconsistency in being in all these states, because each one corresponds to an entirely consistent classical picture.

But a subjective observer doesn't experience this. He only finds ONE of these states in the list. To him, things appear as if he could have observed "classical state 1" OR "classical state 2" OR...

So, decoherence doesn't solve the AND/OR problem, which is the problem of collapse.
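The AND/OR structure vanesch describes can be seen in a two-qubit toy calculation. A minimal numpy sketch (my illustration, not from the post): a measurement-like unitary entangles a qubit with an "environment" qubit, and the reduced state loses its off-diagonal (interference) terms while BOTH diagonal terms survive -- nothing in the unitary evolution picks one of them:

[code]
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System starts in the superposition (|0> + |1>)/sqrt(2); environment in |E0>.
system = (ket0 + ket1) / np.sqrt(2)
joint_before = np.kron(system, ket0)

# CNOT acts as a "premeasurement": it copies the system basis into the
# environment, giving (|0>|E0> + |1>|E1>)/sqrt(2).
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint_after = cnot @ joint_before

def reduced_density(psi):
    """Density matrix of the system after tracing out the environment."""
    m = psi.reshape(2, 2)  # rows index the system, columns the environment
    return m @ m.conj().T

print(reduced_density(joint_before))  # off-diagonals 0.5: coherent superposition
print(reduced_density(joint_after))   # diag(0.5, 0.5): both branches survive,
                                      # just no interference between them
[/code]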
 
  • #46
Ken G said:
I think that's true, and the reason that nobody knows how the physical process of collapse works is that by definition it requires a virtually infinite degree of complexity. Nobody knows how the air gets into our lungs, in detail, when we breathe, yet we have a perfectly good theory for how that process will end up shaking out. So it is with measurement-- we know how classical systems behave, so we intentionally couple quantum systems to classical ones so that we can better understand the outcome, even though we don't know in detail how that outcome occurred.

That would be nice, but it doesn't work. You see, no matter how complicated the interactions are, if they follow standard quantum mechanics, they are all described by unitary time evolution operators. And here's the problem: if the overall time evolution operator is unitary, no matter how complicated and convoluted, then superpositions survive it.
So we have a general mathematical property of the time evolution operator which gives us problems, and for which we don't have to know its details and complexity.

In your analogy, it is as if, say, the total energy in the inhaled air were not conserved. We don't need to know all the details of all the molecules in the inhaling process: we know that each of them is going to conserve energy, and from that, we can deduce that total energy will be conserved. So, no matter how complicated the air flow is, we have a general theorem, deduced from the elementary interactions, which gives us conservation of energy. So if we see that the air flow doesn't conserve energy in an inhalation process, we cannot simply dismiss this by saying "well, as we can't know the complexity of the inhalation process, this might as well work out this way". No, we know that if the air molecules follow energy-conserving interactions, it is IMPOSSIBLE to obtain a violation of conservation of energy, NO MATTER HOW COMPLICATED the flow may be.
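vanesch's general theorem is easy to verify numerically: purity is conserved by ANY unitary, however complicated. A quick sketch (mine, for illustration):

[code]
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary ("complicated") unitary from the QR decomposition
# of a random complex matrix.
a = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
u, _ = np.linalg.qr(a)

# A pure superposition over all 8 basis states.
psi = np.ones(8, dtype=complex) / np.sqrt(8)
rho = np.outer(psi, psi.conj())
rho_out = u @ rho @ u.conj().T

# Purity Tr(rho^2) is exactly 1 before and after: unitary evolution can
# scramble a superposition but never turn it into a collapsed mixture.
print(np.trace(rho @ rho).real)          # 1.0
print(np.trace(rho_out @ rho_out).real)  # 1.0
[/code]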
 
  • #47
peter0302 said:
You're definitely right about what orthodox QM predicts. However, I have a hard time reconciling that with the Dopfer paper.

It's difficult because of the language issues (the paper is only in German). And you're right that she uses coincidence counting, but I'll be damned if I can figure out why:

Dopfer takes two entangled beams. She sends one through a double slit and sends the other to a converging lens. It all depends on where the detector behind the converging lens is placed. If the detector is placed at the imaging plane corresponding to the double slit, allowing you to know "which slit", then the photons actually detected behind the _real_ double slit show a Gaussian pattern. If the detector is placed at the focal plane corresponding to the origin of the beam, making it impossible to know "which slit," the photons detected behind the _real_ double slit show an interference pattern.

So it is totally unclear to me (as it is to Cramer!) why Dopfer needs coincidence counting at all in order to look at whether the photons behind the real double slit are creating an interference pattern or not.

[Edit]
The only reason I can see for the coincidence counter in the Dopfer experiment is to know which photon detected behind the lens corresponds to which photon detected behind the double slit. But, unlike experiments like the DCQE, the coincidence counter is not picking out photons to form an interference pattern. So, as Cramer asks, why can't we put a CCD behind the double slit and see a visible interference pattern? No one has an answer to this.


I didn't look into this, but I might guess an experimental "feature": the efficiency of the photon counter is not uniform, and depends on the incident lens image. As such, there might be a preferential selection for "which way" photons, or for "interference" photons, if the detection efficiencies are different or non-uniform over the detector, which would come down to an actual coincidence count experiment as usual, at least for a fraction of the sample.
 
  • #48
vanesch said:
You see, no matter how complicated the interactions are, if they follow standard quantum mechanics, they are all described by unitary time evolution operators. And here's the problem: if the overall time evolution operator is unitary, no matter how complicated and convoluted, then superpositions survive it.
That's actually not true expressly because of the inadequacy of the concept of "superposition" in regard to a macro system. Many people think a "superposition" is a fundamental state, but it's not-- the fundamental state is called a "pure state", and what "superposition" really means is a relationship between two non-commuting measurements-- the measurement that first prepared the initial pure state, and the later measurement you are using the concept of superposition to predict. So if there is no "initial measurement" that prepared the system in a known state, then there is no such thing as the "superposition". The idea breaks down right away, even before consideration of any unitarity of the operators.

Put differently, one has to assume the macro system is describable as a pure state before one can even apply your argument-- but that assumption is borne out by no experiment. I see it as very similar to the pre-quantum view that particles had an exact position and velocity, and we just didn't have the precision to specify them. But that had never been shown to be true by experiment, and in fact, it turned out not to be true-- we were just taking our own theories too seriously. Quantum mechanics was our wake-up call to not do that, so let's not do it to quantum mechanics!
So we have a general mathematical property of the time evolution operator which gives us problems, and for which we don't have to know its details and complexity.
It's only a problem if we think reality is a slave to our preconceptions.
No, we know that if the air molecules follow energy-conserving interactions, it is IMPOSSIBLE to obtain a violation of conservation of energy, NO MATTER HOW COMPLICATED will be its flow.
I wouldn't shout that, it simply isn't true. Energy is only very nearly exactly conserved by anything dynamical, because of the finite lifetime of the system. The classic mistake of "classical" physics is to take its principles as if they were absolute statements of reality, yet when we go to the quantum realm, we find they are not. Why would we think we can do that in reverse-- to claim that a macro system can be in a pure state even though we have no idea how to accomplish that feat, or even to demonstrate that we accomplished it?

In terms of the "correspondence principle", this means if we are to take that as a scientific principle, it must be demonstrable, which means the principle should actually be stated "aggregating quantum principles as we aggregate the quantum systems into a classical one cannot make a false prediction about the classical system"-- but that doesn't establish that a classical system can be in a pure state, because no experiment will either refute or establish that pure state. My answer to the "cat paradox" is very simple: cats cannot be in pure states, and coupling them to pure states ends the purity of the quantum state-- not the converse. Again, that's the whole point of coupling quantum systems to measurement devices that we can count on to behave classically.
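For what it's worth, the basis-relativity of "superposition" is a two-line calculation. A toy sketch (mine, not Ken G's): the same pure state is an eigenstate for one observable and an equal-weight superposition for a non-commuting one:

[code]
import numpy as np

up_z = np.array([1.0, 0.0])                      # eigenstate of S_z
hadamard = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)  # change to the S_x basis

up_in_x_basis = hadamard @ up_z
print(np.abs(up_z) ** 2)           # [1. 0.]  : definite S_z outcome
print(np.abs(up_in_x_basis) ** 2)  # [0.5 0.5]: equal "superposition" for S_x
[/code]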
 
  • #49
Ken G said:
That's actually not true expressly because of the inadequacy of the concept of "superposition" in regard to a macro system.

If you stick by the axioms of quantum theory (which you are free to do or not, but I'm looking at the *toy world* in which these axioms are considered true), then EVERY state of the system is described by an element of a projective Hilbert space. There's no distinction between "macro" and "micro" states. EVERY state.

Now, if you assume that this is not applicable to certain kinds of systems, then you're playing with *another* toy world. It will then follow different rules, but for sure, you cannot say that it is purely described by the axioms of quantum mechanics. And then you have the difficulty of explaining what is "micro" and what is "macro" and what applies where.

So, for sake of argument, I stick to this toy world in which the axioms of quantum mechanics are strictly valid. By definition then, the physical state is given by a state vector. And from here on, we go further.

Many people think a "superposition" is a fundamental state, but it's not-- the fundamental state is called a "pure state", and what "superposition" really means is a relationship between two non-commuting measurements-- the measurement that first prepared the initial pure state, and the later measurement you are using the concept of superposition to predict. So if there is no "initial measurement" that prepared the system in a known state, then there is no such thing as the "superposition". The idea breaks down right away, even before consideration of any unitarity of the operators.

This is in a Copenhagen like view, where you have a classical world with "quantum gates" or whatever, where systems are classically prepared, then "plunge into the quantum world", and re-emerge classically when they are observed.

But clearly in that view, not everything is at all times described by the axioms of quantum mechanics.

Put differently, one has to assume the macro system is describable as a pure state before one can even apply your argument-- but that assumption is borne out by no experiment. I see it as very similar to the pre-quantum view that particles had an exact position and velocity, and we just didn't have the precision to specify them. But that had never been shown to be true by experiment, and in fact, it turned out not to be true-- we were just taking our own theories too seriously.

To me, the exercise is to take the theory TOTALLY seriously, in its toy world.
And yes, in the toy world of classical physics, particles DO have perfectly well defined positions and momenta.

I wouldn't shout that, it simply isn't true. Energy is only very nearly exactly conserved by anything dynamical, because of the finite lifetime of the system.

Uh, but a system with a finite lifetime doesn't violate the conservation of energy! It simply wasn't in a pure energy eigenstate - otherwise it could not evolve, and hence not have a finite lifetime.


The classic mistake of "classical" physics is to take its principles as if they were absolute statements of reality, yet when we go to the quantum realm, we find they are not. Why would we think we can do that in reverse-- to claim that a macro system can be in a pure state even though we have no idea how to accomplish that feat, or even to demonstrate that we accomplished it?

Because in the toy world defined by the axioms of quantum mechanics, that's what is postulated!

In terms of the "correspondence principle", this means if we are to take that as a scientific principle, it must be demonstrable, which means the principle should actually be stated "aggregating quantum principles as we aggregate the quantum systems into a classical one cannot make a false prediction about the classical system"-- but that doesn't establish that a classical system can be in a pure state, because no experiment will either refute or establish that pure state. My answer to the "cat paradox" is very simple: cats cannot be in pure states, and coupling them to pure states ends the purity of the quantum state-- not the converse. Again, that's the whole point of coupling quantum systems to measurement devices that we can count on to behave classically.

That's the Copenhagen view. But it leaves you with the unsatisfying impression that there is no available description of the link between quantum theory (which is valid microscopically, and clearly not macroscopically here) and classical theory (for which it is the other way around). It is simply because of the big distance between "micro" and "macro" that we don't seem to be bothered by what actually makes nature "jump theories" in between.

In such a viewpoint, there's no need to talk about things like decoherence. At a certain point, you simply DECIDE to say that now, we switch to classical, no more superpositions. You can do that whenever you feel like not following through the quantum interactions anymore. A photon interacting with an electron can be "classical" or "quantum" according to how much pain you want to give yourself.
You can call a photo-electric effect a "measurement", and if you stop there, that can be good enough. You can also call it a quantum-mechanical interaction, and careful experimenting might give you some interference effects. So if you decide to study that, it is still "in the quantum world", but if you don't bother, well then it was in fact already classical.
 
  • #50
vanesch said:
I didn't look into this, but I might guess an experimental "feature": the efficiency of the photon counter is not uniform, and depends on the incident lens image. As such, there might be a preferential selection for "which way" photons, or for "interference" photons, if the detection efficiencies are different or non-uniform over the detector, which would come down to an actual coincidence count experiment as usual, at least for a fraction of the sample.
From the comments by "Ben" I posted in post #44, I think the issue is that each detector actually only covers a very narrow range of positions (along the axis perpendicular to the incoming photons), and the position of each detector must be varied over many trials to build up an extended interference pattern or non-interference pattern. The interference pattern at one detector is built up by varying the position of that detector while holding the position of the other detector constant, so you're only looking at the subset of photons at one detector whose entangled twins went to one very narrow range of positions on the other side of the apparatus. If you used something like a wide CCD which could detect photons at a significant range of positions, so that you didn't need to vary the horizontal positions of the two detectors in order to build up the patterns, I think the result would be that you wouldn't see an interference pattern at either detector.
 
  • #51
peter0302 said:
I've never heard any reputable physicist say they believe that. Most theories I've heard either don't attempt to define wavefunction collapse, or hold something along the lines of a wavefunction collapse being caused by a thermodynamically irreversible interaction. Anyone who thinks that there needs to be an intelligent being involved is, for lack of a better word, nuts.

Humans make or cause measurements, period. And, again, if you ask many practicing physicists, they will say the same thing. How could you discover superconductivity, or the form factors of nucleons, or the neutral K meson system temporal interference patterns without human participation? To suggest that humans are not necessary for observations flies in the face of common sense and centuries of experimental and observational work. If true, what can an orange, a piece of metal, or a bottle of Scotch measure or observe? And, how do they do it?


So I'm nuts, as must be most of my professors at Stanford and Harvard, and my colleagues and students at Tufts, not to mention friends and colleagues at Brown, Berkeley, Yale, Washington University, the University of Washington, Rockefeller University, Fermi Lab, the University of Minnesota, colleagues in Russia, and more. Further, in my 50 years of working with quantum theory, I've noticed that the writers in the Physical Review and other journals, and the authors of many if not most texts, are, by your claim, nuts.

In practice, most physicists use what I call the Minimal Bohr-Born approach: the Schrodinger Eq, with the Hamiltonian as the generator of time displacements, and Born's probability interpretation -- and, of course, all that Hilbert Space stuff. Forget about collapse of the wave function; it's not relevant to most of, say, high energy physics, QED, and QFT -- I know this because it was my field, as a teacher and as a researcher of modest accomplishments. And, we always make sure that humans are the observers.

The deal is, the typical graduate student worries a lot about these interpretation issues -- I had a major fling with Bohm as an undergraduate. As I moved along in QM and field theory, I realized, for me, that Bohm's approach just made no sense, nor did it support the work necessary to do QED. In other words, it really did not work. There was simply no way I could have done my thesis a la Bohm on radiative corrections to electron scattering. And there is no way I could have done my thesis without human observers. Never even thought of using a football or one of my kid's teddy bears to do the measurement -- if I had, then "nuts" might have been an appropriate epithet from my colleagues.

And I might as well admit that my dalliance with the Many Worlds approach lasted less than 5 minutes.

Frankly, I find your comment about "nuts" offensive, and indicative of a minimal background in physics. Most physicists take four or five years of intensive study to get the hang of QM, including the practical role of interpretation. It's a lot of difficult work. If you have not done this you will not have a real clue about QM as physicists practice it. From my experience, the best indication that someone is knowledgeable with quantum physics is that they have read and mastered Dirac's Quantum Mechanics book.

That's not to say that exploring interpretations is worthless. Nor are discussions about EPR, Bell, and so forth. Have at it, but know that many physicists do not pay much attention to these issues. And, with all due respect, it's generally easy to tell whether someone knows their stuff -- could you serve as an expert witness on QM?

Sorry to hit on you so hard, but I'm tired of so much misinformation and forceful statements of false claims in some threads. Some posts demonstrate a lack of knowledge about the history of physics, and make forceful statements about things that are not true, and likely never will be true.

You call me nuts, and I thus dismiss your post -- except for this particularly offensive comment. It's clear that you are interested in physics, and know just enough to be dangerous. If you want to learn, ask questions -- like "What is an observation? Can non-humans make observations?" Try to learn enough to read Dirac -- learn about QM in practice -- from the hydrogen atom to basic radiation theory. Then revisit interpretations -- and make sure you study the Peierls-(Wigner) approach -- both are Nobel prize winners; they postulate that the wave function collapse occurs as the neural networks in the brain provide the single answer from a measurement; that is, the wave function describes your state of knowledge.

How many years have you worked with QM; how many courses have you taken?

And don't, please, call me or any professional physicist nuts. I'm not, they are not.
Regards,
Reilly Atkinson
 
  • #52
reilly said:
Humans make or cause measurements, period. And, again, if you ask many practicing physicists, they will say the same thing. How could you discover superconductivity, or the form factors of nucleons, or the neutral K meson system temporal interference patterns without human participation? To suggest that humans are not necessary for observations flies in the face of common sense and centuries of experimental and observational work. If true, what can an orange, a piece of metal, or a bottle of Scotch measure or observe? And, how do they do it?

You certainly are a professional physicist, Prof. Atkinson, but I do not think that the physical mechanism of the universe is that human-centered. Even though we do make measurements, we also conclude that the universe will keep ticking according to the conclusions we have drawn from our measurements, whether we look at it or not.

Measurement is just the interaction of two physical systems, complex or not. Whether or not we force them to interact, every physical interaction will be a measurement, whether we look at it and write it down or not.

So what I think is that consciousness has no effect on anything. Admittedly this demands another discussion ("What is Consciousness?" and "What is a Conscious Entity?"), but personally I do not differentiate a bottle of Scotch from a human being: both are made up of the same raw material, and only their functionality and their stable integrity differ, which does not put one or the other above in ranking in terms of natural law.
 
  • #53
JesseM said:
If you used something like a wide CCD which could detect photons at a significant range of positions, so that you didn't need to vary the horizontal positions of the two detectors in order to build up interference patterns or non-interference patterns, I think the result would be that you wouldn't see an interference pattern at either detector.


Here's why it shouldn't matter.

When the detector behind the lens is at the focal point, it is stationary, and should be registering all the photons there are to be registered coming from the beam.

In that setup, we know from the paper that the detector behind the double slit picks up an interference pattern, in the form of a reduced photon count at the expected interference minima.

Even though the detector scans, this shouldn't matter. Even if there were a wide CCD behind the slits, there should be fewer photons at the expected interference minima than at the maxima. The coincidence circuit should not be necessary in this specific setup.
 
  • #54
reilly said:
Frankly, I find your comment about "nuts" offensive, and indicative of a minimal background in physics. Most physicists take four or five years of intensive study to get the hang of QM, including the practical role of interpretation. It's a lot of difficult work. If you have not done this you will not have a real clue about QM as physicists practice it. From my experience, the best indication that someone is knowledgeable with quantum physics is that they have read and mastered Dirac's Quantum Mechanics book.
Wait a minute, I just want to make sure I understand you. You are telling me that you, personally, believe that human-scale intelligence is required for quantum mechanics to have meaning and/or for anything in the universe to exist?
 
  • #55
guguma said:
You certainly are a professional physicist Prof. Atkinson, but I do not think that the physical mechanism of the universe is that human-centered.

Oh, don't take away that illusion from me :cry:

After "l'État, c'est moi!", I wanted to be able to say: "The universe, that's me!" :-p
 
  • #56
peter0302 said:
Wait a minute, I just want to make sure I understand you. You are telling me that you, personally, believe that human-scale intelligence is required for quantum mechanics to have meaning and/or for anything in the universe to exist?

Yes I do. I do believe there ain't no QM (with apologies to Louis MacNeice) without people, as do most physicists. However, I'm pretty comfortable with the notion that humans are not necessary for the universe to exist. Read Dirac, Bjorken and Drell, Cahn and Goldhaber's "The Experimental Foundations of Particle Physics", Weinberg's field theory series, Mandel and Wolf's "Optical Coherence and Quantum Optics", the biographies of Bohr and Einstein by A. Pais, numerous papers by Bohm and Pines on the electron gas, and on and on and on. Why, you can even read my thesis.

You managed to avoid dealing with most of my post. Why?
Reilly Atkinson
 
  • #57
peter0302 said:
Here's why it shouldn't matter.

When the detector behind the lens is at the focal point, it is stationary, and should be registering all the photons there are to be registered coming from the beam.
No, it shouldn't. Both beams result in photons being directed at a range of horizontal positions (horizontal relative to the axis of the beams), regardless of focus, as can be seen in the graph of the detection patterns from each detector in Dopfer's thesis which is reproduced on this page (fig. 4). If each detector can only detect a very narrow range of horizontal positions, so that the only way to build up the wider range seen in the graphs is to vary each detector's position over many trials, then there'd be many instances when photons missed the detector because it was in the wrong position on that trial.
peter0302 said:
In that set up, the detector behind the double slit, we know from the paper, picks up an interference pattern in the form of a reduced photon count at the expected interference minima.
Only by varying the position of the detector behind the double slit over many trials can you build up an interference pattern, if I'm understanding Ben's post correctly.
peter0302 said:
Even though the detector scans, this shouldn't matter. Even if there were a wide CCD behind the slits, there should be fewer photons at the expected interference minima than at the maxima. The coincidence circuit should not be necessary in this specific setup.
I don't think that's correct. If you replaced each detector with a wide CCD, I think neither would show any interference in the total pattern of photon hits; but then if you looked at the subset of photon hits on one CCD that correspond to hits on the other CCD lying within some narrow range of positions (as narrow as the range of the detectors in the original Dopfer experiment), this subset of hits could show an interference pattern.
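Here's a toy numerical model of that claim (my sketch, with made-up amplitudes, not Dopfer's actual optics): a joint amplitude where each slit term is tagged by a nearly orthogonal idler state. The total (marginal) pattern at the double-slit side shows essentially no fringes, while the coincidence slice at a fixed idler position does:

[code]
import numpy as np

x1 = np.linspace(-5, 5, 401)   # idler (lens-side) detector position
x2 = np.linspace(-5, 5, 401)   # signal (double-slit side) position
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

def idler_mode(x, center, width=1.0):
    """Idler amplitude correlated with one slit (toy Gaussian)."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

k = 3.0  # arbitrary fringe wavenumber
# Slit A tags the idler near x1 = -2, slit B near x1 = +2.
psi = (np.exp(+1j * k * X2) * idler_mode(X1, -2.0)
       + np.exp(-1j * k * X2) * idler_mode(X1, +2.0))
joint = np.abs(psi) ** 2                 # P(x1, x2)

marginal = joint.sum(axis=0)             # ignore the idler entirely
conditional = joint[200, :]              # gate on idler at x1 = 0

def contrast(p):
    return (p.max() - p.min()) / p.mean()

print("no-coincidence contrast:   ", round(contrast(marginal), 3))     # near zero
print("coincidence-slice contrast:", round(contrast(conditional), 3))  # ~2, full fringes
[/code]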
 
  • #58
reilly said:
-- and make sure you study the Peierls-(Wigner) approach -- both are Nobel prize winners; they postulate that the wave function collapse occurs as the neural networks in the brain provide the single answer from a measurement; that is, the wave function describes your state of knowledge.

After reading this, I feel one should not take every statement of a Nobel laureate seriously! After all, their neural networks (those of these two included) also churn out single answers/conclusions -- which may not always be the right ones.
 
  • #59
guguma said:
You certainly are a professional physicist, Prof. Atkinson, but I do not think that the physical mechanism of the universe is that human-centered. Even though we do make measurements, we also conclude that the universe will keep ticking according to the conclusions we have drawn from our measurements, whether we look at it or not.

Measurement is just the interaction of two physical systems, complex or not. Whether or not we force them to interact, every physical interaction will be a measurement, whether we look at it and write it down or not.

So what I think is that consciousness has no effect on anything. Admittedly this demands another discussion ("What is Consciousness?" and "What is a Conscious Entity?"), but personally I do not differentiate a bottle of Scotch from a human being: both are made up of the same raw material, and only their functionality and their stable integrity differ, which does not put one or the other above in ranking in terms of natural law.

I wrote what I did only after some serious consideration. Physics is about describing nature. If you do your homework, you will find this idea goes back at least to the Greeks. Newton's Laws are computational recipes, just like the Schrodinger Eq. The only difference between the two is that Newton's ideas have been around for much longer than the Schrodinger Eq. The consequence of that is that we have had several centuries to understand the descriptive power of Newton. So we have built a common intuitive consensus that we understand Newton, which is a huge difference from before Newton.

What's wrong with computational recipes?
Ask yourself exactly how it is you understand Newton?

At least some of the time people go on and on about something that was settled 20 years ago -- virtual particles are a good example. Spend a little time checking out your thoughts about something -- do a Google search. As a retired professor I say, as I did to my students: do your homework and more. Those that do learn more, and make informed posts, which generally elicits more good stuff. How much time, guguma, have you spent understanding the concept of measurement in QM? Yours is a view not commonly held, and there's 80 years of history to consider. Not for a moment do I think that humans are necessary to keep the universe ticking, as you put it. Perhaps a drunk could consider a bottle of Scotch in anthropomorphic terms, and I suspect that only folks going to AA might concur.

Is a car wreck a measurement? Could the act of peeling a banana be an anti-measurement? Could waves crashing on a beach be a measurement? I keep asking this question, but I have yet to get an answer.

Consciousness has big effects on many things. This forum would not exist without consciousness. I'm sure that you can think of other things. And, when was the last time you saw bottles of scotch playing tennis, or going to school?

I'm literally dying to know how my sofa makes measurements, how do my pots and pans, stored so that they are in contact, make measurements, how does my hair make measurements, how does the sun make measurements?

Then there's a second round of questions: what do these various things measure, how do they do it, and how can I know? I've asked this question also, without any answers. I'm in hope that my questions will be answered, not ignored.
Regards,
Reilly Atkinson
 
  • #60
First let me thank you for your participation, I realize I am dealing with someone of considerable knowledge so there is much that may be gained.
vanesch said:
If you stick by the axioms of quantum theory (which you are free to do so or not, but I'm looking at the *toy world* in which these axioms are considered true), then EVERY state of the system is described by an element of a projective hilbert space. There's no distinction between "macro" and "micro" states. EVERY state.
But that's kind of ducking the issue-- I am maintaining that we have not the least experimental justification to require that the "axioms of quantum mechanics" must apply to macro systems! It's simply an example of taking our own physics too seriously, just like the post-Newtonians did when they grappled with a purely deterministic-seeming universe. Now, you point out that perhaps we are not requiring this, we are choosing it-- but if that's true, why are we choosing this if it is not required? Where do we benefit from this choice if it is not forced on us by nature?
Now, if you assume that this is not applicable to certain kinds of systems, then you're playing with *another* toyworld. It will then follow different rules, but for sure, you cannot say that it is purely described by the axioms of quantum mechanics.
Right, I see that we are on the same page-- we are playing with "toy worlds" here, but the issue on the table is which one best describes the real world in a given situation. As I have never seen anyone meaningfully apply quantum mechanics to the state of a cat as a whole, I claim that is a clear case of using the wrong "toy world".
And then you have the difficulty of explaining what is "micro" and what is "macro" and what applies where.
As it turns out, we do not suffer much from this problem. Nevertheless, it is a real problem, and your solution will not handle it any better than mine. Indeed, even the theory of large nuclei does not follow the approach you are suggesting! The first step always looks something like "well we can't really solve quantum mechanics for this system, so here's what we do instead". That's a nucleus, not a cat. A nucleus is complex but at least it can be coupled only weakly to its environment. A cat that is weakly coupled to its environment is not a cat for long.

So, for sake of argument, I stick to this toy world in which the axioms of quantum mechanics are strictly valid. By definition then, the physical state is given by a state vector. And from here on, we go further.
Certainly you can start somewhere, and see where it gets you; that's an excellent way to do science. But the question is, where does this get you, in regard to a cat, or in regard to wavefunction collapse? Are we trying to motivate actual new observations, or are we trying to satisfy ourselves that we in some sense understand the outcomes of impossible ones? What gets us somewhere is the mindset that says we are coupling quantum systems to macro systems expressly because we can rely on the macro system to act classically, which our brains like, and which lets us actually call it a "measurement". What other kinds of experiments can we do? Given that, where is the gain for us in treating our macro system quantum mechanically, and why did we need a macro system involved in the first place if we were just going to treat it quantum mechanically? Let's just let the electrons measure each other if that's our perspective.

This is in a Copenhagen like view, where you have a classical world with "quantum gates" or whatever, where systems are classically prepared, then "plunge into the quantum world", and re-emerge classically when they are observed.
That does seem like a valid way to say it.
But clearly in that view, not everything is at all times described by the axioms of quantum mechanics.
True, but we already know that. Does not every experiment begin with "controls" chosen by the experimenter? Those controls do not emerge from the axioms; they are in a sense axioms of their own. They are "how we do science"-- and that's not in quantum mechanics, it is an assumption that quantum mechanics is tacked onto. It requires classical manipulation to apply those "uber-axioms", and that's where we exit the self-consistent realm of quantum axioms. Electrons are lousy scientists.
To me, the exercise is to take the theory TOTALLY seriously, in its toy world.
And yes, in the toy world of classical physics, particles DO have perfectly well defined positions and momenta.
At least you are clear in what you are doing, so there is no sense to which it is "wrong", there is only a sense of "what is it good for". So I ask, what is it good for to take our theories totally seriously? To me it sounds like we are entering into pretense in an effort to "cover our tracks", like a detective at a crime scene saying "assume I don't leave any fingerprints of my own". I think that leads to erroneous conclusions-- we do better if we say "I will be careful to not mistake my own fingerprints for that which I am trying to study". Then instead of taking our axioms too seriously, we pay close attention to what we are actually doing, and try to separate that from what the world is actually doing.

Uh, but a system with a finite lifetime doesn't violate the conservation of energy! It simply wasn't in a pure energy eigenstate - otherwise it could not evolve, and hence not have a finite lifetime.
Yes, I read your words wrong-- you said one cannot get a violation of energy conservation, and I heard that as one has to conserve energy. We should say it wasn't in an energy eigenstate-- but since all systems have finite lifetimes (even the whole universe), there is no such thing as a strict energy eigenstate. So how does anything conserve energy if I cannot define its energy? The point is, these are not exact concepts, they are approximations we apply in our toy worlds, but we should recognize that our "fingerprints" are all over the result. Why then should we take it totally seriously?
That's the Copenhagen view. But it leaves you with the unsatisfied impression that there is no available description for the link between quantum theory (which is valid microscopically, and clearly not macroscopically here) and classical theory which does the opposite.
I would say that this link is very much a mysterious landscape. Would you contend that there is a theory that bridges this gap? And I don't mean expanding in powers of h, that only handles the equations not the overall axioms. You still have to prepare the system you are testing, and at some point you will always begin to be testing your own uncertainties in how you prepared it, rather than its fundamental dynamics-- like that air in your lungs. At some point you will react to that uncertainty by throwing up your hands and averaging over it, and voila, that's precisely where you enter the realm of classical physics and the pure state needs no collapsing because it already collapsed when you did that averaging. So this transition phase you speak of is not a physical change, it's a change in your analysis mode, and you will tailor it to get the best results.
It is simply by the big distance between "micro" and "macro" that we don't seem to be bothered by what actually makes nature "jump theories" in between.
I completely agree-- and a good thing too. Theories in that "middle ground" would be awful! But note the same thing can be said about classical theories like plasma physics. If you have a handful of particles, you track their positions and momenta. Add more particles and that becomes unwieldy, so you make an "awful transition" into a realm with many more particles, the kinetic level, where you recover a comfortable stance using "distribution functions". But it's still a pain to track the history of all the particles that go into the distribution function, so as you start to get more collisional "shuffling", you next make an "awful transition" into the fluid domain, and with enough collisions you again recover the comfort zone of magnetohydrodynamics. We are very lucky that these "awful transitional" regions are relatively narrow next to the full dynamical ranges we are interested in, or else, quite frankly, I wouldn't be doing physics! So this is nothing new in quantum mechanics.
In such a viewpoint, there's no need to talk about things like decoherence. At a certain point, you simply DECIDE to say that now, we switch to classical, no more superpositions.
But we don't just "decide" when a measurement has occurred in one of our experiments-- we actually have to engineer a classical system with the express purpose of decohering some quantum state. It's like eating a watermelon-- you don't just decide at some point to shove the fruit into your mouth, you have to find a way to slice it first. That involves physical interaction with the system in question, and that interaction will at some point allow us to eliminate the need to consider superpositions and instead return to the comfort zone of a purely probabilistic approach. And as we all know in quantum erasure experiments, it is very important to pay painstaking attention to what physical interaction is actually occurring before we can conclude we've "made it" to that zone.
You can do that whenever you feel like not following through the quantum interactions anymore. A photon interacting with an electron can be "classical" or "quantum" according to how much pain you want to give yourself.
That is true. But if one gets pain with no increase in predictive power, one has a bad pedagogical approach. Like the state of a cat. Perhaps you are objecting to my assertion that "a cat isn't in a pure state", when instead I should have said "it benefits us nought to imagine that a cat as a whole is in a pure state, as we have no way to control that kitten-state at the beginning of our experiment, and no way to measure it at the end". As such, I see it as an imaginary idea, an example of taking our axioms too seriously at one level while ignoring the other axioms we need to do science.
You can call a photo-electric effect a "measurement", and if you stop there, that can be good enough. You can also call it a quantum-mechanical interaction, and careful experimenting might give you some interference effects. So if you decide to study that, it is still "in the quantum world", but if you don't bother, well then it was in fact already classical.
You are saying that we tailor our descriptions to the experiment we are interested in. I totally agree! So what experiment are we interested in when we use the pure state of a whole cat?
 
  • #61
reilly said:
You managed to avoid dealing with most of my post. Why?
Reilly Atkinson
Because your post is derisive, self-important, pompous, arrogant, and a host of other adjectives that are inappropriate for civilized conversation. (Note I'm insulting the post, not you). I made an off-the-cuff remark that was not intended to offend anyone, and so I apologize if it offended you. However, I believe that your viewpoint is absurd. The other points in your post regarding our respective credentials are irrelevant to the discussion.
 
  • #62
JesseM said:
No, it shouldn't. Both beams result in photons being directed at a range of horizontal positions (horizontal relative to the axis of the beams), regardless of focus, as can be seen in the graph of the detection patterns from each detector in Dopfer's thesis which is reproduced on this page (fig. 4). If each detector can only detect a very narrow range of horizontal positions, so that the only way to build up the wider range seen in the graphs is to vary each detector's position over many trials, that means there'd be many instances when photons missed the detector because it was in the wrong position on that trial.
That's true of the detector behind the slits. That is not true of the detector at the focal plane. The whole point of putting the detector at the focal plane is to catch every photon. In that branch of the experiment, that detector stays stationary. See Figure 4.5 (p. 36) of the original paper. http://www.quantum.univie.ac.at/publications/thesis/bddiss.pdf

Now the next point is critical. There will be MORE photons detected behind the lens than behind the slits, regardless of coincidence counting. Why? The slits block a lot of photons. So by introducing coincidence counting, we are taking a subset of the photons behind the lens, but NOT a subset of the photons behind the slits. In other words, even with the coincidence counting, we're counting _every_ photon that goes through the slits. Ergo, the pattern should be exactly the same whether it's a CCD or a scanning detector.

Change the experiment now, and put the detector behind the lens at the _imaging_ plane instead of the focal plane. Now which-slit information is obtainable _if_ we can segregate out those photons that actually passed through _any_ slit. So the coincidence counter is, in fact, necessary in this part of the experiment in order to isolate, from the detections behind the lens, the ones corresponding to photons that actually passed through the slits; thereby we can determine "which slit" they went through, and, as QM tells us should happen, the interference pattern disappears.

So, to recap, the coincidence counter is necessary to select out the subset of photons from the post-lens detector corresponding to photons that actually passed through the double slit, and thereby we can see which slit those photons went through. But, when the post-lens detector is at the focal plane, there's no chance of knowing which-slit information anyway, and so there's no need to segregate out those photons that actually passed through the slits. All photons detected behind the slits will be detected behind the lens as well. Therefore, coincidence counting is not necessary to see the interference pattern.
 
  • #63
guguma said:
So what I think is that consciousness has no effect on anything. Actually, this certainly demands another discussion-- "What is Consciousness" and "What is a Conscious Entity"-- and personally I do not differentiate a bottle of scotch from a human being: both are made up of the same raw material, and only their functionality and their stable integrity differ, which does not put one or the other above in ranking in terms of natural law.

reilly said:
Is a car wreck a measurement? Could the act of peeling a banana be an anti-measurement? Could waves crashing on a beach be a measurement? I keep asking this question, but I have yet to get an answer.

I completely agree with everything that Reilly has said here, but I actually think I can see a sense in which the two of you are talking about different things. So let me see if I can elucidate a position that may represent a kind of common ground. We all agree that human science is a human endeavor (other intelligences no doubt have their own approaches-- we can laugh at the efforts of a dog, but somewhere in the cosmos is an intelligence that would blow our minds), and the goal of this endeavor is to achieve understanding and power in our relationship with our universe. As such, it happens in our brains, or in the mechanisms that our brains build or interpret-- the "measuring devices". So we decide what a measurement is, and in that sense they don't exist without us (reilly's point), yet once we have decided what a measurement is, we may find analogous processes happening without our consent (guguma's point). Whether or not we will call the analogous process a "measurement" could potentially create a lot of semantic confusion where there may or may not be any real disagreement. I would say that a measurement is a mechanism set up by an intelligence to couple a natural process (quantum or otherwise) of unknown behavior with a classical system of well-known behavior. The idea is to leverage what is known about the device into an understanding of the unknown natural system, and that leveraging occurs in the mind of an intelligence. But analogous processes can be found in nature, so it really doesn't matter if we call those measurements or not, we just have to be clear what we mean-- the distinction is only something more than semantic in quantum mechanical applications when other misconceptions are in place.

So what are those other misconceptions, in regard to quantum mechanics? They have to do with starting from a pure state of a subsystem, and having interactions with its environment that destroy the coherences that initially allowed us to see it as a pure state of one measurable and a superposition state of another, and instead force us to see it as a mixed state of the measurable connected with the nature of the decoherence that was set up. This final state is a real mess viewed quantum mechanically, because it is a projection of a much larger system that we don't even want to begin to consider, but this is not a loss, it's a win-- the mixed state of the final measurable is something we are fully comfortable with; it is the die that has been rolled but not looked at yet. So we might call it a "measurement" only if the die was rolled by us and looked at by us, or we might use a more general description that can happen naturally and does not have to be looked at; the fundamental issue is that our crucial participation in the act of doing science came when we intentionally destroyed those coherences so that we could fit that outcome into the "boxes" of scientific thought. We sought the mixed state, that was on us. This is what I believe reilly is saying as well.
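Here is a minimal numerical sketch of that coherence-destruction story (my own toy illustration, using an arbitrary one-qubit "environment", not any specific system from this thread): start with a qubit in a pure superposition, entangle it with the environment, and trace the environment out. The off-diagonal coherences of the reduced density matrix vanish, leaving exactly the "die rolled but not yet looked at" mixed state.

```python
import numpy as np

# System qubit in the pure superposition (|0> + |1>)/sqrt(2):
# its density matrix carries off-diagonal coherences of 0.5.
psi_sys = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.outer(psi_sys, psi_sys))    # [[0.5 0.5], [0.5 0.5]]

# Couple it to a one-qubit "environment" initially in |0> via a CNOT
# (the environment flips iff the system is |1>), which produces the
# entangled state (|00> + |11>)/sqrt(2).
env0 = np.array([1.0, 0.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = CNOT @ np.kron(psi_sys, env0)

# Partial trace over the environment gives the system's reduced state.
rho = np.outer(joint, joint).reshape(2, 2, 2, 2)  # indices (s, e, s', e')
rho_sys = np.einsum('ikjk->ij', rho)
print(rho_sys)    # [[0.5 0. ], [0. 0.5]] -- coherences gone, fully mixed
```

Nothing non-unitary happens anywhere in that sketch; the "collapse" is just our decision to stop tracking the environment.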

In this view, the role of consciousness comes in when the quantum behavior is long gone-- when a brain checks in on what the die actually rolled. Physics never tells us that; it didn't classically when the level of mixing precluded it, and it won't quantum mechanically when we have intentionally applied suitable mixing, so there has always been and likely always will be an incompleteness to physics that can only be resolved by consciousness. But we dealt with that thousands of years ago when we first started thinking about our environment; quantum mechanics has nothing to add to this. That's what I think guguma is also saying-- there is no fundamental role for consciousness, except, as reilly might add, in the whole process of science itself.

So in terms of the word "measurement", I think guguma's point is that coherences get destroyed naturally too. We can reserve the term "measurement" for the intentional version if we choose, but either way, our goal is to use what happens in our experiments to understand what happens outside of our experiments, so we will always require some concept of what a hypothetical measurement is (if we imagine Maxwell's demon jumping in and doing a measurement, what would the outcomes be, etc.). So the common ground here is that the role of consciousness is quite sophisticated, and hard to express in a scientific theory that can never transcend the intelligence that made it. We need intelligence (and the perception of our own intelligence, which is what I think we mean by consciousness) to do the science that builds the concepts, and then we need it to get out of the way while we apply those concepts to consciousness-free systems.

I see this common ground as what has always been the fundamental paradox of science-- we are like parasites that try as hard as we can not to "kill" our hosts, the natural processes we wish to understand. If we do not interact with our hosts at all, we are observing too passively to be able to achieve much science, and if we interact too much, we can only understand the other hosts that are similarly infected. The best we can do is try to keep track of how we are affecting the host so we can in some sense "subtract" that influence when we need to. I suspect a behavioral scientist studying gorillas in their natural habitat understands this paradox all too well.
 
  • #64
peter0302 said:
However, I believe that your viewpoint is absurd.
Then I submit you do not understand the viewpoint in the way he expressed it. Maybe one has to already see its validity before one can understand its meaning; that's a tricky problem we all face. I have encountered the same problem voicing something similar, and no doubt I've been on the other end as well. But I found the viewpoint to be convincing to the point of being virtually self-evident.
The other points in your post regarding our respective credentials are irrelevant to the discussion.
Although I agree that arguments here should stand on their own, the credentials are relevant to why you should suspect that you don't understand a viewpoint that strikes you as absurd. One must admit, the range between "absurd" and "self-evident" is about as large as it can get-- a remarkable feature of a debate between intelligent and basically well-informed people.
 
  • #65
They are relevant to why you should suspect that you don't understand the viewpoint.
If you read my original post, it was in response to someone saying some philosophers believed an intelligence was required to collapse the wavefunction. I said anyone who thought that was "nuts." I stand by that, if not my choice of words. (By the way, we were talking in the context of the Zen Cultists who made "What the Bleep do You Know?")

Now, in Reilly's subsequent attempts to elaborate his position, he really says nothing of substance. He seems to ask what a measurement is, and suggests that human intellect might be required for measurement. Well, I suppose that's true, but I'm really not interested in rewriting dictionaries. Does any physical process at a subatomic level depend on whether a human consciousness is aware of it or not? That's my question. If a "professional physicist" believes the answer to that is yes, we have a serious problem. On that, I frankly can no longer tell what Reilly's position is, and his response to my statement (which was not even directed at him!) is so full of arrogance that I am not even interested in what he has to say.
 
  • #66
peter0302 said:
That's true of the detector behind the slits. That is not true of the detector at the focal plane. The whole point of putting the detector at the focal plane is to catch every photon. In that branch of the experiment, that detector stays stationary. See Figure 4.5(p.36) of the original paper. http://www.quantum.univie.ac.at/publications/thesis/bddiss.pdf
Figure 4.5 does not show the results, only a picture of the setup. Again, look at fig. 4 from this page, which comes directly from the thesis; the right side is for the "in focus" case, and the top graph on that page shows the upper detector, but you still see photons at a range of horizontal positions; in the top-right case they're concentrated into two discontinuous peaks. The point of focusing the light is that it tells you the direction of the photons at the upper detector based on where the detector is when they hit it, which, because of the entanglement, tells you the direction of the corresponding photons at the bottom detector. If you vary the upper detector while keeping the bottom one fixed, and you do a coincidence count so that you ignore photon hits at the upper detector when there wasn't a corresponding photon hit at the bottom detector, then the graph for the upper detector will show two discontinuous bands, since you're only measuring upper-detector photons whose momentum was such that the entangled bottom-detector photons went through one of the slits instead of being blocked by the screen.

You can also see this in figure 4.8 on p. 38 of the thesis, showing that if the bottom detector D2 is held at a fixed position while the upper detector D1 has its position varied, if you graph the results for D1 over many trials (presumably only 'counting' hits where the bottom detector D2 also registered a hit) you should get two discontinuous peaks. And fig. 4.26 on p. 68 seems to show the actual experimental results for this case.
peter0302 said:
Now the next point is critical. There will be MORE photons detected behind the lens than behind the slits, regardless of coincidence counting. Why? The slits block a lot of photons. So by introducing coincidence counting, we are taking a subset of the photons behind the lens, but NOT a subset of the photons behind the slits.
I don't think that's correct, because again, the upper detector D1 behind the lens isn't picking up every photon--it's intentionally made narrow so it can only pick up photons at a certain location. This is what "Ben" in the message I quoted, who seems to have some familiarity with the thesis, is saying; and fig. 4.7 and 4.8 on p. 38 seem to confirm this, since the double-headed blue arrow looks like it's indicating that the position of the upper detector has to be varied to produce a graph of photon positions, even for the "in focus" case in fig. 4.8. Likewise, the bottom detector D2 behind the slits isn't picking up every photon that makes it through the slits--the double-headed blue arrow on D2 in fig. 4.5 on p. 36 and in fig. 4.6 on p. 37 seems to show that they have to vary the position of D2 to build up the pattern there, even in the "in focus" case in fig. 4.6. And here they are only "counting" hits at D2 that correspond to hits at D1, where D1 is held fixed at a single position; again, if D1 is narrow it won't pick up all the photons even in the "in focus" case. I imagine if you replaced D1 with a wide CCD that could pick up photons at a large horizontal range of positions, and then graphed all the hits at D2 that corresponded with hits anywhere on the CCD, the pattern at D2 would never show interference, even in the "out of focus" case.
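To make that last point concrete, here is a toy Monte Carlo (my own construction-- the real Dopfer geometry is more involved, so take it only as a sketch of the coincidence-counting logic): give each entangled pair a phase perfectly correlated with where its photon lands at D1, draw the D2 position from the corresponding fringe pattern, and compare the unconditioned D2 histogram with one gated on a narrow, fixed D1 window.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
k = 20.0  # toy fringe wavenumber (arbitrary units)

# Each pair carries a phase phi; the D1 photon lands at a position
# perfectly correlated with phi (a cartoon of momentum entanglement).
phi = rng.uniform(0, 2 * np.pi, n)
x1 = phi / (2 * np.pi)  # D1 position in [0, 1)

# Draw D2 positions from the conditional fringe pattern
# p(x2 | phi) ~ 1 + cos(k*x2 + phi), by rejection sampling.
x2 = rng.uniform(-1, 1, n)
accept = rng.uniform(0, 2, n) < 1 + np.cos(k * x2 + phi)

x2_all = x2[accept]                                  # every D2 hit
x2_coinc = x2[accept & (np.abs(x1 - 0.5) < 0.02)]    # gated on a fixed D1 window

# The unconditioned histogram is flat; the gated one shows fringes.
hist_all, _ = np.histogram(x2_all, bins=40, range=(-1, 1))
hist_coinc, _ = np.histogram(x2_coinc, bins=40, range=(-1, 1))
print("all-photon fringe contrast:  %.2f" % (hist_all.std() / hist_all.mean()))
print("coincidence fringe contrast: %.2f" % (hist_coinc.std() / hist_coinc.mean()))
```

In this toy, the unconditioned contrast is just bin noise, while the coincidence-gated subset shows order-unity fringes-- which is also why nothing readable appears on the D2 side alone, and no faster-than-light signal can be sent.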
 
  • #67
peter0302 said:
If you read my original post, it was in response to someone saying some philosophers believed an intelligence was required to collapse the wavefunction. I said anyone who thought that was "nuts." I stand by that, if not my choice of words. (By the way, we were talking in the context of the Zen Cultists who made "What the Bleep do You Know?"
I suspected it might be about that movie, and indeed I suspect that this discussion has taken on an adversarial air among people who probably agree on quite a lot of things-- like certain ridiculous elements of that movie. But there is room to disagree on other more technical but equally important issues-- and that is what I think is happening. As to the absurdity of the view that intelligence is needed to collapse a wavefunction, I see that issue as fraught with semantic peril, and that may be a large contributor to apparent disagreement that is not really there. I would say it all depends on what one means by "collapse". I think of the collapse as the destruction of coherences that allow a pure state of one observable to be a superposition state of another, rendering a 'collapsed' mixed state if the decoherence acts in the necessary way. This does not require intelligence, and it is where the quantum mechanics ends. Others say the "collapse" doesn't happen until the mixed state is further reduced by "noting the actual result", which does require an intelligence but has nothing directly to do with quantum mechanics. So an important but merely semantic confusion can result, and I'm not sure how much of that is behind what you are saying here.
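In symbols (my own shorthand for the two usages, not anything from the thread): a pure state of one observable is a superposition of another,

$$|\!\uparrow_z\rangle \;=\; \tfrac{1}{\sqrt{2}}\big(|\!\uparrow_x\rangle + |\!\downarrow_x\rangle\big),$$

and the first kind of "collapse" is decoherence killing the off-diagonal terms in the x basis,

$$|\!\uparrow_z\rangle\langle\uparrow_z\!| \;\longrightarrow\; \tfrac{1}{2}\big(|\!\uparrow_x\rangle\langle\uparrow_x\!| \,+\, |\!\downarrow_x\rangle\langle\downarrow_x\!|\big),$$

while the second kind is the further reduction of that mixed state to a single term, say $|\!\uparrow_x\rangle\langle\uparrow_x\!|$, upon noting the actual result.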
Now, in Reilly's subsequent attempts to elaborate his position, he really says nothing of substance. He seems to ask what is a measurement, and suggests that human intellect might be required for measurement.
He is talking about the second type of "collapse", and that's where I agree with him. But if you are talking about the first type, then I can agree with both of you-- as long as we recognize that the decoherence that results in a measurement is set up intentionally by an intelligence, even though analogous processes can happen naturally.
Does any physical process at a subatomic level depend on whether a human consciousness is aware of it or not? That's my question.
Are you talking about a process or an understanding of a process, and what is the difference?
If a "professional physicist" believes the answer to that is yes, we have a seirous problem.
Yes-- we'll need to work more on the meanings of our words. Rewriting dictionaries is quite essential, I'm afraid-- you can never rely on standard ones to do science.
On that, I frankly can no longer tell what Reilly's position is and his response to my statement (which was not even directed to him!) is so full of arrogance that I am not even interested in what he has to say.
I think he took the "nuts" remark personally. I hope you can both just leave that behind, I don't think you meant it personally, and I don't think he meant to be condescending, only frustrated that his positions were being discarded without sufficient consideration.
 
  • #68
JesseM said:
I don't think that's correct, because again, the upper detector D1 behind the lens isn't picking up every photon--it's intentionally made narrow so it can only pick up photons at a certain location.
No, don't you see: in Fig. 4.5 of the original paper, D1 is fixed.

This is what "Ben" in the message I quoted, who seems to have some familiarity with the thesis, is saying; and fig. 4.7 and 4.8 on p. 38 seem to confirm this, since the double-headed blue arrow looks like it's indicating that the position of the upper detector has to be varied to produce a graph of photon positions, even for the "in focus" case in fig. 4.8.
Yes, yes, but figures 4.7 and 4.8 are a different variation of the experiment from figure 4.5. In 4.5 and 4.6, D2 is "fix".

Now look at Figures 4.23 through 4.26. For all of them, "D2 ist fix." Watch how the pattern slowly changes from an interference pattern to two Gaussian patterns as D2 is moved from the focal plane (which-path destroyed) to the imaging plane (which-path intact). That whole time, "D2 ist fix."

I imagine if you replaced D1 with a wide CCD that could pick up photons at a large horizontal range of positions, and then graphed all the hits at D2 that corresponded with hits anywhere on the CCD, the pattern at D2 would never show interference, even in the "out of focus" case.

That's the issue. I don't know if that's right.

Why the *BLEEP* hasn't anyone actually tested this?
 
  • #69
Take a look at the photon COUNT as it goes from figure 4.23 to figure 4.26. It goes way, way down per unit area, doesn't it?

I will bet anybody here a steak dinner that we've all got it backwards. The interference pattern will ALWAYS show up without coincidence counting. When D2 is moved to the imaging plane, a _subset_ of photons winds up being detected, which forms two Gaussian patterns corresponding to the known "which-path" information.

Wouldn't that result be perfectly consistent with the HUP and still not allow FTL communication?
 
  • #70
JesseM said:
I don't think that's correct, because again, the upper detector D1 behind the lens isn't picking up every photon--it's intentionally made narrow so it can only pick up photons at a certain location.
peter0302 said:
No, don't you see in Fig 4.5 of the original paper - D1 is fixed.
Of course it is--why do you think that contradicts my statement? I interpret it to mean they are looking at the subset of photons arriving at D2 for which their entangled twin went to that one fixed position that D1 is at (even though there'd be plenty of hits at D2 where there was no hit at D1, but there would have been a hit if D1 was replaced by a wider CCD).
JesseM said:
This is what "Ben" in the message I quoted, who seems to have some familiarity with the thesis, is saying; and fig. 4.7 and 4.8 on p. 38 seem to confirm this, since the double-headed blue arrow looks like it's indicating that the position of the upper detector has to be varied to produce a graph of photon positions, even for the "in focus" case in fig. 4.8.
peter0302 said:
Yes, yes, but figure 4.7 and 4.8 are a different variation of the experiment from figure 4.5. In 4.5 and 4.6, D2 is "fix".
You mean D1 is fixed. But why do you think that proves D1 isn't narrow? Figures 4.7 and 4.8 seem to show that D1 needs to be moved if they want to build up the pattern of photons at that location while keeping D2 fixed, which wouldn't be necessary if D1 was already wide enough to pick up all the photons coming through the lens.
peter0302 said:
Now look at Figures 4.23 through 4.26. For all of the "D2 ist fix." Watch how the pattern slowly changes from an interference pattern to two Gaussian patterns as D2 is moved from the focal plane (which-path destroyed) to the imaging plane (which-path intact). That whole time, "D2 ist fix."
You're confused, it is D1 which is behind the lens and which is moved from the focal plane to the imaging plane--look at figures 4.7 and 4.8, which show the upper detector D1 being moved from a distance f from the lens (focal plane) to a distance 2f (imaging plane), while the bottom detector D2 behind the double-slit is held fixed. The schematic graphs there correspond to the actual data in figures 4.23-4.26.
 