New version of double-slit experiment

In summary, the conversation discusses a proposed version of the double-slit experiment where photons are emitted individually through a vacuum onto a detector screen, with the double-slit barrier being shifted after each emission. The goal of the experiment is to investigate the behavior of photons and their knowledge of their surroundings. The conversation also touches on the relationship between QED and the 2-slit experiment, and the idea that photons may know the location of every particle in the universe. The expert notes that this understanding is faulty and discusses the misconceptions and illogical jumps made in the conversation.
  • #106
universecode said:
Not in theoretical sciences! There are theorems that take dozens if not hundreds of pages to prove and every line is a key line. Rushing the argument is not a virtue here.

Really? I never knew that.

The fact is that after 100 posts, we are still at the same question you had in post #1: whether it matters if you move a detector around before a detection event occurs, while the particle is in flight. We have answered repeatedly: NO, it does not matter. Exactly what is there to ponder?

Finally: we really hope you are not attempting to push a personal argument here. We are discussing established physics; your own speculation is not allowed (see the FAQ), and it is beginning to look like that is what is happening. I would urge you to get to the point more quickly. You will note that diggnforgold is confused by the progress of this thread, which is not a good sign. There are more folks reading this than just a handful; this discussion should benefit them too.
 
  • #107
diggnforgold said:
Where is the original thread? I would like to know the particulars of the new take on the double slits experiment!

Sadly, this is the original thread.

And as an FYI: despite the title, there is no *new* version of the double slit experiment being discussed here.
 
  • #108
universecode said:
The fact of switching the source on/off is completely independent of the probabilities eventually observed at each detector. Regardless of how diabolically the operator decides to behave, the probabilities after a sufficiently long run will tend to be the same as if there were no operator and the source were just controlled by a timer which switches the source on every two hours and then off 10 minutes after switching on?

Before we go on, I have to back up some.

When you said "we have been observing a photon registered by one of the detectors consistently every 2 hours plus-minus 10 minutes" I read that as saying that the expectation value for the number of photons detected during the twenty minutes around the two-hour mark is one: sometimes we get zero in that twenty minute period, often we get one, occasionally we get two or more, over enough trials it averages out to one. That's consistent with a source that is switched on ten minutes before the hour every two hours and left to run for twenty minutes.

If you had some other probability distribution in mind, you'll have to describe very clearly exactly how it is produced. Be clear about what it means to emit a photon every twenty minutes: It means that the source is illuminating the detector with a very dim light for twenty minutes, during the time that the detector is illuminated there is some probability of a photon being detected at the detector, and the expectation value of the number of detections across the twenty minutes that the detector is illuminated is one.
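
A minimal numerical sketch of what an expectation value of one means here (my own illustration, assuming the detection count per illuminated window is Poisson-distributed; the window count is made up): individual windows give 0, 1, 2, ... detections, but the long-run average settles at one.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Dim source illuminating the detector for a twenty-minute window:
# model the number of detections per window as Poisson with mean 1.
n_windows = 100_000
counts = rng.poisson(lam=1.0, size=n_windows)

print("windows with 0 detections: ", np.mean(counts == 0))   # ~0.37
print("windows with 1 detection:  ", np.mean(counts == 1))   # ~0.37
print("windows with 2+ detections:", np.mean(counts >= 2))   # ~0.26
print("average detections/window: ", counts.mean())          # ~1.0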
 
  • #109
Nugatory said:
...

If you had some other probability distribution in mind, you'll have to describe very clearly exactly how it is produced. Be clear about what it means to emit a photon every twenty minutes: It means that the source is illuminating the detector with a very dim light for twenty minutes, during the time that the detector is illuminated there is some probability of a photon being detected at the detector, and the expectation value of the number of detections across the twenty minutes that the detector is illuminated is one.

Well said. The other issue is that since the OP won't disclose where he is going, we don't know if any of this matters or not. So far, I haven't seen a hint of an actual question, at least not one that hasn't already been answered multiple times.
 
  • #110
Nugatory said:
That's consistent with a source that is switched on ten minutes before the hour every two hours and left to run for twenty minutes.

Thanks. I actually made a small correction after that post so it reads better; you should now understand it as a source that is switched on right on the hour every two hours and left to run for 10 minutes, with an expectation of 1 photon emitted within this 10-minute interval. Presumably a typical distribution of photon emissions within this 10-minute interval would be Poisson, but I suppose in such a setup it is not relevant for the probabilities eventually observed at the detectors.

Here is a repeat of the correction:

Let me also make a small correction. Previously I said:
"... we have been observing a photon registered by one of the detectors consistently every 2 hours plus-minus 10 minutes..."

I am correcting that to the following:
"... we have been observing a photon registered by one of the detectors consistently within 10 minute interval past every 2 hours ..."
i.e., we receive a photon within 10 minutes past 1.00am, then a photon within 10 minutes past 3.00 am and so on.

Does this change anything we have already confirmed?

If not, then is it safe to assume that after a sufficiently long run, needed to build up the statistics required to estimate probabilities (which is what we do in both cases, with the detectors in one or the other set of locations), we can be certain that this is what actually happens, i.e., that the source is predictable in the sense that it does emit a photon within the 10-minute interval past every two hours?
 
  • #111
I guess I need to make another small correction:

the source is predictable in the sense that it does emit a photon within the 10-minute interval past every two hours?

It should be:
"... the source is predictable in a sense that on average it does emit one photon within 10 minute interval past every two hours?"
 
  • #112
universecode said:
I guess I need to make another small correction:

...

"... the source is predictable in a sense that on average it does emit one photon within 10 minute interval past every two hours?"

OK, great, now what is the setup? Until we know more, we won't know if the details will make a difference or not.
 
  • #113
universecode said:
Now, may I confirm something that I think will be very useful for anyone who might decide to read this so they can clearly understand what's going on here.

The fact of switching the source on/off is completely independent of the probabilities eventually observed at each detector. Regardless of how diabolically the operator decides to behave, the probabilities after a sufficiently long run will tend to be the same as if there were no operator and the source were just controlled by a timer which switches the source on every two hours and then off 10 minutes after switching on?

OK, now that we're done with the long sidebar about how the source behaves (every two hours it's turned on; during the time that it's on, it illuminates the detector with light at an intensity such that the expectation value is that only one detection event will occur): in general, the above is wrong.

The distribution of detections at the detector most certainly does depend on when the detector is switched on and off. Only if the diabolical operator diabolically chooses to switch the source on and off in exactly the same way as the timer would there be no difference.

(I have to point out that the times and distances we're dealing with here are classical; this one light-hour scenario is like using a very dim light on Earth to illuminate a detector on Saturn. When you scale the times and distances down, you will find that there are limits on what the operator can do and how quickly he can turn the source on and off).
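
A quick back-of-the-envelope check on that scale comparison (my own arithmetic, not part of the original post): one light-hour is roughly a billion kilometres, which is indeed of the same order as the Earth-Saturn distance.

Code:
# Rough scale check for the "dim light on Earth illuminating a detector on Saturn" analogy.
c_km_per_s = 299_792.458                   # speed of light in km/s
light_hour_km = c_km_per_s * 3600.0
print(f"1 light-hour ~ {light_hour_km:.2e} km")   # ~1.08e9 km

# The Earth-Saturn distance varies roughly between 1.2e9 and 1.7e9 km,
# i.e. about 1.1 to 1.5 light-hours, so the analogy is about the right scale.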
 
  • #114
Nugatory said:
OK, now that we're done with the long sidebar about how the source behaves (every two hours it's turned on; during the time that it's on, it illuminates the detector with light at an intensity such that the expectation value is that only one detection event will occur): in general, the above is wrong.

The distribution of detections at the detector most certainly does depend on when the detector is switched on and off. Only if the diabolical operator diabolically chooses to switch the source on and off in exactly the same way as the timer would there be no difference.
Thanks, I need to think about what you said...


(I have to point out that the times and distances we're dealing with here are classical; this one light-hour scenario is like using a very dim light on Earth to illuminate a detector on Saturn. When you scale the times and distances down, you will find that there are limits on what the operator can do and how quickly he can turn the source on and off).
I think this is fine by me - the objective is to test a unified behavior of both quantum and classical theories. Hence we must be observing quantum effects on classical time and distance scales.
 
  • #115
Nugatory said:
OK, now that we're done with the long sidebar about how the source behaves (every two hours it's turned on; during the time that it's on, it illuminates the detector with light at an intensity such that the expectation value is that only one detection event will occur): in general, the above is wrong.

The distribution of detections at the detector most certainly does depend on when the detector is switched on and off. Only if the diabolical operator diabolically chooses to switch the source on and off in exactly the same way as the timer would there be no difference.

I thought about this, and so far I am failing to see how the probability distribution of photon emissions would make a difference to the expected probability of photon detections at the detectors if nothing changed with regard to the source/detector locations or anything around them.
Of course, if you meant the form of the distribution of detections with respect to time, then yes - the form will depend on it, but the expected value will converge to the same one regardless.
 
  • #116
universecode said:
I thought about this, and so far I am failing to see how the probability distribution of photon emissions would make a difference to the expected probability of photon detections at the detectors if nothing changed with regard to the source/detector locations or anything around them.
Of course, if you meant the form of the distribution of detections with respect to time, then yes - the form will depend on it, but the expected value will converge to the same one regardless.

The average could be arbitrarily close to 1 photon, but that also means sometimes you get 0, 2, 3, etc. photons instead. There would be a fairly high standard deviation. I don't know if this is relevant to your secret example or not.

On the other hand, it is *possible* to create a photon source that will deliver 1 photon to a specific target detector with a very high degree of certainty. This involves turning off the source once 1 photon is delivered, much as Nugatory said. There are a few other caveats too. You would occasionally get 0 or 2, but that would be far less likely. However, this technique generally would not work for multiple targets.
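
A small sketch of this contrast (my own toy model with made-up numbers): a free-running dim source with a mean of 1 detection per window has a standard deviation of 1 and sizeable chances of delivering 0 or 2+ photons, whereas a hypothetical feedback scheme that switches the source off as soon as the first photon is registered concentrates the count at exactly 1. The sketch ignores the signalling delay needed to switch the source off, which is one of the caveats at these distances.

Code:
import numpy as np

rng = np.random.default_rng(1)
n_runs = 100_000

# Free-running dim source left on for the whole window: Poisson counts, mean 1.
free_running = rng.poisson(lam=1.0, size=n_runs)
print("free-running  mean:", free_running.mean(), " std:", free_running.std())
print("  P(0):", np.mean(free_running == 0), " P(2+):", np.mean(free_running >= 2))

# Hypothetical feedback scheme: keep the source on only until the first detection,
# then switch it off instantly (signalling delay ignored in this toy model).
# With a window much longer than the mean waiting time, nearly every run gives exactly 1.
window = 10.0                                      # window length in units of the mean waiting time
first_arrival = rng.exponential(scale=1.0, size=n_runs)
feedback = (first_arrival <= window).astype(int)   # 1 if a photon arrived in time, else 0
print("feedback      mean:", feedback.mean(), " std:", feedback.std())
print("  P(exactly 1):", np.mean(feedback == 1))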
 
  • #117
universecode said:
I thought about this, and so far I am failing to see how the probability distribution of photon emissions would make a difference to the expected probability of photon detections at the detectors if nothing changed with regard to the source/detector locations or anything around them.
Of course, if you meant the form of the distribution of detections with respect to time, then yes - the form will depend on it, but the expected value will converge to the same one regardless.

I don't understand what you're saying here. Several points of confusion:

First you say "I am failing to see how the probability distribution of photon emissions would make a difference on the expected probability of photon detections at the detectors if nothing changed with regards to the source/detectors locations or anything around them". Then in the very next sentence you explain how that happens: "of course the distribution of detections with respect to time [will change]".

Second, you're still speaking in terms of "photon emission". But the source cannot be made to emit photons in a controlled way; as I explained a few posts back, it's just a very dim light illuminating the detectors when it's on. If you want anything more interesting than that, you have to specify exactly how and when you're turning the source on and off to get that more interesting distribution.

Third, you say "the form [of the probability distribution] will depend on it but the expected value will converge to the same one regardless". That's confusing in several ways:
- The expected value doesn't "converge"; it's something that we calculate directly by integrating the PDF across a particular time interval. When we do a large number of measurements, our results will approach the expected value - that's what makes it "expected".
- Different PDFs can produce the same expected value across a particular time interval, but that doesn't make them the same PDF, and we can distinguish them experimentally by measuring across other time intervals. Five minutes of high intensity followed by five minutes of low intensity has the same expectation value as ten minutes of moderate intensity over a ten-minute period, but will produce very different results if we sample across five minutes instead.
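
A worked version of those numbers (my own illustrative intensities, not from the post): two intensity profiles with the same expected count over the full ten minutes, but different expected counts if only the first five minutes are sampled.

Code:
import numpy as np

# Two hypothetical intensity profiles lambda(t), in expected detections per minute,
# over a ten-minute interval split into 1-minute bins.
minutes = np.arange(10)
profile_a = np.where(minutes < 5, 0.15, 0.05)   # five minutes high, then five minutes low
profile_b = np.full(10, 0.10)                   # moderate intensity throughout

# Expected count = integral of lambda(t) dt, here just a sum over the bins.
print("full 10 minutes :", profile_a.sum(), "vs", profile_b.sum())          # 1.0 vs 1.0
print("first 5 minutes :", profile_a[:5].sum(), "vs", profile_b[:5].sum())  # 0.75 vs 0.5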
 
  • #118
Apologies for the delay; I am back to this again.

Nugatory said:
I don't understand what you're saying here. Several points of confusion:
First you say "I am failing to see how the probability distribution of photon emissions would make a difference on the expected probability of photon detections at the detectors if nothing changed with regards to the source/detectors locations or anything around them". Then in the very next sentence you explain how that happens: "of course the distribution of detections with respect to time [will change]".
What I meant here is that the form of the PDF of arrivals at each detector can be anything and depends on the properties of the source and its PDF of emissions, but the expected value of the PDF of arrivals at each detector will not change as long as the locations of the source and the detectors have not changed.

Second, you're still speaking in terms of "photon emission". But the source cannot be made to emit photons in a controlled way; as I explained a few posts back, it's just a very dim light illuminating the detectors when it's on. If you want anything more interesting than that, you have to specify exactly how and when you're turning the source on and off to get that more interesting distribution.
Sure, the source cannot be controlled precisely, but I think we have already established that it is possible to have a source with some PDF of emissions whose expected number of emissions during the 10-minute interval after being switched on is 1.

Third, you say "the form [of the probability distribution] will depend on it but the expected value will converge to the same one regardless". That's confusing in several ways:
- The expected value doesn't "converge"; it's something that we calculate directly by integrating the PDF across a particular time interval. When we do a large number of measurements, our results will approach the expected value - that's what makes it "expected".
- Different PDFs can produce the same expected value across a particular time interval, but that doesn't make them the same PDF, and we can distinguish them experimentally by measuring across other time intervals. Five minutes of high intensity followed by five minutes of low intensity has the same expectation value as ten minutes of moderate intensity over a ten-minute period, but will produce very different results if we sample across five minutes instead.
Sure, all of that is correct. By "converging" I meant that, during the experiment, the number of arrivals at each detector divided by the number of times the source is switched on (since it is switched on every two hours, that is the total running time in hours divided by 2) will converge to some value, which is the expected value of the PDF of arrivals for a particular detector, i.e., the probability of arrival at that detector.
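
For concreteness, here is a sketch of that estimate (made-up per-detector probabilities, and treating each two-hour cycle as delivering exactly one photon to keep it simple): dividing the arrivals at each detector by the number of source cycles recovers the underlying probabilities.

Code:
import numpy as np

rng = np.random.default_rng(2)

p_true = np.array([0.5, 0.3, 0.2])   # hypothetical arrival probabilities for detectors A, B, C
n_cycles = 200_000                   # number of times the source is switched on

# Each cycle the photon lands on one detector, drawn according to p_true.
hits = rng.choice(len(p_true), size=n_cycles, p=p_true)
p_hat = np.bincount(hits, minlength=len(p_true)) / n_cycles

print("estimated arrival probabilities:", np.round(p_hat, 3))   # ~[0.5, 0.3, 0.2]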

So, to reiterate what we have been talking about, here are some statements:
1. We are in a patch of our universe extremely remote from anything, and all around is just the vacuum of space.

2. We have a source which is switched on every two hours by a precise timer and switched off 10 minutes later. The source has some PDF of emissions within that 10-minute interval, with expected value 1.

3. We have detectors located at various distances from the source, but they are all around 1 to 1.2 light-hours away from the source; let us call the set of such locations L1.

4. We run the experiment for a sufficient period of time to observe the probabilities of arrival at each detector in L1; let's call them P1.

5. We move the detectors to another set of locations L2, still keeping them between 1 and 1.2 light-hours away from the source, and run the experiment again to observe a new set of probabilities P2.

6. We know that there is an expected one photon emission within the 10-minute interval past every two hours, and since the detectors are located 1-1.2 light-hours away, we can assume that each photon "travels" for at least 1 hour but not more than 1.2 hours before it reaches a detector.

7. What I would like to know is, given our setup, what existing QM theories would predict with regard to the probabilities of arrival observed at the detectors if, during the 1 hour after 10 minutes past every two hours (i.e., after the source has been on for 10 minutes), we move the detectors from L1 (where the probabilities of arrival are P1) to L2 (where the probabilities of arrival are P2). Then, 0.2 hours later, we move the detectors back to L1, and they stay there waiting for the next time the source is turned on/off.
The timing of the detector moves is controlled by a second timer which is initially synchronised with the timer controlling the source.

In my understanding, if the outcome is created at the measurement, all theories should predict that the probabilities will be P2, because when each of our photons reaches the detectors they will be at L2 (even though at the time of emission the detectors were at L1), and we know that at L2 the probabilities are P2.

Is this correct, and is this what QM would predict?
I understand relativity may have something to say about this too; can someone explain what effect might be observed here?
 
  • #119
universecode said:
7. What I would like to know is, given our setup, what existing QM theories would predict with regard to the probabilities of arrival observed at the detectors if, during the 1 hour after 10 minutes past every two hours (i.e., after the source has been on for 10 minutes), we move the detectors from L1 (where the probabilities of arrival are P1) to L2 (where the probabilities of arrival are P2). Then, 0.2 hours later, we move the detectors back to L1, and they stay there waiting for the next time the source is turned on/off.
The timing of the detector moves is controlled by a second timer which is initially synchronised with the timer controlling the source.

In my understanding, if the outcome is created at the measurement, all theories should predict that the probabilities will be P2, because when each of our photons reaches the detectors they will be at L2 (even though at the time of emission the detectors were at L1), and we know that at L2 the probabilities are P2.

Is this correct, and is this what QM would predict?

In your purely hypothetical situation in which all other variables are held constant and the only change is movement from L1 to L2 or back, the probability will be P2 because the detectors are at L2 when the detection event (or non-event) occurs. Relativity is not really a factor here regardless of the speed at which you move the detector. It's a "just in time" effect.
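
To spell out the bookkeeping behind this "just in time" picture, here is a toy Monte Carlo sketch of the protocol in post #118 (my own illustration: the P1 and P2 tables are made-up numbers, the move times follow one reading of the protocol, and detection is treated as a classical random draw, so this only demonstrates the rule "use the probabilities for wherever the detectors are at the moment of detection", not a quantum calculation).

Code:
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-detector arrival probabilities for the two layouts.
P1 = np.array([0.5, 0.3, 0.2])   # detectors at layout L1
P2 = np.array([0.2, 0.3, 0.5])   # detectors at layout L2

def detectors_at_L2(t_min):
    """One reading of post #118, with t measured in minutes after the source switches on:
    the detectors move to L2 once the source goes off at t = 10 min and move back to L1
    at t = 82 min, i.e. 0.2 hours after the 1-hour holding period ends."""
    return 10.0 <= t_min < 82.0

n_cycles = 50_000
counts = np.zeros(3)

for _ in range(n_cycles):
    t_emit = rng.uniform(0.0, 10.0)     # emission sometime during the 10-minute on-window
    t_travel = rng.uniform(60.0, 72.0)  # 1.0 to 1.2 light-hours of travel, in minutes
    t_arrive = t_emit + t_travel        # arrivals land between 60 and 82 minutes
    probs = P2 if detectors_at_L2(t_arrive) else P1   # "just in time": layout at arrival decides
    counts[rng.choice(3, p=probs)] += 1

print("observed frequencies:", np.round(counts / n_cycles, 3))   # ~P2, i.e. ~[0.2, 0.3, 0.5]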
 
  • #120
DrChinese said:
In your purely hypothetical situation in which all other variables are held constant and the only change is movement from L1 to L2 or back, the probability will be P2 because the detectors are at L2 when the detection event (or non-event) occurs. Relativity is not really a factor here regardless of the speed at which you move the detector. It's a "just in time" effect.

Thanks. Well, if what you are saying is correct, i.e., all known theories predict P2, this is an example where my theory predicts a different result, namely P1, without contradicting anything already shown to be correct, as far as I know thus far.
 
  • #121
universecode said:
Thanks. Well, if what you are saying is correct, i.e., all known theories predict P2, this is an example where my theory predicts a different result, namely P1, without contradicting anything already shown to be correct, as far as I know thus far.

By "your theory", do you mean the idea that measurement results are determined at the source, as you suggested further up in this thread (for example, case a of post #3)? Such theories do indeed make different predictions than QM, and they have been already been refuted by other experiments.
 
  • #122
Nugatory said:
By "your theory", do you mean the idea that measurement results are determined at the source, as you suggested further up in this thread (for example, case a of post #3)? Such theories do indeed make different predictions than QM, and they have been already been refuted by other experiments.

Thanks, yes, the outcome is determined at the source, but I have to be clear about what I mean by the "source".
Any interaction (with any particle) creates a new source at every such interaction; hence all the experiments I've seen so far do not refute my idea. Would you be able to direct me to an experiment which takes this into account?
 
  • #123
universecode said:
Thanks, yes, the outcome is determined at the source, but I have to be clear about what I mean by the "source".
Any interaction (with any particle) creates a new source at every such interaction; hence all the experiments I've seen so far do not refute my idea. Would you be able to direct me to an experiment which takes this into account?

"Any interaction (with any particle) creates a new source at every such interaction" is a bit vague, but to the extent that it means anything, it's a basic feature of quantum mechanics, which deals only with interactions and uses the word "source" to identify the classical portion of a system involved in some interactions. To be more precise, you would have to use the language of state preparation and measurement.

The experiments that test Bell inequalities all take this into account.
 
  • #124
universecode said:
Thanks. Well, if what you are saying is correct, i.e., all known theories predict P2, this is an example where my theory predicts a different result, namely P1, without contradicting anything already shown to be correct, as far as I know thus far.

First, the predictions of QM have already been verified in regard to changes to the setup made at the last fraction of a second (see Weihs et al 1998, for example). I believe this has been pointed out a number of times. Just because you don't accept it really doesn't mean much. Second, you haven't made any predictions, although you have waved your hands a substantial amount. Third, even if you had, you would need a theory to go with them for other physicists to be interested. Just making counter-predictions to established theory won't go very far.

Of course, you are free to conduct any experiment you like using your own time and resources.

And lastly, further discussion of your "theory" would violate PF forum rules on personal speculation. This is a moderated science forum, and you will need established references to continue. If you have any further questions on quantum mechanics, please feel free to start a thread. If you continue to argue on behalf of ideas with no connection to established science, you can expect to be reported quickly. This has gone on long enough; there are other readers here to consider besides yourself.
 
  • #125
DrChinese said:
First, the predictions of QM have already been verified in regard to changes to the setup made at the last fraction of a second (see Weihs et al 1998, for example). I believe this has been pointed out a number of times. Just because you don't accept it really doesn't mean much.
I have looked at Weihs et al 1998 - again, this experiment confirms what we already know about hidden variables. If local hidden variables are given to a particle at the "classical source" and stay with it unchanged, this cannot explain what we observe; how many times do we need to test it?

With regard to what I am proposing, this experiment makes exactly the same mistakes made by all the others - there are too many particle interactions between what you call the "classical source" and the detectors, and what I am saying is that the classical source is irrelevant because at each quantum interaction the hidden variables are changed.
For example, as a photon travels inside the fibre, as happens in most such experiments, its hidden state is changed every time it bounces off the fibre's walls - isn't this obvious?

Second, you haven't made any predictions, although you have waved your hands a substantial amount. Third, even if you had, you would need a theory to go with them for other physicists to be interested. Just making counter-predictions to established theory won't go very far.
I gave just one example of a counter-prediction; there are others. Obviously, my theory is in an embryonic state, and I am researching everything I can, partly by asking what other people think. It is the only way to do research - I really don't understand why you are having problems with this, given the desire of this forum to be circulated among universities. I am discussing something that is at the edge of understanding, and no one knows the answers - does that mean it cannot be discussed?

And lastly, further discussion of your "theory" would violate PF forum rules on personal speculation. This is a moderated science forum, and you will need established references to continue. If you have any further questions on quantum mechanics, please feel free to start a thread. If you continue to argue on behalf of ideas with no connection to established science, you can expect to be reported quickly. This has gone on long enough; there are other readers here to consider besides yourself.
Everything I am saying is based exactly on what Feynman said in all of his books, and as far as I know he has a Nobel prize for it; isn't that a good enough reference?
All the references provided to me so far are not deep enough to address the issue I am discussing, and I have explained why.
 
  • #126
Nugatory said:
"Any interaction (with any particle) creates a new source at every such interaction" is a bit vague, but to the extent that it means anything, it's a basic feature of quantum mechanics, which deals only with interactions and uses the word "source" to identify the classical portion of a system involved in some interactions. To be more precise, you would have to use the language of state preparation and measurement.
I will be working on more precise explanations; of course, that will take years.

Meanwhile, this statement:
"the word "source" [is used] to identify the classical portion of a system involved in some interactions"
is, IMHO, THE problem with experiments such as the ones testing Bell inequalities.

I do not believe in the existence of what we call "classical physics" - the world is quantum at its core, so all the "classical" phenomena we observe are simply emergent properties of a large number of quantum interactions. Hence, doing experiments where we deliberately separate the system into classical and quantum portions is destined to fail to find anything new - we will always observe what we already know about the supposed weirdness of our world.
Once you start accepting that the world is quantum with probability at its core, and that whatever we call classical is just an aggregate property, nothing is weird anymore and it all makes sense.
 
  • #127
Closed pending moderation.
 
