Understanding Measurements / Collapse

In summary: past measurements alter the wavefunction, and how it is altered depends on whether or not a detection was made.
  • #36
1977ub said:
OK, here is what I had deleted:

Two observers are watching the experimental apparatus. The electrons are emitted, and there is a circle indicated at the close region and another at the far region.

The wall is coated entirely with a substance which will register visibly when an electron arrives. We don't have "detectors" so much as regions of the wall which are painted with circles simply to define these regions.

Each time a single slow electron is emitted, it can land in one of the 2 circles or it can land elsewhere, but it cannot land in both circles.

Observer A performs a single calculation based only on the initial setup, presuming that there is only one "collapse". He asks: "What % of electrons will end up within the far circle?" He ends up with a single prediction of % hits in that circle, finally computing the outcome % based on *all* electrons released.

Observer B is watching closely enough to decide when each electron should have hit within the closer circle, and in those cases where it seems not to have done so, he recalculates the probability of hitting the far circle based upon that initial null measurement. No results are ever "thrown out". Any time the close circle is hit, this will be treated as a "miss" of the far circle, but these will be averaged in at the end.

At the end of a sufficiently high number of electrons released to determine a %, will both observers end up with the same prediction for the far circle?
If not, which one will match the % from the runs of the experiment?

If Observer A includes the disturbance caused by Observer B's inner circle measurement apparatus in his calculations, he will get the same result.

Observer A can do his calculation in 2 ways, both getting the same answer:
(1) Calculate exactly the same as Observer B, collapse the wave function, and throw away the intermediate result, then get the final count at the outer circle.
(2) Calculate with Observer B observing, but with no collapse of the wave function, and just calculate the final result at the outer circle.

The averaging process you talk about is the same as throwing away the intermediate results.
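
A toy Monte Carlo makes the equivalence concrete. This is a minimal sketch, not real electron dynamics: I simply assign invented probabilities P_NEAR and P_FAR to the two circles, which is enough to show that Observer B's conditional percentage and Observer A's overall percentage are both "correct", and that averaging B's tally back over all trials recovers A's number.

```python
import random

# Toy model: each electron independently lands in the near circle with
# probability P_NEAR, in the far circle with probability P_FAR, or elsewhere
# otherwise. The numbers are invented; a real calculation would get them
# from the wavefunction.
P_NEAR, P_FAR = 0.3, 0.2
N = 100_000

far_hits = 0          # Observer A: far-circle hits over ALL trials
far_hits_no_near = 0  # Observer B: far-circle hits among near-circle misses
near_misses = 0

for _ in range(N):
    r = random.random()
    hit_near = r < P_NEAR
    hit_far = P_NEAR <= r < P_NEAR + P_FAR
    far_hits += hit_far
    if not hit_near:
        near_misses += 1
        far_hits_no_near += hit_far

print("Observer A, all trials:", far_hits / N)                     # ~ 0.20
print("Observer B, conditioned:", far_hits_no_near / near_misses)  # ~ 0.20/0.70
print("B averaged over all trials:", far_hits_no_near / N)         # ~ 0.20 again
```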
 
  • #37
There is an emitter of slow electrons, and there is a flat wall with two circular regions painted on it - one close to the emitter, and one farther away. There is no inner circle "apparatus" - unless, by using a photographic plate along the whole wall, we have created innumerable "detectors" - both inside and outside of the 2 circles.
 
  • #38
We understand the apparatus. What we're saying is, let's say f1(r) is the predicted density of hits as a function of radius in the inner circle, and f2(r) is the same for the second. These are straightforward to compute prior to any trials of the experiment. Now let's say you wanted to test how f2 changes after you wait long enough to know there has not been a hit in the inner circle. The answer is you set f1 to zero and simply scale up f2 so that it remains a normalized probability. But our point was, if you want to test that you got this calculation right, you'll have to do many, many trials of the experiment-- but any trial which gets a hit in the inner circle will not be relevant to the probability you are testing, so it will have to be thrown out of the dataset.
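
For concreteness, the "set f1 to zero and scale up f2" step is just conditional-probability renormalization. A minimal sketch, with the radial densities collapsed down to invented total weights per region:

```python
# Toy weights: total hit probability in each region of the wall (invented).
weights = {"inner": 0.30, "outer": 0.20, "elsewhere": 0.50}

def condition_on_null_inner(w):
    """Null result in the inner circle: zero out its weight and rescale
    the rest so the distribution stays normalized."""
    w = dict(w)
    w["inner"] = 0.0
    total = sum(w.values())  # = 1 - the original inner weight
    return {k: v / total for k, v in w.items()}

print(condition_on_null_inner(weights))
# {'inner': 0.0, 'outer': 0.2857..., 'elsewhere': 0.7142...}
```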

Alternatively, if you don't throw out any data, you can simply test the original f1 and f2, which is testing something different because here f1 is not zero and f2 is not normalized by itself. That's what I meant by two different experiments being tested, and different data being used, because in only one is part of the dataset being thrown out. This is what is typical-- you can get two different probabilities, and conclude both are working fine, if you are testing something different that requires a different sorting of the dataset. The role of sorting in probability calculations is often overlooked by language that suggests that probabilities "exist" as things in and of themselves, which to me, connects to a similar problem with interpreting wavefunction amplitudes as "things." But that interpretation is possible, and some prefer it.
 
  • #39
There's no inner circle vs. outer circle. Two circles are painted on the wall, both, let's say, one foot in diameter: one covers the part of the wall closest to the emitter; the second is the same size but some distance away on the wall.
 
  • #40
No mention of 'time-symmetric' interpretations of QM such as the Transactional Interpretation? TI makes conceptualising this kind of two-hemisphere experiment simpler and less confusing. I'm only just learning about this myself, so I'm no expert, but ...

The absorption of the electron, wherever it ends up, causes the absorber to emit a backwards-in-time 'confirmation wave' which, together with the forwards-in-time 'offer wave' of the emitter, determines the wave function of the emitted electron.

In this view, a null result of the smaller-hemisphere measurement team does not change the wave function. It was already there, fully-formed - having been instantaneously created from the interference of the emitter's 'retarded' offer wave and the absorber's 'advanced' confirmation wave. And non-locality isn't a problem when you can travel back in time! The offer/confirmation occurs instantaneously over any distance.

As I understand it (which I don't, mathematically, at least not yet), time symmetry is there in the equations of Maxwell, Einstein and Schroedinger. The backwards-in-time stuff has just been ignored by physicists. My instinct tells me there's something to this. If you can remove some of the quantum-weirdness of Copenhagen, that's got to be a good thing, and for it to concur with the equations is even better. I just wish I could learn the mathematical side of QM more easily, but it's a slow process for me.
 
  • #41
1977ub said:
There's no inner circle vs. outer circle. Two circles are painted on the wall, both, let's say, one foot in diameter: one covers the part of the wall closest to the emitter; the second is the same size but some distance away on the wall.
That makes no difference, the geometry of the regions does not change the argument I just gave, it only modifies the form of the f(r) functions. The key point in all of this is, what f(r) you think you are testing depends on what data you are throwing out. You have to throw data out because you have lots of trials, and some will not be relevant to what you are testing. For example, if you are testing changes in the f(r) that come from the absence of detections somewhere else, you have to throw out the cases where there were detections somewhere else! So this is the reason you have to decide what you are testing before you can calculate the probabilities you expect.
 
  • #42
Jehannum said:
The absorption of the electron, wherever it ends up, causes the absorber to emit a backwards-in-time 'confirmation wave' which, together with the forwards-in-time 'offer wave' of the emitter, determines the wave function of the emitted electron.
Yes, you can always retain realism if you jettison locality. To me, this is a dubious choice, because the only reason we hold to either realism or locality is that they both gibe with our classical experiences. So if you have to let go of one, I see no reason to pick which. I'd rather just dump the whole paradigm that classical experience should be regarded as prejudicial toward our interpretations of non-classical phenomena! But I certainly agree that interpretations are subjective-- the question being asked is framed in an interpretation independent way, and all the interpretations must arrive at the same answer to that question.
Jehannum said:
As I understand it (which I don't, mathematically, at least not yet), time symmetry is there in the equations of Maxwell, Einstein and Schroedinger. The backwards-in-time stuff has just been ignored by physicists. My instinct tells me there's something to this. If you can remove some of the quantum-weirdness of Copenhagen, that's got to be a good thing, and for it to concur with the equations is even better. I just wish I could learn the mathematical side of QM more easily, but it's a slow process for me.
But remember, time symmetry is in Newton's laws too! Yet we have the second law of thermodynamics, and we have a concept of a difference between a cause and an effect. I grant you that it is not at all obvious that causes lead to effects, rather than effects produce a requirement to have a cause, but why retain realism at all if we are going to allow that our daily experiences are not reliable guides to "what is really happening"? To me, once we've rejected the authority of our intuition, the more obvious next step is to be skeptical of the entire notion of "what is really happening," and just admit that we are scientists trying to form successful expectations about observed phenomena, and any untestable process that we imagine is regulating those phenomena, but cannot show is regulating those phenomena, is essentially pure magic. Useful magic, subjectively preferred magic, but magic all the same.
 
  • #43
Ken G said:
That makes no difference, the geometry of the regions does not change the argument I just gave, it only modifies the form of the f(r) functions. The key point in all of this is, what f(r) you think you are testing depends on what data you are throwing out. You have to throw data out because you have lots of trials, and some will not be relevant to what you are testing. For example, if you are testing changes in the f(r) that come from the absence of detections somewhere else, you have to throw out the cases where there were detections somewhere else! So this is the reason you have to decide what you are testing before you can calculate the probabilities you expect.

I promise nobody will throw anything out.

person A intends to treat a single wave at emission as collapsing only once, when it hits some near or far point of the wall. The wall is entirely coated with photographic material which will register a hit. They count up to a hundred emissions, and then count the hits in the far circle. This way, they get a single % of hits in the far circle.

person B has realized that, since different parts of the wall are at different distances from the emitter, partial information can be gotten midway through the flight of a particle. They count up a hundred emissions. They count up however many hits in the far circle and get a % just like person A. However, halfway through the expected flight time of each particle - in cases where the particle has hit some part of the wall, that obviously counts as a miss, but for the other cases, with this new information, a new wave calculation is made regarding the hit in the far circle. Unlike the case where the wall is circular, and there is only one instant where any information is revealed, new information arrives continuously until the far circle is hit (when it actually is). Does this change the computation which is done - or should be done? Doesn't a calculation change whenever new information becomes available?
 
  • #44
1977ub said:
I promise nobody will throw anything out.
Yes they do! I don't mean they pretend it didn't happen, I mean they simply sort their data such that part of the dataset simply doesn't appear when they test the probability they have calculated.
1977ub said:
person B has realized that, since different parts of the wall are at different distances from the emitter, partial information can be gotten midway through the flight of a particle. They count up a hundred emissions. They count up however many hits in the far circle and get a % just like person A. However, halfway through the expected flight time of each particle - in cases where the particle has hit some part of the wall, that obviously counts as a miss, but for the other cases, with this new information, a new wave calculation is made regarding the hit in the far circle.
Bingo, you have just stated where the sorting of the data is occurring. That's what I'm talking about-- the whole reason person B is testing a different percentage is they are using different data. You have just said so.

There isn't anything quantum mechanical going on here, the exact same principle applies at a table where people are playing cards, and some players are privy to information that others aren't. They calculate different probabilities of winning, yet they are just as correct as the others who get different answers, because their probabilities apply to a different set of hands-- the set that satisfies their own particular information constraints. They are thus "throwing out" of their consideration a different set of hypothetical hands, and hence achieve a different, yet "correct", probability.
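
The card analogy is easy to check numerically. A rough sketch under invented rules: one player knows nothing, another has peeked at three cards, and both compute a "correct" probability that the next card dealt is an ace, because each probability applies to a different sorting of the hypothetical hands:

```python
import random

DECK = ["ace"] * 4 + ["other"] * 48

n_all = aces_all = 0    # uninformed player: every hand counts
n_seen = aces_seen = 0  # informed player: only hands where the peek shows no ace
for _ in range(200_000):
    deck = DECK[:]
    random.shuffle(deck)
    peeked, next_card = deck[:3], deck[3]
    n_all += 1
    aces_all += (next_card == "ace")
    if "ace" not in peeked:  # the informed player's sorting of the data
        n_seen += 1
        aces_seen += (next_card == "ace")

print("uninformed:", aces_all / n_all)    # ~ 4/52 = 0.0769
print("informed:  ", aces_seen / n_seen)  # ~ 4/49 = 0.0816
```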

However, there are situations in quantum mechanics where null information has a very non-classical effect, such as when a pulse of light finds no particle on the right side of a box. That changes the state of the particle in a way that cannot be treated with classical information, the way this scenario can.
 
  • #45
1977ub said:
I promise nobody will throw anything out.

person A intends to treat a single wave at emission as collapsing only once, when it hits some near or far point of the wall. The wall is entirely coated with photographic material which will register a hit. They count up to a hundred emissions, and then count the hits in the far circle. This way, they get a single % of hits in the far circle.

person B has realized that, since different parts of the wall are at different distances from the emitter, partial information can be gotten midway through the flight of a particle. They count up a hundred emissions. They count up however many hits in the far circle and get a % just like person A. However, halfway through the expected flight time of each particle - in cases where the particle has hit some part of the wall, that obviously counts as a miss, but for the other cases, with this new information, a new wave calculation is made regarding the hit in the far circle. Unlike the case where the wall is circular, and there is only one instant where any information is revealed, new information arrives continuously until the far circle is hit (when it actually is). Does this change the computation which is done - or should be done? Doesn't a calculation change whenever new information becomes available?

The calculation they perform mid-way-through is different because in those cases they have been given good reason to "throw out" the probability it is in the closer circle. My basic question - does person A's calculation of a single "collapse" work out to match the results, or does the mere fact that information reveals itself bit by bit over time mean that a more complex calculation must be performed in order to get the right results?
 
  • #46
I never said they didn't have good reason to throw out some of the data, it's all about sorting.

To answer your question, there are many versions of "right results," depending on which experiment is being conducted. Persons A and B are both going to get "right results," and those results are going to be different. Again, that's commonplace, that in itself has nothing to do with collapse, it's purely how information and probability work. Probabilities always require sorting of data to fit to the information being used, and the test being done.
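
One way to see why both get "right results": chain person B's step-by-step conditional updates together with the law of total probability and you recover person A's single number. A toy sketch with invented per-step hazards:

```python
# Invented per-step hit probabilities GIVEN no hit so far ("hazards"):
h_far  = [0.10, 0.15, 0.20]  # far circle, in each of three time slices
h_near = [0.05, 0.05, 0.05]  # near circle, likewise

# Person B recalculates at every slice; person A wants one overall number.
# Summing B's conditionals weighted by the survival probability reproduces
# A's single-collapse figure (law of total probability):
survive, p_far_total = 1.0, 0.0
for hf, hn in zip(h_far, h_near):
    p_far_total += survive * hf  # hit the far circle in this slice
    survive *= 1 - hf - hn       # null result: electron still in flight

print("overall P(far circle):", p_far_total)
# 0.10 + 0.85*0.15 + 0.68*0.20 = 0.3635
```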
 
  • #47
Person A & B are doing the same experiment. The same emitter, the same wall, the same tallies. The same net hit % at the end of it all. One of them, on his notepad, imagines there to be one wave collapse; the other person imagines there to be many, or a continuous wave collapse over time. Which one's calculations match the experimental outcome?
 
  • #48
1977ub said:
Person A & B are doing the same experiment. The same emitter, the same wall, the same tallies.
No, they are certainly not doing the same tallies, as you just explained in your last post. Are they getting the same percentages of hits in the outer circle? No, they are not. That's what is meant by different tallies, and it's also why they are getting different probabilities, and they think the probabilities are right. I'm not sure what's hard to see about this-- you must agree they are getting different probabilities, right? And they are getting agreement with their probabilities, right? So that logically requires they must get different tallies, which they do.
1977ub said:
The same net hit % at the end of it all.
No, it's not the same percentage. It's the same hits in the outer circle, but not the same percentages. How could they get the same percentages when you just agreed they are calculating different probabilities, because person B is throwing out all the trials that get a hit in the first circle, and person A is including all those trials?
1977ub said:
One of them, on his notepad, imagines there to be one wave collapse; the other person imagines there to be many, or a continuous wave collapse over time. Which one's calculations match the experimental outcome?
Both, that's the whole point. I must be missing something in your question, because this is trivial. Imagine the two circles each get half the hits for the "single wave collapse," i.e., the full experiment (and no hits occur outside both circles, for simplicity). Person A uses a "single wave" to conclude that each trial has a 50% chance of hitting the first circle, and a 50% chance of hitting the second. And he/she concludes his/her probabilities are perfectly correct. Meanwhile, person B concludes that every time there is not a hit in the first circle, there will be a hit in the second circle 100% of the time-- that's your "continuous collapse" percentage, it's 100%. And of course, that works out just as well too. So you only have to realize why one person expects 50% in the second circle, and it works, and the other expects 100% in the second circle, and that works too. It's simply sorting the data differently, there's nothing quantum mechanical there.
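
In numbers, person B's 100% is just person A's 50% renormalized by the null result, the same rescaling as before. A one-line sketch:

```python
p_first, p_second = 0.5, 0.5  # person A's "single collapse" probabilities

# Person B conditions on "no hit in the first circle":
print(p_second / (1 - p_first))  # 1.0 -- the "continuous collapse" 100%
```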
 
  • #49
atyy said:
The weird thing about a measurement is that *when* it occurs is subjective. A measurement occurs when one gets a definite result or definite information.
The measurements whose results are null (i.e. when you look and you see the particle is not there) are measurements, and they do collapse the wave function. Some examples of quantum calculations with null results are given in https://arxiv.org/abs/1406.5535 (p5: "Observing nothing is also an observation").

The idea that measurement is subjective is not a necessary inference or conclusion, but arises from the inability to define measurement in the standard theory. This is remedied in the transactional picture, in which the 'measurement transition' is well-defined, as discussed here: https://arxiv.org/abs/1709.09367
 
  • #50
1977ub said:
I'm trying to understand the impact of past measurements, and when measurements occur.

As I understand it, in the simplest case, you've got a particle emitter in the center of a circle, and a measuring plate around the circle. Here in the ideal case the particle is emitted and has equal probability of showing up anywhere on the circle when the wave "collapses". This random outcome can be tested.

But I wondered at some point, when the particle is emitted at a flat wall, and we know what time, we can calculate the time that the wave would reach the nearest point to the emitter. If it is not measured there, then presumably the wave is still happening, and then with each progressive moment, we get more and more information about where the arrival point is NOT, and therefore the wave gets redefined, ad infinitum until it finally hits some point on the wall. Is that right?

When a photon leaves a distant star and arrives at a photographic plate on earth, has a "measurement" been performed at every moment where it *might* have arrived somewhere else than earth? If we had an emitter in deep space sending photons in random directions, could we tell anything about the layout of the universe in various directions once we got back the times photons were emitted and correlated that with the photons we got on earth?
Measurement occurs whenever there is absorber response--but that is missing in the standard theory. You need the direct-action theory (transactional picture) to be able to define measurement in physical terms. See: https://arxiv.org/abs/1709.09367
 
