Exploring Heisenberg's Uncertainty Principle: Intuition & Explanations

Heisenberg's uncertainty principle is a limit on the accuracy with which we can measure a particle's position and momentum, and in my course I was shown the derivation. However, I've been wondering if there is any reason to intuitively expect difficulties when trying to simultaneously know both quantities. What I mean is, is there anything about the nature of "position" and "momentum" that hints that we should not be able to know both simultaneously? One explanation I heard was that if you, say, bounced a photon off an atom to measure its position, then the recoil would affect its momentum, thus giving rise to the uncertainty.
  • #211


It is uncontested that you can measure both at the same time if you don't care about accuracy.
 
  • #212


atyy said:
It is uncontested that you can measure both at the same time if you don't care about accuracy.
Uncontested by you perhaps. :smile:

I have added some stuff to my previous post that you might be interested in...and now I have to get some sleep.
 
  • #213


Fredrik said:
Uncontested by you perhaps. :smile:

Surely uncontested by everyone - what else could the tracks in a cloud chamber be but simultaneous position and momentum measurements? The only question is whether simultaneous accurate measurements of both are possible.

Fredrik said:
I haven't looked at the Raymer article yet, but I also feel that if any kind of limit is supposed to be a part of the definition of this momentum measurement, it's L→0, not L→∞. The reason is that what we're measuring is more like an average momentum than the "momentum right now". The position measurement is performed on a particle with a wavefunction that has had some time to spread out. To claim that we have really performed a simultaneous measurement, we should measure the momentum when the particle is in the same state as when we measure the position, but the momentum measurement involves two different times, and the wavefunction is spreading out over time. So it seems that we are closer to a "true" simultaneous measurement when L is small.

Yes, I think that's where Ballentine is wrong - it must be accurate measurements of the same state. However, if I read Raymer correctly, an accurate momentum measurement of the state at small L is done by taking L large, whereas an accurate position measurement of that state is done at small L. So Ballentine's error is that his accurate position and momentum measurements both performed at large L are accurate position and momentum measurements of different states. So he doesn't have accurate conjugate position and momentum.
 
Last edited:
  • #214


atyy said:
It is uncontested that you can measure both at the same time if you don't care about accuracy.
I'll have to agree with this. Confidence levels and confidence intervals are what the SD is all about anyway.

Maybe Fredrik insists that even a non-proper measurement (like we are talking about here in the Ballentine case) or "incomplete" measurement should be incorporated in a measurement theory. And here I agree; but this IMO requires a reconstruction of measurement theory, as there are then more subtle points around.

A measurement without qualifying confidence measures is IMO not complete. And to make it complete in the conventional picture you need a complete ensemble or an infinity of reruns.

If we are to get away from this, how can we understand and construct intrinsic confidence measures without referring to unreal ensembles?

Technically, to falsify QM, one or two detector clicks is not enough. You need an infinite number of them, to the point where you effectively simulate the full ensemble. This is also acknowledged by Popper. Falsification is only a statistical process as well. Any single data point can be explained away as noise.

/Fredrik
 
  • #215


Fredrik said:
It also seems to me that this is precisely the type of "inference" that measuring devices do when they test the accuracy of the theory's predictions, so how can anyone not call it a measurement?
But theory does not predict the outcomes of single measurements (single data points) anyway. It only predicts the ensemble properties.

If we stick to the ensemble interpretation, one could even argue that it's completely meaningless to even bother to speak about single measurements, because our theory doesn't make any statement about them; it only makes statements about the statistics.

However I think that makes no sense because it leaves out many real life situations. But this is to me a conceptual problem of the ensemble interpretation. It shows that it's absurd as a basis for decision making in interactions, and to me a theory is an interaction tool more than a description. I want the theory to guide me through the future, not describe the past that is already history.

Like we discussed briefly in another thread, in a cosmological perspective, we actually do replace the ensemble with "counting evidence" from several interactions with the same system. I'm suggesting that a similar perspective may be used in QM. Here the information encoded in the "ensemble" can instead be thought of as physically encoded in the observing system's microstructure. In that way, the "ensemble" is indirectly defined by the state of the observer (which is a function of its history). This means you always have an "effective ensemble" whenever you have an observer. Then "single measurements" would simply slowly evolve the effective ensemble.

/Fredrik
 
  • #216


Okay..

Both CM and QM say "position" and "momenta" are different. They're used differently mathematically, and operationally defined differently. The experimental data for a "position" (a point) and a "momentum" (two or more points, with or without a "path") are different.

So why is it that we read HUP as "weird" because of something Galileo once said?

Rather: the HUP should correct Galileo's misconception and note that the "arbitrary" degree of accuracy only extends to the regime (e.g., human-scale phenomena) where the large numbers of atoms in the system mask the effects of underlying quantum events at Planck scales.
 
Last edited:
  • #217


I also suspect that Raymer's momentum "measurement" isn't a true momentum measurement. If it were, we would expect the state to collapse into a momentum eigenstate before the "measurement", since the "measurement" at infinite time is supposed to reflect momentum at finite t. Raymer says "This mapping of the momentum distribution into position for large L is analogous to far-field diffraction in optics". My guess is that it isn't a true momentum measurement, because it uses some knowledge of the state. At one extreme, if one knows the state of particle, one can get both position and momentum distributions with no measurement and no collapse at all.
 
Last edited:
  • #218


atyy said:
I also suspect that Raymer's momentum "measurement" isn't a true momentum measurement. If it were, we would expect the state to collapse into a momentum eigenstate before the "measurement", since the "measurement" at infinite time is supposed to reflect momentum at finite t. Raymer says "This mapping of the momentum distribution into position for large L is analogous to far-field diffraction in optics". My guess is that it isn't a true momentum measurement, because it uses some knowledge of the state. At one extreme, if one knows the state of particle, one can get both position and momentum distributions with no measurement and no collapse at all.
Can you describe his method or quote the relevant part of the article?

What do you mean by a "true momentum measurement"? Is it that it works on an unknown state? Is it that it involves collapse to a momentum eigenstate? (I would only require that it gives us results with dimensions of momentum, and that those results will be distributed approximately as described by the squared absolute value of the Fourier transform of the wavefunction).
 
  • #219


atyy said:
...what else could the tracks in a cloud chamber be but simultaneous position and momentum measurements?
I would describe it as a series of approximate position measurements that together can be considered a single momentum measurement. Do we want to call this a simultaneous measurement of both? Maybe. It seems to be a matter of semantics, and taste. The argument in favor of calling it a simultaneous measurement is of course that by the end of it, we have obtained a value of each of the position components and each of the momentum components. The argument against it would be that we only need one of the liquid drops to obtain the values of the position components, but we need several to obtain the values of the momentum components.

I just skimmed through parts of section 4.4 (titled "Particle detectors") in "Nuclear and particle physics", by Brian Martin. I was hoping that it would tell me what particle physicists consider "momentum measurements", and I believe it did. The bottom line is that it always involves a series of approximate position measurements. The momentum is then inferred from the shape of the particle track. The difference between the older types of detectors (cloud chambers, bubble chambers) and the more recent (gas detectors, wire detectors, semiconductor detectors) is that the new ones don't bother to make the track visible. They just use electrodes to collect the electrically charged products of the interactions, and (I presume) calculate the shape of the track from the amplitude and timing of the electrical signals.

So for the purposes of this discussion, it seems that we can take a bubble chamber (or any other of these devices) as a definition of what is meant by a "momentum measuring device". But I don't know if we really should say that this is a way to measure the momentum of a particle in a given state (unknown and completely arbitrary). What I'm thinking is that the interactions that produced the first bubble, or interactions before that, must have put the particle in a state such that the wavefunction and its Fourier transform have approximately the same width in units such that [itex]\hbar=1[/itex]. So maybe we should just say that this is the definition of how to measure momentum when the particle is known to be in that kind of state.

However, since all momentum measurements seem to involve at least two approximate position measurements (or at least a preparation of a state with sharply defined position, followed by a position measurement), I don't think there can exist a meaningful definition of what it would mean to measure the momentum of a particle in an arbitrary state. This is probably as good as it gets. Momentum measurements of the type suggested by von Neumann's projection axiom don't exist.
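As a concrete sketch of "inferring momentum from the shape of the track" (my own illustration, not from Martin's book): for a unit-charge particle bending in a magnetic field B, detectors fit the radius of curvature r of the track and use the standard rule of thumb p[GeV/c] ≈ 0.3 B[T] r[m]. A minimal version using just three approximate position measurements:

```python
import math

def track_momentum(p1, p2, p3, B=1.0):
    """Infer |p| (in GeV/c) of a unit-charge particle from three points
    (in metres) on its track in a magnetic field B (in tesla).
    The track radius r is the circumradius of the three points,
    and p ~ 0.3 * B * r for unit charge."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # triangle area via the shoelace formula
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
    r = a * b * c / (4 * area)  # circumradius of the three points
    return 0.3 * B * r

# Three points on an arc of radius 2 m in a 1 T field:
print(track_momentum((2, 0), (0, 2), (-2, 0)))  # ~0.6 GeV/c
```

Note how this fits the point above: the "momentum measurement" is nothing but several approximate position measurements plus an inference.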
 
Last edited:
  • #220


Note that if we replace the wall of detectors in Ballentine's thought experiment with something like a bubble chamber, the wavefunction will become somewhat localized again as soon as the particle enters the chamber, no later than at the time of the interaction that creates the first bubble. Immediately after this, I guesstimate that the width of the wavefunction and the width of its Fourier transform will be of the same order of magnitude for the rest of the passage through the chamber. The other bubble events are approximate position measurements that don't localize the particle any more than it already is. The momentum calculated from the shape of the track will tell us the approximate momentum of the state that the particle was put into after it entered the chamber. This is a different state than the one we wanted to perform the momentum measurement on.

Because of this, I'm starting to think that the method suggested by Ballentine is the only thing that can be called a py measurement of a particle that hasn't interacted with its environment since it passed through the slit. The funny thing is that this doesn't make it obvious that the distribution of results will agree with the squared absolute value of the Fourier transform of the wavefunction. It's possible that the agreement is poor for large L (the distance from the slit to the wall of detectors), and in that case, his estimate of the margin of error on py is questionable. And his argument that we can measure y and py accurately at the same time may fall with it.
 
Last edited:
  • #221


Here's a free article that describes the same thing as Raymer: http://tf.nist.gov/general/pdf/1283.pdf .

It talks about the position and momentum "shadows" of an initial state. The shadow of position occurs at a different time from the shadow of momentum. So Ballentine is wrong because he is not talking about canonically conjugate position and momentum.
 
  • #222


Fredrik said:
However, since all momentum measurements seem to involve at least two approximate position measurements (or at least a preparation of a state with sharply defined position, followed by a position measurement), I don't think there can exist a meaningful definition of what it would mean to measure the momentum of a particle in an arbitrary state. This is probably as good as it gets. Momentum measurements of the type suggested by von Neumann's projection axiom don't exist.

I agree .. that is exactly what I was saying (or at least trying to say), back on the first page of this thread :wink:.

The interesting question raised by that is, why not? Is it due to some fundamental limitation (i.e. your last statement should be strengthened to "... axiom can't exist.")? Or is it just that we haven't figured out how to build one yet?

The thing I find most troubling and bizarre is that not even a tiny hint of what we have been discussing here appears in any QM text that I have ever seen. They just state the measurement axiom, explain how it works for eigenstates and superpositions of some unspecified operator O, and then move on. But what is the point of having the axiom in the first place if the only thing we can actually measure directly is position, and all other quantities must be inferred? It seems like all of this should have been hashed out by "the heavyweights" back during the development of QM, but it seems to have been overlooked. Can that really be true?

[EDIT] The more I think about this .. the more wrong it seems. The whole discussion of eigenstates as "the only possible results" of a "measurement" clearly has some kernel of truth to it, but it seems like a drastic over-simplification. On the other hand, it seems like any oversimplification must not matter very much, given the long, strong history of agreement between QM theory and experiment. I am getting more confused by the minute here. :confused:
 
Last edited:
  • #223


atyy said:
Here's a free article that describes the same thing as Raymer: http://tf.nist.gov/general/pdf/1283.pdf .

It talks about the position and momentum "shadows" of an initial state. The shadow of position occurs at a different time from the shadow of momentum. So Ballentine is wrong because he is not talking about canonically conjugate position and momentum.

Thanks! That looks like a very interesting article, but I am not sure how it gets at the measurement problem that we are discussing. Perhaps it will be more clear after I have had more time to read it carefully.
 
  • #224


SpectraCat said:
[EDIT] The more I think about this .. the more wrong it seems. The whole discussion of eigenstates as "the only possible results" of a "measurement" clearly has some kernel of truth to it, but it seems like a drastic over-simplification. On the other hand, it seems like any oversimplification must not matter very much, given the long, strong history of agreement between QM theory and experiment. I am getting more confused by the minute here. :confused:

The reason we have been discussing position and momentum being exactly measurable is that otherwise Ballentine is trivially wrong, and there is no discussion. However, position and momentum cannot be exactly measured, and are always jointly measured approximately. A less approximate measurement of momentum means a more approximate measurement of position. This seems to be found in all standard quantum optics textbooks. This is also found in QFT notes such as http://www.kitp.ucsb.edu/members/PM/joep/Web221A/Lecture8.pdf and http://www.kitp.ucsb.edu/members/PM/joep/Web221A/LSZ.pdf . So yes, the elementary textbook stuff is a lie, but not in the way Ballentine advocates. And the more rigorous way of dealing with it makes it clear that the standard lie is in fact the correct heuristic (not Ballentine's).
 
Last edited by a moderator:
  • #225


I think we all (including Ballentine) agree that the HUP refers to expectations defined by a STATE, nothing else. And Ballentine's point was not to confuse different "error measures".

I don't think this is what we debate here. What seems to be under debate is whether the example in the Ballentine notes Fredrik posted, where one uses an inference to "measure" momentum, can qualify as a measurement, and thus whether one can define, at least loosely speaking (until a proper full analysis is made), some "effective state" that derives from the mixed measurements + inference?

I think this is what we discuss here. And if so, I have an objection to Ballentine's elaboration. If there is to be any sense in the inference he makes, given the "state of information" that we end up with after the detection plus the inference from the angle to p_y, then I propose one has to at least sensibly consider the "effective" uncertainty in the STATE that is inferred by Ballentine's idea, of both y and p_y, following from the ENTIRE set of information.

In particular this means that all we know is that y was somewhere between the slit input and the detection hit (time also passes, but this should not matter for the inferences: information is information, no matter how old; the expectations don't care)

[itex]\Delta y \approx L\tan\theta + \delta y[/itex], not just [itex]\delta y[/itex]

It also seems reasonable to think that roughly

[itex]\delta y \gtrsim h/p[/itex],

Also, since [itex]\delta\theta \approx \delta y \cos^2\theta / L[/itex],

we seem to end up with, by inference in Ballentine's example, loosely speaking, something like

[itex]\Delta y\, \Delta p_y \gtrsim h\,(1+\lambda_{\rm Broglie}/L)[/itex],

So I'm tempted to think that if we ARE to try to make an inference like Ballentine wants, and INFER something like the uncertainties of the INFORMATION (without an explicit statistical ensemble) in a way that has anything at all to do with the original discussion, wouldn't the above be more reasonable? And if so, we certainly do get something in the ballpark of the original HUP even for this inference. So when I read Ballentine's notes, it seems flawed?
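A quick numerical check of my ballpark estimate (semiclassical and handwaving, as stated; in natural units where h = p = 1, so that [itex]\delta y \approx h/p = 1[/itex]):

```python
import math

def effective_uncertainty_product(L, theta, h=1.0, p=1.0):
    """Evaluate the semiclassical ballpark estimate above:
    Delta_y ~ L*tan(theta) + delta_y, with delta_y ~ h/p,
    delta_theta ~ delta_y * cos(theta)**2 / L, and hence
    Delta_p_y ~ p*cos(theta)*delta_theta = p*delta_y*cos(theta)**3 / L."""
    delta_y = h / p
    Delta_y = L * math.tan(theta) + delta_y
    Delta_p_y = p * delta_y * math.cos(theta) ** 3 / L
    return Delta_y * Delta_p_y

# The product stays of order h (= 1 here) for a range of geometries:
for L in (1.0, 10.0, 100.0):
    print(L, effective_uncertainty_product(L, theta=0.5))
```

Dropping the order-one angular factors, the product is [itex]\gtrsim h\,(1+\lambda/L)[/itex], i.e. in the ballpark of the HUP, which is the point of the estimate.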

/Fredrik
 
  • #226


Fra said:
what seems to be of debate is wether the example in the Ballentine notes Fredrik posted where one is using an inference to "measure" momentum can qualify as a measurement, and thus wether one can define at least loosely speaking (until proper full analyis is made) some "effective state" that derives from the mixed measurements + inference?
I agree that one of the things we're discussing is if what Ballentine is describing is a momentum measurement, but I consider the issue of what new state is prepared by the measurement trivial. Either the particle is absorbed by the detector and no new state is prepared, or the particle makes it all the way through the detector and escapes on the other side. In that case, the wavefunction is sharply peaked at the location of the detector, and is close to zero outside of it.

The rest of what you said seems to be based on the assumption that the new state is going to be spread out all over the region between 0 and y+δy. I don't see a reason to think that.

When we try to decide if this should be considered a momentum measurement, I don't think the properties of the state that's prepared by the interaction should influence us in any way. The only thing that should concern us is this: If we define "quantum mechanics" so that this is a momentum measurement, will the theory's predictions about momenta be better or worse than if we define the theory so that this isn't a momentum measurement?

My opinion about Ballentine's argument has changed during the course of this thread. This is what I'm thinking now: The best way to define a "momentum measurement" of a particle prepared in a localized state such as the particle that emerges from the slit in this thought experiment is, roughly speaking, to do what Ballentine does and then take the limit L→0. To be more precise, we say that a detection of the particle followed by this sort of inference of the momentum is an approximate momentum measurement, and the approximation is exact in the limit L→0. The margin of error [itex]\delta p_y[/itex] will depend on L. When L is small, it should therefore be proportional to L. When L is larger, terms with higher exponents will become important.

Ballentine's argument relies on his claim that [itex]\delta p_y[/itex] can be made arbitrarily small by making L large. This claim appears to be false. (It's correct if we leave out the L→0 statement from the definition of momentum measurement, but it's false if we include it).
 
  • #227


I think Ballentine's claim that the distribution of position values at large L corresponds to the momentum distribution at small L is true. However, the position distribution at large L is not conjugate to the momentum distribution at small L. It is the position distribution at small L that is conjugate to the momentum distribution at small L. So I think Ballentine's claim that conventional wisdom is wrong is false, because he isn't talking about conjugate variables.
 
  • #228


atyy said:
I think Ballentine's claim that the distribution of position values at large L corresponds to the momentum distribution at small L is true.
I don't understand what this means.
 
  • #229


Fredrik said:
I don't understand what this means.

Try http://tf.nist.gov/general/pdf/1283.pdf, figure 2. In the text on the left column of p25, they say: "Figure 2c shows the results predicted by theory for atoms with a wide range of propagation times. In the extreme Fresnel regime, we recognize the spacelike shadow of the two slits. With increasing td, the wavepackets start to overlap and interfere until, for large td, we arrive at the Fraunhofer regime in which the diffraction pattern embodies the momentum-like shadow of the state."

These guys http://www.mpq.mpg.de/qdynamics/publications/library/Nature395p33_Duerr.pdf say something similar: "Figure 2 shows the spatial fringe pattern in the far field for two different values of tsep. We note that the observed far-field position distribution is a picture of the atomic transverse momentum distribution after the interaction."

So the position distribution on the screen at large L (Fraunhofer regime) corresponds to the momentum distribution of the initial state, whereas the position distribution on the screen at small L (Fresnel regime) corresponds to the position distribution of the initial state.
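For a free Gaussian wavepacket this can be checked with a back-of-envelope model (my own sketch, not from the papers): the spread of the inferred momentum p_y = m·y/t at detection time t is sqrt((m·σ0/t)² + (ħ/(2σ0))²), which converges to ħ/(2σ0), the width of |FT ψ|² of the initial state, only in the far-field (large t, i.e. large L) limit; in the near field it is dominated by the initial position spread.

```python
import math

HBAR = 1.0545718e-34   # J*s
M_E = 9.109e-31        # electron mass, kg

def inferred_sigma_p(sigma0, t, m=M_E):
    """Spread of the momentum p_y = m*y/t inferred from position
    detections at time t, for a free Gaussian packet of initial
    width sigma0, whose width grows as
    sigma(t)^2 = sigma0^2 + (hbar*t / (2*m*sigma0))^2."""
    return math.sqrt((m * sigma0 / t) ** 2 + (HBAR / (2 * sigma0)) ** 2)

sigma0 = 1e-6                        # micron-scale initial width
true_sigma_p = HBAR / (2 * sigma0)   # width of |FT(psi)|^2

# Ratio of inferred to true momentum spread approaches 1 at large t:
for t in (1e-9, 1e-8, 1e-6):         # seconds of free flight
    print(t, inferred_sigma_p(sigma0, t) / true_sigma_p)
```

In this simple model the large-L inference really does recover the momentum distribution of the initial state, while the small-L pattern "looks like" the wavefunction itself, matching the Fresnel/Fraunhofer picture in the quoted papers.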
 
Last edited by a moderator:
  • #230


Fredrik said:
The rest of what you said seems to be based on the assumption that the new state is going to be spread out all over the region between 0 and y+δy. I don't see a reason to think that.

My estimates are handwaving and semiclassical IMO, and are supposed to be a ballpark estimate only. I'll try to add more later, but I think the perhaps interesting discussion to keep going here is exactly how to understand a "state preparation". What I tried to do is suggest that one can make state preparations without statistical ensembles, if you instead think in terms of "counting evidence". And the spread of the y above IS the spread of the information set we use for the inference. This is what I think is relevant. I'll try to get back later and explain my logic.

Fredrik said:
Ballentine's argument relies on his claim that [itex]\delta p_y[/itex] can be made arbitrarily small by making L large. This claim appears to be false. (It's correct if we leave out the L→0 statement from the definition of momentum measurement, but it's false if we include it).

Not sure what you mean. It seems to me that Ballentine is right on that point. My disagreement with his notes isn't that. Roughly it seems like

[itex]\Delta p_y \approx (p/L)\,\delta y \cos^3\theta \rightarrow 0[/itex] as [itex]L \rightarrow \infty[/itex]

but then also, in my estimate,
[itex]\Delta y \rightarrow \infty[/itex]

Maybe I'm missing something from the quick estimate?

/Fredrik
 
  • #231


If we allow L-> 0 then it seems to me that

[itex]\Delta y \rightarrow \delta y[/itex]
[itex]\Delta p_y \approx (p/L)\,\delta y \cos^3\theta \rightarrow \infty[/itex] as [itex]L \rightarrow 0[/itex]

So it reduces to a plain y measurement, where the inference of p yields no information. What is the problem with this?

/Fredrik
 
  • #232


The momentum of an electron can be measured just by letting it fall on a photographic plate, and so we know both the position and momentum. The point however is that in any given situation, the energy-momentum or space-time relations must be used at least twice, otherwise they are not defined.
 
  • #233


atyy said:
I think Ballentine's claim that the distribution of position values at large L corresponds to the momentum distribution at small L is true.

Fredrik said:
I don't understand what this means.

atyy said:
Try http://tf.nist.gov/general/pdf/1283.pdf, figure 2. In the text on the left column of p25, they say: "Figure 2c shows the results predicted by theory for atoms with a wide range of propagation times. In the extreme Fresnel regime, we recognize the spacelike shadow of the two slits. With increasing td, the wavepackets start to overlap and interfere until, for large td, we arrive at the Fraunhofer regime in which the diffraction pattern embodies the momentum-like shadow of the state."
...
So the position distribution on the screen at large L (Fraunhofer regime) corresponds to the momentum distribution of the initial state, whereas the position distribution on the scrren at small L (Fresnel regime) corresponds to the position distribution of the initial state.
What figure 2c seems to indicate is that in a double-slit experiment, we won't get the typical "both slits open" interference pattern if the particles are moving too fast. Making L small should have the same effect. In either case, the wavefunction won't have spread out enough in the y direction by the time its peaks reach the screen.

I see that the pattern will depend on the initial wavefunction (and therefore on its Fourier transform), and that it will "look like" the wavefunction itself when L is small. I don't see how the pattern will "correspond to the momentum distribution of the initial state" when L is large. Do you mean that it will actually "look like" the Fourier transform of the wavefunction?

I still don't see how to interpret your statement in the first quote above. Did Ballentine even say something like that? Which one of his statements have you translated into what you're saying now?

I also don't see what this implies about the single-slit experiment.

I'm not saying that you're wrong, only that I don't understand what you're thinking.
 
  • #234


dx said:
The momentum of an electron can be measured just by letting it fall on a photographic plate, and so we know both the position and momentum.
Only if it was known to be in a state with a sharply defined position earlier. Maybe not even then. This is still a matter of some debate in this thread. Ballentine's thought experiment is just a particle going through a single slit, and then reaching a wall of detectors. This wall of detectors could be a photographic plate. Those details aren't important here.

I think we have to define what we mean by a "momentum measurement" in this situation. I don't think it can be derived. We should choose the definition that gives us the best agreement between theory and experiment. I'm thinking that since we want to measure the momentum of the particle when it's in the state prepared by the slit, we should do it as soon as possible. The longer we wait, the more the state will have changed, and we're not really measuring what we want to measure. A longer "wait" corresponds to a larger L (the distance to the wall of detectors).

So I want to use a definition that implies that the value of py that's inferred from the y measurement is only an approximate measurement, and that the inaccuracy of the y measurement isn't the only thing that contributes to the total error. There's also a contribution that depends on L (and goes to zero when L goes to zero) that must be added to the contribution from the inaccuracy in the y measurement.

Since the error depends on L, it should grow at least linearly with L. So I want to define a "momentum measurement with minimum error L" as a y measurement at x coordinate L, followed by a calculation of py. Maybe that should be "with minimum error kL", where k is some number, but right now I don't know what number that would be, so I'm just setting it to 1.
 
  • #235


Fra said:
And the spread of the y above, IS the spread of the information set we do use for the inference.
I'm not sure I understand what you're saying. What do you mean by "spread of the information set"? If you're talking about the width of the wavefunction after the detection, how could it be larger than the detector?

Fra said:
Not sure what you mean. It seems to me that ballentine is right on that point.
See my answer to dx above. Does this help you understand what I mean at least?

Regarding your calculations, I haven't really tried to understand them. It would be much easier to do that if you explained how you got those results. Are the upper case deltas supposed to be "uncertainties" of the kind that appear in the uncertainty relations?
 
  • #236


What I had in mind was simply a state which is prepared with a definite momentum, which is then measured by the photographic plate. So when the particle falls on the plate, we know its position and also its momentum because we have measured ('prepared') the momentum before.
 
  • #237


dx said:
What I had in mind was simply a state which is prepared with a definite momentum, which is then measured by the photographic plate. So when the particle falls on the plate, we know its position and also its momentum because we have measured ('prepared') the momentum before.
So we know that it's a momentum eigenstate, and just need to find out which one that is? I think we would need to detect the particle twice to be able to calculate a momentum, and if we do, the first detection will change the state of the particle. I don't know if this should be called a "momentum measurement". (I'm thinking it probably shouldn't).

I think this approach to momentum measurements (the idea that we can calculate the momentum from the coordinates of two detection events) only works when both the wavefunction and its Fourier transform are peaked, but obviously not so sharply peaked that this statement contradicts itself. If the initial state is such that the width of the wavefunction and the width of its Fourier transform are of the same order of magnitude in units such that [itex]\hbar=1[/itex], then we can make two (or more) position measurements that aren't accurate enough to change the state by much, and calculate a momentum from that.
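The two-detection idea can be sketched numerically (a toy time-of-flight model with made-up numbers, non-relativistic): if both detections are coarse enough not to disturb the state much, the momentum estimate is just p ≈ m·Δx/Δt, with an error set by the position resolution of the detectors.

```python
M_E = 9.109e-31  # electron mass, kg

def tof_momentum(x1, t1, x2, t2, m=M_E):
    """Estimate momentum from two (position, time) detection events,
    assuming free flight between them (non-relativistic)."""
    return m * (x2 - x1) / (t2 - t1)

def tof_momentum_error(dx, t1, t2, m=M_E):
    """Propagated momentum error from a position resolution dx
    on each of the two detections (times assumed exact)."""
    return m * (2 * dx ** 2) ** 0.5 / (t2 - t1)

# Hypothetical numbers: an electron drifting 1 cm in 10 ns,
# detected each time with 1 micron resolution:
p = tof_momentum(0.0, 0.0, 1e-2, 1e-8)
dp = tof_momentum_error(1e-6, 0.0, 1e-8)
print(p, dp, dp / p)
```

Consistent with the caveat above, dx cannot be made arbitrarily small here: a sharp first detection would localize the particle and change the state whose momentum we are trying to infer.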
 
  • #238


Fredrik said:
I think we have to define what we mean by a "momentum measurement" in this situation.
Yes, I think this is what we are discussing, and I was proposing something in the direction.
Fredrik said:
I'm not sure I understand what you're saying. What do you mean by "spread of the information set"? If you're talking about the width of the wavefunction after the detection, how could it be larger than the detector?
I think by detector you mean the resolution of the detectors at the wall.

But IMO, the entire slit setup is part of the "detector", simply because in this "generalized" "measurement" where we also try to infer momentum, the inference depends on the entire setup, including L. So in the case where we try to, as you say, define or generalize some kind of inference of p_y in parallel to inferring y, the entire setup is the "detector" IMO. The actual counter on the wall does not by itself allow inferring p_y.
Fredrik said:
So I want to use a definition that implies that the value of py that's inferred from the y measurement is only an approximate measurement, and that the inaccuracy of the y measurement isn't the only thing that contributes to the total error. There's also a contribution that depends on L (and goes to zero when L goes to zero) that must be added to the contribution from the inaccuracy in the y measurement.

Since the error depends on L, it should grow at least linearly with L. So I want to define a "momentum measurement with minimum error L" as a y measurement at x coordinate L, followed by a calculation of py. Maybe that should be "with minimum error kL", where k is some number, but right now I don't know what number that would be, so I'm just setting it to 1.

Why would the uncertainty of the inference increase with L? It seems to be the other way around: holding [itex]\delta y[/itex] fixed and increasing L decreases [itex]\delta \theta[/itex], and thus the error?
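To make that geometric point concrete, here is a small sketch of how a fixed detector resolution [itex]\delta y[/itex] propagates to the inferred angle in the slit-to-wall geometry (the numbers are only illustrative):

```python
import math

# For the geometry tan(theta) = y / L, a fixed resolution dy on the
# detected position propagates to an angular error
#   d(theta) = cos^2(theta) * dy / L
# (standard error propagation through arctan).
def dtheta(y, L, dy):
    theta = math.atan2(y, L)
    return math.cos(theta) ** 2 * dy / L

dy = 0.1   # fixed resolution of the y detectors (illustrative)
y = 1.0    # hypothetical detection coordinate
for L in (1.0, 10.0, 100.0):
    print(L, dtheta(y, L, dy))   # angular error shrinks as L grows
```

So with [itex]\delta y[/itex] held fixed, the inferred angle does get sharper as L grows, which is the direction of the question above.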

OTOH, since this "inference" is defined with respect to a time interval where the particle goes from the slit input to a detector cell, the matching uncertainty in y, loosely speaking "conjugate to this momentum inference", should be [itex]L \sin \theta[/itex].

Also, I'm not thinking in terms of wavefunctions here. I'm thinking in terms of information states; this information state is inferred. I don't think it's consistent to at the same time think that [itex]\Delta y \approx \delta y[/itex] and have confidence in an inference of [itex]p_y[/itex] that DEPENDS on a path or transition through the slit construction, [itex]L \sin \theta[/itex]. I think it's an inconsistent inference.

I'm just suggesting that if you DO insist on the inference like you do, then I think we need to acknowledge that the uncertainty in y is also a function of L. This is IMO the consequence of L you might seek.

/Fredrik
 
  • #239
Fredrik said:
What figure 2c seems to indicate is that in a double-slit experiment, we won't get the typical "both slits open" interference pattern if the particles are moving too fast. Making L small should have the same effect. In either case, the wavefunction won't have spread out enough in the y direction by the time its peaks reach the screen.

I see that the pattern will depend on the initial wavefunction (and therefore on its Fourier transform), and that it will "look like" the wavefunction itself when L is small. I don't see how the pattern will "correspond to the momentum distribution of the initial state" when L is large. Do you mean that it will actually "look like" the Fourier transform of the wavefunction?

I still don't see how to interpret your statement in the first quote above. Did Ballentine even say something like that? Which one of his statements have you translated into what you're saying now?

I also don't see what this implies about the single-slit experiment.

I'm not saying that you're wrong, only that I don't understand what you're thinking.

I don't know the derivation, but I believe what those papers say is this. Let's say the transverse wave function at the slit is u(x). If we measure its transverse position accurately, we expect it to be distributed as [itex]|u(x)|^2[/itex]; if we measure its transverse momentum accurately, we expect it to be distributed as [itex]|v(p)|^2[/itex], where v is the Fourier transform of u. If you measure the transverse position at large L, and for each measured position [itex]x_L[/itex] you take the corresponding [itex]\sin\theta_L[/itex], where [itex]\tan\theta_L = x_L/L[/itex], then [itex]\sin\theta_L[/itex] is distributed like [itex]|v(p)|^2[/itex].

Although the paper talks about a double slit, I expect it to be true for a single slit, where a single slit is a double slit with zero distance between the slits. Also, http://tf.nist.gov/general/pdf/1283.pdf , Durr and Raymer all seem to use this trick, even though the Durr and Raymer papers don't assume a double slit.

This is the same procedure Ballentine uses to get the momentum. So I believe that his momentum distribution is an accurate reflection of the momentum at an earlier time.
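As a numerical sanity check of that far-field claim, here's a minimal sketch (my own, not from any of the papers) that evolves a free Gaussian packet in units ħ = m = 1 and compares the late-time position spread to t times the initial momentum spread:

```python
import numpy as np

# Free particle (hbar = m = 1): at large times the position distribution of a
# spreading packet approaches a rescaled copy of the initial momentum
# distribution |v(p)|^2, since x ~ p t.
N = 4096
x = np.linspace(-400.0, 400.0, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma = 1.0                                     # initial packet width (illustrative)
psi0 = np.exp(-x**2 / (4 * sigma**2))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize

# Momentum spread of the initial state, from the FFT weights
w = np.abs(np.fft.fft(psi0))**2
dp = np.sqrt(np.sum(w * p**2) / np.sum(w))      # should be ~ 1/(2 sigma) = 0.5

# Free evolution: multiply momentum amplitudes by exp(-i p^2 t / 2)
t = 200.0
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * p**2 * t / 2))
prob_x = np.abs(psi_t)**2
dxt = np.sqrt(np.sum(prob_x * x**2) / np.sum(prob_x))

print(dp, dxt, dxt / (t * dp))   # the ratio approaches 1 at large t
```

The printed ratio tending to 1 is the sense in which the far-field position distribution is a rescaled copy of [itex]|v(p)|^2[/itex]; at small L (small t) the position distribution still "looks like" the initial wavefunction instead.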
 
  • #240


One on my personal quests is:

How to understand and generalize measurement theory as a way to intrinsically count and represent information, while respecting constraints on information capacity. And how, from this, to construct rational expectations and ultimately rational actions.

Current QM does not do this. It grossly violates the information capacity bounds, to mention just one thing (the environment is used as an information sink; this works fine for typical collider experiments but not for cosmology, or for unification of forces). Also, it's an extrinsic theory, depending on a classical observer context. RG theory does not accomplish what I want, so we need something new.

So the first step:

This is a way to understand information states without statistical ensembles. Or rather, the "statistics" does not refer to infinite repeats or "ensembles of trials"; it refers to "counting evidence", and instead we can do a form of observer-state statistics on data points. This generalizes the information statistics to cases where we clearly can't repeat experiments nor represent enough data.

In this way, it should be possible to generalize "measurements in QM" to general inferences. It's an information interpretation taken to some new depths.

So I agree that the notion of measurement in QM certainly isn't general enough to describe all relevant inferences. This is why a new "inference theory" is needed, in QM style but more creative. QM was designed to solve different problems than we face today; unification and QG weren't, I think, on the map when QM was defined. It's just that we are so deep into this now that it's hard to imagine a different framework.

/Fredrik
 
  • #241


Fra said:
I think by detector you mean the resolution of the detectors at the wall.
Yes, I meant one of the little boxes to the right in the figure in Ballentine's article.

Fra said:
But IMO, the entire slit setup is part of the "detector", simply because in this "generalized" "measurement" where we also try to infer momentum, the inference depends on the entire setup, inlucing L. So I think in the case where we try to as you say, define or generalized some kind of inference of p_y in parallell to infering y, the entire setup is the "detector" IMO.
I disagree. A measuring device (an idealized one) only interacts with the system during the actual measurement, and the measurement is performed on the last state the system was in before the interaction with the measuring device began. In this case, we're clearly performing the measurement on the state that was prepared by the slit, so it can't be considered part of the momentum measuring device. The momentum measuring device consists of the wall of detectors and any computer or whatever that calculates and displays the momentum that we're going to call "the result". The coordinates and size of the slit will of course be a part of that calculation, but those are just numbers typed manually into the computer. Those numbers are part of the measuring device, but the slit isn't physically a part of it.

Fra said:
Why would the uncertainty of the inference increase with L? It seems to be the other way around: holding [itex]\delta y[/itex] fixed and increasing L decreases [itex]\delta \theta[/itex], and thus the error?
You're talking about the contribution to the total error that's caused by the inaccuracy of the y measurement. I was talking about a different contribution to the total error. I started explaining it here, but I realized that my explanation (an elaboration of what I said in my previous posts) was wrong. I've been talking about how to define a momentum measurement on a state with a sharply defined position, but now that I think about it again, I'm not sure that even makes sense.

What we need here is a definition of a "momentum measurement" on the state the particle is in immediately before it's detected, and the only argument I can think of against Ballentine's method being the only correct one is that classically, it would measure the average momentum of the journey from the slit to the detector. However, classically, there's no difference between "momentum" and "average momentum" when the particle is free, as it is here. I don't see a reason to think this is different in the quantum world, so I no longer have a reason to think we're measuring "the wrong thing", and that means I can no longer argue for a second contribution to the total error that comes from "measuring the wrong thing". (That was the contribution I said would grow with L).

Fra said:
Also; I'm not thinking in terms of wavefunctions here. I'm thinking in terms of information state;
Huh? What's an information state? Are you even talking about quantum mechanics?
 
  • #242


I don't think anyone believes quantum theory is fine as it is :)
 
  • #243


atyy said:
I don't know the derivation, but I believe what those papers say is this. Let's say the transverse wave function at the slit is u(x). If we measure its transverse position accurately, we expect it to be distributed as [itex]|u(x)|^2[/itex]; if we measure its transverse momentum accurately, we expect it to be distributed as [itex]|v(p)|^2[/itex], where v is the Fourier transform of u. If you measure the transverse position at large L, and for each measured position [itex]x_L[/itex] you take the corresponding [itex]\sin\theta_L[/itex], where [itex]\tan\theta_L = x_L/L[/itex], then [itex]\sin\theta_L[/itex] is distributed like [itex]|v(p)|^2[/itex].
OK, thanks. If anyone knows a derivation (or a reason to think this is wrong), I'd be interested in seeing it. (I haven't tried to really think about this myself).

atyy said:
This is the same procedure Ballentine uses to get the momentum. So I believe that his momentum distribution is an accurate reflection of the momentum at an earlier time.
I still don't understand the significance of this. If we replace the wall of detectors with a photographic plate and make L large, how does it help us to know that the image we're looking at is the momentum distribution of the initial state (the state that was prepared by the slit)?

I know that I've been talking about how to define a momentum measurement on that initial state (sorry if that has caused confusion), but what we really need to know is how to define a momentum measurement on the state immediately before detection. I mean, we're performing the position measurement on that state, so if we're going to be talking about simultaneous measurements, the momentum measurement had better be on that state too.
 
  • #244


Fredrik said:
I still don't understand the significance of this. If we replace the wall of detectors with a photographic plate and make L large, how does it help us to know that the image we're looking at is the momentum distribution of the initial state (the state that was prepared by the slit)?

I know that I've been talking about how to define a momentum measurement on that initial state (sorry if that has caused confusion), but what we really need to know is how to define a momentum measurement on the state immediately before detection. I mean, we're performing the position measurement on that state, so if we're going to be talking about simultaneous measurements, the momentum measurement had better be on that state too.

Ballentine's procedure gives the position distribution of the state just before detection. It also gives the momentum distribution of the initial state (just after the slit), which is not the momentum distribution of the state just before detection. So he does not have simultaneous accurate measurement of both position and momentum.
 
  • #245


atyy said:
Ballentine's procedure gives the position distribution of the state just before detection. It also gives the momentum distribution of the initial state (just after the slit), which is not the momentum distribution of the state just before detection. So he does not have simultaneous accurate measurement of both position and momentum.
Aha. You're saying that because of what you described in the post before the one I'm quoting now, the position distribution (which we are measuring) is the same function as the momentum distribution of the initial state, and that this means that we're performing the momentum measurement on the wrong state.

That makes proving (or disproving) that claim the main issue right now.
 
