Uncertainty principle and standard deviation.

In summary, the uncertainty principle concerns the standard deviations of position and momentum measurement outcomes. These uncertainties are statistical: they refer to repeated measurements on an ensemble of identically prepared systems, not to a single particle being measured simultaneously by many detectors.
  • #1
kof9595995
If the uncertainty is interpreted as a standard deviation, does that mean the uncertainty principle has only statistical meaning? That is, is the principle only meaningful when a particle's momentum and position are measured simultaneously by many detectors?
 
  • #2
Maybe I need to restate my question like this:
Since the uncertainty is indeed a standard deviation, we need many measurements to make sense of the uncertainty principle statistically, right? And since we need to know the position and momentum at a particular time, we need to measure a particle many times simultaneously, is that right?
So is the uncertainty principle really about this kind of "simultaneous many-measurement"?
 
  • #3
What you need to do is to perform the experiment many times on systems that have been prepared in the same state. It doesn't matter if you perform many identical experiments at the same time, or if you just do the same experiment many times.

If you try to measure two observables at the same time, you'd just end up measuring one observable that isn't any of the observables that you were trying to measure.
 
  • #4
kof9595995 said:
Since the uncertainty is indeed a standard deviation, we need many measurements to make sense of the uncertainty principle statistically, right?

Right!

And since we need to know the position and momentum at a particular time, we need to measure a particle many times simultaneously, is this right?

Don't think of measuring the same particle many times simultaneously. Think of preparing a large number of particles in exactly the same way (perhaps one at a time) and then measuring each of them after the same amount of time has elapsed since preparation. In other words, the statistics apply to an ensemble of identically prepared systems.

(added: ah, now I see Fredrik beat me to it while I was typing.)
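The ensemble picture described above can be sketched numerically (a toy example, not from the thread, assuming a minimum-uncertainty Gaussian preparation in units where ħ = 1): one ensemble of identically prepared particles gets a position measurement, a second ensemble gets a momentum measurement, and the two sample standard deviations are combined.

```python
import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0       # natural units
sigma_x = 0.7    # position spread of the hypothetical Gaussian preparation

# For a minimum-uncertainty Gaussian wave packet, position outcomes follow
# Normal(0, sigma_x) and momentum outcomes follow Normal(0, hbar/(2*sigma_x)).
N = 100_000
x = rng.normal(0.0, sigma_x, N)               # ensemble A: measure x on each
p = rng.normal(0.0, hbar / (2 * sigma_x), N)  # ensemble B: measure p on each

dx, dp = x.std(), p.std()
# The product of the sample standard deviations sits at the bound hbar/2
print(dx * dp, hbar / 2)
```

No single particle is ever measured twice here; the uncertainties emerge purely from the statistics of the two ensembles.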
 
  • #5
Ok, thank you guys, I think I get it now
 
  • #6
Sorry to bump the thread again, but I have a follow-up question.
"Identical states" means the same type of particle described by the same wave function, right? And we actually cannot determine the initial wave function even if every physically measurable quantity is known, is that right? So how can we make sure the wave functions are the same?
 
  • #7
A state is really an equivalence class of state preparation procedures. Each observable A has an expectation value x(A) for each state preparation procedure x. Two state preparation procedures x and y are equivalent if x(A)=y(A) for all observables A. These equivalence classes of state preparation procedures are represented mathematically by the rays of a Hilbert space.

So the best you can do is to make repeated measurements of a few observables to find out how close the average results are to the expectation values.

I think that if you make a large number of measurements of each member of a complete set of commuting observables, you can get very close to actually determining what state your preparation procedure produces.
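The idea that expectation values of enough observables can pin down the state can be illustrated in the simplest possible case, a single spin-1/2 (a hypothetical qubit sketch, not from the thread): the three expectation values ⟨σx⟩, ⟨σy⟩, ⟨σz⟩ reconstruct the density matrix exactly.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# a hypothetical prepared qubit state (arbitrary angles)
psi = np.array([np.cos(0.3), np.exp(1j * 0.8) * np.sin(0.3)])
rho_true = np.outer(psi, psi.conj())

# expectation values of the three spin components
n = [np.real(np.trace(rho_true @ s)) for s in (sx, sy, sz)]

# reconstruct the state from the expectation values alone
rho_rec = 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)
print(np.allclose(rho_rec, rho_true))  # True
```

In a real experiment each ⟨σi⟩ would itself be estimated from repeated measurements on identically prepared systems, so the reconstruction is only as good as those averages.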
 
  • #8
So are you saying that if all the expectation values of observables are the same, then the wave functions must be the same? Is there a mathematical proof?
 
  • #9
Or are you saying identical states don't necessarily require the same wave function?
 
  • #10
kof9595995 said:
Or are you saying identical states don't necessarily require the same wave function?
That would contradict what I said, and it would definitely be wrong. There are many wave functions (every member of the ray) for each state, not the other way round.

kof9595995 said:
So are you saying that if all the expectation values of observables are the same, then the wave functions must be the same? Is there a mathematical proof?
I hope so. :smile: (I don't immediately see how to prove it, and I don't have time to think about it now).
 
  • #11
kof9595995 said:
So are you saying that if all the expectation values of observables are the same, then the wave functions must be the same? Is there a mathematical proof?
Well, I hope so too, but personally I'm not quite confident of it. After all, every expectation value is a quadratic form, <psi|A|psi>, right? It's like knowing the value of x(t)^2+y(t)^2 for all t: that still doesn't tell us what x(t) or y(t) is individually.
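There is one concrete limit on what expectation values can determine (a small numerical sketch, not from the thread): wave functions that differ only by a global phase give identical expectation values for every Hermitian observable, which is exactly why a state is identified with a ray rather than with a single wave function.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
phi = np.exp(1j * 1.234) * psi   # same ray, different wave function

# a random Hermitian observable
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2

ev_psi = np.real(psi.conj() @ A @ psi)
ev_phi = np.real(phi.conj() @ A @ phi)
print(np.isclose(ev_psi, ev_phi))  # True: no observable can see the phase
```

So "all expectation values agree" can at best determine the wave function up to such a phase, consistent with the ray picture above.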
 
  • #12
But as Fredrik pointed out, we can make simultaneous measurements of commuting observables. The states are uniquely determined as a combination of eigenstates with respect to specific observables. All we need to do is measure enough commuting observables to uniquely determine the state or states of a system. Obviously, the more wave functions that are involved, the more difficult this becomes (and it could, I guess, be impossible to fully determine the state). However, we have another advantage: measuring the observables collapses the wave function. So if we find a system whose measurement gives our desired state, then we have forced that system into that state through the act of measurement too.

If we do not have enough observables and must leave our systems underdetermined, then the next step, I guess, would be to classify the resulting observations we wish to make. If the states are different, and they differ in terms of our desired observables, then we should be able to work out the statistics of the observations for each system. That is, if we can reduce our pool of systems down to three systems in which a desired pair of non-commuting observables, say energy and position, differs between the three, then we should be able to estimate the means and variances of the desired observables well enough to bin the results appropriately.
 
  • #13
Born2bwire said:
The states are uniquely determined as a combination of eigenstates with respect to specific observables. All we need to do is measure enough commuting observables to uniquely determine the state or states of a system. Obviously, the more wave functions that are involved, the more difficult this becomes (and it could, I guess, be impossible to fully determine the state).
I'm a little confused: so you agree it's possible that the wave function can never be determined, no matter how many observables are measured?

Born2bwire said:
If we do not have enough observables and must leave our systems underdetermined, then the next step, I guess, would be to classify the resulting observations we wish to make. If the states are different, and they differ in terms of our desired observables, then we should be able to work out the statistics of the observations for each system. That is, if we can reduce our pool of systems down to three systems in which a desired pair of non-commuting observables, say energy and position, differs between the three, then we should be able to estimate the means and variances of the desired observables well enough to bin the results appropriately.
I get lost here; could you clarify a bit more?
 
  • #14
Well, what makes a system unique? The observables that define its properties: the system's energy, spin, momentum, position, and so on. The system's state is a combination of eigenstates (wave functions) of the system's Hamiltonian, which makes the expectation value of the system's energy one property that we can measure. If the system has eigenstates with the same energy (called degenerate states), or if a different, distinct set of states can produce the same expectation value of the energy (within statistical measures), then we just need to look for more properties to uniquely define the system.

If we need to measure multiple observables, we want them to commute, because then there is no inherent restriction on the precision of an ensemble of measurements. If we have degenerate cases, we can usually apply conditions that lift the degeneracy, like applying a magnetic field to produce Zeeman splitting. Either way, once we have determined a set of observables that uniquely determines our system, all we need to do is measure our systems and pick out the ones that match. Even if a system is in a superposition of states, a measurement collapses the wave function onto the measured state. So if I want a state with energy E_1, I measure systems until I get E_1, and that system has now collapsed from any superposition of states to the one state with energy E_1. So in a way, measurement can help ensure the starting point (unless you are looking for a superposition of states as your system).



As for the second set of statements: let's say we have a system with a three-fold degeneracy at the second energy level, with energy eigenvalue E_1. However, the three states differ in terms of two non-commuting observables, say A and B (whatever they may be). The uncertainty principle states that the product of the variances of A and B has a lower limit. However, if the expectation values of A and B are sufficiently different between the three states, then it may still be possible to uniquely identify the state. Our data will be scattered when we plot it in terms of <A> versus <B>, but if we do it correctly the sample means should still cluster around the distinct mean values. So while we may not be able to precisely measure A and B at the same time, we may be able to measure them with enough precision to separate the measurements among the three states.
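The binning idea above can be sketched with a toy simulation (the means for <A> and <B> below are hypothetical numbers, not from any real system): single shots of the two observables scatter because of the uncertainty principle, but the ensemble means still separate the three degenerate states.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical expectation values of two non-commuting observables A and B
# for three degenerate states
true_means = {"state1": (0.0, 0.0), "state2": (3.0, 0.0), "state3": (0.0, 3.0)}
spread = 1.0   # single-shot scatter enforced by the uncertainty principle
N = 5000       # measurements per identically prepared ensemble

est = {}
for name, (mA, mB) in true_means.items():
    a = rng.normal(mA, spread, N)   # repeated A measurements on one ensemble
    b = rng.normal(mB, spread, N)   # repeated B measurements on another
    est[name] = (a.mean(), b.mean())

# sample means separate the three states even though single shots overlap
for name, (ea, eb) in est.items():
    print(name, round(ea, 2), round(eb, 2))
```

With a per-shot scatter of 1.0, individual measurements of the three states overlap heavily, yet the means (standard error about 1/√N) are cleanly distinguishable.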


This is not exactly what I am talking about, but it is a demonstration of the splitting nonetheless. Take a look at the Stark effect (http://en.wikipedia.org/wiki/Stark_effect) or Zeeman splitting (http://en.wikipedia.org/wiki/Zeeman_effect). Let's say that we can only accurately and precisely measure the energy of an atom in an excited state, but we want the electron to have a specific spin. Normally, a number of degenerate states at the higher energy levels have the same energy (in a simple quantum analysis) but differ in their spins and angular momenta. However, if we apply a field, the difference in the interaction of the spins and momenta lifts the degeneracy, so the different states now have different energy levels. By increasing the field we can make the differences large enough that we can correlate measurements with specific states.
 
  • #15
Thanks for the amazingly detailed reply. One last question: I notice you keep describing the results of measurements as expectation values. My question is: say we want to prepare some particles that will evolve from a certain eigenstate (say, a non-degenerate energy eigenstate). Can we just make an individual measurement on each particle and select those that collapse to the desired energy eigenstate?
 
  • #16
If you're asking if this is a way to prepare an ensemble of systems in identical states, then the answer is yes, assuming that you can measure the energy without destroying the system.
 
  • #17
kof9595995 said:
Thanks for the amazingly detailed reply. One last question: I notice you keep describing the results of measurements as expectation values. My question is: say we want to prepare some particles that will evolve from a certain eigenstate (say, a non-degenerate energy eigenstate). Can we just make an individual measurement on each particle and select those that collapse to the desired energy eigenstate?

When we make measurements, it has been my understanding that the mean value across an ensemble of measurements will be the expectation value of the operator. There will be variance in the measurements due to the precision of your detectors and aspects of your experiment. For example, if we use a spectral analyzer to find the radiated wavelengths from an excited hydrogen atom, we do not measure a delta function of wavelengths. Instead, we measure a small bandwidth centered about (hopefully) the wavelength corresponding to the energies of the photons. That is, instead of an infinitesimally thin line on the spectrograph, we see a thin line that corresponds to a spread of wavelengths.

And as Fredrik already answered, yes. Of course, that only ensures that the energy levels start out the same. Other parameters like momentum and position will be indeterminate unless we measure them. But if we measure both position and energy, there will be an uncertainty involved because they do not commute. So I guess the best we can do is make measurements of commuting observables, and in that way collapse the system to our desired starting point (or as close to it as possible). I vaguely remember a paper that explored the implications of wave function collapse. If measurement collapses the wave function onto the measured state, then suppose we have a very unstable system, like an excited electron in an atom, and we keep measuring the energy of the electron so that it keeps collapsing onto the excited state. If we do this quickly and repetitively, we can extend the lifetime of the excited system indefinitely (this is the quantum Zeno effect). So there was a paper that demonstrated this and supported wave function collapse by showing that, via measurements alone, we can keep an unstable system in the same state far beyond its statistical lifetime.
 
  • #18
OK, thank you guys!
 
  • #19
Ok, please bear with me, people, I'm popping in again.
I was thinking about my original question about my so-called "simultaneous measurements": many observers measure the system (let's say they want to measure its energy) at the same time. Then here comes the strange thing:
Since the wave function collapses to only one eigenstate, all the observers should get the same result. But individually speaking, the observers' measurements should be independent (hmm... I'm not sure about this; although measurements cause the collapse, since they are simultaneous I think there's no way they can be dependent; correct me if I'm wrong). So why do they have to get the same measurement result?
 
  • #20
The assumption that simultaneous independent measurements are possible immediately leads to contradictions (as you seem to have realized already) so the assumption can't possibly be correct.

You should also think about how you would attempt to measure an observable of a system at the same time as it's being measured by another physicist. You have to let the system interact with your measuring device while it's also interacting with another. This would turn the two measurement devices into one measurement device, and it's not clear what observable it would measure. Even if it turns out to be the one you're trying to measure, we're still just talking about one measurement.
 
  • #21
I can't give a definite answer to kof9595995's question, but I will post a comment on Fredrik's answer that I hope provides additional insight into the question.
Fredrik said:
The assumption that simultaneous independent measurements are possible immediately leads to contradictions (as you seem to have realized already) so the assumption can't possibly be correct.
Here is where I see the core of this contradiction:
To calculate the outcome of a measurement, we have to represent our sample as an eigenvector of the operator. Say we split our sample so that we have two orthogonal vectors, one of them the required eigenvector of the operator, the other a null vector with respect to the operator.
Now suppose we have another operator that requires a different splitting of the sample. Then we have two incompatible measurements, because the two eigenvectors do not represent the same part of the sample.

To illustrate this idea, consider the following experiment.
We have three entangled photon streams. For each stream we perform polarization measurements and record detector clicks with time stamps.
Afterwards we correlate the clicks from the polarization measurements in the different possible pairs. To calculate the outcome of each single correlation (entanglement) measurement, we have to represent one of the involved photon streams in Hilbert space as an eigenvector of its polarization measurement (operator). If all three polarization measurements require different representations of the sample, we cannot calculate all the entanglement measurement outcomes in one go.

I hope this example shows a realistic, even if not very traditional, view of how simultaneous measurements can be performed and the contradictions involved.
 
  • #22
Fredrik said:
The assumption that simultaneous independent measurements are possible immediately leads to contradictions (as you seem to have realized already) so the assumption can't possibly be correct.

You should also think about how you would attempt to measure an observable of a system at the same time as it's being measured by another physicist. You have to let the system interact with your measuring device while it's also interacting with another. This would turn the two measurement devices into one measurement device, and it's not clear what observable it would measure. Even if it turns out to be the one you're trying to measure, we're still just talking about one measurement.
Hmm... so what is a measurement, then? Why, when two devices interact with the particle at the same time, do we have to define them as one device? Just to avoid the contradiction?
 

FAQ: Uncertainty principle and standard deviation.

1. What is the Uncertainty Principle?

The Uncertainty Principle, also known as Heisenberg's Uncertainty Principle, is a fundamental principle in quantum mechanics that states that it is impossible to simultaneously know the exact position and momentum of a particle. In other words, the more precisely we know the position of a particle, the less we know about its momentum, and vice versa.

2. How does the Uncertainty Principle relate to standard deviation?

The uncertainties in the principle are standard deviations. The standard deviation is a statistical measure of the dispersion of a set of measurement outcomes. In quantum mechanics, the principle states that the product of the standard deviations of position and momentum, computed over an ensemble of identically prepared systems, satisfies σx·σp ≥ ħ/2: the more narrowly the position outcomes are spread, the more widely the momentum outcomes must be spread.
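As a numerical check of this relation (a sketch in units where ħ = 1), one can sample a Gaussian wave function on a grid, compute the standard deviation of position from |ψ(x)|², obtain the momentum distribution via a Fourier transform, and verify that σx·σp comes out at the minimum value ħ/2.

```python
import numpy as np

# Gaussian wave function sampled on a grid (hbar = 1)
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# position spread from |psi(x)|^2
prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
std_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# momentum spread from the Fourier transform |psi~(p)|^2 (p = k for hbar = 1)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
prob_p = np.abs(phi)**2
prob_p /= prob_p.sum()
mean_p = np.sum(p * prob_p)
std_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

print(std_x * std_p)   # ≈ 0.5 = hbar/2, the minimum allowed by the principle
```

A Gaussian saturates the bound; any other wave function on the same grid would give a strictly larger product.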

3. Can the Uncertainty Principle be violated?

No. The Uncertainty Principle is a fundamental result of quantum mechanics and has been confirmed in numerous experiments. It follows from the wave nature of matter: the position and momentum distributions are related by a Fourier transform, so a narrow spread in one forces a broad spread in the other.

4. How is the Uncertainty Principle used in real-world applications?

The Uncertainty Principle has practical consequences, particularly in quantum computing and quantum measurement. More generally, it sets fundamental limits on the precision achievable in measurement and imaging technologies.

5. Is the Uncertainty Principle applicable to macroscopic objects?

The Uncertainty Principle applies in principle to all objects, but its effects are only noticeable for microscopic objects such as atoms and subatomic particles. For macroscopic objects, the quantum uncertainties are many orders of magnitude smaller than any practical measurement precision, so they are completely negligible.
