Energy-time uncertainty principle

In summary: the energy-time uncertainty principle ties the lifetime of an excited state to the spread of energies of the photons it emits. Because the energy of a state with a finite lifetime cannot be defined with absolute precision, the photons from a given transition carry a limited range of energies, which shows up as a broadened spectral line.
  • #1
Muneer QAU
How does the energy-time uncertainty principle account for the broadening of a level?
Thanks in advance.
 
  • #2
It's one of the things that leads to line broadening; it relates to the stability (the finite lifetime) of an excited state.

Look at where the uncertainty principle itself comes from.

If the energy cannot be measured with arbitrary precision, then the photons emitted from a particular transition must have a range of energies.
That's where the "line broadening" description comes from.

It also restricts the range of particle-exchange forces.
We can violate conservation of energy by an amount ΔE provided we do it for less than Δt = ħ/ΔE (where ħ = h/2π).
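To put a number on it (a rough order-of-magnitude estimate, not from the post above): with ħ ≈ 6.6×10⁻¹⁶ eV·s, an excited state with a lifetime of ~16 ns (typical of the sodium D line) has an energy width ΔE ~ ħ/Δt ≈ 4×10⁻⁸ eV, i.e. a natural linewidth of about 10 MHz.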
 
  • #3
Simon Bridge said:
We can violate conservation of energy by an amount ΔE provided we do it for less than Δt = ħ/ΔE (where ħ = h/2π).
Where on Earth did you get that idea? Energy conservation is one of the cornerstones of physics, and it holds exactly, even in quantum mechanics. You cannot violate it, even if you're quick! If an instance of an excited state has slightly less energy, it just means that slightly more energy was transferred to another particle when the state was excited.
 
  • #4
Beta decay of a neutron:
At some point in the interaction, you have a proton and a W boson.

Energy before:
n mass is 940 MeV/c²

Energy after:
W⁻ boson mass is 80 GeV/c²
p mass is 938 MeV/c²

... isn't that a mass-energy surplus of about 85 times the starting energy?
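Spelling out the arithmetic from those numbers: the energy after is ≈ 80,000 MeV + 938 MeV ≈ 80,938 MeV against 940 MeV before, a surplus of ≈ 79,998 MeV ≈ 85 × 940 MeV.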
 
  • #5
The W in the intermediate state is virtual. Although its rest mass is 80 GeV/c², it is off the mass shell and has only the available energy, which in this case is the 940 MeV from the neutron. This does not mean you borrow 80 GeV from the universe and then pay it back! The reason weak decays are weak is precisely because the intermediate particle (W or Z) has such a large mass. This reduces the decay probability and increases the lifetime by a factor of m_W⁴.
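A rough sketch of where that m_W⁴ comes from (standard back-of-the-envelope reasoning, not spelled out in the post): at energies far below the W mass, the W propagator contributes a factor ~ 1/m_W² to the decay amplitude, which is what gets absorbed into the Fermi constant, G_F ∝ g²/m_W². The decay rate goes as the amplitude squared, so Γ ∝ G_F² ∝ 1/m_W⁴.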
 
  • #6
Well, how is that different from what I said?

"Off the mass shell" is just a fancy way of saying you cannot account for the energy.

It's a virtual particle, and it's allowed because it decays quickly and therefore has a short range - so it "exists" for a time span consistent with the time-energy uncertainty relation. There is a violation of conservation of energy in this description, but it is allowed for virtual particles.
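For scale (a rough estimate in the spirit of that description, assuming Δt ~ ħ/ΔE with ΔE = m_W c²): the range of the weak force comes out as cΔt = ħc/(m_W c²) ≈ 197 MeV·fm / 80,000 MeV ≈ 2.5×10⁻³ fm - far smaller than a nucleon, which is why the weak interaction looks point-like at low energies.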

Can you point to a physical experiment that will tell the difference between "borrowing 80 GeV from the universe and paying it back" and ... whatever it is you are claiming virtual particles actually do?

(niggle: Surely the energy it gets to carry is 2 MeV - since the proton got the other 938 MeV?)

Next you'll be telling me nothing can go faster than light!
 
  • #7
To avoid sticky issues like virtual particles, it is actually possible to give a basic answer to the question of why the energy width of a line is proportional to its inverse lifetime using a purely classical description. Model the atom as an oscillator with a resonant frequency ω₀, and expose it to an oscillating electric field at frequency ω. The oscillator will oscillate at frequency ω with an amplitude that depends on the difference between ω and ω₀, in a way that can be calculated classically via the concept of "radiative reaction", or the "Abraham-Lorentz force" (http://en.wikipedia.org/wiki/Abraham–Lorentz_force).

If you solve for the phase lag between the electric driving and the response of the classical oscillator, you find that the amplitude of the oscillation scales inversely with the difference between ω and ω₀, and the radiated power thus scales like the square of that. This is the so-called "Lorentz profile", with shape 1/(ω − ω₀)². The width of the profile equals the classical damping constant needed to account for the radiative reaction force.

All that remains is to associate this frequency width with the inverse lifetime, which follows immediately from the damping constant - it gives the inverse of the time the oscillation takes to die away when it is no longer being driven. So we have ω − ω₀ ~ 1/t from purely classical physics, and turning that into the HUP simply requires multiplying both sides by ħ and noting that ħω is a photon energy when we think quantum mechanically.
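That classical picture is easy to check numerically. Below is a minimal sketch in Python (the values of ω₀ and γ are arbitrary, chosen for illustration): it computes the radiated power of a damped driven oscillator across the resonance and confirms that the full width at half maximum of the line equals the damping constant γ, whose inverse is the decay time of the free oscillation.

```python
import numpy as np

# Damped driven oscillator: x'' + gamma*x' + w0^2*x = F*cos(w*t)
# Steady-state amplitude: A(w) ~ 1/sqrt((w0^2 - w^2)^2 + (gamma*w)^2)
# Radiated power ~ A^2 is a Lorentzian near resonance with FWHM = gamma;
# the free oscillation energy decays as exp(-gamma*t), so lifetime = 1/gamma.

w0, gamma = 1.0, 0.01              # resonant frequency and damping (arbitrary units)
w = np.linspace(0.9, 1.1, 2001)    # driving frequencies around resonance

power = 1.0 / ((w0**2 - w**2)**2 + (gamma * w)**2)   # up to constant factors

# Measure the full width at half maximum numerically:
half = power.max() / 2
in_band = w[power >= half]
print("FWHM =", in_band[-1] - in_band[0], "; expected ~ gamma =", gamma)
```

The printed width comes out ≈ γ, so width × lifetime ≈ 1; multiplying by ħ gives the quantum-mechanical statement ΔE ~ ħ/t.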
 
  • #8
Bill_K said:
Where on Earth did you get that idea. Energy conservation is one of the cornerstones of physics, and holds exactly, even in quantum mechanics. You cannot violate it, even if you're quick! If an instance of an excited state has slightly less energy, it just means that slightly more energy was transferred to another particle when the state was excited.


I have been looking at this question recently. I hope you don't mind if I ask: how do you know this?

I will indulge in the risky practice of anticipating your answer. The usual answer seems to be that there are certain symmetries, and all of physics falls apart if we do not have them. But why can't all of physics fall apart at a very small scale? In fact, doesn't that seem to be the case? Isn't that the frontier of physics - the edge of the domain of applicability, the Planck scale and below?
 
  • #9
I probably should clarify things a bit - there are basically two schools of thought re virtual particles: one says that these particles are artifacts of the kind of mathematics we use - in this case, perturbation theory. The "violation" I'm talking about only exists on paper - it is part of a handy mathematical shortcut which gets us to the right results more easily.

The other one says that maybe the whole system becomes a bit uncertain at small scales, so the extra energy gets kinda "borrowed". We shouldn't be surprised at this since the correspondence principle basically means that the classical laws of physics need only be obeyed on average.

These two are equivalent interpretations in that (afaik) there is no experiment you can do to tell the difference. But, as you see, it can be contentious. Those of us who have to field the perpetual motion enthusiasts tend not to like the second much.

The trouble with the second one is that it obscures the fact that the underlying mathematics is an approximation. (And it can make pmm enthusiasts excited.)

The first looks good, since we often do lots of intermediate steps in QM - like summing over every possible path to work out a detection cross-section (see, e.g., the Feynman lectures on YouTube). The electron in a double-slit experiment does not go through both slits at the same time or interfere with itself - those are just descriptions of the rules for calculating where it could end up being detected.

However - sometimes these intermediate calculations turn out to have a reality about them, as with monochromatic reflection: it is possible to get a stronger reflection by removing most of the mirror, since the law of reflection only holds on average. The intermediate calculation is to sum the phases over every possible reflection point, even where the angles are not equal. We find that many of the phases cancel each other out - but if we keep only the reinforcing terms (by removing the others), the reflection gets stronger. Ergo: the extra paths in the intermediate calculation are occasionally traversed (or something).
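That claim can be illustrated with a toy phasor sum (a sketch only - the geometry and wavelength are made up for illustration, and it is not a rigorous diffraction calculation): summing exp(ik × path) over every point on a flat mirror gives a modest total because most contributions cancel, while keeping only the reinforcing contributions - as if the cancelling zones had been scraped away to make a grating - gives a much larger one.

```python
import numpy as np

# Toy phasor sum for reflection off a flat mirror (y = 0).
# Each mirror point contributes a phasor exp(i*k*path_length).

lam = 0.5e-6                       # 500 nm light (assumed for illustration)
k = 2 * np.pi / lam
src = np.array([-0.05, 0.02])      # source position in metres
det = np.array([+0.05, 0.02])      # detector position in metres

x = np.linspace(-0.02, 0.02, 200001)                  # mirror points
path = np.hypot(x - src[0], src[1]) + np.hypot(x - det[0], det[1])
phasors = np.exp(1j * k * path)

full = abs(phasors.sum())          # whole mirror: most phasors cancel

# "Grating": keep only points whose phasor points the right way,
# i.e. remove the half-period zones that would cancel them.
kept = abs(phasors[phasors.real > 0].sum())

print("full mirror:", full)
print("cancelling zones removed:", kept)   # much larger
```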

This whole thing opens up an epistemological can of worms - what do we mean when we say we know something? Bottom line is that our mathematics does not have to describe something real every step of the calculation to be useful. So we have virtual bosons, canonical electrons and so forth.

I like to keep track because there are a lot of books aimed at the layman "out there" which get hugely confused about this.
Probably Bill_K was concerned about the potential for confusion too, and so took me to task on it.
I was sort-of hoping OP would have done that.
 
  • #10
Simon Bridge said:
However - sometimes these intermediate calculations turn out to have a reality about them, as with monochromatic reflection: it is possible to get a stronger reflection by removing most of the mirror, since the law of reflection only holds on average. The intermediate calculation is to sum the phases over every possible reflection point, even where the angles are not equal. We find that many of the phases cancel each other out - but if we keep only the reinforcing terms (by removing the others), the reflection gets stronger. Ergo: the extra paths in the intermediate calculation are occasionally traversed (or something).
That's a very nice example, I'll bear that in mind.
 

FAQ: Energy-time uncertainty principle

What is the energy-time uncertainty principle?

The energy-time uncertainty principle, a close analogue of Heisenberg's position-momentum uncertainty principle, states that the more precisely the energy of a system is defined, the less precisely the time interval over which that energy can change is defined, and vice versa. It is one of the fundamental relations of quantum mechanics and highlights the limits on how precisely certain pairs of properties can be determined simultaneously.
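In symbols, the relation is usually written ΔE·Δt ≥ ħ/2, where ħ is the reduced Planck constant (≈ 6.6×10⁻¹⁶ eV·s). For example, a state that exists for only Δt = 1 ns must have an energy spread of at least ħ/(2Δt) ≈ 3×10⁻⁷ eV.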

How does the energy-time uncertainty principle relate to other uncertainty principles?

The energy-time uncertainty principle is closely related to the position-momentum uncertainty principle. Both state that certain pairs of quantities - energy and time in one case, position and momentum in the other - cannot be determined simultaneously with absolute precision. This is due to the inherent probabilistic nature of particles at the quantum level.

Can the energy-time uncertainty principle be violated?

No, the energy-time uncertainty principle is a fundamental principle of quantum mechanics and cannot be violated. It is a consequence of the probabilistic nature of particles and the limitations of our ability to measure their properties simultaneously.

How does the energy-time uncertainty principle affect our understanding of the universe?

The energy-time uncertainty principle is a crucial concept in quantum mechanics and has implications for our understanding of the universe at the smallest scales. It suggests that there are fundamental limits to our ability to predict the behavior of particles and has led to the development of theories and technologies, such as quantum computing, that take this principle into account.

What are some real-world applications of the energy-time uncertainty principle?

The energy-time uncertainty principle has practical applications in various fields, such as quantum cryptography, where it is used to ensure the security of communication channels. It also plays a crucial role in the development of quantum technologies, such as quantum computers and sensors, which rely on principles of quantum mechanics to function.
