What is renormalization and what does it do?

In summary, renormalization is a technique used to address the problem of infinities in quantum field theory calculations. It involves adding infinite counterterms to the Hamiltonian so as to satisfy physical principles and obtain finite results. However, the QED Hamiltonian with counterterms is still infinite, a problem that can be addressed through a unitary dressing transformation. Renormalization is not exclusive to quantum physics and is also used in solid state physics on a finite lattice. There are also theories that propose a natural cutoff for the infinities, but their validity is still unknown.
  • #1
captain
what is renormalization really and what does it do?
 
  • #2
captain said:
what is renormalization really and what does it do?

At the end of the 1920s Dirac, Pauli, Weisskopf, and Jordan formulated a quantum theory of interactions between electrons and photons in loose analogy with Maxwell's classical electrodynamics. This early quantum electrodynamics (QED) was very successful in calculations of various scattering processes in the lowest orders of perturbation theory. Unfortunately, all contributions to the S-matrix in higher orders came out infinite.

In the late 1940s Tomonaga, Schwinger, and Feynman found a way to fix this problem of infinities by renormalization. Renormalization basically adds certain infinite counterterms to the Hamiltonian of the early QED. The form of these counterterms was chosen so that the resulting theory satisfied two physical principles. First, the calculated electron mass should be equal to the measured electron mass. Second, the calculated interaction energy between two electrons at large distances should be equal to the classical expression [itex] e^2/r [/itex]. These two requirements lead to two types of renormalization counterterms in the Hamiltonian - the mass and charge renormalization counterterms.

As I said above, these counterterms are formally infinite; however, it turns out that they exactly cancel the S-matrix infinities present in the original early QED. This cancellation leaves some residual finite terms called radiative corrections. Renormalized calculations of radiative corrections for scattering processes (and energies of bound states) yield finite results in each perturbation order, and these results agree with experiment to an astonishing precision.

The main remaining drawback of the renormalized theory is that the QED Hamiltonian with counterterms is infinite. Although, as I said, all infinities cancel out in formulas for S-matrix elements, they do not cancel out if you want to calculate, for example, the time evolution of state vectors and observables. This deficiency can be fixed by another trick called "unitary dressing transformation" first suggested in

O. W. Greenberg and S. S. Schweber, "Clothed particle operators in simple models of quantum field theory", Nuovo Cim. 8 (1958), 378

A more modern review can be found in

A. V. Shebeko and M. I. Shirokov, "Unitary transformations in quantum field theory and bound states", http://www.arxiv.org/abs/nucl-th/0102037

In the "dressed" theory both the Hamiltonian and the S-matrix are finite. Moreover, such unphysical features of the renormalized QED as "clouds of virtual particles" surrounding electrons and "vacuum polarization" are absent in the "dressed particle" approach.



Eugene.
 
  • #3
thanks. info much appreciated
 
  • #4
Two important points, usually not recognized by people working in quantum field theory:
1. Renormalization is not necessarily related to infinities. For example, solid state physics formulated on a finite lattice also uses renormalization.
2. Renormalization is not necessarily related to quantum physics. For example, classical electrodynamics dealing with self-fields of charged particles also requires renormalization.
 
  • #5
Demystifier said:
Two important points, usually not recognized by people working in quantum field theory:
1. Renormalization is not necessarily related to infinities. For example, solid state physics formulated on a finite lattice also uses renormalization.
2. Renormalization is not necessarily related to quantum physics. For example, classical electrodynamics dealing with self-fields of charged particles also requires renormalization.

Yes, that's a good point to make. Unfortunately, when most people hear about "renormalization" they think "infinities!".

To the OP: the point is that one has a theory that contains certain parameters. These parameters must be determined from experimental results. So one computes a certain physical process using the theory and compares with an experimental result. This then determines the value of the parameters appearing in the theory. This is essentially all there is to it!

The details are more involved than this, but that's the key idea. The fact that there are infinities is irrelevant to the need for renormalization.


(One complication is that calculations are always done perturbatively, i.e. as an infinite expansion, so one must determine the parameters of the theory order by order in the expansion.)
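This order-by-order matching can be sketched in a toy calculation. Everything below is hypothetical (the series, the measured value, the coefficients); it only illustrates the logic of fixing a coupling g so that a truncated perturbative prediction reproduces an experimental number:

```python
# Toy model of renormalization as parameter matching (hypothetical numbers):
# the theory predicts an observable as a power series in a coupling g,
#   O(g) = g + 0.5*g**2 + O(g**3),
# and determining the parameter just means fixing g so that the truncated
# prediction matches the measured value, order by order.

O_MEASURED = 0.105  # hypothetical experimental value

def observable(g, order):
    """Perturbative prediction for the observable, truncated at 'order'."""
    terms = [g, 0.5 * g**2]
    return sum(terms[:order])

# First order: O = g, so the coupling is read off directly.
g1 = O_MEASURED

# Second order: write g = g1 + delta and keep only terms linear in delta,
# so that g + 0.5*g**2 = O_MEASURED is restored at this order.
delta = -0.5 * g1**2
g2 = g1 + delta

residual_1 = abs(observable(g1, 2) - O_MEASURED)  # error of first-order fit
residual_2 = abs(observable(g2, 2) - O_MEASURED)  # only O(g^3) remains
```

The residual after the second-order matching is of order g^3, i.e. it shrinks with each order, which is the "order by order" determination described above; nothing in this bookkeeping requires the series coefficients to be infinite.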
 
  • #6
Demystifier said:
1. Renormalization is not necessarily related to infinities. For example, solid state physics formulated on a finite lattice also uses renormalization.

Yes, this is true. In solid state QFT momentum integrals are normally convergent, because there is a natural momentum cutoff - the inverse of the lattice constant. So, the renormalization results in finite corrections to the masses and charges of particles. These corrections (e.g., "effective masses" of electrons and holes, "screened charges", etc.) are well understood as the product of interactions between particles and the medium (crystal lattice).
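A minimal numerical sketch of how a lattice cutoff keeps such corrections finite (the integrand and all numbers are purely illustrative, not a real solid-state calculation):

```python
import math

# A log-divergent "loop integral" I(L) = integral of dk/k from m to L
# equals ln(L/m), which grows without bound as the cutoff L -> infinity.
# On a lattice, momenta stop at the zone boundary L ~ pi/a, so the
# correction is finite, though cutoff-dependent.

def loop_integral(cutoff, m=1.0):
    """Stand-in for a log-divergent loop integral, regularized at 'cutoff'."""
    return math.log(cutoff / m)

a = 1e-10                      # hypothetical lattice constant
lattice_cutoff = math.pi / a   # natural momentum cutoff of order 1/a
correction = loop_integral(lattice_cutoff)  # finite mass/charge correction
```

The correction stays finite for any fixed lattice constant, but keeps growing as the cutoff is pushed higher, which is exactly the behavior that becomes a divergence when no natural cutoff exists.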

In recent years it became fashionable to apply the same kind of explanation to the renormalization of QFT in fundamental particle physics. The idea is that space-time is a kind of physical medium which has some granularity (i.e., somewhat similar to crystal lattices), perhaps at the currently inaccessible "Planck scale". This would provide a natural cutoff for loop momentum integrals and make renormalization finite. Needless to say, nobody knows what the nature of this "granularity" is, or whether it exists at all.

Personally, I don't subscribe to these ideas. I think there is no need to introduce currently unknown Planck-scale physics as a solution for our macroscopic renormalization problems. The "dressed particle" approach resolves all issues without speculations about Planck-scale space-time structure.

Eugene.
 
  • #7
meopemuk said:
Yes, this is true. In solid state QFT momentum integrals are normally convergent, because there is a natural momentum cutoff - the inverse of the lattice constant. So, the renormalization results in finite corrections to the masses and charges of particles. These corrections (e.g., "effective masses" of electrons and holes, "screened charges", etc.) are well understood as the product of interactions between particles and the medium (crystal lattice).

In recent years it became fashionable to apply the same kind of explanation to the renormalization of QFT in fundamental particle physics. The idea is that space-time is a kind of physical medium which has some granularity (i.e., somewhat similar to crystal lattices), perhaps at the currently inaccessible "Planck scale". This would provide a natural cutoff for loop momentum integrals and make renormalization finite. Needless to say, nobody knows what the nature of this "granularity" is, or whether it exists at all.

Personally, I don't subscribe to these ideas. I think there is no need to introduce currently unknown Planck-scale physics as a solution for our macroscopic renormalization problems. The "dressed particle" approach resolves all issues without speculations about Planck-scale space-time structure.

Eugene.

Thanks for all your help, and to the other contributors as well. But I am curious to know what Eugene means at the end of his post.
 
  • #8
captain said:
Thanks for all your help, and to the other contributors as well. But I am curious to know what Eugene means at the end of his post.

The approach I was criticizing is also known as the "effective field theory."

Eugene.
 
  • #9
Why dislike UV cut-offs? Any QFT presumably has them.

meopemuk said:
In recent years it became fashionable to apply the same kind of explanation to the renormalization of QFT in fundamental particle physics. The idea is that space-time is a kind of physical medium which has some granularity (i.e., somewhat similar to crystal lattices), perhaps at the currently unaccessible "Planck scale". This would provide a natural cutoff for loop momentum integrals and make renormalization finite. Needless to say that nobody knows what is the nature of this "granularity" and whether it exists at all.

Personally, I don't subscribe to these ideas. I think there is no need to introduce currently unknown Planck-scale physics as a solution for our macroscopic renormalization problems. The "dressed particle" approach resolves all issues without speculations about Planck-scale space-time structure.
Eugene.

Hmm, what you refer to here as a "fashion" seems to me the only logical explanation. Namely, any QFT -- whether it comes from particle physics or a condensed matter problem -- is a convenient approximation. We stuff an infinite number of degrees of freedom into any finite interval of space, hence the divergences.

The fact that we have no idea what the UV cut-off is for, say, the Standard Model does not allow us to postulate that an "infinitely flexible" field should be a fundamental concept. Such a view is an unjustified extrapolation of the well-established low-energy physics into the unknown high-energy domain.

I agree that RG and the "dressed particle" picture are a great way to get meaningful answers from QFT without invoking any assumption about the physical nature of the UV cut-offs. We just shouldn't forget that we have no way of knowing at what scale they lie or what kind of cut-offs they are.
 
  • #10
Slaviks said:
Hmm, what you refer to here as a "fashion" seems to me the only logical explanation. Namely, any QFT -- whether it comes from particle physics or a condensed matter problem -- is a convenient approximation. We stuff an infinite number of degrees of freedom into any finite interval of space, hence the divergences.

The fact that we have no idea what the UV cut-off is for, say, the Standard Model does not allow us to postulate that an "infinitely flexible" field should be a fundamental concept. Such a view is an unjustified extrapolation of the well-established low-energy physics into the unknown high-energy domain.

I agree that RG and the "dressed particle" picture are a great way to get meaningful answers from QFT without invoking any assumption about the physical nature of the UV cut-offs. We just shouldn't forget that we have no way of knowing at what scale they lie or what kind of cut-offs they are.


What you wrote is the traditional "effective field theory" view on renormalization. The "dressed particle" approach provides a completely different point of view. In this approach, there is no need to assume that QFT in particle physics is an approximation of some yet unknown Planck-scale theory. "Dressed particle" QFT can be formulated exactly and self-consistently without any cutoffs (i.e., loop integrations are not limited in momentum space). This is achieved by performing a "unitary dressing transformation" of the original QFT Hamiltonian, as described in the references I provided in an earlier post.

This approach provides a completely new interpretation of what quantum field theory IS. In this new formulation, QFT becomes a theory of particles interacting via potentials, just as in ordinary quantum mechanics. The only significant difference with respect to QM is that interactions may not conserve the number of particles. You can read about this alternative approach in

E.V. Stefanovich, "Quantum field theory without infinities" Ann. Phys. (NY) 292 (2001), 139

E.V. Stefanovich, "Relativistic quantum dynamics" http://www.arxiv.org/abs/physics/0504062

Eugene.
 
  • #11
Wow!

You quote a really impressive body of work; very interesting to learn about such developments. And sorry for not noticing that "dressed particle" is a technical term here.

I will dare to insist on my point (this is not, however, meant to discredit the "dressed particle" picture): any field theory is an effective theory. Point-like interaction and infinite divisibility of space are just very convenient and useful abstractions.

It is just great if there is a completely consistent way to extrapolate a realistic, relativistic QFT (for example, the very well-tested QED) up to arbitrarily large momenta with no divergences on the way (your books suggest such an example). This might be "prettier" to someone's taste than introducing an explicit cut-off at the Planck (or any other sufficiently high) scale.

Neither procedure will solve the conceptual issue: we have no way to tell what space-time is like at small enough scales unless we can probe it experimentally one day. Before then, any way of dealing with QFT divergences is about mathematical elegance and/or practical convenience. Physically they are indistinguishable because they have to lead to the same known/tested low-energy physics.

This is a really simple point which often arises in discussions between those with a particle physics perspective and their condensed-matter trained colleagues.

Challenge me if I'm wrong, I'd really like to know the opinion of educated colleagues on this important issue.
 
  • #12
Slaviks said:
I will dare to insist on my point (this is not, however, meant to discredit the "dressed particle" picture): any field theory is an effective theory. Point-like interaction and infinite divisibility of space are just very convenient and useful abstractions.

...

Neither procedure will solve the conceptual issue: we have no way to tell what space-time is like at small enough scales unless we can probe it experimentally one day. Before then, any way of dealing with QFT divergences is about mathematical elegance and/or practical convenience. Physically they are indistinguishable because they have to lead to the same known/tested low-energy physics.

I would dare to say that space is "infinitely divisible" and that there is no new physics at the Planck scale. Of course, I have no way to prove that without appropriate experiments. However, I do know that usual assumptions of the underlying space-time granularity are not needed to solve the problem of QFT divergences. A relativistic quantum theory of interacting fundamental particles can be made self-consistent and divergence-free without cutoffs and "effective field theory" arguments. This "dressed particle" formulation is elegant, economical, and practically convenient. Moreover it doesn't lead to paradoxes of particle "self-energies", "vacuum polarization", etc.

In my view, the most significant advantage of the "dressed particle" approach is that it has a finite well-defined Hamiltonian which allows one to calculate the time evolution of interacting particle systems in addition to the usual QFT S-matrix. Unfortunately, such a time evolution is presently not accessible to high energy physics experiments. However, without doubt, experimentalists will learn how to do these things in the future. Then, we'll see that the difference between "effective field" and "dressed particles" philosophies is more than a matter of elegance and convenience.

Eugene.
 
  • #13
meopemuk said:
I would dare to say that space is "infinitely divisible" and that there is no new physics at the Planck scale. Of course, I have no way to prove that without appropriate experiments.

So it's a matter of mathematical taste and convenience. To me it seems very strange if the same continuous Poincare-group structure that we use for empty space (aka the physical vacuum) at accessible energies would hold ad infinitum down to arbitrarily small scales.

Imagine I give you a piece of solid (let it be a dielectric with a band gap of a few eV), and let you do any experiments with it provided that you don't exceed energies of, say, 1 mK. All we can observe in this low-energy limit are acoustic phonons. They allow for a very elegant (Debye) field theory formulation. We would measure their dispersion curve, find that within experimental accuracy it is a perfect straight line, extrapolate to arbitrarily small scales (remember, we have no idea of the structure of the solid => no reason to invent ad hoc granularity). And then, of course, we'd have to invent a way to deal with "divergences" (we can't discover the real acoustic phonon cut-off -- in my imaginary problem the Debye temperature >> 1 mK).
This is not to say that we would never know the elementary cell composition or the electronic properties.

This is a one-to-one analogy to our situation with the physical vacuum we find ourselves in.
Our Planck scale is the Debye temperature in my (actually, Phil Anderson's) example.
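The phonon analogy above can be sketched numerically (all parameters illustrative): on a one-dimensional chain the true dispersion bends over near the zone boundary, but a low-energy observer sees only the linear part and cannot infer the cutoff at k = pi/a:

```python
import math

# Toy 1-D chain with lattice constant a and sound velocity c (both set to 1
# for illustration). The true acoustic dispersion is
#   omega(k) = (2*c/a) * |sin(k*a/2)|,
# while a low-energy observer would extrapolate the linear law omega = c*k.

a = 1.0   # lattice constant (arbitrary units)
c = 1.0   # sound velocity

def omega_true(k):
    return (2 * c / a) * abs(math.sin(k * a / 2))

def omega_linear(k):
    return c * abs(k)

# At small k the two agree to high accuracy...
k_small = 0.01 * math.pi / a
rel_err_small = abs(omega_true(k_small) - omega_linear(k_small)) / omega_linear(k_small)

# ...but near the zone boundary the linear extrapolation badly overshoots.
k_edge = math.pi / a
rel_err_edge = abs(omega_true(k_edge) - omega_linear(k_edge)) / omega_linear(k_edge)
```

With only small-k data, the straight-line theory is indistinguishable from the lattice theory, which is precisely the sense in which the cutoff (the Debye scale in the analogy) is invisible at low energies.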

meopemuk said:
However, I do know that usual assumptions of the underlying space-time granularity are not needed to solve the problem of QFT divergences. A relativistic quantum theory of interacting fundamental particles can be made self-consistent and divergence-free without cutoffs and "effective field theory" arguments. This "dressed particle" formulation is elegant, economical, and practically convenient. Moreover it doesn't lead to paradoxes of particle "self-energies", "vacuum polarization", etc.

It is great to know there is such an option! But I equally don't see any problem with the "self-energies", "vacuum polarization" and other "bare" quantities which diverge in the UV limit of the conventional formulation. We just need to keep in mind that all we are doing is finding nice mathematical ways to do the extrapolation. Feeding in unobservable bare masses and couplings and matching the low-energy limit to experiment has worked perfectly well so far.

meopemuk said:
In my view, the most significant advantage of the "dressed particle" approach is that it has a finite well-defined Hamiltonian which allows one to calculate the time evolution of interacting particle systems in addition to the usual QFT S-matrix.
Unfortunately, such a time evolution is presently not accessible to high energy physics experiments.

Here I get confused, most probably due to my ignorance:
why cannot the time evolution of an arbitrary configuration of particles be computed in the conventional approach once the Lagrangian and the boundary conditions are specified?
Maybe you could point to the relevant section in the works you cited above where this disadvantage of usual QFT is discussed.

meopemuk said:
However, without doubt, experimentalists will learn how to do these things in the future. Then, we'll see that the difference between "effective field" and "dressed particles" philosophies is more than a matter of elegance and convenience.

Can't agree more!
 
  • #14
Slaviks said:
So it's a matter of mathematical taste and convenience. For me it seems very strange if the same continuous Poincare-group structure what we use for the empty space (aka physical vacuum) at accessible energies would go ad infinitum to arbitrary small scale.

Imagine I give you a piece of solid (let it be a dielectric with a band gap of a few eV), and let you do any experiments with it provided that you don't exceed energies of, say, 1 mK. All we can observe in this low-energy limit are acoustic phonons. They allow for a very elegant (Debye) field theory formulation. We would measure their dispersion curve, find that within experimental accuracy it is a perfect straight line, extrapolate to arbitrarily small scales (remember, we have no idea of the structure of the solid => no reason to invent ad hoc granularity). And then, of course, we'd have to invent a way to deal with "divergences" (we can't discover the real acoustic phonon cut-off -- in my imaginary problem the Debye temperature >> 1 mK).
This is not to say that we would never know the elementary cell composition or the electronic properties.

This is a one-to-one analogy to our situation with the physical vacuum we find ourselves in.
Our Planck scale is the Debye temperature in my (actually, Phil Anderson's) example.

Your solid state analogy could be relevant to the physics of fundamental particles only if the vacuum has a non-trivial small-scale structure (an analog of the crystal lattice in your example). I don't see any reason to believe in this assumption. It seems more economical to think that the vacuum is just empty space and that the Poincare group remains valid for all (arbitrarily small and arbitrarily large) values of the parameters (translation distances, rotation angles, and boost velocities).


Slaviks said:
It is great to know there is such an option! But I equally don't see any problem with the "self-energies", "vacuum polarization" and other "bare" quantities which diverge in the UV limit of the conventional formulation. We just need to keep in mind that all we are doing is finding nice mathematical ways to do the extrapolation. Feeding in unobservable bare masses and couplings and matching the low-energy limit to experiment has worked perfectly well so far.

Yes, the traditional theory based on "bare" particles works well for calculations of the S-matrix and related quantities, like scattering cross-sections and energies of bound states. If we don't pay attention to those ugly unphysical "self-energies" and "vacuum polarization", then we can call it a success.


Slaviks said:
Here I get confused, most probably due to my ignorance:
why cannot the time evolution of an arbitrary configuration of particles be computed in the conventional approach once the Lagrangian and the boundary conditions are specified?
Maybe you could point to the relevant section in the works you cited above where this disadvantage of usual QFT is discussed.

See section 9.1 "Troubles with renormalized QED" in http://www.arxiv.org/abs/physics/0504062

In order to compute the time evolution one should have a well-defined Hamiltonian. However, in the limit of infinite cutoff (QFT is guaranteed to yield an accurate S-matrix only in this limit) the Hamiltonian of QFT has infinite counterterms. So, it is useless for time evolution calculations. Moreover, this Hamiltonian usually has tri-linear interaction terms (e.g., in QED there are terms that annihilate one electron and create one electron and one photon). Calculating the time evolution of a single (bare) electron with such a Hamiltonian, we would obtain that at finite times the electron is not stable with respect to the process

electron -> electron + photon

However, nobody has seen a photon emission by a free electron.

Of course, the correct way to explain this paradox is to say that "bare" electrons are just fictitious particles and that real physical electrons don't have these strange properties. This is exactly the starting point of the "dressed particle" approach. It suggests abandoning the unphysical "bare particle" representation and expressing all quantities in terms of (annihilation and creation operators of) real physical "dressed" particles.

Eugene.
 
  • #15
down to beliefs

meopemuk said:
Your solid state analogy could be relevant to the physics of fundamental particles only if the vacuum has a non-trivial small-scale structure (an analog of the crystal lattice in your example). I don't see any reason to believe in this assumption. It seems more economical to think that the vacuum is just empty space and that the Poincare group remains valid for all (arbitrarily small and arbitrarily large) values of the parameters (translation distances, rotation angles, and boost velocities).

Well, we are in the domain of beliefs and personal taste, so your position is as legitimate as mine. I just want to explain why I find the assumption of a structureless vacuum implausible.

The point of my analogy between the real vacuum and an imaginary solid was precisely to demonstrate a situation where we have no reason whatsoever to believe that the vacuum has any "small-scale structure". Of course, in the universe of this hypothetical solid it is most "economical to think that vacuum is just empty space" and to build my "fundamental" Hamiltonian based on (Galilean) invariance principles. Anyone would ridicule somebody inventing, say, a NaCl lattice in that low-energy elastic world.

My reasons not to believe that the vacuum is a structureless physical entity ad infinitum lie precisely in the history of how the properties of matter have been unveiled. On the scale of 10^-10 m we discover atoms (the atomistic hypothesis was a real breakthrough in its time -- like the NaCl lattice against Debye's structureless continuum in my hypothetical world). At 10^-15 m we hit the nucleus, at 10^-18 m the fine structure of hadrons shows up, etc.

It would be truly surprising if we happen to live precisely at the end of this hierarchy. Indirect arguments lead some to expect novel structure at 10^-35 m (the Planck scale). We might not like / accept these arguments, but regardless of their validity, anything can happen at, say, 10^-100 or 10^(-10^10) instead of 10^-35. Just why would the hierarchy terminate precisely (plus/minus a few orders of magnitude) at the boundary of our experimental abilities? My belief is that the whole unlimited "unknown" lies down there.

Probably the core belief that differentiates us (correct me if I'm wrong) is whether one can uncover "the" fundamental QFT from arguments of pure logic and mathematical elegance. I believe one cannot.

meopemuk said:
If we don't pay attention to those ugly unphysical "self-energies" and "vacuum polarization", then we can call it a success.

Since I know my QED is just a mere (but excellently accurate and mathematically symmetric) low-energy approximation of the inaccessible "unknown", I don't worry whether the formal quantities you mention are physical or not. Just use them. If that leads to ugly mathematics, use a better equipped formalism. But I don't see any physical difference between your approach and the conventional Feynman technique (of course, unless you predict a measurable difference that a future experiment will be able to tell).

meopemuk said:
Of course, the correct way to explain this paradox is to say that "bare" electrons are just fictitious particles and that real physical electrons don't have these strange properties.

The "real physical electrons" are only the low-energy entities we are familiar with. I keep insisting there is no unambiguous way to extrapolate the properties of "real physical electrons" to high energies. I do agree with you that the "bare" electrons used in Feynman's approach should always be regarded as just one of the mathematically possible constructs, not an "ultimate answer" (because there can't be one; see my credo above).


Thanks for the discussion and references, BTW.
 
  • #16
Hawking radiation makes this view untenable though: clearly you will see virtual particles becoming real and outputting the characteristic thermal spectrum.

Without this point of view, you have all the old paradoxes of black holes.

Effective field theory is a paradigm not only b/c of QED, QCD and all the other standard model successes, but also b/c there are examples of nonperturbative solutions that were solved and compared exactly with EFT in condensed matter and solid state physics. We know it works and is a consistent point of view.

It's quite far beyond just S-matrix calculations anyway, as I've emphasized in the past. People study time-varying symmetry breaking all the time in QCD and lattice QCD.

I don't know much about the dressed particle formalism (it's somewhat dated), but clearly it can't make QED consistent arbitrarily close to the Planck scale. It hits a Landau pole long before that.
 
  • #17
Slaviks said:
It would be truly surprising if we happen to live precisely at the end of this hierarchy. ... My belief is that the whole unlimited "unknown" lies down there.

I think it would be great to learn that we, indeed, reached "the end of the hierarchy". Isn't it the dream of any theoretical physicist? In my opinion, there are some clues indicating that we are not far from the end. For example, it is known that two electrons are *exactly* identical. This suggests that there cannot be any deeper "substructure" inside electrons.

Slaviks said:
Probably the core belief that differentiates us (correct me if I'm wrong) is whether one can uncover "the" fundamental QFT from arguments of pure logic and mathematical elegance. I believe one cannot.

This is a very desirable and noble goal, which has not been reached yet. In my book I tried to go as far as possible along this way.

Slaviks said:
Since I know my QED is just a mere (but excellently accurate and mathematically symmetric) low-energy approximation of the inaccessible "unknown", I don't worry whether the formal quantities you mention are physical or not. Just use them. If that leads to ugly mathematics, use a better equipped formalism. But I don't see any physical difference between your approach and the conventional Feynman technique (of course, unless you predict a measurable difference that a future experiment will be able to tell).

It is my strong belief that theoretical physics must make all attempts to avoid using non-observable entities and quantities (like bare and virtual particles, ghosts, gauges, etc.). This is an ideal which we should try to achieve. Then non-trivial experimental predictions will follow.

Eugene.
 
  • #18
Haelfix said:
Hawking radiation makes this view untenable though: clearly you will see virtual particles becoming real and outputting the characteristic thermal spectrum.

Without this point of view, you have all the old paradoxes of black holes.

I am probably too conservative, but I think that we have too little (if any) reliable experimental data about black holes and their thermal spectrum. So, it is too early to use them as an argument.

Haelfix said:
Effective field theory is a paradigm not only b/c of QED, QCD and all the other standard model successes, but also b/c there are examples of nonperturbative solutions that were solved and compared exactly with EFT in condensed matter and solid state physics. We know it works and is a consistent point of view.

I don't have any objections against using effective field theories in condensed matter and solid state physics. But I am not convinced that this is a viable approach for fundamental interactions.

Haelfix said:
It's quite far beyond just S-matrix calculations anyway, as I've emphasized in the past. People study time-varying symmetry breaking all the time in QCD and lattice QCD.

According to quantum mechanics, the time evolution of a closed system is described by the time evolution operator

[tex] U(t) = \exp\left(-\frac{i}{\hbar}Ht\right) [/tex]

In QED (and QCD) we know the Hamiltonian H pretty well. We know that it contains infinite renormalization counterterms and that it allows us to calculate the S-matrix very accurately. Now, let us take this Hamiltonian and try to solve some simple time-dependent problems. Let's not talk about "time varying symmetry breaking", which, as far as I know, is not well-understood yet. Let us consider the simplest possible task of calculating the time evolution of a 1-electron state [itex] | \Psi(0) \rangle = a^{\dag} |0 \rangle[/itex]. My point is that the naive quantum-mechanical expression

[tex] | \Psi(t) \rangle = \exp\left(-\frac{i}{\hbar}Ht\right) | \Psi(0) \rangle [/tex]

just doesn't make sense in this case. For example, it would predict unphysical processes, like "electron -> electron + photon". Is there another (less naive) approach to this simple problem?
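For contrast, here is what this computation looks like when the Hamiltonian IS finite and well defined. The 2x2 matrix below is a hypothetical toy Hamiltonian, chosen only to show the mechanics of applying U(t); it has nothing to do with the actual QED Hamiltonian:

```python
import numpy as np

hbar = 1.0  # natural units

# Hypothetical Hermitian toy Hamiltonian on a two-dimensional state space.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def evolve(psi0, t):
    """Apply U(t) = exp(-i H t / hbar) via the eigendecomposition of H."""
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T
    return U @ psi0

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(psi0, t=2.0)
norm = abs(np.vdot(psi_t, psi_t))  # unitarity: the norm stays 1
```

For a finite Hermitian H this evolution is trivially well defined and unitary; Eugene's point is that the renormalized QFT Hamiltonian, with its infinite counterterms, provides no such well-defined operator to exponentiate.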

Eugene.
 
  • #19
"elementarity" of a particle is always scale-dependent!

meopemuk said:
I think it would be great to learn that we, indeed, reached "the end of the hierarchy".
It would be great (although very surprising) to learn we've reached the bottom, but the problem is that we can't tell!

meopemuk said:
Isn't it the dream of any theoretical physicist?

No, it is not. Unless one makes such a belief system a requirement to be called "theoretical physicist".

BTW, there are even more deep-rooted reasons to believe that "the dream is dead" -- see the discussions of Leonard Susskind's work, e.g. http://rabett.blogspot.com/2006_01_01_archive.html


meopemuk said:
In my opinion, there are some clues indicating that we are not far from the end. For example, it is known that two electrons are *exactly* identical. This suggests that there cannot be any deeper "substructure" inside electrons.

What are these clues?
What exactly do you mean by "it is known that two electrons are *exactly* identical"?

I don't see how an argument from particle identity can help you in extrapolating to arbitrarily small scales which are never probed. E.g., two hydrogen atoms in their respective ground states are exactly identical, and there is no way to prove their compositeness once the energy of allowed experiments is well below the hyperfine splitting (the distance to the first excited state) = 21 cm wavelength. The very fact that we can have BEC means that the atoms being condensed are indistinguishable, once we cool things down enough.

For me, a clear indication that QED alone has told us everything it can is the fact that in the most recent tests of the measured vs. calculated electron magnetic moment, the greatest uncertainty is in a tiny correction which comes from the polarization of the hadronic vacuum: at high enough orders (energies), you can't use QED alone.

meopemuk said:
It is my strong belief that theoretical physics must make all attempts to avoid using non-observable entities and quantities (like bare and virtual particles, ghosts, gauges, etc.). This is an ideal which we should try to achieve. Then non-trivial experimental predictions will follow.

If a more elegant formulation is possible, then it is always welcome! But as far as verifiable predictions are identical, the choice of philosophy remains a matter of taste. For some, virtual particles are ugly and horrible; for others they may be quite inspirational. Everyone has his own intuition, and experimental validity is the judge (that's what I like about physics).
 
  • #20
meopemuk said:
I would dare to say that space is "infinitely divisible" and that there is no new physics at the Planck scale. Of course, I have no way to prove that without appropriate experiments. However, I do know that usual assumptions of underlying space-time granularity are not needed to solve the problem of QFT divergences. A relativistic quantum theory of interacting fundamental particles can be made self-consistent and divergence-free without cutoffs and "effective field theory" arguments.

I may be wrong but it sounds as if you imply that an effective field theory approach implies the assumption of granularity of spacetime (I may have misinterpreted your words; if so I apologize). Saying that a theory is an eft does not imply that. It just implies that at some scale "new physics" arises. The nature of this new physics is quite arbitrary: it could be granularity of spacetime, but it could be a new force, inner structure to the particles (including string-like structure), etc. So in that sense it is quite general.

A good example is of course the Fermi model of the weak interaction, which can be used as an eft as long as energies are much below the weak scale (the W mass, say), including in loop diagrams. The non-renormalizability of the theory indicated the need for new physics which had nothing to do with granularity of spacetime.
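The power counting behind this example can be sketched in natural units (a standard textbook argument, not specific to any post in this thread):

```latex
% Fermi's four-fermion contact interaction and its coupling dimension:
\mathcal{L}_{\mathrm{Fermi}} \sim G_F \,(\bar{\psi}\,\Gamma\,\psi)(\bar{\psi}\,\Gamma\,\psi),
\qquad [G_F] = (\mathrm{mass})^{-2}.
% A coupling of negative mass dimension makes the theory non-renormalizable:
% each loop order requires new counterterm types, and amplitudes grow like
% G_F E^2, violating unitarity near E ~ G_F^{-1/2} ~ 300 GeV.
% In the electroweak theory the contact vertex is resolved by W exchange,
%   G_F/\sqrt{2} = g^2/(8 M_W^2),
% so the "new physics" was a heavy gauge boson, not spacetime granularity.
```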

What are the conditions under which the dressed particle approach may be applied? Could it have been applied to cure the infinities of the Fermi model? In that case it would have missed the fact that there *was* a new underlying theory: the gauge weak interaction.

Finally, let me emphasize that the eft approach is extremely useful not only as a way to think of new physics but also to describe known theories at low energies. For example, chiral perturbation theory, heavy quark eft, and NRQCD for the strong force at low energies (there is also an equivalent to NRQCD called NRQED... you may recognize my handle). So the concept of eft has proven extremely successful as a tool that works very well to describe known theories. It suggests to me that it is a useful tool for describing known theories relative to "new physics".


But I don't know anything about the dressed particle approach and it would certainly be interesting to learn. I guess my first question would be: what are the conditions in which it is applicable? Could it have "solved" the Fermi model? Could it be used to describe QCD at low energies?


Regards

Patrick
 
  • #21
Slaviks said:
It would be great (although very surprising) to learn we've reached the bottom, but the problem is that we can't tell!

BTW, there are even more deep-rooted reasons to believe that "the dream is dead" -- see the discussions of Leonard Susskind's work, e.g. http://rabett.blogspot.com/2006_01_01_archive.html

You would probably agree that this "race to the bottom" kept theoretical physics vibrant for so many centuries. Of course, you are right that this is the dream of *some* theoretical physicists, not *all* of them. Personally, I am not impressed by Susskind's logic, because of the huge amount of unfounded speculation associated with it.

Slaviks said:
What are these clues?
What exactly do you mean by "it is known that two electrons are *exactly* identical"?

I don't see how an argument from particle identity can help you in extrapolating to arbitrarily small scales which are never probed. E.g., two hydrogen atoms in their respective ground states are exactly identical, and there is no way to prove their compositeness once the energy of allowed experiments is well below the hyperfine splitting (the distance to the first excited state) = 21 cm wavelength. The very fact that we can have BEC means that the atoms being condensed are indistinguishable, once we cool things down enough.

Agreed. That's why I used the word "clue" instead of "proof".


Slaviks said:
If a more elegant formulation is possible, then it is always welcome! But as far as verifiable predictions are identical, the choice of philosophy remains a matter of taste. For some, virtual particles are ugly and horrible; for others they may be quite inspirational. Everyone has his own intuition, and experimental validity is the judge (that's what I like about physics).

Alternative formalisms have their own merits, even if they lead to exactly the same predictions. A good example is provided by the three formulations of quantum mechanics: Schroedinger's wave equation, Heisenberg's matrix mechanics, and Feynman's path integral.

However, I believe that the "dressed particle" approach is not merely a different mathematical formalism. For me the biggest surprise was to learn that this approach predicts instantaneous (not retarded) Coulomb and magnetic interactions between charged particles. It appears that this conclusion does not contradict the usual field-based S-matrix approach, because, as I tried to point out earlier, the latter approach can't tell much about the time evolution of interacting systems and, therefore, about the speed of propagation of interactions. Moreover, on closer inspection, it appears that the possibility of faster-than-light interactions does not contradict any experimental evidence either. There are quite a few recent experiments (e.g., photon tunneling) which can be interpreted from the viewpoint of instantaneous interactions. So, in my opinion, this debate (which, supposedly, was closed 100 years ago) is now wide open.

Eugene.
 
  • #22
nrqed said:
I may be wrong but it sounds as if you imply that an effective field theory approach implies the assumption of granularity of spacetime (I may have misinterpreted your words; if so I apologize). Saying that a theory is an eft does not imply that. It just implies that at some scale "new physics" arises. The nature of this new physics is quite arbitrary: it could be granularity of spacetime, but it could be a new force, inner structure to the particles (including string-like structure), etc. So in that sense it is quite general.

Thank you for the correction. I used space-time "granularity" or "discreteness" as an example resembling the situation in condensed matter physics, where the "new physics" is associated with the crystal lattice. You are right that in theories of fundamental particles there could be other sources of "new physics": new heavy particles, strings, etc. The beauty (again, this is my personal view, and others may not see it as beautiful at all) of the "dressed particle" idea is that it allows us to formulate QFT self-consistently without relying on yet unknown "new physics".

nrqed said:
A good example is of course the Fermi model of the weak interaction, which can be used as an eft as long as energies are much below the weak scale (the W mass, say), including in loop diagrams. The non-renormalizability of the theory indicated the need for new physics which had nothing to do with granularity of spacetime.

What are the conditions under which the dressed particle approach may be applied? Could it have been applied to cure the infinities of the Fermi model? In that case it would have missed the fact that there *was* a new underlying theory: the gauge weak interaction.

If a theory is non-renormalizable (i.e., the number of different counterterm types is infinite), then the "dressed particle" formalism is powerless to change that. It can make the Hamiltonian of such a theory finite, but the number of independent parameters will remain infinite.


nrqed said:
Finally, let me emphasize that the eft approach is extremely useful not only as a way to think of new physics but also to describe known theories at low energies. For example, chiral perturbation theory, heavy quark eft, and NRQCD for the strong force at low energies (there is also an equivalent to NRQCD called NRQED... you may recognize my handle). So the concept of eft has proven extremely successful as a tool that works very well to describe known theories. It suggests to me that it is a useful tool for describing known theories relative to "new physics".

I agree that EFT is a very valuable tool for deriving some approximations (e.g., low-energy) of fundamental theories. The "dressed particle" approach is not an approximation. Its major idea is to apply a unitary (dressing) transformation to the field-theoretical Hamiltonian of renormalized QFT so that certain "bad" terms are eliminated. Examples of such "bad" terms are trilinear "electron -> electron + photon" and "vacuum -> electron + positron + photon" interaction operators responsible for "self-energies" and "vacuum polarization". It is important that the "dressing" transformation is carefully chosen so that the original (accurate) S-matrix is not changed. It can also be proven that this transformation can be chosen so that it cancels out all infinite counterterms in the Hamiltonian.

As a result, we obtain a finite Hamiltonian in which particles interact via instantaneous potentials. This Hamiltonian produces the same S-matrix as the original field-theoretical Hamiltonian. However, the advantage is that you'll not need regularization and renormalization. All loop integrals will be finite. Moreover, you can easily form the time evolution operator with this Hamiltonian, and you can diagonalize this Hamiltonian to get energies and wave functions of bound states, as is normally done in non-relativistic quantum mechanics. (These procedures were quite troublesome with the original field-theoretic Hamiltonian.) The only significant difference with respect to ordinary quantum mechanics (where the number of particles was assumed fixed) is that interactions changing the number of particles are allowed as well, e.g., "2 electrons -> 2 electrons + photon".
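Schematically, the construction described in the last two paragraphs can be summarized as follows (a sketch only; Φ denotes a Hermitian generator of the dressing transformation, and a, b, c are electron, positron, and photon operators):

```latex
% Sketch of the dressing transformation applied to the renormalized
% field-theoretic Hamiltonian H (with Hermitian generator \Phi):
H_d = e^{i\Phi}\, H\, e^{-i\Phi}.
% \Phi is chosen so that:
%  (1) the S-matrix is unchanged:  S[H_d] = S[H];
%  (2) "bad" trilinear terms such as a^{\dagger} a c^{\dagger} and
%      a^{\dagger} b^{\dagger} c^{\dagger} are absent from H_d, so the
%      vacuum and one-particle states are eigenstates of H_d;
%  (3) the infinite counterterms cancel, leaving finite, instantaneous
%      interaction potentials between dressed particles.
```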

Eugene.
 

FAQ: What is renormalization and what does it do?

What is renormalization?

Renormalization is a technique used in theoretical physics to eliminate infinities that arise in calculations involving quantum field theory. It is a mathematical procedure that allows physicists to make predictions about the behavior of particles and fields at very small scales.

Why is renormalization necessary?

Renormalization is necessary because at the quantum level, particles and fields interact with each other in complex ways, leading to infinities in calculations. These infinities do not have physical meaning and must be removed in order to make accurate predictions.

What does renormalization do?

Renormalization removes infinities from calculations by adjusting the parameters of the theory. This allows the theory to accurately predict physical observables, such as particle masses and interaction strengths, at all energy scales.

How does renormalization work?

Renormalization works by dividing physical quantities into two parts: a "bare" part that includes the infinities, and a "renormalized" part that includes the physical contributions. By carefully choosing the parameters of the theory, the infinities in the bare quantities can be absorbed into the renormalized quantities, resulting in finite and physically meaningful predictions.
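For the electron mass, for example, this split takes the schematic form below (sign and normalization conventions vary between textbooks):

```latex
% Bare parameter = physical parameter + cutoff-dependent counterterm:
m_{\mathrm{bare}}(\Lambda) = m_{\mathrm{phys}} + \delta m(\Lambda).
% \delta m(\Lambda) diverges as the regularization cutoff \Lambda \to \infty,
% but it is chosen so that it cancels the divergent loop contributions,
% leaving the finite measured mass m_phys in all predictions.
% The electric charge is renormalized analogously.
```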

What are some applications of renormalization?

Renormalization has been successfully applied in various fields, including particle physics, condensed matter physics, and quantum field theory. It has been used to make predictions about the behavior of subatomic particles, the properties of materials, and the behavior of physical systems at extreme temperatures and energy scales.
