Wilsonian viewpoint and wave function reality

In summary: QFT is about expectations of fields at a point, and about correlation functions, i.e., expectations of suitably (time- or normally) ordered products of fields at several points.
  • #106
Tollendal said:
Suppose I take a pair of gloves and put each of them in a box. I keep one with me and give the other to you, who then takes a rocket to the Moon and opens it there. Instantly you know which glove remained with me - there is no "communication" between us, as the situation was defined the moment I closed the boxes. I imagine that's an example of the Universe being nonlocal, an empirical observation we must accept as a datum from reality.

Einstein didn't understand it. It seems to me his "spooky action at a distance" is nonsense!

I can't tell who wrote the above paragraph, but that example doesn't refute Einstein. Einstein was completely in sympathy with that point of view. He believed that the perfect correlations in EPR-type experiments could be explained by hidden variables, and that a measurement simply revealed the pre-existing (though unknown) value of those variables. Bell showed that he was wrong: EPR-type correlations cannot be explained by hidden variables unless they are nonlocal.
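
To make the distinction concrete, here is a minimal numerical sketch (my own illustration in Python, not anything from Bell's paper): it compares the CHSH combination built from the quantum singlet correlation ##E(a,b)=-\cos(a-b)## with the bound ##|S|\le 2## that every local hidden-variable model must obey.

[code]
import numpy as np

# Quantum prediction for the spin-singlet correlation of measurements
# along directions a and b (angles in radians): E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angles: 0 and pi/2 for Alice, pi/4 and 3*pi/4 for Bob.
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(f"CHSH value S = {S:.4f}")          # about -2.828, i.e. |S| = 2*sqrt(2)
print("local hidden-variable bound: |S| <= 2")
[/code]

Any assignment of pre-existing values ##\pm 1## to the four measurement settings gives ##|S|\le 2##, so the quantum value ##2\sqrt 2## is exactly what rules out the kind of local hidden variables Einstein hoped for.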
 
  • #107
A. Neumaier said:
The Hilbert space is standard Fock space with momentum states up to the cutoff energy. The Hamiltonian is derived from the action (written in momentum space and with the cut-off). Hence the dynamics is unitary. It is well-known that for QED the resulting renormalized perturbation series is (in the limit of infinite cutoff) independent of the details of the regularization, hence agrees with that of any of the established procedures, including dimensional renormalization (the best computational heuristics) and causal perturbation theory (the covariant and manifestly finite derivation of the perturbation series). Note that causal perturbation theory never claimed results different from the traditional ones, only a mathematically agreeable procedure to arrive at them.

But if there is a cut-off, there is no need for causal perturbation theory, since there are no UV divergences.
 
  • #108
atyy said:
But if there is a cut-off, there is no need for causal perturbation theory, since there are no UV divergences.
There is no need, but there is also no harm. It may be easier to see in this way how covariant causal perturbation theory is related to the Hilbert spaces whose ''limit'' (which is no longer a Fock space) would have to be constructed in a fully constructive version. Perhaps this is the way it will be done one day.
 
  • #109
A. Neumaier said:
There is no need, but there is also no harm. It may be easier to see in this way how covariant causal perturbation theory is related to the Hilbert spaces whose ''limit'' (which is no longer a Fock space) would have to be constructed in a fully constructive version. Perhaps this is the way it will be done one day.

I agree, but I still don't quite see why, if we use a cutoff, you prefer the momentum cutoff over the lattice. In both cases, on the Wilsonian viewpoint we need to show that the low-energy coarse-grained observables reproduce the traditional recipe plus experimentally negligible corrections. But I don't believe there is a rigorous version of Wilson's viewpoint for QED, whether one uses a momentum or a lattice cutoff.
 
  • #110
Dear stevendaryl,

I was trying to say that Einstein never accepted the nonlocality of the Universe, which my naive example was meant to illustrate. Of course, entanglement results from this nonlocality - a fact demonstrated daily.
 
  • #111
atyy said:
why, if we use a cutoff, you prefer the momentum cutoff over the lattice.
There are three reasons:

1. The value of the momentum cutoff doesn't make a difference to the computational complexity, as it is just a parameter in the final formulas (a generic one-loop integral illustrating this is sketched at the end of this post). On the other hand, changing the lattice size makes a tremendous difference to the work involved. As a consequence, one gets for QED numerical lattice results only to a few digits of accuracy, but numerical perturbative results to 10 digits of accuracy if desired, with a comparable amount of work.

2. In the case of the asymptotic series, it was proved rigorously in the 1950s that, order by order, the limit in which the cutoff is removed exists, and one obtains a covariant result, interpretable in terms of the traditional relativistic scattering variables and highly accurate when truncated at low order. On the other hand, there is not a single convergence result for the lattices, only numerical studies of poor accuracy and without any associated covariance.

3. Renormalizability proofs are associated with the perturbative setting only; in the lattice case there are no such proofs for relativistic QFTs, only plausibility arguments by analogy to solid state physics.

Thus the lattice case has far weaker theoretical and practical properties.

I would be interested in a lattice study of the anomalous magnetic moment of the electron - haven't seen any.
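
To illustrate point 1 above with a standard textbook integral (my own generic example, not a QED diagram): a logarithmically divergent one-loop Euclidean integral with momentum cutoff ##\Lambda## evaluates in closed form,
$$\int_{|k|\le\Lambda}\frac{d^4k}{(k^2+m^2)^2}=\pi^2\left[\ln\frac{\Lambda^2}{m^2}-1+O\!\left(\frac{m^2}{\Lambda^2}\right)\right],$$
so ##\Lambda## enters only as a parameter in the final formula, and after a single subtraction at a reference mass ##\mu## the cutoff dependence drops out up to ##O(1/\Lambda^2)## terms. Changing ##\Lambda## therefore costs nothing computationally, whereas refining a lattice multiplies the numerical work.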
 
  • #112
stevendaryl said:
I can't tell who wrote the above paragraph, but that example doesn't refute Einstein. Einstein was completely in sympathy with that point of view. He believed that the perfect correlations in EPR-type experiments could be explained by hidden variables, and that a measurement simply revealed the pre-existing (though unknown) value of those variables. Bell showed that he was wrong: EPR-type correlations cannot be explained by hidden variables unless they are nonlocal.
Indeed, that's my point! Einstein understood quantum theory completely. There's no doubt about that, and that's why I quoted his paper of 1948, which is much clearer than the now famous EPR work of 1935. There is indeed no action at a distance. There cannot be, by construction of local relativistic microcausal QFT. The "problem" for Einstein was not causality (which is, however, a problem in those flavors of interpretation that assume a collapse outside the validity of quantum dynamics, since that would lead to an instantaneous influence of a measurement process at far distances) but inseparability. There is indeed no violation of causality in a local measurement of the polarization of a single photon at place A of a polarization-entangled photon pair, with the second photon registered far away at location B. The correlation of the polarizations of the photons was there from the moment they were created, and nothing changes instantaneously at B when the polarization of the photon is measured at A. Of course, the single-photon polarizations were maximally random, but the correlation was there, and before the measurement the two photons are inseparable due to the entanglement.

The problem thus is not quantum theory itself, when you take the probabilistic meaning of the state seriously and refer only to descriptions of ensembles (minimal statistical interpretation). Whether or not you consider this a complete description of nature is a matter of personal taste (Einstein didn't, and that's why he looked for a unified classical field theory for the last 30 years of his life), but so far we have nothing better.
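
A compact way to state the "nothing changes at B" point (standard textbook quantum mechanics, added here only as an illustration): for the polarization singlet
$$|\psi\rangle=\tfrac{1}{\sqrt 2}\big(|H\rangle_A|V\rangle_B-|V\rangle_A|H\rangle_B\big),\qquad \rho_B=\mathrm{Tr}_A|\psi\rangle\langle\psi|=\tfrac12\mathbb{1},$$
and ##\rho_B## remains ##\tfrac12\mathbb{1}## after averaging over the outcomes of any polarization measurement at A. Only the correlations, visible once the two measurement records are compared, carry the entanglement; no local observation at B can reveal what, if anything, was done at A.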
 
  • #113
A. Neumaier said:
There are three reasons:

1. The value of the momentum cutoff doesn't make a difference to the computational complexity, as it is just a parameter in the final formulas. On the other hand, changing the lattice size makes a tremendous difference to the work involved. As a consequence, one gets for QED numerical lattice results only to a few digits of accuracy, but numerical perturbative results to 10 digits of accuracy if desired, with a comparable amount of work.

2. In the case of the asymptotic series, it was proved rigorously in the 1950s that, order by order, the limit in which the cutoff is removed exists, and one obtains a covariant result, interpretable in terms of the traditional relativistic scattering variables and highly accurate when truncated at low order. On the other hand, there is not a single convergence result for the lattices, only numerical studies of poor accuracy and without any associated covariance.

3. Renormalizability proofs are associated with the perturbative setting only; in the lattice case there are no such proofs for relativistic QFTs, only plausibility arguments by analogy to solid state physics.

Thus the lattice case has far weaker theoretical and practical properties.

I would be interested in a lattice study of the anomalous magnetic moment of the electron - haven't seen any.

OK, I see what you mean. But maybe one day the lattice approach can get there. Then the main difference is that you are much more hopeful that a UV-complete QED will be found, whereas I suspect it doesn't exist, so causal perturbation theory is a red herring.

I can kinda believe the momentum cutoff gives a well-defined quantum theory, but would like to know more before accepting that idea. Do you have a reference? Can gauge invariance really be preserved using a momentum cutoff?
 
  • #114
atyy said:
I can kinda believe the momentum cutoff gives a well-defined quantum theory, but would like to know more before accepting that idea. Do you have a reference?
It is the usual starting point - how can it need a reference?
atyy said:
Can gauge invariance really be preserved using a momentum cutoff?
Why do you insist on exact gauge invariance if you are willing to violate exact Poincare invariance? It will be valid, like the latter, in the limit of removing the cutoff.

Gauge invariance is tied to masslessness of the photon. But this is unprovable - though we have excellent, very tiny upper bounds on the mass. Note that QED with massive photons is still renormalizable, and indeed the infrared problems are often (though imperfectly) addressed by assuming during the calculations that the photon has a tiny mass, put to zero at the very end of the calculations.
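
For concreteness (standard QED bookkeeping, not specific to this discussion): the photon-mass regulator just mentioned amounts to using the propagator
$$D_{\mu\nu}(q)=\frac{-i\,g_{\mu\nu}}{q^2-\lambda^2+i\epsilon}$$
with a small mass ##\lambda## kept throughout the calculation (the ##q_\mu q_\nu## piece of a massive vector propagator does not contribute when the photon couples to conserved currents). Infrared-sensitive quantities then contain ##\ln\lambda## terms that cancel between virtual and real-emission contributions, and ##\lambda\to 0## is taken only in the infrared-safe combinations.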
 
  • #115
A. Neumaier said:
It is the usual starting point - how can it need a reference?

A. Neumaier said:
Why do you insist on exact gauge invariance if you are willing to violate exact Poincare invariance? It will be valid, like the latter, in the limit of removing the cutoff.

Gauge invariance is tied to masslessness of the photon. But this is unprovable - though we have excellent, very tiny upper bounds on the mass. Note that QED with massive photons is still renormalizable, and indeed the infrared problems are often (though imperfectly) addressed by assuming during the calculations that the photon has a tiny mass, put to zero at the very end of the calculations.

Well, I can provide a reference supporting the lattice approach:

http://arxiv.org/abs/hep-th/0603155
"But the lattice approach is very important also for more fundamental reasons: it is the only known constructive approach to a non-perturbative definition of gauge field theories, which are the basis of the Standard Model."

Edit: Google Erhard Seiler - he's associated with the Chopra Foundation?! Christof Koch too?
 
  • #116
atyy said:
the lattice approach is very important also for more fundamental reasons: it is the only known constructive approach
You should not forget that, after they discuss this approach in more detail, they say at the end of Section 6:
Fredenhagen, Rehren and Seiler said:
On the constructive side, the success with gauge theories in four dimensions has been much more modest, even though some impressive mathematical work towards control of the continuum limit has been done by Balaban

There are many approaches to relativistic quantum field theory in 4 dimensions, none so far leading to a construction. So the statement by Fredenhagen, Rehren, and Seiler that you quoted is an article of faith, not a fact.

The fact is that all very high accuracy predictions (which exist only in QED) are done starting from the covariant formulation.

It is also a fact that the gauge theories that have been constructed rigorously (in lower dimensions, e.g., QED_2 = the Schwinger model, and 2-dimensional Yang-Mills) were constructed through the covariant approach.

As you well know, Balaban abandoned his work trying to construct 4D Yang-Mills theory through a lattice approach, and nobody took it up again. The continuum limit, which should provide O(4) invariance, seems too difficult to tackle. Even then, one obtains a Euclidean field theory, not a Minkowski one, and needs the Osterwalder-Schrader theorem (which assumes exact O(4) invariance and reflection positivity) to get to the physical theory - and the resulting physical theory is exactly Poincare invariant. I am not aware of an approximate lattice version of the Osterwalder-Schrader theorem that would provide a physical, not quite covariant theory. All this shows that exact Poincare invariance is fundamental. It is even used to classify the particle content of a theory - on a finite lattice there is no S-matrix and no notion of (asymptotic) particle states.
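
To spell out the Euclidean-to-Minkowski step in the simplest case (the free scalar field, a standard fact, not anything specific to Yang-Mills): under Wick rotation the Euclidean propagator continues to the Feynman propagator,
$$\frac{1}{p_E^2+m^2}\;\longrightarrow\;\frac{i}{p^2-m^2+i\epsilon}.$$
The Osterwalder-Schrader theorem guarantees that the analogous continuation of the full set of Schwinger functions yields Wightman functions of a Poincare-invariant theory, but only under the hypotheses of exact Euclidean O(4) invariance and reflection positivity; a finite lattice satisfies neither exactly, which is precisely the gap described above.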
 
  • #117
A. Neumaier said:
You should not forget that, after they discuss this approach in more detail, they say at the end of Section 6: There are many approaches to relativistic quantum field theory in 4 dimensions, none so far leading to a construction. So the statement by Fredenhagen, Rehren, and Seiler that you quoted is an article of faith, not a fact.

The fact is that all very high accuracy predictions (which exist only in QED) are done starting from the covariant formulation.

It is also a fact that the gauge theories that have been constructed rigorously (in lower dimensions, e.g., QED_2 = the Schwinger model, and 2-dimensional Yang-Mills) were constructed through the covariant approach.

As you well know, Balaban abandoned his work trying to construct 4D Yang-Mills theory through a lattice approach, and nobody took it up again. The continuum limit, which should provide O(4) invariance, seems too difficult to tackle. Even then, one obtains a Euclidean field theory, not a Minkowski one, and needs the Osterwalder-Schrader theorem (which assumes exact O(4) invariance and reflection positivity) to get to the physical theory - and the resulting physical theory is exactly Poincare invariant. I am not aware of an approximate lattice version of the Osterwalder-Schrader theorem that would provide a physical, not quite covariant theory. All this shows that exact Poincare invariance is fundamental. It is even used to classify the particle content of a theory - on a finite lattice there is no S-matrix and no notion of (asymptotic) particle states.

Well, we probably have more a difference of taste than a technical disagreement. I should say that perhaps Balaban abandoned his work not because his approach is not good, but because when he moved house one summer, the movers lost all his notes on Yang-Mills ... :H (not sure if this is true; I heard this anecdote in a talk by Jaffe, and I might not be retelling it quite correctly)
 
  • #118
atyy said:
Well, we probably have more a difference of taste than a technical disagreement. I should say that perhaps Balaban abandoned his work not because his approach is not good, but because when he moved house one summer, the movers lost all his notes on Yang-Mills ... :H (not sure if this is true; I heard this anecdote in a talk by Jaffe, and I might not be retelling it quite correctly)
You gave here a link to the lecture where Jaffe mentioned this.

But the real point is that an approach is taken up by others if they find it promising enough, and this hasn't happened in the almost 30 years since the bulk of the work appeared. Recently Dimock wrote a few papers giving a streamlined exposition of part of Balaban's work - apparently nothing else happened. Having lost his notes could have been a good excuse for Balaban to quit working on the topic without having to admit that it seemed a dead end.

No one found the matter promising enough to put a PhD student on it, where losing notes is not a criterion, since a student would be expected to create new notes based on their understanding of what was published (which was a lot).

You would be of the right age to choose it as your PhD topic - may I challenge you?
 
  • #119
A. Neumaier said:
You would be of the right age to choose it as your PhD topic - may I challenge you?

Ha, ha, I am pleased to know I still seem young to some people. I am way past the usual age to do a PhD. Being a lattice fan, and more of a non-rigorist, if I were to make any progress with this hobby, I would try the chiral fermion problem :P

BTW, thanks for the pointer to the exposition by Dimock!
 
  • #120
atyy said:
I would try the chiral fermion problem
Since you are a fan of it, can you recommend a good and not too technical review of the chiral fermion problem, which explains in simple terms what exactly this problem is? Or even better, could you write (perhaps as an Insights entry on this forum) a non-technical review yourself?
 
  • #121
Demystifier said:
Since you are a fan of it, can you recommend a good and not too technical review of the chiral fermion problem, which explains in simple terms what exactly this problem is? Or even better, could you write (perhaps as an Insights entry on this forum) a non-technical review yourself?

How about http://latticeqcd.blogspot.sg/2005/12/nielsen-ninomiya-theorem.html ?
 
  • #122
atyy said:
How about http://latticeqcd.blogspot.sg/2005/12/nielsen-ninomiya-theorem.html ?
Nielsen is a frequent guest at the institute where I am working, and sometimes I even share an office with him. But I had the impression that the fermion doubling problem (which the Nielsen-Ninomiya theorem is about) is not the same thing as the chiral fermion problem. Are you saying that those are different names for the same problem?
 
  • #123
Demystifier said:
Nielsen is a frequent guest at the institute where I am working, and sometimes I even share an office with him. But I had the impression that the fermion doubling problem (which the Nielsen-Ninomiya theorem is about) is not the same thing as the chiral fermion problem. Are you saying that those are different names for the same problem?

Well, they are closely related. The introduction of http://arxiv.org/abs/1307.7480 describes the relationship between the chiral fermion problem and the Nielsen-Ninomiya theorem.
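
The origin of the doubling is easy to exhibit (a standard one-line computation, independent of the references above): the naive symmetric lattice difference turns the free massless Dirac operator in momentum space into
$$D(p)=\frac{i}{a}\sum_{\mu}\gamma_\mu\sin(p_\mu a),$$
which vanishes not only at ##p_\mu=0## but also at ##p_\mu=\pi/a##, so in ##d## dimensions the propagator has ##2^d## poles (16 in four dimensions) instead of one. The Nielsen-Ninomiya theorem then says, roughly, that the doublers cannot be removed without giving up one of its assumptions (locality, translation invariance, hermiticity, or exact chiral symmetry), which is where the chiral fermion problem starts.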
 
  • #125
Demystifier said:
I don't think it's true. I think all so-called "rigorous" interacting QFTs need some sort of regularization of UV divergences, not much different from a non-zero lattice spacing.
There are several rigorously constructed QFTs that do not have an ultraviolet cutoff, e.g. ##\phi^{4}_{3}##.
 
  • #126
Demystifier said:
@atyy In the literature I have seen the claim that the problem can be solved by the Ginsparg-Wilson approach. Here are some links:
http://latticeqcd.blogspot.hr/2005/12/ginsparg-wilson-relation_21.html
http://arxiv.org/abs/hep-lat/9805015
What is your opinion on that approach?

It does work, if one can solve the relation, and in many particular cases it can be used. However, some claims about it were probably believed wrongly, e.g.:

"The full strength of the Ginsparg-Wilson relation was realized by Luscher who discovered that it suggests a natural definition of lattice chiral symmetry which reduces to the usual one in the continuum limit. Based on this insight, Luscher achieved a spectacular breakthrough: the non-perturbative construction of lattice chiral gauge theories" http://www.itp.uni-hannover.de/saalburg/Lectures/wiese.pdf

But fzero pointed out to me here on PF that this was probably not correct. https://www.physicsforums.com/threads/status-of-lattice-standard-model.823860/
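
For reference, the relation under discussion and Luscher's modified chiral transformation are (standard formulas, written here in one common convention)
$$\gamma_5 D+D\gamma_5=a\,D\gamma_5 D,\qquad \delta\psi=\gamma_5\Big(1-\tfrac a2 D\Big)\psi,\quad \delta\bar\psi=\bar\psi\Big(1-\tfrac a2 D\Big)\gamma_5,$$
so the lattice action ##\bar\psi D\psi## is exactly invariant at finite lattice spacing, and the ordinary chiral symmetry is recovered as ##a\to 0##.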
 
  • #127
DarMM said:
There are several rigorously constructed QFTs that do not have an ultraviolet cutoff, e.g. ##\phi^{4}_{3}##.
Yes, but I meant in 4 dimensions.
 
  • #128
atyy said:
For it to be an asymptotic series, the thing that it is approximating must exist. In other words, the theory must be constructed. Does a construction of QED exist?
Let's use the Wilsonian concept. That means we replace, in a first step, QED or the whole SM by a lattice theory, with some lattice spacing h and a finite size L with periodic boundary conditions, so that there are no IR infinities either.

This theory is well-defined in every sense. Now you can define, for the computations in this well-defined theory, any sort of perturbation theory. Whether the resulting series is convergent, or only an asymptotic series, or whatever - this question is well-posed, because the theory which is approximated by this perturbation theory is well-defined and exists. (And, in particular, at least up to this point this holds for gravity and other non-renormalizable stuff too.)

Then you can consider the question of how this well-defined and hopefully well-approximated theory depends on the lattice approximation: what changes if you decrease h and increase L. The theories with different h and L will be related to each other by some renormalization. But if the relation between the lattice theory and its perturbation series is well understood for one choice of h and L, it will probably be the same for other choices of h and L - ok, maybe with a different radius of convergence or so, if there is such a thing.

But what about the limit? The limit is irrelevant, because to recover everything observable it is sufficient to consider the well-defined lattice theory for small enough h and large enough L. Then, in particular, all terms violating relativistic symmetry (which is not exact on the lattice) will be too small to be observable. And the theory with even smaller h will be indistinguishable from it.

(There is another point - to obtain a really well-defined theory it may be necessary to fix the gauge freedom and for gravity the freedom of choice of coordinates. With the Lorenz gauge and harmonic coordinates we have nice and simple candidates for this, so that this is unproblematic too. What has been said about relativistic symmetry holds for gauge and diff symmetry accordingly - for a fine enough grid it will be effectively recovered.)
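
A trivial numerical illustration of why "small enough h" suffices (my own sketch, not from the thread): the free-field lattice dispersion ##\frac2h\sin(ph/2)## differs from the continuum momentum ##p## only by ##O\big((ph)^2\big)## corrections, which become unobservably small once h is far below the inverse momenta actually probed.

[code]
import numpy as np

# Free-field lattice momentum: p_lat = (2/h) * sin(p*h/2); continuum: p.
def lattice_momentum(p, h):
    return 2.0 / h * np.sin(p * h / 2.0)

p = 1.0  # momentum scale actually probed (arbitrary units)
for h in [1.0, 0.1, 0.01, 0.001]:
    rel_dev = abs(lattice_momentum(p, h) - p) / p
    print(f"h = {h:6.3f}   relative deviation = {rel_dev:.2e}")
# The deviation is about (p*h)^2/24, shrinking quadratically as the grid is refined.
[/code]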
 
  • #129
A. Neumaier said:
...the results obtained by truncating the covariant perturbation series at 5 loops (needed for the 10 digits) are provably equivalent (within computational accuracy) to those of a rigorously defined nearly relativistic QFT.
This is far better than what one has for the lattice approximations, which are completely uncontrolled.

I think one has to distinguish conceptual from pragmatic questions. It is one question how to compute QED predictions with maximal precision and minimal resources, and a completely different one to understand the concepts, and to understand how all this makes sense and can, at least in principle, be defined in a rigorous way.

For efficient computations we can easily use, say, dimensional regularization, even if a [itex]4-\varepsilon[/itex]-dimensional space is meaningless nonsense. For understanding the concepts, a lattice regularization is much better. It makes complete sense, in all details. It may be horribly inefficient for real computations, but so what? If we want to understand how a theory can be defined in a rigorous and meaningful way, a lattice regularization is the preferable way - it makes sense. In particular, even if the limit is not well-defined, a lattice with a small enough h is well defined. Which is an important difference from QFT in a [itex]4-\varepsilon[/itex]-dimensional space (which I have chosen here to illustrate the point).
 
  • #130
Ilja said:
a ##4-\varepsilon##-dimensional space is meaningless nonsense
But a ##(4-\varepsilon)##-dimensional integral of a (sufficiently well-behaved) covariant function of momenta is a completely well-defined object. QFT makes use only of the latter.
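
The prototype of such an object is the standard Euclidean master integral
$$\int\frac{d^dk}{(2\pi)^d}\,\frac{1}{(k^2+\Delta)^n}=\frac{1}{(4\pi)^{d/2}}\,\frac{\Gamma\!\left(n-\tfrac d2\right)}{\Gamma(n)}\,\Delta^{d/2-n},$$
whose right-hand side is an analytic function of ##d##. Setting ##d=4-\varepsilon## is therefore a perfectly well-defined operation on the integral, even though no "##(4-\varepsilon)##-dimensional spacetime" is ever invoked.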
 
  • #131
Demystifier said:
Since you are a fan of it, can you recommend a good and not too technical review of the chiral fermion problem, which explains in simple terms what exactly this problem is? Or even better, could you write (perhaps as an Insights entry on this forum) a non-technical review yourself?
What I can tell you is how I see the problem.

First, there is the purely technical fermion doubling problem. If you put Dirac fermions naively on the lattice, you obtain not one but 16 Dirac fermions. In a more sophisticated version, this can be reduced to 4. What I propose here is a further reduction to 2, which is reached by using the old, not manifestly covariant form of the Dirac equation, [itex]i\partial_t \psi + i\alpha^i\partial_i \psi + m \beta \psi = 0[/itex] or so modulo signs, and then using the standard staggered fermion technique but only in the spatial discretization. Reducing it to two fermions is sufficient, because in the SM all fermions appear only in electroweak doublets. The problem of putting a single Weyl fermion alone on the lattice may be a pragmatic problem, but it is irrelevant for the conceptual understanding of the SM.

Then there is the problem of creating a gauge-invariant lattice model. Here I say: forget about it, use a non-gauge-invariant lattice model. Weak gauge fields are massive anyway, thus not gauge-invariant. You need gauge invariance for renormalizability? No - think about what Wilson tells us about this. In the exact lattice theory you have renormalizable as well as non-renormalizable terms, and in the large-distance limit the non-renormalizable ones decrease, the more horrible ones much faster than the almost renormalizable ones. So, conceptually, we do not have to care.

Or, in other words, once we start with a lattice theory which is not gauge-invariant, we have to expect that in the large-distance limit gauge invariance will not be recovered, and that the lowest-order non-gauge-invariant terms survive. That will be the mass term. Fine - this is what we need for electroweak theory anyway.
 
  • #132
A. Neumaier said:
But a ##(4-\varepsilon)##-dimensional integral of a (sufficiently well-behaved) covariant function of momenta is a completely well-defined object. QFT makes use only of the latter.
Fine, but this is not the point. The integral is what you need to compute the results.

What you need for a conceptual understanding, or for a rigorous definition, is either a rigorous construction of the theory without cutoff, or at least a rigorous construction of a meaningful theory with cutoff. And a quantum field theory in a ##(4-\varepsilon)##-dimensional space is something I have never seen.
 
  • #133
Ilja said:
What I can tell you is how I see the problem.

First, there is the purely technical fermion doubling problem. If you put Dirac fermions naively on the lattice, you obtain not one but 16 Dirac fermions. In a more sophisticated version, this can be reduced to 4. What I propose here is a further reduction to 2, which is reached by using the old, not manifestly covariant form of the Dirac equation, [itex]i\partial_t \psi + i\alpha^i\partial_i \psi + m \beta \psi = 0[/itex] or so modulo signs, and then using the standard staggered fermion technique but only in the spatial discretization. Reducing it to two fermions is sufficient, because in the SM all fermions appear only in electroweak doublets. The problem of putting a single Weyl fermion alone on the lattice may be a pragmatic problem, but it is irrelevant for the conceptual understanding of the SM.

Then there is the problem of creating a gauge-invariant lattice model. Here I say: forget about it, use a non-gauge-invariant lattice model. Weak gauge fields are massive anyway, thus not gauge-invariant. You need gauge invariance for renormalizability? No - think about what Wilson tells us about this. In the exact lattice theory you have renormalizable as well as non-renormalizable terms, and in the large-distance limit the non-renormalizable ones decrease, the more horrible ones much faster than the almost renormalizable ones. So, conceptually, we do not have to care.

Or, in other words, once we start with a lattice theory which is not gauge-invariant, we have to expect that in the large-distance limit gauge invariance will not be recovered, and that the lowest-order non-gauge-invariant terms survive. That will be the mass term. Fine - this is what we need for electroweak theory anyway.
Is such an approach compatible with zero mass of the photon?
 
  • #134
A. Neumaier said:
But a ##(4-\varepsilon)##-dimensional integral of a (sufficiently well-behaved) covariant function of momenta is a completely well-defined object. QFT makes use only of the latter.
True, but you don't need dim. reg. to define renormalization schemes, and it's indeed pretty unintuitive from a physics point of view. For me the most intuitive scheme is BPHZ, i.e., you define your renormalization scheme via conditions on the divergent proper vertex functions (a finite set for a Dyson-renormalizable theory and an infinite set for a non-renormalizable effective theory, where you need more and more low-energy constants the higher you go in the momentum expansion). There you introduce a renormalization scale, be it a (space-like) momentum if you need a mass-independent renormalization scheme, or a momentum-subtraction scheme where some mass in your physics sets the scale. The Wilsonian point of view comes in via the various renormalization-group equations.

You can also introduce a "cutoff function", implementing the Wilsonian point of view directly. That's a technique that is now well established in connection with the functional RG methods (Wetterich equation), which are used, e.g., to understand the QCD phase diagram (usually employing effective models like various chiral quark-meson models with or without Polyakov loops) with some success.
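
For completeness, the functional RG flow equation referred to here (the Wetterich equation, in its standard form) is
$$\partial_t\Gamma_k=\frac12\,\mathrm{Tr}\!\left[\big(\Gamma_k^{(2)}+R_k\big)^{-1}\partial_t R_k\right],\qquad t=\ln k,$$
where ##R_k## is the infrared regulator function implementing the Wilsonian coarse graining and ##\Gamma_k^{(2)}## is the second functional derivative of the effective average action; the choice of ##R_k## is exactly the "cutoff function" mentioned above.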
 
  • #135
vanhees71 said:
For me the most intuitive scheme is BPHZ
The problem here is that one loses manifest gauge invariance, hence has the problem of Gribov copies. Or is there a way around that?
 
  • #136
You have the problem of Gribov copies independently of the chosen regularization scheme. Also, I don't understand where you see a problem with gauge invariance. The WTIs ensure that your counterterms are in accordance with gauge invariance. This was the great step forward in 1971, when 't Hooft published the results of his PhD thesis (under the supervision of Veltman).
 
  • #137
vanhees71 said:
True, but you don't need dim. reg. to define renormalization schemes, and it's indeed pretty unintuitive from a physics point of view.
Yes, I used this as an example to illustrate that it is useful to distinguish things which allow one to make fast, efficient, accurate computations from things which allow one to improve the conceptual understanding or to prove the consistency of the theory.

vanhees71 said:
You have the problem of Gribov copies independently of the chosen regularization scheme.
The question is if Gribov copies are a problem or not.

If you think that gauge-equivalent gauge fields are really identical states, and your gauge-fixing condition is purely technical, then Gribov copies are clearly a problem: they mean that the same state is counted several times.
If you, instead, consider gauge-equivalent fields as physically different states (even if you have no way to distinguish them by observation), and the gauge condition as a physical equation for these degrees of freedom (so that you use a nice-looking equation like the Lorenz condition), then there is no problem at all with Gribov copies.

Demystifier said:
Is such an approach compatible with zero mass of the photon?
Note that the EM field is a vector field, not chiral. So implementing an exact gauge symmetry on the lattice is not a problem.
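
As a reminder of why this is unproblematic for a vector field (standard Wilson lattice gauge theory, not specific to my model): with compact link variables the gauge symmetry is exact at any lattice spacing,
$$U_\mu(x)=e^{iaA_\mu(x)}\;\to\;g(x)\,U_\mu(x)\,g(x+\hat\mu)^{-1},\qquad S=\beta\sum_{\rm plaquettes}\big[1-\mathrm{Re}\,U_{\mu\nu}(x)\big],$$
where ##U_{\mu\nu}## is the product of the four links around a plaquette and ##g(x)=e^{i\alpha(x)}## is a local U(1) transformation. The plaquette action is exactly invariant, so an explicit photon mass term is forbidden by the lattice symmetry itself.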

The only question would be whether this is compatible with the approach to fermion doubling, where I have the electroweak doublet instead of a single Dirac fermion. In http://arxiv.org/abs/0908.0591, where the idea comes from, I have an exact gauge invariance for U(3), but with the condition that all parts of an electroweak doublet have the same charge. While the group is fine (the SU(3) plus [itex]U(1)_{em}[/itex] part is in reality U(3) too), its representation is not. But the EM field can be understood as a deformation of the diagonal U(1) symmetry. And for such a deformation the exact gauge symmetry remains, even if somewhat deformed, so one can hope that the deformed symmetry remains an exact symmetry. Then the field would remain massless.

But this is, of course, a point which needs better understanding.
 
  • #138
Ilja said:
Yes, I used this as an example to illustrate that it is useful to distinguish things which allow one to make fast, efficient, accurate computations from things which allow one to improve the conceptual understanding or to prove the consistency of the theory.

The question is if Gribov copies are a problem or not.

If you think that gauge-equivalent gauge fields are really identical states, and your gauge-fixing condition is purely technical, then Gribov copies are clearly a problem: they mean that the same state is counted several times.
If you, instead, consider gauge-equivalent fields as physically different states (even if you have no way to distinguish them by observation), and the gauge condition as a physical equation for these degrees of freedom (so that you use a nice-looking equation like the Lorenz condition), then there is no problem at all with Gribov copies.

Note that the EM field is a vector field, not chiral. So implementing an exact gauge symmetry on the lattice is not a problem.

The only question would be whether this is compatible with the approach to fermion doubling, where I have the electroweak doublet instead of a single Dirac fermion. In http://arxiv.org/abs/0908.0591, where the idea comes from, I have an exact gauge invariance for U(3), but with the condition that all parts of an electroweak doublet have the same charge. While the group is fine (the SU(3) plus [itex]U(1)_{em}[/itex] part is in reality U(3) too), its representation is not. But the EM field can be understood as a deformation of the diagonal U(1) symmetry. And for such a deformation the exact gauge symmetry remains, even if somewhat deformed, so one can hope that the deformed symmetry remains an exact symmetry. Then the field would remain massless.

But this is, of course, a point which needs better understanding.

Not going to pretend I could follow it, but is the theory/proposal in the referenced paper significantly impacted by the discovery of the Higgs boson? A paragraph there at the end seemed to suggest there might not be a Higgs sector. But it also seemed to suggest that mass due to symmetry breaking was emergent (sorry if I am misrepresenting that significantly) - not necessarily that the Higgs stuff was wrong as a practical theory.
 
  • #139
Jimster41 said:
Not going to pretend I could follow it, but is the theory/proposal in the referenced paper significantly impacted by the discovery of the Higgs boson? A paragraph there at the end seemed to suggest there might not be a Higgs sector. But it also seemed to suggest that mass due to symmetry breaking was emergent (sorry if I am misrepresenting that significantly) - not necessarily that the Higgs stuff was wrong as a practical theory.
The Higgs sector is simply not considered in the model, at least not yet.

But let's note that there are candidates for the role of the Higgs. Last but not least, some degrees of freedom of the Higgs field are simply transformed, by the symmetry breaking, into degrees of freedom of the massive gauge bosons. These degrees of freedom exist in my model from the start, with no need to obtain them in such a subtle way - and they exist for the gauge-symmetric gauge fields too. And for the U(1) gauge fields (the EM field, and the two additional U(1) fields which are supposed to be suppressed because of vacuum neutrality and anomaly) they are simply scalar fields. Then, for each electroweak doublet, we have a massive scalar field. So there are a lot of scalar fields. How far they resemble the Higgs, or the scalar particle which has been observed, would be a question to be studied.
 
  • #140
Ilja said:
The Higgs sector is simply not considered in the model, at least not yet.

But let's note that there are candidates for the role of the Higgs. Last but not least, some degrees of freedom of the Higgs field are simply transformed, by the symmetry breaking, into degrees of freedom of the massive gauge bosons. These degrees of freedom exist in my model from the start, with no need to obtain them in such a subtle way - and they exist for the gauge-symmetric gauge fields too. And for the U(1) gauge fields (the EM field, and the two additional U(1) fields which are supposed to be suppressed because of vacuum neutrality and anomaly) they are simply scalar fields. Then, for each electroweak doublet, we have a massive scalar field. So there are a lot of scalar fields. How far they resemble the Higgs, or the scalar particle which has been observed, would be a question to be studied.

I am trying to follow (heuristically) your second, more general paper on GLET. For what it's worth, the exercise has helped me imagine what Smolin means when he talks about "Pure Relationism" in his recent book. As you may know, he is all over absolute time in that book, and he talks about Shape Dynamics and Causal Sets as relevant theories. Do you see them as such?

Heuristically, to me at least, your ether seems like a Causal Set or LQG tetrahedral "foam" (or whatever quantization machine) indexed by absolute time, so that each chunk is unique and in some sense "located". I certainly hope I'm not missing the point entirely. Smolin has this thing about similarity-distance that seems really appealing in this context.
 
