I would like to continue discussing SF (i.e. path integral) models of LQG based on chapter 3 of
http://arxiv.org/abs/1009.4475
Critical Overview of Loops and Foams
Authors: Sergei Alexandrov, Philippe Roche
(Submitted on 22 Sep 2010)
Abstract: This is a review of the present status of loop and spin foam approaches to quantization of four-dimensional general relativity. It aims at raising various issues which seem to challenge some of the methods and the results often taken as granted in these domains. A particular emphasis is given to the issue of diffeomorphism and local Lorentz symmetries at the quantum level and to the discussion of new spin foam models. We also describe modifications of these two approaches which may overcome their problems and speculate on other promising research directions.
In chapter 3 Alexandrov and Roche discuss spin foam models, which may suffer from related issues that show up in different forms but can be traced back to a common origin (secondary second class constraints, the missing Dirac quantization scheme, …).
In a nutshell, LQG is supposed to give a Hamiltonian picture of quantum gravity based on the use of specific variables (connections), whereas spin foam models are a certain type of discretized path-integral approach to the quantization. A priori these are different approaches using different methods and leading to different results. Of course, in the best case their predictions should coincide and they should be just equivalent quantizations. But at present such an agreement has not been achieved.
First, Alexandrov and Roche describe the general perspective on which the SF constructions are based:
Most of the constructions of SF models of 4-dimensional general relativity heavily rely on the Plebanski formulation and translate the classical relation between BF theory and gravity directly to the quantum level. In other words they all employ the following strategy:
1. discretize the classical theory putting it on a simplicial complex;
2. quantize the topological BF part of the discretized theory;
3. impose the simplicity constraints at the quantum level.
Thus, instead of quantizing the complicated system obtained after imposing the constraints, they first quantize and then constrain. This strategy is behind all the progress achieved in the construction of 4-dimensional SF models. However, at the same time, this is a very dangerous strategy and, as we believe, it is the reason why most of these models cannot be satisfactory models of quantum gravity. As we will show, it is inconsistent with the Dirac rules of quantization and is somewhat misleading.
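To make steps 1-3 concrete, here is the schematic continuum starting point (my own summary of the standard Plebanski setup, with conventions that may differ from the paper's equations): one quantizes the topological BF action and then tries to impose, at the quantum level, the constraints that reduce it to gravity,

$$S_{BF}[B,\omega] = \int B_{IJ}\wedge F^{IJ}[\omega], \qquad S_{Pl}[B,\omega,\phi] = S_{BF} + \frac{1}{2}\int \phi_{IJKL}\, B^{IJ}\wedge B^{KL},$$

where the Lagrange multiplier $\phi_{IJKL}$ (subject to a tracelessness condition) enforces the quadratic simplicity constraints. Their non-degenerate solutions are $B^{IJ} = \pm\ast(e^I\wedge e^J)$ (the gravitational sector) and $B^{IJ} = \pm\, e^I\wedge e^J$ (the topological sector), which is where the sector ambiguity mentioned below comes from.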
They introduce the models [21] (EPRL) and [20] (FK)
Although the models of [21] (EPRL) and [20] (FK) are in general different from each other and obtained using different ideas, they have several common inputs. First, they both rely on the idea allowing to effectively linearize the simplicity constraints
They discuss BF theory, why it fails to provide a quantization of GR, and how it serves as the basis for the new models; they show that some shortcomings of the BF-based construction are not resolved in [21] (EPRL) and [20] (FK)
But in fact (3.19) is stronger because it excludes the topological sector of Plebanski formulation. Thus, the linearization solves simultaneously the problem of the BC model that it does not distinguish between the gravitational and the topological sectors. The new constraint leads directly to the sector we are interested in. ... Second, both models suggest to quantize an extension of Plebanski formulation which includes the Immirzi parameter. This results in crucial deviations from the results of the BC model already at the level of imposing the diagonal simplicity constraint.
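For readers without the paper at hand, the "linearization" can be stated schematically as follows (my notation, not necessarily matching their (3.19); the exact placement of the Hodge dual depends on conventions): the quadratic simplicity constraints are replaced, tetrahedron by tetrahedron, by a condition linear in the bivectors,

$$\epsilon_{IJKL}\, B_f^{IJ}\, B_{f'}^{KL} \approx 0 \quad\longrightarrow\quad N_I\, B_f^{IJ} \approx 0 \quad \text{for all faces } f \text{ of a tetrahedron with normal } N_I.$$

With the Hodge dual placed appropriately, the linear version only admits bivectors of the gravitational type $B \sim \ast(e\wedge e)$, while the quadratic one also admits the topological solution $B \sim e\wedge e$; this is the sense in which (3.19) is "stronger".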
The construction of [21] (EPRL) and [20] (FK) contains some 'quantization ambiguities', as expected from chapter 2.
Then, as has been noted already in [131], the constraint (3.21) does not have solutions except some trivial ones. However, appealing to the ordering ambiguity, the authors of the model [21] adjusted the operator in (3.21) so that the constraint does have solutions.
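To give an idea of what this restriction looks like: in the Euclidean EPRL model the (suitably ordered) diagonal simplicity constraint ends up selecting Spin(4) = SU(2)×SU(2) representations whose self-dual and anti-self-dual spins are locked to a single SU(2) label. Schematically, for γ < 1 (this is the standard result quoted in the SF literature, not a formula from the paper; conventions vary),

$$(j^+, j^-) \;=\; \Bigl(\tfrac{1+\gamma}{2}\,k,\ \tfrac{1-\gamma}{2}\,k\Bigr), \qquad k \in \tfrac{1}{2}\mathbb{N},$$

which only makes sense for rational values of the Immirzi parameter. This foreshadows the γ-quantization issue of the toy model discussed below.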
In the following they stress deviations from the well-established standard quantization procedures that are used to quantize and modify BF theory, leading to the new models.
The main suggestion of this model, [EPRL] which distinguishes it from the BC model and was first realized in [128], is that the simplicity constraints should be imposed only in a weak sense that is instead of imposing the constraints on the allowed states [annihilating states] one only requires [vanishing of their expectation values sandwiched between physical states!] This is justified by noting that after identification of the bivectors Bf with generators of the gauge group or a combination thereof (3.20), the simplicity constraints become non-commutative and imposing them strongly leads to inconsistencies, as is well known for any second class constraints. This does not concern the diagonal simplicity constraint which lies in the center of the constraint algebra and therefore can still be imposed strongly leading to the restriction (3.21) on the allowed representations
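To spell out the strong/weak distinction in standard quantization language (not a quote from the paper):

$$\text{strong:}\quad \hat C_a\,|\psi\rangle = 0 \quad \forall\, a,\ \psi \in \mathcal H_{\rm phys}, \qquad \text{weak:}\quad \langle\psi'|\,\hat C_a\,|\psi\rangle = 0 \quad \forall\, \psi,\psi' \in \mathcal H_{\rm phys}.$$

For genuinely second class constraints the strong version is inconsistent: $\hat C_a|\psi\rangle = \hat C_b|\psi\rangle = 0$ would force $\langle\psi|[\hat C_a,\hat C_b]|\psi\rangle = 0$, while the commutator quantizes a bracket that does not vanish on the constraint surface. This is the standard argument behind the weak imposition used in EPRL.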
[in FK] one starts again from the partition function for BF theory (3.12) where the simplicity constraints should be implemented as restrictions on the representation labels. However, before doing that one makes a refinement of the decomposition (3.12) using the coherent state techniques developed in [19] Here we concentrate on the Euclidean case. Although the Lorentzian case was also considered in [20], the corresponding construction is much more complicated and even the Immirzi parameter has not been incorporated in it so far.
They doubt that the SFs and canonical LQG agree on the kinematical Hilbert space; the 'proofs' are mostly based on unphysical, i.e. non-gauge-invariant, quantities subject to quantization anomalies.
Due to this fact it was claimed that the boundary states of the new models are the ordinary SU(2) spin networks [128] and it is now widely believed that there is a perfect agreement between the new SF models and LQG at the kinematical level [10]. However, it is easy to see that this is just not true. First of all, the states induced on the boundary of a spin foam are not the ordinary spin networks, but projected ones considered in section 2.2.2
However, on one hand, there is no any fundamental reason to perform such a projection. And on the other hand, this relation shows that the kinematical states of LQG and the boundary states of the EPRL model are indeed physically different and the agreement between their labels is purely formal. The claimed agreement is often justified by comparison of the spectra of geometric operators, area [21] and volume [138]. By appropriately adjusting the ordering, the spectra in spin foams and LQG can be made coinciding. However, the operators, which are actually evaluated in these papers, are not the standard ones, but shifted by constraints.
But in the EPRL approach the boundary states are supposed to be integrated over these normals so that the operator corresponding to (3.39) is simply not defined! On top of that, even if one drops the integration over xt, as we argue below, and gets a well defined operator on a modified state space, we see that the quantization of the geometric operators is not unique. To get the coincidence with LQG requires ad hoc choice of the ordering and of the classical expression to be quantized.
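For context, the LQG spectrum that these comparisons aim to reproduce is the standard area spectrum (textbook LQG, quoted here for orientation; simple transversal punctures only):

$$\hat A_S\,|\{j_i\}\rangle \;=\; 8\pi\gamma\,\ell_P^2 \sum_i \sqrt{j_i(j_i+1)}\;|\{j_i\}\rangle,$$

with the sum over the spin network edges puncturing the surface $S$. The point being made is that recovering this same spectrum on the SF side is not automatic: it requires a particular operator ordering and a particular choice of the classical expression to be quantized.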
After their discussion of EPRL and FK they focus on the imposition of constraints. To understand these issues one has to be familiar with the Dirac quantization procedure (second class constraints, Dirac brackets instead of Poisson brackets)!
All models presented in the previous section have been derived following the strategy of section 3.1.2: first quantize and then constrain. Now we want to reconsider the resulting constructions taking lessons from the canonical approach. As we showed in the previous section, the spin foam quantization originates in Plebanski formulation of general relativity. The canonical analysis of this formulation has been carried out in [139, 140, 141] and turns out to be essentially equivalent to the Lorentz covariant canonical formulation of the Hilbert–Palatini action [18] once $\varepsilon^{ijk}B_{jk}$ is identified with $\tilde P^{i}$. The Immirzi parameter is also easily included and appears in the same way. Thus, the canonical structure to be quantized can be borrowed from section 2.2.1. In particular, the role of the simplicity constraints is played by the constraints (2.30).
In section 3.4.1 Alexandrov and Roche show, based on a toy model, how the two quantization strategies
1) à la Dirac and
2) 'first quantize using Poisson brackets - then constrain'
may lead to models which look equivalent at first sight but are definitely inequivalent if one carefully inspects their details.
I do not go into the details here, but I expect that everybody aiming to understand Alexandrov's reasoning will carefully follow his arguments and will understand Dirac's constraint quantization procedure in detail! The analysis of secondary second class constraints modifying the symplectic structure on phase space, i.e. replacing Poisson brackets with Dirac brackets, is key to the whole chapter 3!
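For reference, this is the standard construction being referred to (Dirac's lectures; nothing specific to the paper): given a set of second class constraints $\chi_a$ with invertible matrix $C_{ab} = \{\chi_a,\chi_b\}$, one replaces the Poisson bracket by the Dirac bracket

$$\{A, B\}_D \;=\; \{A, B\} \;-\; \{A, \chi_a\}\,(C^{-1})^{ab}\,\{\chi_b, B\},$$

which satisfies $\{A,\chi_a\}_D = 0$ for any $A$, so the second class constraints can be imposed strongly. It is this modified symplectic structure, not the naive Poisson one, that should be carried over into the quantum theory.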
They compare the two quantization strategies in the toy model:
Comparing the results of the two approaches, one observes a drastic discrepancy: gamma is either quantized or not. In the former approach for a non-rational gamma the quantization simply does not exist, whereas there are no any obstructions in the latter ... Taking into account that the second approach represents actually a result of several possible methods, which all follow the standard quantization rules, it is clear that it is the second quantization that is more favorable. The quantization of gamma does not seem to have any physical reason behind itself. In fact, it is easy to trace out where a mistake has been done in the first approach: it takes too seriously the symplectic structure given by the Poisson brackets, whereas it is the Dirac bracket that describes the symplectic structure which has a physical relevance. It is easy to see that this leads to inconsistency of the first quantization. For example, the Hamiltonian H, ... is simply not defined on the subspace spanned by linear combinations of (3.54) ...
The next topic is relevant as a response to Rovelli's "we don't need a Hamiltonian":
However, one can take a “minimalistic” point of view and do not require the existence of a well defined Hamiltonian on the constrained state space. (We thank Carlo Rovelli for discussion of this possibility.) After all, spin foam models are designed to compute transition amplitudes. Therefore, we are really interested not in the Hamiltonian itself, but in its matrix elements and the latter can be defined by using the Hamiltonian and the scalar product on the original unconstrained space
However, this expectation turns out to be wrong. As is clear from the derivations in [20, 132] and has been explicitly demonstrated in a simple cosmological model [142], the vertex amplitude actually appears as a matrix element of the evolution operator. This requires to consider expectation values of higher powers of the Hamiltonian for which the property (3.62) does not hold anymore. This leads to deviations of results obtained by the spin foam strategy from those which are based on the well grounded canonical quantization
Let us summarize what we learned studying the simple model (3.42):
• The strategy based on “first quantize, then constrain” leads to a canonical quantization which is internally inconsistent as the Hamiltonian operator is ill-defined on the constrained state space.
• The origin of the problem as well as the quantization of the parameter gamma can be traced back to the use of the Poisson symplectic structure which does not take into account the presence of the second class constraints.
• Besides, this approach completely ignores the presence of the secondary second class constraint which is crucial for suppressing the fluctuations of non-dynamical variables and producing the right vertex amplitude in discretized theory.
• An attempt to interpret the results of such quantization only as an approach to compute transition amplitudes using (unphysical) Hamiltonian (3.61) does not work as they turn out to be incompatible with the results of the standard (path integral or canonical) quantization. As a result, the transition amplitudes computed in this way do not have any consistent canonical representation.
In our opinion, all these problems are just manifestations of the fact that the rules of the Dirac quantization cannot be avoided. This is the only correct way to proceed leading to a consistent quantum theory.
The example presented above explicitly reveals the main problems of the new SF models and their origin. All these models start from the symplectic structure provided by the simple BF theory, which ignores constraints of general relativity. In particular, they all use the usual identification of the B-field with the generators of the gauge group, or its gamma-dependent version (3.20), when the constraints are translated into quantum level. But this identification does not agree with the symplectic structure of general relativity
Alexandrov and Roche pay attention to the simplicity constraint, which is key to the SF formalism and which seems to be its weakest point. It is this constraint which is required to turn BF into GR - and it is this constraint which is quantized in the wrong way!
In fact, the special care which is paid to the diagonal simplicity, when it is imposed strongly whereas the cross simplicity constraints are imposed only weakly, results from another common confusion. As we explained in section 3.3.1, this is done because the diagonal simplicity is in the center of the non-commutative constraint algebra of all simplicity constraints and thus interpreted as first class. But this classification would be correct only if there were no other constraints to be considered. It completely ignores the presence of the secondary constraints. The latter do not commute with all simplicity constraints and in particular with the diagonal simplicity. As a result, all these constraints are second class and should be quantized via the Dirac bracket.

Given all this, we expect that the new SF models suffer from inconsistencies which we met in the previous subsection. They can be summarized by saying that the statistical models defined by the SF amplitudes do not have a consistent canonical quantization picture, where the vertex amplitude appears as a matrix element of an evolution operator determined by a well defined Hamiltonian. In particular, there is no reason to expect that the new models may be in agreement with LQG or any of its modifications. Note that this incompatibility with the canonical quantization manifests itself in the issues involving the Hamiltonian. This is why one does not see it in a semiclassical analysis or in any investigation restricting to the kinematical level.

It should be stressed that this criticism is not just about face or edge amplitudes, which depend on details of the path integral measure but can be found in principle from consistency on the gluing of simplices [135]. In fact, ignoring the secondary second class constraints has much more profound implications and, what is the most important, it affects the vertex amplitude (see the next subsection). The standard prescription that the vertex is obtained by evaluating the boundary state of a 4-simplex on a flat connection is a direct consequence of the employed strategy, which starts by quantizing the topological BF theory, and should be modified to take into account all constraints of general relativity.

Of course, the SF models are still well defined as statistical models. But, in our opinion, this is not enough to consider them as candidates for quantum gravity. A good candidate should allow a quantum mechanical representation in terms of wave functions, Hamiltonian, etc., especially if one hopes to find a viable loop quantization of gravity. The point we are making here is that the SF models derived using the strategy “first quantize and then constrain” do not satisfy this requirement.
As I stressed a couple of times, the Lagrangian PI is a derived object which cannot be seen as a fundamental entity. The main problem is the construction of the measure taking into account the second class constraints. It is this step where the SF construction seems to fail up to now; it is this step where some weak points from the BF model show up again.
The SF representation of quantum gravity can be seen as an outcome of a Lagrangian path integral for discretized Plebanski formulation of general relativity. However, the Lagrangian or a configuration space path integral is a derived concept. A more fundamental one is the path integral over the phase space. Its measure can be rigorously derived and in particular it contains δ-functions of all second class constraints. On the other hand, the Lagrangian path integral can be obtained from the canonical one only under certain very special circumstances.
Therefore, ..., if one wants to calculate transition functions as one does in SF models, one must use the canonical path integral. The main consequence of this conclusion is that, as we mentioned above, the secondary second class constraints should appear explicitly in the integration measure. We believe that this is an important point missed by the present-day SF models.
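The "canonical path integral" measure they refer to is, schematically, the standard phase space one for a system with second class constraints $\Phi_a$ (first class constraints and gauge fixing suppressed; again my own summary, not a quote):

$$Z \;=\; \int \mathcal D p\,\mathcal D q\;\prod_a \delta(\Phi_a)\;\bigl|\det\{\Phi_a,\Phi_b\}\bigr|^{1/2}\; e^{\,\frac{i}{\hbar}\int dt\,(p\dot q - H)} .$$

Integrating out the momenta reproduces the naive Lagrangian (configuration space) path integral only when the δ-functions and the determinant factor happen to combine into a trivial measure; generically they do not, which is why the authors insist that the secondary second class constraints must show up explicitly in the SF measure.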
Alexandrov and Roche stress that all these defects of the new models may be invisible in the semiclassical sector. That means that reproducing GR in the IR is not sufficient as a test of successful quantization. This is trivially true, as there are several inequivalent models having the same IR limit (this applies to any quantum theory, not only to GR).
One might argue that since the secondary constraints appear as stability conditions for the primary ones and the latter are imposed in the path integral at every moment of time, the secondary constraints should follow automatically and need not to be imposed explicitly. For example, in SF models based on Plebanski formulation one could expect that all set of simplicity constraints ensures the simplicity of bi-vectors at all times and thus it is enough. However, this argument works only at the quasiclassical level where the equations of motion are satisfied. Off-shell the quantum fluctuations of degrees of freedom fixed classically by the secondary constraints are not suppressed if the constraints are not inserted in the path integral.
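The "stability conditions" mentioned here are the usual step of Dirac's algorithm (standard material, quoted for completeness): demanding that a primary constraint be preserved in time,

$$\dot\Phi_a \;=\; \{\Phi_a, H_T\} \;\approx\; 0,$$

either fixes a Lagrange multiplier or generates a secondary constraint. In the Plebanski/covariant setting the simplicity constraints play the role of the $\Phi_a$, and the conditions guaranteeing their preservation are exactly the secondary second class constraints; classically they hold on shell, but off shell nothing suppresses the corresponding quantum fluctuations unless the constraints are inserted into the measure.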
It is also not seen at the quasiclassical level since the missing constraint is obtained on mass shell anyway. Therefore, it is not in contradiction with the fact that the semiclassical asymptotics of the EPRL and FK amplitudes reproduce the Regge action [147, 148, 149], i.e., the correct classical limit. The problem is that the secondary constraints are not imposed strongly at the quantum level and as a result one might expect the appearance of additional quantum degrees of freedom in the new models