Infinities in QFT (and physics in general)

In summary: By the same reasoning you must reject real numbers as unphysical, and work with rationals only. Then not even the diagonal of a square is physical...
  • #36
A. Neumaier said:
I haven't seen any conceptual definition of Bohmian mechanics without using real numbers to define what everything means.
Sure, for the conceptual definition of Bohmian mechanics, and indeed for most of physics, real numbers are great. But for conceptual definitions it is also great in QFT to write things like ##\langle 0|\phi(x)\phi(y)|0\rangle## or ##\int{\cal D}\phi \, e^{iS[\phi]}##. Yet you know that in QFT it is very tricky to give those expressions a precise meaning. One approach is to take the continuum seriously and deal with functional analysis. Another is to not take the continuum seriously. Both approaches are legitimate; both have advantages and disadvantages. You prefer the former, I prefer the latter. You seem to argue that the latter is wrong a priori; I argue that it is not.
 
  • #37
martinbn said:
You, on the other hand, seem to exclude the possibility of the negation of your stance.
Actually, I don't. I appreciate any progress in understanding any kind of infinities, be it in axiomatic QFT, functional analysis or set theory. But my gut feeling (which, of course, can be wrong) is that approaches which do not insist that infinity should be taken very seriously have a better chance to succeed.
 
  • #38
Demystifier said:
Actually, I don't. I appreciate any progress in understanding any kind of infinities, be it in axiomatic QFT, functional analysis or set theory. But my gut feeling (which, of course, can be wrong) is that approaches which do not insist that infinity should be taken very seriously have a better chance to succeed.
It is not just infinities. It questions the real numbers too. No infinities there.
 
  • #39
martinbn said:
It is not just infinities. It questions the real numbers too. No infinities there.
Sets with infinite cardinality are also viewed as infinities in my dictionary.
 
  • #40
Demystifier said:
Sets with infinite cardinality are also viewed as infinities in my dictionary.
But you had no objections to the rational numbers, nor the integers. They are also infinite sets!
 
  • #41
martinbn said:
But you had no objections to the rational numbers, nor the integers.
Actually I did, but not explicitly. When I was talking about computations on a computer, I took for granted that a finite computer can represent only a finite set of different numbers.
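As a minimal illustration (my sketch, using Python's IEEE-754 doubles; math.nextafter needs Python 3.9+): between any two floats there are only finitely many representable values, so familiar real-number identities can fail.

```python
import math

# IEEE-754 double precision: the representable numbers form a finite set,
# so there is a smallest step from any value to the next one.
x = 1.0
next_x = math.nextafter(x, 2.0)   # closest representable number above 1.0
print(next_x - x)                 # ~2.2e-16: nothing exists in between

# Consequence: identities that hold for real numbers can fail here.
print(0.1 + 0.2 == 0.3)           # False
```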
 
  • #43
Demystifier said:
One approach is to take the continuum seriously and deal with functional analysis. Another is to not take the continuum seriously. Both approaches are legitimate; both have advantages and disadvantages. You prefer the former, I prefer the latter. You seem to argue that the latter is wrong a priori; I argue that it is not.
The latter does not give an unambiguous logical definition, since ambiguity is introduced through the multitude of possible approximations. Thus it is conceptually fuzzy.
Demystifier said:
Sets with infinite cardinality are also viewed as infinities in my dictionary.
So you already have problems with texts, which form an infinite set.
 
  • #44
A. Neumaier said:
Thus it is conceptually fuzzy.
Fuzzy is better than inefficient.

A. Neumaier said:
So you already have problems with texts, which form an infinite set.
The set of all texts that were ever written or will ever be written by humans and human-made machines is finite. With non-written texts I have no problems at all.
 
  • #45
Demystifier said:
Fuzzy is better than inefficient.
But clarity is much more efficient than fuzziness. This is demonstrated well by the fact that all quantum physics textbooks base their concepts on real numbers and infinite-dimensional Hilbert spaces, and no physicist (not even you yourself) defines concepts using your fuzzy, strictly finitist point of view.
Demystifier said:
The set of all texts that were ever written or will ever be written by humans and human-made machines is finite.
This assumes that the universe has a finite lifetime, which is questionable.
 
  • #46
A. Neumaier said:
But clarity is much more efficient than fuzziness. This is demonstrated well by the fact that all quantum physics textbooks base their concepts on real numbers and infinite-dimensional Hilbert spaces, and no physicist (not even you yourself) defines concepts using your fuzzy, strictly finitist point of view.
I agree with that. When real numbers, infinite-dimensional Hilbert spaces, etc. give rise to clarity, I am happy to use them. But my point is that sometimes they don't seem to give rise to clarity, an example being the attempts to make interacting QFT in 3+1 dimensions rigorous. In such cases different strategies towards clarity may be more efficient.

Or, to relate all this to the topic of this thread: if Bell non-locality is well understood in QM but not so well understood in QFT, then my practical philosophy is to reformulate QFT so that it looks more like QM. It seems that the easiest way to do this is to replace some (not all!) QFT infinities with appropriate finite objects. A lattice may be one very specific way to do it, but not necessarily the best way.
 
  • #47
Demystifier said:
If Bell non-locality is well understood in QM but not so well understood in QFT, then my practical philosophy is to reformulate QFT so that it looks more like QM.
This just sweeps the difficulties under the carpet instead of solving them - giving the deceptive illusion of understanding when there is none.

If physicists had always done this we'd not have the conceptual highlights of Hamiltonian mechanics or quantum optics.
 
  • #48
A. Neumaier said:
This just sweeps the difficulties under the carpet instead of solving them - giving the deceptive illusion of understanding when there is none.
Maybe! But in general, how does one distinguish the illusion of understanding from true understanding?

A. Neumaier said:
If physicists had always done this we'd not have the conceptual highlights of Hamiltonian mechanics or quantum optics.
Could you be more specific about those examples? What were the specific problems that could have been swept under the carpet, but were not, in the development of Hamiltonian mechanics and quantum optics?
 
  • Like
Likes vanhees71
  • #49
Demystifier said:
how does one distinguish the illusion of understanding from true understanding?
The latter is achieved if almost everyone working on the topic agrees with you.

Changing the problem is never true understanding.
 
  • Skeptical
Likes Demystifier
  • #50
Wasn't renormalization in QFT a sort of a change of problem? Didn't it produce a lot of understanding, e.g. in the computation of g-2? Isn't lattice QCD also a change of problem?
  • Like
Likes bhobba and vanhees71
  • #51
A. Neumaier said:
Changing the problem is never true understanding.
And there is more. Didn't Einstein change the problem of the aether when he postulated that there is no aether? Didn't Planck change the problem of the UV catastrophe when he postulated discreteness in the form ##E=nh\nu## out of nowhere?
 
  • Like
Likes haushofer and vanhees71
  • #52
Demystifier said:
Wasn't renormalization in QFT a sort of a change of problem? Didn't it produce a lot of understanding, e.g. in the computation of g-2? Isn't lattice QCD also a change of problem?
No. It changed the status of computations that lead to manifest, meaningless infinities from ill-defined to perturbatively well-defined, through a better understanding of what in the original foundations was meaningful and what was an artifact of naive manipulation.
Demystifier said:
Isn't lattice QCD also a change of problem?
No; it is a method for approximately solving QCD. To do it well one needs the continuum limit; a single lattice calculation is never trusted. See, e.g., point 4 of the recent post by André W-L in the context of the muon g-2 result figuring in the Nature article you cited. Thus lattices are useful for reliable physics predictions only in the context of the continuum theory.
 
  • #53
Demystifier said:
And there is more. Didn't Einstein change the problem of the aether when he postulated that there is no aether? Didn't Planck change the problem of the UV catastrophe when he postulated discreteness in the form ##E=nh\nu## out of nowhere?
No. They replaced a questionable hypothesis by a much stronger one, while Nikolic-physics removes from physics all strong concepts (which need infinity).
 
  • #54
This conversation seems very unproductive to me. Of course, in the end, calculations are made on a computer with finite precision, but physics is about understanding the laws of nature, and that's hardly possible without continuum mathematics. Just think about what a great insight special relativity was. A simple gedanken experiment based on intuitive principles leads to the length contraction formula ##L' = L\sqrt{1-\frac{v^2}{c^2}}##. However, if there were no square roots in physics, there would be no sane way to argue in favor of special relativity, and we would still not understand the origin of the Lorentz transformations. Continuum mathematics leads to simple and compelling insights about nature. There are no analogous gedanken experiments without continuum mathematics, and thus, while we could write down formulas and put them on the computer, we would be unable to understand their origin. They would just be far less convincing.
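For reference, here is one standard version of that gedanken experiment (a transverse light clock), needing only the invariance of ##c## and the Pythagorean theorem. A pulse bounces between mirrors a distance ##d## apart, so at rest one tick takes ##\Delta t_0 = 2d/c##. In a frame where the clock moves at speed ##v## the pulse travels along a hypotenuse,
$$\left(\frac{c\,\Delta t}{2}\right)^2 = d^2 + \left(\frac{v\,\Delta t}{2}\right)^2 \quad\Rightarrow\quad \Delta t = \frac{\Delta t_0}{\sqrt{1-\frac{v^2}{c^2}}},$$
and requiring both frames to agree on their relative speed then yields the quoted ##L' = L\sqrt{1-\frac{v^2}{c^2}}##.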
 
  • Like
Likes A. Neumaier
  • #55
Nullstein said:
but physics is about understanding the laws of nature, and that's hardly possible without continuum mathematics.
I certainly agree that continuum mathematics helps a lot in understanding physics. But at the same time, in many cases it also creates serious difficulties: https://arxiv.org/abs/1609.01421
 
  • #56
That's true, but the appearance of difficulties doesn't refute continuum mathematics. We can only make things as simple as possible, but not simpler. If difficulties arise, they must certainly be fixed, but just dumping continuum mathematics whenever singularities appear is quite excessive and introduces more problems than it solves. And it is often the case that singularities arise exactly because we didn't worry enough about continuum mathematics. For instance, the infinities in QFT are due to using perturbation theory in circumstances where perturbation theory is not applicable and the perturbation series don't converge (see e.g. Dyson's argument). By paying more attention to correctly taking the continuum limit, the singularities can be resolved and we obtain perfectly reasonable theories (so far at least in 1+1 and 2+1 dimensions). And while the problem is still difficult in 3+1 dimensions, the beauty and simplicity of the results in lower dimensions strongly suggest that this is the correct way to go. These considerations even led to the insight that there is new physics in the non-perturbative regime that is invisible to perturbation theory. So the initial difficulties really seem to be an argument in favor of continuum mathematics rather than against it.
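A toy illustration of such a divergent series (my example, not from the thread): the Euler series ##\sum_n (-1)^n n!\,g^n## is the formal expansion of ##\int_0^\infty e^{-t}/(1+gt)\,dt##, yet has zero radius of convergence; its partial sums first approach the true value and then blow up.

```python
import math

g = 0.1  # "coupling constant" of the toy model

def partial_sum(order):
    """Partial sum of the divergent Euler series sum_n (-1)^n n! g^n."""
    return sum((-1)**n * math.factorial(n) * g**n for n in range(order + 1))

for order in (2, 5, 10, 15, 25, 40):
    print(order, partial_sum(order))
# The sums settle near the exact integral (~0.9156 for g = 0.1) around
# order ~1/g, then diverge: adding more orders only makes things worse.
```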
 
  • Like
Likes A. Neumaier
  • #57
There is a sense in which it is clearly true that we don't need the continuum. Any observations we make, as long as they are of finite precision (which they always are), can be simulated by a sufficiently complicated deterministic finite automaton.

But conceptually, if the universe really is discrete, there is the puzzle as to why it appears continuous. Assuming that it's not a computer program that was specifically designed to fool us...

So I personally would not find it satisfying to see an argument that things could be discrete. I would want an argument that there is a plausible (non-ad-hoc) discrete model that provably gives rise to the illusion of continuous spacetime.
 
  • Like
Likes martinbn and A. Neumaier
  • #58
stevendaryl said:
But conceptually, if the universe really is discrete, there is the puzzle as to why it appears continuous.
Because the minimal distance is too small to be seen with our present experimental techniques. For instance, many theories suggest that discreteness might start to appear at the Planck scale.
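For scale, the standard estimate (not tied to any particular theory) is the Planck length,
$$\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\ \mathrm{m},$$
roughly twenty orders of magnitude below the proton radius and far beyond current experimental resolution.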
 
  • #59
Demystifier said:
Because the minimal distance is too small to be seen with our present experimental techniques. For instance, many theories suggest that discreteness might start to appear at the Planck scale.

That's a slightly different issue. If the discrete model is the discrete counterpart to 4D spacetime, then at sufficiently large length scales, it might look continuous. But what is the reason that a discrete model would happen to look like the discrete counterpart to 4D spacetime, other than if you are trying to simulate the latter with the former?
 
  • #60
The central issue is this: all the plausible, intuitive and beautiful arguments that have been successfully used to derive our best physical theories, such as symmetry principles, and that really make the difference between physics and stamp collecting, rely heavily on continuum mathematics.

Sure, we could discretize our theories, but we would lose all the deep insights we had gained, and it would turn physics into mere stamp collecting. Unless we can come up with even more plausible, intuitive and beautiful arguments for discrete theories, we shouldn't go that route.
 
  • Like
Likes weirdoguy and martinbn
  • #61
gentzen said:
But what I had in mind was more related to a paradox in the interpretation of probability than to an attack on using real numbers to describe reality. The paradox is how mathematics forces us to give precise values for probabilities, even for events which cannot be repeated arbitrarily often (not even in principle).
It turns out that I had tried to clarify what I had in mind before, shortly after I found lemur's comment ("QM is Nature's way of avoiding having to deal with an infinite number of bits") expressing a similar thought. I just reread the main article and realized that lemur's comment was an ingenious defense of it (against arbenboba's criticism).
I want to add that I do appreciate Gisin's later work making the connection to intuitionism. But even though I have had contact with people working on dependent type theory, category theory, and all that higher-order stuff, it never crossed my mind that there might be a connection to the riddle of how to avoid accidental infinite information content.

Demystifier said:
martinbn said:
But you had no objections to the rational numbers, nor the integers.
Actually I did, but not explicitly. When I was talking about computations on a computer, I took for granted that a finite computer can represent only a finite set of different numbers.
Just as some physicists (Sidney Coleman) guess that what we really don't understand is classicality, Joel David Hamkins guesses that what we really don't understand is finiteness. Timothy Chow wondered: "It still strikes me as difficult to construct a convincing heuristic argument for this point of view." I tried to give an intuitive explanation, highlighting the importance of using a prefix-free code as part of the encoding of a natural number (with infinite strings of 0s and 1s as the starting point). But nobody seems to appreciate simple explanations. So I later wrote a long and convoluted post that very few will ever read (or even understand) in its entirety, with the click-bait title "Defining a natural number as a finite string of digits is circular". As expected, it was significantly more convincing, as witnessed by reactions like: "I'd always taken it as a given that, if you don't have a pre-existing understanding of what's meant by a 'finite positive integer,' then you can't even get started in doing any kind of math without getting trapped in an infinite regress."
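To make the prefix-free idea concrete, here is a minimal sketch (my illustration, not gentzen's construction) of Elias gamma coding: a number's binary expansion is preceded by a unary encoding of its length, so no codeword is a prefix of another and numbers can be read off a bit stream without separators.

```python
def elias_gamma(n: int) -> str:
    """Prefix-free code: (length - 1) zeros, then the binary expansion of n."""
    assert n >= 1
    b = bin(n)[2:]                     # binary expansion, always starts with '1'
    return "0" * (len(b) - 1) + b

def decode_stream(bits: str) -> list:
    """Recover a sequence of numbers from a bit stream, no separators needed."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":          # the unary prefix gives the length
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out

stream = elias_gamma(5) + elias_gamma(1) + elias_gamma(42)
print(stream)                 # one unambiguous bit string
print(decode_stream(stream))  # [5, 1, 42]
```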
 
  • #62
Nullstein said:
The central issue is this: all the plausible, intuitive and beautiful arguments that have been successfully used to derive our best physical theories, such as symmetry principles, and that really make the difference between physics and stamp collecting, rely heavily on continuum mathematics.

Sure, we could discretize our theories, but we would lose all the deep insights we had gained, and it would turn physics into mere stamp collecting. Unless we can come up with even more plausible, intuitive and beautiful arguments for discrete theories, we shouldn't go that route.
One can have the continuum arise from the discrete, and symmetries can be emergent.

https://ocw.mit.edu/courses/physics...pring-2014/lecture-notes/MIT8_334S14_Lec1.pdf
"The averaged variables appropriate to these length and time scales are no longer the discrete set of particle degrees of freedom, but slowly varying continuous fields. For example, the velocity field that appears in the Navier–Stokes equations is quite distinct from the velocities of the individual particles in the fluid. Hence the productive method for the study of collective behavior in interacting systems is the Statistical Mechanics of Fields."

https://arxiv.org/abs/1106.4501
"We have seen how a strongly-coupled CFT (or even its discrete progenitors) can robustly lead,“holographically”, to emergent General Relativity and gauge theory in the AdS description."
 
  • Like
Likes Fra and Demystifier
  • #63
stevendaryl said:
That's a slightly different issue. If the discrete model is the discrete counterpart to 4D spacetime, then at sufficiently large length scales, it might look continuous. But what is the reason that a discrete model would happen to look like the discrete counterpart to 4D spacetime, other than if you are trying to simulate the latter with the former?
I don't know, that's an open question.
 
  • #64
@Demystifier, in theories where, say, Lorentz invariance emerges from a lattice, the discrete theory is still a quantum theory, so it is not totally discrete, since a discrete quantum theory still uses a complex vector space. I assume you'd argue that this, too, is in principle not insurmountable?
 
  • Like
Likes Demystifier
  • #65
A. Neumaier said:
This assumes that the universe has a finite lifetime, which is questionable.
I don't think that humanity, at least, having a finite lifetime is a controversial opinion, though.

Anyway, I am wondering whether there is any way to test some of these claims within some confines. I wonder if there is some kind of counterintuitive phenomenon that can never be adequately described finitely, even in principle. I believe some topics in chaos theory study whether certain deterministic systems have finite predictability regardless of how fine one's knowledge of the initial conditions is. It feels like this might be somewhat related to this debate. Could it ever be shown that an experiment agrees with the theory but disagrees "discontinuously" beyond some well-defined boundary, in a way inexplicable by errors, for any finitization of the theory regardless of how fine it is? And if that happened, would it really say much?
 
  • #66
atyy said:
One can have the continuum arise from the discrete, and symmetries can be emergent.
I don't doubt this, but it doesn't defeat my point. Our current best theories have plausible and insightful justifications behind them. We should replace them only if falsification forces us to abandon them, or if we can come up with even more insightful theories. To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: we might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, and we have introduced an ambiguity: why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
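As a quick numerical rendering of that ambiguity (my sketch): the forward-difference rule gives visibly different answers for different step sizes, and nothing in the discrete rule itself singles out one ##h## over another.

```python
import math

def forward_diff(f, x, h):
    """Forward-difference replacement for the derivative df/dx."""
    return (f(x + h) - f(x)) / h

# d/dx sin(x) at x = 1: every choice of h gives a different "derivative".
for h in (0.01, 0.0000043, 1e-12):
    print(h, forward_diff(math.sin, 1.0, h))

print("exact:", math.cos(1.0))    # the continuum answer, ~0.5403023
# Shrinking h doesn't monotonically help either: at h = 1e-12
# floating-point round-off makes the answer worse than at larger h.
```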
 
  • #67
Nullstein said:
To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: we might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, and we have introduced an ambiguity: why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
Questioning notions of absolute infinity or uncountably infinite sets is not automatically the same thing as advocating discrete theories instead of the continuum.

The ##0.01## might just be good enough for the moment, or comparable to the best we can do at the moment. And it is not important whether it is exactly ##0.01## or ##0.01234##. Independently, it may no longer be good enough later, or the best we can do might improve over time, so that later the achievable accuracy is closer to ##0.0000043##.

Even the discrete might not be as absolute as an idealized mathematical description suggests. I might tell you that some number is 42, only to tell you later that I read it in a degraded old book and that it might also have been 47, though the chances that it is really 42 are much bigger than the chances of it being 47. What I am trying to show with this example is that adding more words later can reduce the information content of what was said previously. But how much information can be removed later depends on the representation of the information. So representations are important in intuitionistic mathematics, and classical mathematics is seen as a truncation where equivalence has been replaced by equality.

However, the criticism that "no convincing and insightful arguments" for alternative theories are known partly applies to intuitionistic mathematics as well. There are too many different options, and the benefits are hard to nail down. The (necessary and sufficient) consistency strength of those theories is often not significantly different from that of comparable classical theories with "extremely uncountable" sets. Maybe this is because our ignorance of the potential infinite is uncountable beyond imagination, but I am not sure whether that is really part of the explanation.
 
  • #68
gentzen said:
Maybe this is because our ignorance of the potential infinite is uncountable beyond imagination, but I am not sure whether that is really part of the explanation.
It is because the notion of the potential infinite is (by the standard incompleteness theorems) necessarily ambiguous, i.e., not all statements about it are decidable for any finite specification of the notion.
 
  • #69
Nullstein said:
I don't doubt this, but it doesn't defeat my point. Our current best theories have plausible and insightful justifications behind them. We should replace them only if falsification forces us to abandon them, or if we can come up with even more insightful theories. To date, no convincing and insightful arguments for discrete theories are known. All discrete attempts are plagued by ambiguities.

Here's an example: we might replace ##\frac{\mathrm df(x)}{\mathrm dx}## by ##\frac{f(x + 0.01) - f(x)}{0.01}##. If we do this, we probably lose all continuum symmetries, and we have introduced an ambiguity: why ##0.01## and not ##0.0000043##? (And many more!) This is a completely undesirable situation, even if the continuum symmetries are emergent in this formalism.
So what? Suppose we lived in the 19th century, when there was no direct evidence for the existence of atoms. We knew continuum fluid mechanics, and if someone argued that a fluid is really made of small atoms, you would argue that this is ambiguous because we don't know exactly how small those atoms are supposed to be. Does that mean the atom hypothesis was completely undesirable?
 
  • #70
Demystifier said:
So what? Suppose we lived in the 19th century, when there was no direct evidence for the existence of atoms. We knew continuum fluid mechanics, and if someone argued that a fluid is really made of small atoms, you would argue that this is ambiguous because we don't know exactly how small those atoms are supposed to be. Does that mean the atom hypothesis was completely undesirable?
That's hardly the same situation. Atoms added great explanatory power to the theory and are a form of reductionism, which is generally desirable. They didn't just reproduce the old results while invalidating previous insights, as would be the case with discretization. They solved an actual problem, whereas discretization is like running away from a problem that is already well understood not to require such a radical deviation from our current formalism. There's no need to throw out the baby with the bathwater. Essentially, I'm just arguing in favor of Occam's razor. You can of course reject Occam's razor, and that's fine, but then we have to agree to disagree.
 
