Complex Numbers Not Necessary in QM: Explained

In summary, the conversation discusses the necessity of complex numbers in physics, particularly in quantum mechanics. While some argue that they are not needed and can be replaced with other mathematical tools, others point out that complex numbers have unique properties that are important in applications. The conversation also touches on the use of real numbers in physics and how they can be difficult to justify physically. Ultimately, the question is raised as to why complex numbers are singled out for removal in quantum mechanics, when other mathematical abstractions are accepted and used in physics.
  • #71
A. Neumaier said:
Yes, but it doesn't change anything. Any theory represented in first order logic is independent of the model used to represent it.

Guten Appetit!

No. Most sets of rationals are not definable. (There are uncountably many sets of rationals, but only countably many of them can be defined.)

Yes.

No. Their number is countable but they do not form a model for the reals since the supremum axiom fails for them.

This is not a well-defined set, as you specify neither the meaning of the ##a_i## nor the meaning of ##\dots##.

It is a well-defined set in the usual mathematical framework. All of mathematics generally deals with objects being defined only by their properties. E.g. let ##f## be a continuous function, of which there are uncountably many.

You have introduced a non-standard approach where numbers, sets and functions (presumably) are restricted to ones that can be specified by some further criteria, leaving the remaining numbers, sets or functions "anonymous". Presumably, however, these objects still exist in the new mathematical framework. For example, you don't have any uncountable sets of numbers with non-zero measure over which to integrate, unless you include all the real numbers.

In particular, you are now confusing your new definition of a definable set with the concept of a well-defined set in standard analysis.

Finally, there is no paradox in standard real analysis with the set of all real numbers. It's not a definable set in your terminology but that doesn't make it paradoxical.
 
  • #72
PeroK said:
It is a well-defined set in the usual mathematical framework.
No, it isn't. ##\dots## is not a well-defined piece of notation, unless you know the law of formation of the ##a_i##.
Moreover, you mix the metalevel and the object level by treating anonymous defined numbers as well-defined things on the object level.
Mixing levels may lead to contradictions, as in my example in post #65.

PeroK said:
Finally, there is no paradox in standard real analysis with the set of all real numbers. It's not a definable set in your terminology
You thoroughly misunderstand what is being done in mathematical logic. The set of all real numbers is a well-defined object (e.g., in ZFC with Dedekind cuts).
 
  • #73
A. Neumaier said:
No, it isn't. ##\dots## is not a well-defined piece of notation, unless you know the law of formation of the ##a_i##.
Moreover, you mix the metalevel and the object level by treating anonymous defined numbers as well-defined things on the object level.
Mixing levels may lead to contradictions, as in my example in post #65.

You thoroughly misunderstand what is being done in mathematical logic. The set of all real numbers is a well-defined object (e.g., in ZFC with Dedekind cuts).

The usual definition of a limit has, for example, ##\forall \ \epsilon > 0##.

What you are saying is that that is wrong and it should be:

##\forall \ ## definable ##\epsilon > 0##.

Using definable reals may be an alternative, but you cannot argue that mainstream real analysis is based on a countable subset of the real numbers.
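For reference, the standard ##\epsilon##-##N## definition of the limit of a sequence that is being discussed reads (textbook statement, nothing thread-specific):
$$\lim_{n\to\infty} a_n = a \quad\Longleftrightarrow\quad \forall \epsilon > 0\ \exists N \in \mathbb{N}\ \forall n \ge N:\ |a_n - a| < \epsilon,$$
and the point in dispute is only whether ##\forall \epsilon > 0## ranges over all reals of the ambient model or merely over the definable ones.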
 
  • #74
PeroK said:
Using definable reals may be an alternative, but you cannot argue that mainstream real analysis is based on a countable subset of the real numbers.
I didn't claim that. Mainstream real analysis applies equally to any model of the real numbers, no matter whether or not that model is countable. I only claimed that the definable reals form a model of the real numbers when interpreted in the model of ZFC consisting of definable sets.

PeroK said:
The usual definition of a limit has, for example, ##\forall \ \epsilon > 0##.

What you are saying is that that is wrong and it should be:

##\forall \ ## definable ##\epsilon > 0##.
No.

Anything said in group theory about groups applies, in different ways, to the different models of the group axioms.

Anything said about Peano arithmetic that applies to the standard model ##N## of the natural numbers also applies to the model ##N'=2N## (an isomorphic copy of ##N##) defined inside of ##N## by redefining the successor notion to mean adding 2.
The theory remains completely unaltered.
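A minimal illustrative sketch of this reinterpretation (not from the thread; it merely spot-checks the isomorphism ##n \mapsto 2n## on small numbers): inside ##N'=2N## one interprets zero as 0, successor as adding 2, addition as ordinary addition, and the product of ##x## and ##y## as ##xy/2##.

```python
# Spot-check that phi(n) = 2n is an isomorphism from N onto N' = 2N
# when the arithmetic symbols are reinterpreted inside N'.

succ_p = lambda x: x + 2            # successor in N': add 2
add_p  = lambda x, y: x + y         # 2a + 2b = 2(a + b), stays in 2N
mul_p  = lambda x, y: (x * y) // 2  # (2a)(2b)/2 = 2(ab), stays in 2N

phi = lambda n: 2 * n               # the isomorphism N -> N'

for a in range(50):
    for b in range(50):
        assert phi(a + 1) == succ_p(phi(a))
        assert phi(a + b) == add_p(phi(a), phi(b))
        assert phi(a * b) == mul_p(phi(a), phi(b))

print("phi preserves successor, + and * on the tested range")
```

Because ##\varphi## is an isomorphism, exactly the same first-order sentences hold in ##N## and ##N'##; only the interpretation of the symbols differs.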

The same happens for the real numbers. From a standard model ##R## of the reals inside ZFC we may construct a second model ##R'## of the real numbers inside ##R##, consisting only of the definable reals in ##R## (interpreted in the nonstandard set theory ZFC' of all definable sets inside ZFC). Then the same abstract notions mean in ##R## what they are defined to mean in ##R##, and in ##R'## what they are defined to mean in ##R'##; on the level of the theory there is no difference.

The point is that the notion of bijection also changes, so that only some of the bijections in ZFC remain bijections in ZFC'. This means that a set that is countable in ZFC (but only by means of undefinable bijections to the natural numbers) is no longer countable in ZFC'. The same happens with ##\forall##: it means 'for all elements in the model', and hence in ##R'## 'for all definable reals'. In contrast, adding 'definable' after ##\forall## in the theory, as you did, would alter the theory!
 
  • #75
A. Neumaier said:
Anything said about Peano arithmetic that applies to the standard model ##N## of the natural numbers also applies to the model ##N'=2N## (an isomorphic copy of ##N##) defined inside of ##N## by redefining the successor notion to mean adding 2.
The theory remains completely unaltered.
While this is correct, I think there is a huge difference between the real numbers in models (of set theory) and natural numbers in models of PA. With PA I know what structure I "really" have in mind. I don't really care whether any other structure satisfies the axioms or not (meaning it is not mandatory to look at the other models at all to know what you are talking about).
Sure, there is a problem of LEM (in a limited sense), but that is also circumvented by using its counterpart in the form of HA (as far as I can understand).

But with ZFC, I have not the slightest idea (in any real sense), even when giving absolutely zero thought to LEM. As per my limited understanding, as soon as the subsets of ##\omega## we use are anything smaller than the constructible reals, the resulting collection of reals will be "too small" to serve as a model (is this correct?).
It is still good to know that much is recoverable without definitions that are, to me, simply highly impredicative (even in smaller collections of real numbers). Sure, it is nice to know that we have these constructions and we can use them, but beyond that, I am not sure all of it "means" anything [well, if set theory is sound for number-theoretic statements, then I suppose it does in a way ... as yet I don't think anyone knows of such a statement that is intuitively "very clearly" true or false but where ZFC proves it the other way ... meaning people do have some sort of (even if vague) soundness belief regarding it even when they don't say it].
 
  • #76
SSequence said:
With PA I know what structure I "really" have in mind. [...]
But with ZFC, I have not the slightest idea (in any real sense)
I know what I have in mind for Peano arithmetic, for the reals, and for ZFC - namely the models obtained by the definable natural numbers, reals, and ZFC sets.

This is what mathematicians actually work with - always with finite formulas that at worst involve anonymous numbers or sets, implicitly or explicitly quantified over. (Unless they are logicians, who then get interested in ramifications about what sorts of models are possible.) Though we cannot easily answer most of the (countably many) questions we might pose, we restrict ourselves to questions that we find useful and where we expect to be able to make progress.

You might be interested in reading my paper The FMathL mathematical framework, which addresses such things from my personal perspective.

SSequence said:
As per my limited understanding, as soon as the subsets of ##\omega## we use are anything smaller than the constructible reals, the resulting collection of reals will be "too small" to serve as a model (is this correct?).
No. For example, calling ZFC' the model of ZFC-constructible sets we may consider the model ZFC'' of ZFC'-definable sets, etc., and in this way get an infinite nested sequence of smaller and smaller models ZFC##{}^k,~k=0,1,2,\ldots##.
 
  • #77
A. Neumaier said:
I know what I have in mind for Peano arithmetic, for the reals, and for ZFC - namely the models obtained by the definable natural numbers, reals, and ZFC sets.

This is what mathematicians actually work with - always with finite formulas that at worst involve anonymous numbers or sets, implicitly or explicitly quantified over. (Unless they are logicians, who then get interested in ramifications about what sorts of models are possible.) Though we cannot easily answer most of the (countably many) questions we might pose, we restrict ourselves to questions that we find useful and where we expect to be able to make progress.

You might be interested in reading my paper The FMathL mathematical framework, which addresses such things from my personal perspective.
Yes, I think that usually one of the purposes of setting up a background framework or theory is to push the philosophy into the background (since, at least in some ways, that comes "before" we set everything up), and to start getting things done.

In my own personal view, the strongest theory whose soundness I think I could convince myself of (beyond all reasonable doubt) is HA (of course I could choose something really weak ... but note that I said "strongest"). Beyond that, I can't say with complete certainty one way or the other. Of course that doesn't mean at all that I believe this corresponds to all number-theoretic statements that could be proven (not at all, of course!).

Maybe someday my view will change ... or maybe not.

A. Neumaier said:
No. For example, calling ZFC' the model of ZFC-constructible sets we may consider the model ZFC'' of ZFC'-definable sets, etc., and in this way get an infinite nested sequence of smaller and smaller models ZFC##{}^k,~k=0,1,2,\ldots##.
Hmmm, I find this genuinely interesting (even if I don't understand it). OK, a small question ... a very naive but honest question (sorry if it's really off): what about ##\omega_1## in all these "smaller" models, if they also contain fewer reals? Since there can be no infinite backward chain, will it remain the "same" in infinitely many of these models?
 
  • #78
haushofer said:
Well, yes, but then you assume a certain operator form for the Hamiltonian. Maybe I'm stupid or am missing something simple, so let me rephrase my question. Imagine I try to construct QM from first principles, similarly to what Schrödinger did. However, I want my wave functions and operators to be strictly real. What, according to you, is the first inconsistency that then blows up in my face?

A. Neumaier said:
That you cannot even begin. Which dynamics does your try assume? And how does it account for the spectral features that had to be explained (and were explained) by Schrödinger?

I don't know whether haushofer can or "cannot even begin", but apparently Schrödinger could. :-) I have mentioned his work (Nature, v. 169, 538 (1952)) several times. He noted that one can make the wave function real by a gauge transform (he considered the Klein-Gordon equation in an electromagnetic field). Schrödinger's conclusion: "That the wave function of [the Klein-Gordon equation] can be made real by a change of gauge is but a truism, though it contradicts the widespread belief about 'charged' fields requiring complex representation".

Let me emphasize that this is not about a replacement of complex numbers by pairs of real numbers. I am sure you don't need any help to understand how Schrödinger's approach of 1952 can "account for the spectral features that had to be explained (and were explained) by Schrödinger" in 1926.

I cannot be sure that real functions are enough for everything in quantum theory, but they are sufficient for much more than "widespread beliefs" would suggest (please see references in https://www.physicsforums.com/threa...d-of-spinor-field-in-yang-mills-field.960244/).
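For orientation, here is a schematic of the gauge manipulation being referred to (a paraphrase of the standard argument; signs and units depend on the minimal-coupling convention used): write the charged Klein-Gordon wave function in polar form and absorb its phase into the potential,
$$\psi = R\,e^{iS}, \qquad \psi \;\to\; e^{-iS}\psi = R, \qquad A_\mu \;\to\; A_\mu - \frac{\hbar}{q}\,\partial_\mu S,$$
so that the transformed wave function is real while all gauge-invariant quantities are untouched.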
 
  • #79
A. Neumaier said:
The subset of defined objects used by humanity is very finite, certainly of size less than ##10^{30}## [##=10^{12}## words or formulas produced per person ##\times 10^{10}## persons per generation ##\times 10^8## estimated generations humanity might exist], and hence less than the number of atoms in 20 tons of carbon.

This is more than enough for doing physics. The remaining countably many definable things are just the reservoir for creative work!
I agree with this.

Maybe I was fuzzy: my argument was not "against countable sets" in favour of the reals, it was the opposite - against infinite sets and against real analysis as the ideal language for an inference framework, because the infinite embeddings we humans use for "creative work" make things confusing. The embedding is non-physical. And even if everyone agrees, we are still lost in this embedding because of the way the current paradigms work.

I mean, if we see QM as a generalized probability theory, which in turn is based on real numbers, we have already stepped over how this corresponds to reality. The problem is not assigning a real number to a degree of belief; the problem comes when one tries to consider a prior probability of this probability, as there are infinitely many options. And how can we claim to understand how this is to be normalized if the embedding is non-physical? This is what I want to reconstruct and cure.

So my argument is that the set of possibly distinguishable states, from the perspective of an agent (say a subatomic structure), is probably also finite at any instant of time. If we start to think in these terms, we are led to researching new ways of "computing" and "representing things", not from the human perspective but from the inside perspective, which has a better physical correspondence than the continuum embedding that views everything from an infinite boundary (i.e. the scattering perspective), where one in practice always has an infinite amount of memory and processing power relative to the subsystem in the middle -> here the continuum approximation is fine! And this is the perspective that is also the basis for QFT etc., as I understand it. But this is not satisfactory in QG and unification approaches, where you also want to address the measurement problem.

/Fredrik
 
  • #80
PeroK said:
It is a well-defined set in the usual mathematical framework. All of mathematics generally deals with objects being defined only by their properties. E.g. let ##f## be a continuous function, of which there are uncountably many.

You have introduced a non-standard approach where numbers, sets and functions (presumably) are restricted to ones that can be specified by some further criteria, leaving the remaining numbers, sets or functions "anonymous". Presumably, however, these objects still exist in the new mathematical framework. For example, you don't have any uncountable sets of numbers with non-zero measure over which to integrate, unless you include all the real numbers.

In particular, you are now confusing your new definition of a definable set with the concept of a well-defined set in standard analysis.

Finally, there is no paradox in standard real analysis with the set of all real numbers. It's not a definable set in your terminology but that doesn't make it paradoxical.

As far as I know, nobody tries to do mathematics using only definable objects, because the usual mathematical axioms don't hold when restricted to definable objects. However, the set of reals is certainly definable.

The definition of "definable" is this: An object ##O## is definable (relative to a language, and relative to an intended model of that language) if there is a formula ##\phi(x)## such that ##O## is the only object satisfying that formula. In the particular case of sets, people often say that a collection ##S## is definable if there is a formula ##\phi(x)## and ##S## consists of all the things satisfying formula ##\phi##.
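For instance (a standard illustration, not taken from the thread), ##\sqrt 2## is definable in the first-order language of ordered fields, because it is the unique object satisfying
$$\phi(x) \;\equiv\; x > 0 \;\wedge\; x \cdot x = 1 + 1.$$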

In the particular case of the reals, you have to work your way up to it:
  • An ordinal is a set that is well-ordered by set membership.
  • A natural number is a finite ordinal.
  • An integer is an equivalence class of pairs of naturals, where ##(x,y) \equiv (x',y')## iff ##x+y' = x' + y## (##(x,y)## is to be interpreted as ##x - y##).
  • A rational is an equivalence class of pairs of integers ##(x,y)## with ##y \neq 0##, and where ##(x,y) \equiv (x',y')## iff ##x \cdot y' = x' \cdot y##.
  • A real number is a set ##r## of rationals, nonempty and not all of ##\mathbb{Q}##, such that if ##x \in r## and ##x \lt y##, then ##y \in r##.
Then you have a formula ##real(r)## saying that ##r## is a real, and voila, the set of reals is a definable collection.

(This way of defining the basic objects of mathematics is a pain, because at every level of complexity, you have different objects. The zero for naturals is not the zero for integers, which is not the zero for rationals, which is not the zero for reals, which is not the zero for complex numbers. But each level contains a "copy" of the objects in the previous level.)
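A toy sketch of this bootstrap (illustrative only; it models just the equivalence relations, not the full set-theoretic encoding): integers as pairs of naturals read as differences, rationals as pairs of integers read as quotients, each level with its own "zero" that nevertheless embeds a copy of the level below.

```python
# Toy model of the bootstrap: each level is built from pairs of objects
# of the previous level, with its own equality and its own "zero".

def int_eq(a, b):
    # integers encoded as pairs of naturals, (x, y) standing for x - y
    (x, y), (u, v) = a, b
    return x + v == u + y

def int_mul(a, b):
    # (x - y)(u - v) = (xu + yv) - (xv + yu)
    (x, y), (u, v) = a, b
    return (x * u + y * v, x * v + y * u)

def rat_eq(p, q):
    # rationals encoded as pairs of integers, (a, b) standing for a / b
    (a, b), (c, d) = p, q
    return int_eq(int_mul(a, d), int_mul(c, b))

ZERO_NAT = 0
ZERO_INT = (0, 0)                    # the class of (0, 0); (3, 3) is the same integer
ONE_INT  = (1, 0)
ZERO_RAT = (ZERO_INT, ONE_INT)       # "0 / 1"

embed_int = lambda n: (n, 0)         # copy of the naturals inside the integers
embed_rat = lambda a: (a, ONE_INT)   # copy of the integers inside the rationals

assert int_eq(ZERO_INT, (3, 3))                 # different pairs, same integer
assert rat_eq(ZERO_RAT, ((5, 5), (2, 0)))       # (5-5)/(2-0) is also the rational zero
assert int_eq(embed_int(ZERO_NAT), ZERO_INT)    # the embedded natural 0 equals the integer 0
assert ZERO_NAT != ZERO_INT != ZERO_RAT         # but as raw objects the three zeros differ

print("each level has its own zero, and contains a copy of the level below")
```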

The set of all reals is definable. But there is a distinction between the set of all reals and the set of all definable reals. Weirdly, the set of all reals is definable, but the set of all definable reals is not, because "definable" is not definable. This is where you have to be careful about the distinction between language and metalanguage. Given a language ##L##, you can, in the meta language, define what it means to be definable in language ##L##. But not in ##L## itself.
 
  • #81
stevendaryl said:
The definition of "definable" is this: An object ##O## is definable (relative to a language, and relative to an intended model of that language) if there is a formula ##\phi(x)## such that ##O## is the only object satisfying that formula. In the particular case of sets, people often say that a collection ##S## is definable if there is a formula ##\phi(x)## and ##S## consists of all the things satisfying formula ##\phi##.

In the particular case of the reals, you have to work your way up to it:
  • An ordinal is a set that is well-ordered by set membership.
  • A natural number is a finite ordinal.
  • An integer is an equivalence class of pairs of naturals, where ##(x,y) \equiv (x',y')## iff ##x+y' = x' + y## (##(x,y)## is to be interpreted as ##x - y##).
  • A rational is an equivalence class of pairs of integers ##(x,y)## with ##y \neq 0##, and where ##(x,y) \equiv (x',y')## iff ##x \cdot y' = x' \cdot y##.
  • A real number is a set ##r## of rationals, nonempty and not all of ##\mathbb{Q}##, such that if ##x \in r## and ##x \lt y##, then ##y \in r##.
Then you have a formula ##real(r)## saying that ##r## is a real, and voila, the set of reals is a definable collection.

(This way of defining the basic objects of mathematics is a pain, because at every level of complexity, you have different objects. The zero for naturals is not the zero for integers, which is not the zero for rationals, which is not the zero for reals, which is not the zero for complex numbers. But each level contains a "copy" of the objects in the previous level.)

The set of all reals is definable. But there is a distinction between the set of all reals and the set of all definable reals. Weirdly, the set of all reals is definable, but the set of all definable reals is not, because "definable" is not definable. This is where you have to be careful about the distinction between language and metalanguage. Given a language ##L##, you can, in the meta language, define what it means to be definable in language ##L##. But not in ##L## itself.

Thanks for that. Although I've never formally studied the set-theoretic foundations of mathematics, nothing you say surprises me. That's what I understood to be the case.

However:

stevendaryl said:
As far as I know, nobody tries to do mathematics using only definable objects, because the usual mathematical axioms don't hold when restricted to definable objects.

The whole argument presented on this thread, certainly as far as I can follow it, is that you can do mathematics using only definable reals and all of analysis and calculus survives intact:

DarMM said:
This is very interesting! What can one not do with the definables? Can all of analysis be built atop them?

PeroK said:
If it can I'll eat my real analysis book.

A. Neumaier said:
Guten Appetit!

So, what do you think? Do I have to eat my analysis book or not?
 
  • #82
Go for Abbott; Rudin is a bit bitter.
 
  • #83
PeroK said:
The whole argument presented on this thread, certainly as far as I can follow it, is that you can do mathematics using only definable reals and all of analysis and calculus survives intact:

I don't think that's true. Or it depends on exactly what you mean.

If you have a fixed language, ##L##, then you can prove that there are only countably many reals that are definable in that language. So ordinary measure theory would say that you can't have a set of definable reals with a nonzero Lebesgue measure.
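Concretely, the measure-theoretic point is the standard covering argument (textbook fact, not specific to any post): a countable set of reals ##\{r_1, r_2, \ldots\}## has Lebesgue measure zero, since for any ##\epsilon > 0##
$$\{r_1, r_2, \ldots\} \subseteq \bigcup_{n=1}^{\infty} \Big(r_n - \frac{\epsilon}{2^{n+1}},\; r_n + \frac{\epsilon}{2^{n+1}}\Big), \qquad \sum_{n=1}^{\infty} \frac{\epsilon}{2^{n}} = \epsilon.$$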

However, if you leave it vague exactly what "definable" means, maybe you don't run into problems. In intuitionistic mathematics, you can't prove the existence of a noncomputable real, but the statement "all reals are computable" is not provable (although it might be true in some sense).
 
  • #84
I have done some Googling, and I did not find the argument, but I saw an argument once that you need complex numbers for quantum amplitudes if you want there to be continuous transformations relating any two quantum states. This sounds like something that @bhobba would know about.
 
  • #85
How about the following statement? For any real number ##x## there is a language ##L## in which ##x## is definable. However, there is no language ##L## such that any real number ##x## is definable in ##L##.
 
  • #86
stevendaryl said:
the statement "all reals are computable" is not provable (although it might be true in some sense).
It is false in any meaningful sense, because in any model, the number of reals is uncountable relative to this model but the number of computable reals is countable. The same holds for definable in place of computable.

But given any model ##M## of ZFC you can construct a countable model ##C## of ZFC (consisting of the set of meaningful formulas, factored by the equivalence relation of being equal in ##M##). The reals in ##C## are by construction countable in terms of the notion of countable defined in ##M##, but uncountable in terms of the notion of countable defined in ##C##.

Demystifier said:
How about the following statement? For any real number ##x## there is a language ##L## in which ##x## is definable. However, there is no language ##L## such that any real number ##x## is definable in ##L##.
I don't think this is true.
 
  • #87
A. Neumaier said:
It is false in any meaningful sense, because in any model, the number of reals is uncountable relative to this model but the number of computable reals is countable. The same holds for definable in place of computable.

My quote was from the standpoint of intuitionistic mathematics. There, they don't assume the existence of any noncomputable reals. Or rather, there is no proof that there exists a noncomputable real.
 
  • #88
stevendaryl said:
My quote was from the standpoint of intuitionistic mathematics. There, they don't assume the existence of any noncomputable reals. Or rather, there is no proof that there exists a noncomputable real.
The intuitionistic reals behave mathematically very differently from the reals taught in any analysis course.

In intuitionistic math, most concepts from ZFC ramify into several meaningful nonequivalent ones, depending on which intuitionistic version of the axioms one starts with (all of which would become equivalent if the axiom of choice were assumed in addition). Thus one has to be very careful to know which version of the reals one is talking about.
 
  • #89
stevendaryl said:
I have done some Googling, and I did not find the argument, but I saw an argument once that you need complex numbers for quantum amplitudes if you want there to be continuous transformations relating any two quantum states. This sounds like something that @bhobba would know about.

I think Hardy's axiom 5 from this paper (Quantum Theory from Five Reasonable Axioms) mentions something like this.

https://arxiv.org/abs/quant-ph/0101012

Cheers
 
  • #90
cosmik debris said:
I think Hardy's axiom 5 from this paper (Quantum Theory from Five Reasonable Axioms) mentions something like this.

It's tied up with entanglement:
https://arxiv.org/abs/0911.0695

Thanks
Bill
 
  • #91
A. Neumaier said:
I don't think this is true.
Why?
 
  • #92
Demystifier said:
How about the following statement? For any real number ##x## there is a language ##L## in which ##x## is definable. However, there is no language ##L## such that any real number ##x## is definable in ##L##.

Well, I think that's trivially true. Given any real ##r## between 0 and 1, you can add a function symbol ##f## and add infinitely many axioms saying ##f(n) = r_n##, where ##r_n## is the ##n##-th digit of ##r##. Then within this theory, the number ##r## is definable.
 
  • #93
Demystifier said:
Why?

stevendaryl said:
Well, I think that's trivially true. Given any real ##r## between 0 and 1, you can add a function symbol ##f## and add infinitely many axioms saying ##f(n) = r_n##, where ##r_n## is the ##n##-th digit of ##r##. Then within this theory, the number ##r## is definable.
No. The problem is that you cannot ''give'' undefinable reals!

'Given any real ##r##' makes ##r## an anonymous real, never a particular one. It is just the conventional way of expressing that what follows has a formal variable ##r## quantified over with the universal quantifier. Thus nothing is actually defined.
 
  • #94
A. Neumaier said:
No. The problem is that you cannot ''give'' undefinable reals!

In mathematical logic, one is allowed to consider theories with a non-computable collection of axioms. For example, the true theory of arithmetic. We can't actually write down such a collection, but it exists (in the same sense that any abstract mathematical objects exist). So for every real ##r##, there exists (as a mathematical object) a theory that defines ##r## uniquely. We can't write it down, but that's a different matter.
 
  • #95
stevendaryl said:
In mathematical logic, one is allowed to consider theories with a non-computable collection of axioms. For example, the true theory of arithmetic. We can't actually write down such a collection, but it exists (in the same sense that any abstract mathematical objects exist). So for every real ##r##, there exists (as a mathematical object) a theory that defines ##r## uniquely. We can't write it down, but that's a different matter.
Where is one allowed to do that? Not in first order logic, which we discuss here. In strange logics, strange things may of course happen.
 
  • #96
A. Neumaier said:
Where is one allowed to do that? Not in first order logic, which we discuss here. In strange logics, strange things may of course happen.

I am talking about first-order logic. In mathematical logic, one can study theories where the set of axioms are noncomputable.
 
  • #97
stevendaryl said:
I am talking about first-order logic. In mathematical logic, one can study theories where the set of axioms are noncomputable.
Please give a reference where this is done and leads to significant results. In this case one doesn't even know what the axioms are...
 
  • #98
A. Neumaier said:
Please give a reference where this is done and leads to significant results. In this case one doesn't even know what the axioms are...

Well, the most important non-axiomatizable theory is the theory of true arithmetic. You define the language of arithmetic, which is typically:
  • constant symbol ##0##
  • unary function symbol ##S(x)##
  • two binary function symbols ##+## and ##\times##
  • one relation symbol ##=##
You can, in set theory, define an interpretation of these symbols in terms of the finite ordinals, and then you can define the theory of true arithmetic as the set of formulas in this language that are true under this interpretation.

It's a noncomputable set of formulas, but it's definable in ZFC. (and much weaker theories)
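In symbols (standard notation, not from the post), true arithmetic is
$$\mathrm{Th}(\mathbb{N}) \;=\; \{\varphi \text{ a sentence in the language } \{0, S, +, \times, =\} \;:\; \mathbb{N} \models \varphi\}.$$
By Tarski's undefinability theorem this set cannot be defined within arithmetic itself, but the satisfaction relation ##\mathbb{N} \models \varphi## can be defined in ZFC, which is why ##\mathrm{Th}(\mathbb{N})## is definable there.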
 
  • #99
stevendaryl said:
Well, the most important non-axiomatizable theory is the theory of true arithmetic. You define the language of arithmetic, which is typically:
  • constant symbol ##0##
  • unary function symbol ##S(x)##
  • two binary function symbols ##+## and ##\times##
  • one relation symbol ##=##
You can, in set theory, define an interpretation of these symbols in terms of the finite ordinals, and then you can define the theory of true arithmetic as the set of formulas in this language that are true under this interpretation.

It's a noncomputable set of formulas, but it's definable in ZFC. (and much weaker theories)
''true arithmetic'' is not a theory in first order logic, but ''the set of all sentences in the language of first-order arithmetic that are true'' in the standard model of the natural numbers (itself not a first order logic notion).

Yes, it is a noncomputable set of formulas, but not a set of axioms of some first order theory. It is a nonaxiomatizable theory (i.e., not a first order logic theory), as you correctly said.
 
  • #100
A. Neumaier said:
''true arithmetic'' is not a theory in first order logic,

Yes, it is. In the study of mathematical logic, a "theory" is a set of formulas closed under logical implication.
 
  • #101
A. Neumaier said:
The intuitionistic reals behave mathematically very differently from the reals taught in any analysis course.

In intuitionistic math, most concepts from ZFC ramify into several meaningful nonequivalent ones, depending on which intuitionistic version of the axioms one starts with (all of which would become equivalent if the axiom of choice were assumed in addition). Thus one has to be very careful to know which version of the reals one is talking about.
I never understood what intuitionistic math is good for, beyond the fact that it might be an intellectually interesting game of thought ;-).
 
  • #102
vanhees71 said:
I never understood what intuitionistic math is good for, beyond the fact that it might be an intellectually interesting game of thought ;-).
Well, it shows the extent to which things can be made fully constructive. Thus it gives insight into the structure of mathematical reasoning. A physicist doesn't need it, of course...
 
  • #103
vanhees71 said:
I never understood what intuitionistic math is good for, beyond the fact that it might be an intellectually interesting game of thought ;-).

I spent a good number of years studying intuitionistic and constructive mathematics. I think it's interesting, but I'm not convinced that anything worthwhile comes from it.

An interesting fact about intuitionistic mathematics is the isomorphism between intuitionistic proofs and computer programs. Intuitionistically, if you prove a statement of the form

##\forall x \exists y: \phi(x,y)##

you can extract a program (expressed as a lambda-calculus expression) that given any ##x## returns a ##y## satisfying ##\phi(x,y)##.

Every proposition in constructive logic (I'm a little hazy about the exact distinction between constructive and intuitionistic) corresponds to a type, in the computer-science sense, and the proofs of those propositions correspond to mathematical objects of that type. So for example:

##A \wedge B## corresponds to the set of ordered pairs ##(a,b)## where ##a## is a proof of ##A## and ##b## is a proof of ##B##.
##A \rightarrow B## corresponds to the set of functions which, given a proof of ##A##, return a proof of ##B##.
##A \vee B## corresponds to the disjoint union of proofs of ##A## and proofs of ##B##.
etc. (the quantifiers correspond to dependent product and sum types).
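A rough illustration in code (an illustrative sketch only, using Python's typing module; genuine proof assistants use dependent types, which Python cannot express):

```python
# Propositions-as-types, very roughly: a "proof" of a proposition is a
# value of the corresponding type.
from typing import Callable, Tuple, TypeVar, Union

A = TypeVar("A")
B = TypeVar("B")

# A and B   ~  Tuple[A, B]        (a proof is a pair of proofs)
# A -> B    ~  Callable[[A], B]   (a proof is a function sending proofs of A to proofs of B)
# A or B    ~  Union[A, B]        (a proof is a proof of one of the two sides)

def and_commute(p: Tuple[A, B]) -> Tuple[B, A]:
    """A 'proof' of (A and B) -> (B and A): just swap the pair."""
    a, b = p
    return (b, a)

def modus_ponens(f: Callable[[A], B], a: A) -> B:
    """From 'proofs' of A -> B and of A, produce a 'proof' of B."""
    return f(a)

# The program extracted from a constructive proof of
#     forall x exists y: y > x
# is a function that, given x, returns a concrete witness y.
def witness(x: int) -> int:
    return x + 1

assert witness(41) == 42
print(and_commute(("proof of A", "proof of B")))
```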

I think it's all very interesting, and it gives some insight into logic and programming and the connection between them. But ultimately, I don't see the whole endeavor as being tremendously useful.
 
  • #104
The biggest difference between constructive and classical logic is the extent to which it is possible to prove that something exists without being able to give an example. You can't do that in constructive logic. So a proof that there exists a nonmeasurable set doesn't go through. However, you can recover most of classical mathematics by doing a "double negation". In almost all cases where statement ##A## is provable classically, ##\neg \neg A## is provable constructively. A double-negation is not equivalent to the original statement in constructive logic.
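A standard example of the last point (textbook fact): excluded middle itself is not provable constructively, but its double negation is,
$$\nvdash_{\mathrm{int}}\; P \vee \neg P, \qquad \vdash_{\mathrm{int}}\; \neg\neg\,(P \vee \neg P),$$
since assuming ##\neg(P \vee \neg P)## one derives ##\neg P## (a proof of ##P## would give ##P \vee \neg P##), hence ##P \vee \neg P##, a contradiction.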
 
  • #105
stevendaryl said:
In almost all cases where statement A is provable classically, ¬¬A is provable constructively.
In all cases, this is provable in the intuitionistic setting.

This shows that there is no loss of quality in assuming classical logic. One can only gain, never inherit a contradiction that is not already there on the intuitionistic level.

This is the reason why intuitionistic logic is irrelevant in practice.
 
