Is Complexity in Mathematics Determinate?

In summary: Gödel mentioned in "On the Length of Proofs" that, by reducing the alphabet and logical operators at play, one can take the number of computational steps required to make the proof as a measure of complexity. Turing complexity, by contrast, is the minimum amount of time for a machine to solve a problem; it is not related to the complexity of a proof.
  • #1
TimeSkip
TL;DR Summary
Is it ascertainable?
When the proof justifies the theorem by a growing alphabet, is it possible to talk about the complexity of the theorem via the growing alphabet of the theorem's proof?

As briefly as possible: would it be possible to entertain the notion that complexity in non-congruent mathematics is determinable?

I say this because I am assuming that the complexity of the theorem itself is not ascertainable, due to Gödel's Incompleteness Theorem. I have surmised that a growing alphabet can determine the complexity of the proof of the theorem, if logic comes next to mathematics.
 
  • #2
TimeSkip said:
Summary: Is it ascertainable?

When the proof justifies the theorem by a growing alphabet, is it possible to talk about the complexity of the theorem via the growing alphabet of the theorem's proof?

As briefly as possible: would it be possible to entertain the notion that complexity in non-congruent mathematics is determinable?

I say this because I am assuming that the complexity of the theorem itself is not ascertainable, due to Gödel's Incompleteness Theorem. I have surmised that a growing alphabet can determine the complexity of the proof of the theorem, if logic comes next to mathematics.
Can you explain what you mean by a growing alphabet justifying a proof?
 
  • #3
I do not expect the alphabet to be relevant in any case which measures the complexity of proofs. You could, e.g., simply take the length of a proof over a given alphabet as a complexity measure (Turing). It is completely unclear to me how a dependency of a complexity measure on an alphabet could be established.
 
  • #4
Jarvis323 said:
Can you explain what you mean by a growing alphabet justifying a proof?
I say the above due to the following:

Gödel's incompleteness theorem applies to logical languages with countable alphabets. So it does not rule out the possibility that one might be able to prove a theorem by expanding the alphabet to account for new variables.

So I'm assuming that, when confronted with non-congruent complexity in mathematics, mathematical theorems can perhaps be ascertained in terms of complexity through their proofs. Hope that makes better sense.
 
  • #5
fresh_42 said:
I do not expect the alphabet to be relevant in any case which measures the complexity of proofs. You could, e.g., simply take the length of a proof over a given alphabet as a complexity measure (Turing).
See

fresh_42 said:
It is completely unclear to me how a dependency of a complexity measure on an alphabet could be established.
It seems that in the above link Gödel was engaging in dimensional analysis with respect to the time of computability. I hope that sort of view on the matter is clearer.
 
  • #6
TimeSkip said:
So I'm assuming that, when confronted with non-congruent complexity in mathematics, mathematical theorems can perhaps be ascertained in terms of complexity through their proofs. Hope that makes better sense.
I am afraid it does not. You have neither defined the term complexity that you use, nor explained how the alphabet comes into play. Diffusing your statements with even more unexplained terms like congruence doesn't make things better.
TimeSkip said:
It seems that in the above link Gödel was engaging in dimensional analysis with respect to the time of computability. I hope that sort of view on the matter is clearer.
The above link deals with meta-levels, not with complexities. Every proof is a finite string of symbols, since we cannot read anything else. The alphabet used can certainly shorten a proof, but that is only a property of the representation. It is in no way essential to what can and cannot be proven.

I suggest that you read a good introduction to logic, and I do not mean Wikipedia. Your statements so far do not really make sense.
 
  • #7
fresh_42 said:
You have neither defined the term complexity that you use, nor explained how the alphabet comes into play.
I define it as the least number of steps required for a computer or machine to halt.

fresh_42 said:
Diffusing your statements with even more unexplained terms like congruence doesn't make things better.
Sorry about that.

fresh_42 said:
The above link deals with meta-levels, not with complexities. Every proof is a finite string of symbols, since we cannot read anything else. The alphabet used can certainly shorten a proof, but that is only a property of the representation. It is in no way essential to what can and cannot be proven.
Gödel mentioned in "On the Length of Proofs" that, by reducing the alphabet and logical operators at play, one can take the number of computational steps required to make the proof as a reduction in complexity, if I'm not mistaken. Please feel free to comment on how this isn't true.

Thank you, I'm here to learn.
 
  • #8
TimeSkip said:
I define it as the least number of steps required for a computer or machine to halt.
Turing complexity.
TimeSkip said:
Gödel mentioned in "On the Length of Proofs" that, by reducing the alphabet and logical operators at play, one can take the number of computational steps required to make the proof as a reduction in complexity, if I'm not mistaken. Please feel free to comment on how this isn't true.
The quotation says basically the following: We can prove many statements about natural numbers by induction. Now if we switch to the first meta-level, we might phrase and prove theorems about induction. Such a theorem naturally holds for all the induction proofs we did on the primary level. This can formally be done with every new meta-level we reach. It has nothing to do with complexity, only with different levels.
TimeSkip said:
Thank you, I'm here to learn.
Sure? You string technical terms together almost at random, without their even being connected, and ask for their sense. There is none. I cannot see any structure in what you post.
 
  • #9
fresh_42 said:
The quotation says basically the following: We can prove many statements about natural numbers by induction. Now if we switch to the first meta-level, we might phrase and prove theorems about induction. Such a theorem naturally holds for all the induction proofs we did on the primary level. This can formally be done with every new meta-level we reach. It has nothing to do with complexity, only with different levels.
I seem to have run into trouble describing the part about complexity, because there is no systematic way of formalizing mathematics beyond some form of induction: not induction as a rule of logic, but the intuitive reasoning by which one designs a proof of a theorem. What I am trying to state, rather poorly, is that there are inherent problems in asking whether one can determine the complexity of a proof of a theorem (if complexity applies to it in any systematic manner at all). It might be easier to think of it as though the incompleteness theorem asserted that we simply expand the alphabet of a theorem's proof until it attains some least complexity, if that can ever be estimated.

My attempts seem to be closer to what Hilbert may have been trying to do in his program.

Hope that made sense.
 
  • #10
I have a hard time understanding exactly what issue you're talking about. You are certainly right that the length of a proof depends on what you take to be primitive. If you throw in more and more axioms, and more and more function symbols and constant symbols, you can reduce the size of a proof. But there is a point of diminishing returns. No matter what system you use, there will be arbitrarily large proofs, and Gödel's incompleteness theorem will get you.

A system that allows the introduction of new symbols and new axioms is equivalent to one with a fixed number of symbols and axioms (or axiom schemas, anyway), as long as the process for introducing new symbols and axioms is itself computable.
 
  • #11
stevendaryl said:
I have a hard time understanding exactly what issue you're talking about.
I'm concerned with the complexity of theorems themselves. In a manner of speaking, I wonder whether a proof is subject to having its complexity estimated. But the issue is: with reference to what can a proof's complexity be estimated?

stevendaryl said:
No matter what system you use, there will be arbitrarily large proofs, and Gödel's incompleteness theorem will get you.
Why "arbitrarily", in what manner can't this be determined with exactness?

Thanks.
 
  • #12
TimeSkip said:
I'm concerned with the complexity of theorems themselves. In a manner of speaking, I wonder whether a proof is subject to having its complexity estimated. But the issue is: with reference to what can a proof's complexity be estimated?
Why "arbitrarily"? In what way can this not be determined with exactness?

Thanks.

"Arbitrarily large" has an exact meaning: For any length n, there exists a theorem whose shortest proof is larger than n.
 
  • #13
stevendaryl said:
"Arbitrarily large" has an exact meaning: For any length n, there exists a theorem whose shortest proof is larger than n.
Sorry for the difficulties in communication; but can the actual degree of complexity of a theorem's proof be determined?
 
  • #14
TimeSkip said:
Sorry for the difficulties in communication; but can the actual degree of complexity of a theorem's proof be determined?

What do you mean by degree of complexity, other than the length?
 
  • #15
stevendaryl said:
What do you mean by degree of complexity, other than the length?
Namely, Turing complexity if that makes sense.
 
  • #16
stevendaryl said:
What do you mean by degree of complexity, other than the length?

I guess there is an informal notion of how "hard" a proof is that isn't just a matter of its length. A long proof could be considered "easy" if it's always obvious what the next step in the proof ought to be, while a short proof could be considered "hard" if it's completely unclear at some point what the next step ought to be.
 
  • #17
TimeSkip said:
Namely, Turing complexity if that makes sense.

Not for theorems, it doesn't make sense.

There is a notion of "complexity" for algorithms, which is determined by how the amount of time (or space) to compute an output grows with the size of the inputs. I don't see how complexity applies to theorems, though.
 
  • #18
stevendaryl said:
Not for theorems, it doesn't make sense.
Why not? Is there some underlying reason for this assertion? This is what I'm trying to get at, although it was clumsily stated in the OP.
 
  • #19
TimeSkip said:
Why not? Is there some underlying reason for this assertion? This is what I'm trying to get at, although it was clumsily stated in the OP.

Well, the definition of algorithmic complexity is in terms of the time required to produce an output, as a function of the size of the input. A theorem doesn't have any inputs, so I don't see how algorithmic complexity could possibly be relevant.
 
  • #20
stevendaryl said:
Not for theorems, it doesn't make sense.
It does: you can count the steps until a TM halts. A theorem often has more than one proof, and once the alphabet is defined, you can measure how long it takes for a TM to halt at "qed". This would be a complexity measure for a given pair (theorem, proof). It would be basically its formal length.
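To make that concrete, here is a minimal sketch of such a step-counting checker, assuming a toy Hilbert-style system whose only inference rule is modus ponens; the formula syntax and all names are illustrative inventions, not any standard library:

```python
# Toy proof checker: a proof is a list of formulas (plain strings); each line
# must be an axiom or follow from two earlier lines by modus ponens. The
# "complexity" of a (theorem, proof) pair is then just the number of lines
# checked before we halt at "qed" -- essentially the formal length.

def follows_by_modus_ponens(target, earlier):
    # target follows if some earlier line reads "A -> target" and "A"
    # also appears earlier (naive string matching; fine for this toy).
    for line in earlier:
        suffix = " -> " + target
        if line.endswith(suffix) and line[:-len(suffix)] in earlier:
            return True
    return False

def proof_steps(axioms, proof, theorem):
    """Return the step count if `proof` proves `theorem`, else None."""
    accepted = []
    for line in proof:
        if line in axioms or follows_by_modus_ponens(line, accepted):
            accepted.append(line)
        else:
            return None  # not a valid proof in this toy system
    return len(accepted) if accepted and accepted[-1] == theorem else None

axioms = {"p", "p -> q", "q -> r"}
proof = ["p", "p -> q", "q", "q -> r", "r"]
print(proof_steps(axioms, proof, "r"))  # 5 steps for this (theorem, proof) pair
```

A different proof of the same theorem would in general give a different count, which is the point above: the measure attaches to the (theorem, proof) pair, not to the theorem alone.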
 
  • #21
fresh_42 said:
It does: you can count the steps until a TM halts. A theorem often has more than one proof, and once the alphabet is defined, you can measure how long it takes for a TM to halt at "qed". This would be a complexity measure for a given pair (theorem, proof). It would be basically its formal length.

That's why I asked "What do you mean by degree of complexity, other than the length?"

Length makes sense.
 
  • #22
fresh_42 said:
It does: you can count the steps until a TM halts. A theorem often has more than one proof, and once the alphabet is defined, you can measure how long it takes for a TM to halt at "qed". This would be a complexity measure for a given pair (theorem, proof). It would be basically its formal length.
So the complexity of a theorem is determined by the time it takes a TM to run through its proof? Is this done against some standard, so that the qualitative notion of complexity receives a quantitative analysis? As a possibly separate issue, how does one decide whether a theorem is qualitatively complex rather than merely hard?

Thanks.
 
  • #23
stevendaryl said:
Length makes sense.
How do you quantify a theorem's having the property of being "complex" rather than "hard"? Only through a Turing machine? I know Gödel mentioned in "On the Length of Proofs" that, by reducing the alphabet and logical operators at play, one can take the number of computational steps required to make the proof as a reduction in complexity, if I'm not mistaken.
 
  • #24
TimeSkip said:
As a possibly separate issue, how does one decide whether a theorem is qualitatively complex rather than merely hard?
How do you define hard and complex in common language? E.g., FLT was certainly both. The major difficulty with such terms is that they represent a lower bound for something: count the steps that are inevitably necessary! This is per se a difficult problem, because you must be sure that something easier not only hasn't been found, but cannot be found!
 
  • #25
fresh_42 said:
How do you define hard and complex in common language? E.g., FLT was certainly both. The major difficulty with such terms is that they represent a lower bound for something: count the steps that are inevitably necessary! This is per se a difficult problem, because you must be sure that something easier not only hasn't been found, but cannot be found!

So we might as well forget about quantifying it, then? I keep getting the feeling that this is something Gödel might have addressed in regard to Hilbert's program, or is all this nihil novi (nothing new)?

Is there any rigorous way of deciding that a theorem is complex? I ask because I have a couple of things to say about God's algorithm, which seems prone to this growing-alphabet issue: if one cannot quantify the definition of "least number of operations", is the proof itself likely undecidable, or what?
 
  • #26
I do not quite follow the discussion, but I feel that there are a few things that can be said. Consider the following questions:
--- When are two proofs (of same problem) the same?
--- When are two algorithms (say, for same computable function) the same?

At least as far as I know, there are no (fully) general answers known to these. In particular, from about 5-7 years ago, I remember the first question being mentioned a number of times [while briefly skimming through Solomon Feferman's articles].

=============

Now I think the following is probably the goal here. Given a set of problems ##S##, we want a function ##f:S^2 \rightarrow \{0,1\}## so that whenever ##f(P_1,P_2)=1##, we have a formal way of saying that the problem ##P_1## is easier than the problem ##P_2##. Ideally, it seems, one would want further niceness for this function ##f## (its being a specific kind of ordering on ##S##).

When the set of problems ##S## is complicated enough, I get the feeling that this question ["When is one problem easier than the other"] might fall in the same category as the above two.

=============

Regarding a more specific point [about which there has been some discussion above], suppose we fix our alphabet, theory etc. and also suppose that the theorems of the given system ##T## are recursively enumerable. Then, for all the theorems of ##T##, we can talk about the following metric:
"For a given theorem of ##T##, what is the shortest proof of it?"

However, there are a number of issues here.
(1) If we are talking about something like ##PA##, the theorems of ##T## are a proper subset of the problems that can be stated in ##PA## (due to the first incompleteness theorem).

(2) Another thing that has already been mentioned above (post #16) is that it is unclear whether, even for theorems of ##T##, comparing the lengths of shortest proofs would actually tell us whether one problem is actually easier than another.

(3) Very briefly, from the point of view of computational complexity, if one does not know an algorithm to efficiently enumerate proofs of a given length ##n##, then the most immediate way to find [in efficient time] the length of a shortest proof (for a given problem) is not available.

Finally, suppose we use a program to enumerate all the theorems. For a given theorem, we could calculate the time it takes for its proof to appear. However, this time depends on the program we use for the enumeration. As an exaggerated example, given a very easy problem, a "poorly written" program [with a "bad" algorithm] could take an enormous time for its proof to appear [where a "good" algorithm would output the proof immediately].
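A small sketch of that last point, under stated assumptions (the verifier below is a deliberately trivial stand-in for a real proof checker, and both enumerators are hypothetical): the "time of appearance" of the same proof differs wildly between two enumeration orders.

```python
# Sketch: the "time of appearance" of a proof depends on the enumerator.
# We enumerate candidate strings over a fixed alphabet in two orders and
# count how many candidates are examined before a (stand-in) verifier
# accepts one.

from itertools import product

ALPHABET = "pq->() "

def shortlex(max_len):
    # "good" order: all strings of length 1, then length 2, and so on
    for n in range(1, max_len + 1):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

def longest_first(max_len):
    # deliberately "bad" order: longest strings first
    for n in range(max_len, 0, -1):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

def time_to_appear(enumeration, verify):
    for steps, candidate in enumerate(enumeration, start=1):
        if verify(candidate):
            return steps  # enumerator-dependent "difficulty"
    return None

verify = lambda s: s == "pq"  # toy stand-in for a real proof checker
print(time_to_appear(shortlex(3), verify))       # 9: appears early
print(time_to_appear(longest_first(3), verify))  # 345: same proof, much later
```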
 
  • #27
fresh_42 said:
How do you define hard and complex in common language? E.g., FLT was certainly both. The major difficulty with such terms is that they represent a lower bound for something: count the steps that are inevitably necessary! This is per se a difficult problem, because you must be sure that something easier not only hasn't been found, but cannot be found!

I made a stab at that. A proof is a sequence of statements such that each statement is either an axiom, a logical tautology, or follows from previous statements through a rule of inference. So one way to measure a proof is simply its length: how many statements are in the proof. A second measure of difficulty is how "obvious" the proof steps are. I don't know how to quantify this, but I can give an example. If you have a theorem that says "For every integer between 0 and 100, blah, blah, blah", an obvious way to prove it is to check every single number in that range. That would lead to a very long proof, but it wouldn't be difficult, conceptually. It's obvious how to do it.

In difficult proofs, it's usually necessary to prove some lemma that is not obviously related to what you're trying to prove.
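As a tiny concrete instance of the "long but conceptually easy" case (the claim checked here is my own toy example): the statement "for every integer ##n## between 0 and 100, ##n^2 - n## is even" can be proved by checking all 101 cases, each step obvious.

```python
# Proof by exhaustion: long (101 trivial cases) but conceptually easy --
# it is always obvious what the next step is.
assert all((n * n - n) % 2 == 0 for n in range(101)), "counterexample!"
print("verified for n = 0..100 in 101 obvious steps")
```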
 
  • #28
@stevendaryl
I feel that there are a number of subtleties. As your post is very new, I haven't thought it through fully. However, I can try to explain with one example the kind of issues that I feel arise here. I will assume below that we are talking about a theory ##T## with r.e. theorems (and also, the discussion is limited to theorems of ##T## rather than all the problems that can be posed in ##T##).

Let's suppose we have 6 programs with indexes ##e_1##, ##e_2##, ##e_3##, ##e_4##, ##e_5##, ##e_6## respectively. Denote their halting times by ##t_1##, ##t_2##, ##t_3##, ##t_4##, ##t_5##, ##t_6## respectively [also suppose that the ##t_i## values are strictly increasing, with ##t_{i+1} \gg t_i##]. Also suppose that ##t_i \gg e_i## for each ##1 \leq i \leq 6##.

Now consider the problem of determining the truth of the following two equations [each of the ##t_i## values expressed indirectly via halting times]:
##t_1+t_2=t_3## ---- (E1)
##t_4+t_5=t_6## ---- (E2)

One "obvious" way (in both cases) is to simulate running of the programs and then determine the answer. Such a proof would have a very high length. Now suppose that there is a sophisticated alternative which leads to a very short proof of ##E2##.
However, assume that there is no such short and sophisticated alternative for ##E1##.

========================

Would you say that ##E1## is easier than ##E2##?
In terms of the smallest proof, ##E2## is easier than ##E1##. However, if we instead think informally about the difficulty of ##E1## and ##E2##, we could say that they are equally difficult [since (in a very informal sense) both have their "easiest proof" by the "same method"].

This kind of thing is also highlighted when, for some program enumerating the theorems, we consider the "time of output" at which a given proof appears. If we think of a program that outputs theorems in a very basic way, it is entirely possible that it would output the "sophisticated shorter proof" of ##E2## too late, and hence would classify ##E1## as easier than ##E2##. On the other hand, a more advanced program for enumerating proofs could recognize the "sophisticated shorter proof" of ##E2## much earlier and hence declare ##E2## to be easier than ##E1##.

========================

In short, it looks complicated to me.

P.S.
It occurred to me that the function ##f:S^2 \rightarrow \{0,1\}## in post#26 should have been ##f:S^2 \rightarrow \{0,1,2\}##.
 
  • #29
stevendaryl said:
If you have a theorem that says "For every integer between 0 and 100, blah, blah, blah", an obvious way to prove it is to check every single number in that range. That would lead to a very long proof, but it wouldn't be difficult, conceptually. It's obvious how to do it.
It might be more difficult than it looks, depending on what "blah, blah, blah" entails. For example, prove that ##\mathrm{BB}(n)## is total for ##n## between 0 and 100. (Which may not even be possible in ZFC.) But I suppose that proving 100 theorems is essentially as "difficult" as proving the hardest of the 100 theorems.
 
  • #30
suremarc said:
For example, prove that ##\mathrm{BB}(n)## is total for ##n## between 0 and 100. (Which may not even be possible in ZFC.)
I do not think that this specific statement (taken exactly as it is) is true. That ##BB:\mathbb{N} \rightarrow \mathbb{N}## is a total function would be a theorem of set theory.

What, I think, is true is that there exists a value ##a \in \mathbb{N}## such that set theory [assuming it doesn't prove everything] doesn't prove the non-halting of some program of length ##a##.

In a similar way, something like:
##BB(a)=## such-and-such-value-in-decimal
would also be unprovable if the value on the R.H.S. represents the actual value of ##BB(a)## [the L.H.S. is meant to be the definition based on programs, and the R.H.S. the successor operation applied to ##0## a certain number of times].

Regardless, I see what you meant in previous post though.
 
  • #31
Slightly related to the OP, and to what @fresh_42 has said (I want to quote it again), is what Gödel said in "On the Length of Proofs":

"The transition to the logic of the next higher type not only results in certain previously unprovable propositions becoming provable, but also in it becoming possible to shorten extraordinarily infinitely many of the proofs already available."

How is that possible? I understand that the alphabet needn't grow any further (or does it?), but is it that more axioms become available to entertain in the metalogical proof?
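For reference, a common modern paraphrase of that speed-up result (my wording, not Gödel's): for every computable function ##f## there are infinitely many formulas ##\varphi## provable in both the system ##S_n## and the next-higher-type system ##S_{n+1}##, such that if ##k## is the length of the shortest proof of ##\varphi## in ##S_{n+1}##, then the shortest proof of ##\varphi## in ##S_n## has length greater than ##f(k)##. The alphabet need not grow; the shortening comes from the stronger axioms available at the higher type.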
 
  • #32
SSequence said:
I do not think that this specific statement (taken exactly as it is) is true. That ##BB:\mathbb{N} \rightarrow \mathbb{N}## is a total function would be a theorem of set theory [as a corollary to every statement of ##PA## being either true or false due to LEM].
I don’t understand what you mean in this last statement, but yes, I realize now I was mistaken. I was looking for an analog of Goodstein’s theorem for ZFC.
 
  • #33
suremarc said:
I don’t understand what you mean in this last statement, but yes, I realize now I was mistaken. I was looking for an analog of Goodstein’s theorem for ZFC.
Yeah, you are right in that a statement (about ##BB## numbers) very similar to it will indeed be independent of set theory.

Never mind the sentence in brackets (I think the discussion would drift from the main topic).
 
  • #34
TimeSkip said:
Slightly related to the OP, and to what @fresh_42 has said (I want to quote it again), is what Gödel said in "On the Length of Proofs":

"The transition to the logic of the next higher type not only results in certain previously unprovable propositions becoming provable, but also in it becoming possible to shorten extraordinarily infinitely many of the proofs already available."

How is that possible? I understand that the alphabet needn't grow any further (or does it?), but is it that more axioms become available to entertain in the metalogical proof?

A lot of proofs in mathematics are proofs by induction. The simplest type of induction axiom is this:

##(\Phi(0) \wedge \forall n\,(\Phi(n) \Rightarrow \Phi(n+1))) \Rightarrow \forall n\, \Phi(n)##

So suppose that you want to prove ##\Phi(100)##. Then you can do it in four steps:
  1. Prove ##\Phi(0)##
  2. Prove ##\forall n\,(\Phi(n) \Rightarrow \Phi(n+1))##
  3. Conclude from the induction axiom ##\forall n \Phi(n)##
  4. Take the special case ##n = 100## to get ##\Phi(100)##
Now, if you didn't have an induction axiom, you could still prove ##\Phi(100)## in the following way:
  1. Prove ##\Phi(0)##
  2. Prove ##\forall n\,(\Phi(n) \Rightarrow \Phi(n+1))##
  3. Take the special case of 2 in which ##n=0## to conclude ##\Phi(0) \Rightarrow \Phi(1)##
  4. Using 1 and 3, conclude ##\Phi(1)##
  5. Take the special case of 2 in which ##n=1## to conclude ##\Phi(1) \Rightarrow \Phi(2)##
  6. Using 4 and 5, conclude ##\Phi(2)##
  7. etc.
So you could eventually prove ##\Phi(100)##, but it would take you something like 202 steps, compared to the four-step proof using the induction axiom.

The same thing is true with more powerful induction axioms. They allow you to prove more general statements and then get the specific statement you are interested in as a special case.

Higher-order systems of mathematics allow for more powerful induction axioms. So that's one reason why proofs can be made shorter in these systems.
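To see the arithmetic of that trade-off, here is a small sketch (my own illustration, with ##\Phi## left abstract as a string) that generates the induction-free proof skeleton and counts its lines:

```python
# Generate the explicit, induction-free proof skeleton of Phi(100):
# after the base case and the (single) proof of the step, each number
# costs one instantiation plus one modus ponens.

def proof_without_induction(k):
    lines = ["Phi(0)",                           # 1. base case
             "forall n: (Phi(n) -> Phi(n+1))"]   # 2. inductive step
    for n in range(k):
        lines.append(f"Phi({n}) -> Phi({n + 1})")  # instantiate at n
        lines.append(f"Phi({n + 1})")              # modus ponens
    return lines

proof = proof_without_induction(100)
print(len(proof))   # 202 lines, versus 4 with the induction axiom
print(proof[-1])    # Phi(100)
```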
 
  • #35
stevendaryl said:
A lot of proofs in mathematics are proofs by induction. The simplest type of induction axiom is this:

##(\Phi(0) \wedge \forall n\,(\Phi(n) \Rightarrow \Phi(n+1))) \Rightarrow \forall n\, \Phi(n)##

So suppose that you want to prove ##\Phi(100)##. Then you can do it in four steps:
  1. Prove ##\Phi(0)##
  2. Prove ##\forall n\,(\Phi(n) \Rightarrow \Phi(n+1))##
  3. Conclude from the induction axiom ##\forall n \Phi(n)##
  4. Take the special case ##n = 100## to get ##\Phi(100)##
Now, if you didn't have an induction axiom, you could still prove ##\Phi(100)## in the following way:
  1. Prove ##\Phi(0)##
  2. Prove ##\forall n\,(\Phi(n) \Rightarrow \Phi(n+1))##
  3. Take the special case of 2 in which ##n=0## to conclude ##\Phi(0) \Rightarrow \Phi(1)##
  4. Using 1 and 3, conclude ##\Phi(1)##
  5. Take the special case of 2 in which ##n=1## to conclude ##\Phi(1) \Rightarrow \Phi(2)##
  6. Using 4 and 5, conclude ##\Phi(2)##
  7. etc.
So you could eventually prove ##\Phi(100)##, but it would take you something like 202 steps, compared to the four-step proof using the induction axiom.

The same thing is true with more powerful induction axioms. They allow you to prove more general statements and then get the specific statement you are interested in as a special case.

Higher-order systems of mathematics allow for more powerful induction axioms. So that's one reason why proofs can be made shorter in these systems.
So, are induction axioms subject to complexity or is it purely intuitionistic?
 
