Halting Problem and Turing's Solution

  • Thread starter NoahsArk
In summary: The solution shows that there are limits to what a computer can do, no matter how advanced or powerful it becomes.
  • #1
NoahsArk
Gold Member
TL;DR Summary
Help understanding the halting problem.
Hello. I've been trying to understand the halting problem, which is confusing for me on many levels.

1) I'm not even sure what the halting problem is asking. Is it asking whether or not there are certain problems that computers can't solve, or is it asking whether or not there are certain truths that computers will never be able to prove? If it's the latter, then it's getting philosophical, because how can we know something is true to begin with if it can't be proven?

2) What is the solution saying? My understanding is that it proves that there are some kinds of questions that computers will never be able to solve, even computers that one day are thousands of times faster than today's.

3) The way I see the Halting problem being defined is "Can we write a program that will determine whether any other program will halt or run forever?" Thinking of modern programming languages, I don't see why this isn't possible. In Python, for example, code which loops has the "while" keyword. If we want to see whether or not a program loops forever, we can just scan the code and see if it has any looping keywords. There must be something more fundamental about this problem that I'm missing.

4) I also don't understand Turing's solution to the halting problem, which concludes that such a program can't be written. He first assumed that such a program could be written (call the first program Hal). He then imagines a second program (Barry) that does the opposite of everything Hal does. Now he imagines that Hal runs with Barry as its input. If Hal states that Barry halts, Barry runs forever, and vice versa. Turing says this proves we can never write a program which will determine whether or not another program halts or goes on forever.

My understanding is that we'd want a program like Hal, which can determine whether any other program will halt or run forever, to exist so that we could solve problems like Goldbach's conjecture. I don't see how this trick of creating a contradictory program means that programs like Hal couldn't solve such problems. The real problem seems to be that if a program were to solve Goldbach's conjecture, it would have to keep testing even numbers and seeing if any of them fails to be the sum of two primes. It might one day find a number like that, but how could we prove in advance whether or not it ever will?

So I am confused not only about the solution, but about the nature of the problem itself. Adding to the confusion is the fact that when Turing was writing about the halting problem, there were no computers or computer programs like there are today, and by "computer" and "program" he meant something more abstract which I can't define.

Thanks
 
  • #2
NoahsArk said:
1) I'm not even sure what the halting problem is asking. Is it asking whether or not there are certain problems that computers can't solve, or is it asking whether or not there are certain truths that computers will never be able to prove? If it's the latter, then it's getting philosophical, because how can we know something is true to begin with if it can't be proven?
It's the first -- that there are certain problems computers can't solve. More specifically, one program can't necessarily determine whether another program will finish running; i.e., halt.
NoahsArk said:
2) What is the solution saying? My understanding is that it proves that there are some kinds of questions that computers will never be able to solve, even computers that one day are thousands of times faster than today's.
See above. Also, the speed at which the computer runs is irrelevant.
NoahsArk said:
3) The way I see the Halting problem being defined is "Can we write a program that will determine whether any other program will halt or run forever?" Thinking of modern programming languages, I don't see why this isn't possible. In Python, for example, code which loops has the "while" keyword. If we want to see whether or not a program loops forever, we can just scan the code and see if it has any looping keywords. There must be something more fundamental about this problem that I'm missing.
A very simple example of a program that doesn't halt is as follows:
Code:
#include <iostream>
int main() {
    while (true) {   // the condition is always true, so this loop never exits
        std::cout << "Still running..." << std::endl;
    }
}
The above will run as long as the wind blows, the grass grows, and the rivers run to the sea (or until the power is cut off). There are much more complex programs for which it is difficult or impossible to determine whether they ever finish.
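
For a flavour of how hard it can get, here is a small added illustration: whether the loop below terminates for every positive starting value is the open Collatz conjecture, so no amount of inspecting the source settles it.

Python:
def collatz(n):
    # Apply the Collatz step until reaching 1; whether this terminates for every
    # positive integer n is an open problem (the Collatz conjecture).
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return n

print(collatz(27))   # happens to halt for this particular input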

For more info, see https://en.wikipedia.org/wiki/Halting_problem.
 
  • #3
NoahsArk said:
Summary: Help understanding the halting problem.

So I am confused not only about the solution, but about the nature of the problem itself
You are asking about an extremely abstract concept in the mathematics of computability. If you are good at math, and want to dig deeper, I suggest the following references. But one or two paragraphs of text here won't do it.

https://en.wikipedia.org/wiki/Halting_problem#References
 
  • #4
The (undecidable) halting problem is the problem of writing a program which can reliably determine, for any arbitrarily given program and input, whether that program will (or will not) halt. Programmers can often make that determination about a given program. But we can't write a program that we can be certain will always, in finite time, succeed in accomplishing that task -- https://en.wikipedia.org/wiki/List_of_undecidable_problems . . .
 
  • #5
NoahsArk said:
1) I'm not even sure what the halting problem is asking. Is it asking whether or not there are certain problems that computers can't solve, or is it asking whether or not there are certain truths that computers will never be able to prove?

Like Mark said, it is asking the first one. But interestingly, the answer also says something about the second one.

NoahsArk said:
If it's the latter, then it's getting philosophical, because how can we know something is true to begin with if it can't be proven?
2) What is the solution saying? My understanding is that it proves that there are some kinds of questions that computers will never be able to solve, even computers that one day are thousands of times faster than today's.

In mathematics, we start with a language to form statements and some assumptions (axioms), and then we use a logic to see what we can deduce from those assumptions. The common axioms used today are called the ZFC axioms. Kurt Gödel proved a really interesting fact about ZFC and similar mathematical systems (any effectively axiomatized system strong enough to express arithmetic): no such system is both complete (every statement expressible in the language can be proven true or false) and consistent (the axioms don't lead to contradictions). If the system is inconsistent, then it turns out every statement can be proven both true and false (which would be a serious problem). Gödel's proof is very similar to Turing's, and they tell us very similar things about languages, logic, and truth.

What is also interesting is that determining whether ZFC is consistent is itself like solving the halting problem. If there are contradictions in the ZFC axioms, then a computer program that enumerates proofs could eventually find one. But there is no limit to how long that could take, so the only way to proceed is to keep trying forever. If ZFC is consistent, then you'll never find a contradiction no matter how long you try, but at any point you won't know whether you eventually would. And that means that, for all we know, someone out there has discovered a contradiction in ZFC and is hiding it and using it in sneaky ways to prove anything they want to.

NoahsArk said:
3) The way I see the Halting problem being defined is "Can we write a program that will determine whether any other program will halt or run forever?" Thinking of modern programming languages, I don't see why this isn't possible. In Python, for example, code which loops has the "while" keyword. If we want to see whether or not a program loops forever, we can just scan the code and see if it has any looping keywords. There must be something more fundamental about this problem that I'm missing.

While you can look at the loops in a program and analyze them, what if the condition that decides whether to keep looping is itself something undecidable?
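
To make that concrete, here is a small added illustration (the helper names are mine): the source contains exactly one "while" loop, so keyword scanning tells you a loop is there, but whether the program halts depends entirely on the predicate it is given.

Python:
def search(satisfies, start=0):
    # Return the first integer n >= start for which satisfies(n) is True.
    n = start
    while not satisfies(n):   # halts only if some n ever satisfies the predicate
        n += 1
    return n

print(search(lambda n: n % 7 == 0, start=1))   # halts right away (returns 7)
# search(lambda n: False)                      # same source, never halts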

NoahsArk said:
My understanding is that we'd want a program like Hal, which can determine whether any other program will halt or run forever, to exist so that we could solve problems like Goldbach's conjecture. I don't see how this trick of creating a contradictory program means that programs like Hal couldn't solve such problems.
Both Gödel's proof and Turing's proof use a self-referential trick, where something is asked to answer a question about itself, but since then more practical examples have been found, and the list keeps growing.

NoahsArk said:
The real problem seems to be that if a program were to solve Goldbach's conjecture, it would have to keep testing even numbers and seeing if any of them fails to be the sum of two primes. It might one day find a number like that, but how could we prove in advance whether or not it ever will?

Your intuition is good on this issue.

These types of problems you mention are called recursively enumerable (where we can be sure we will find the answer if it is a yes), but not co-recursively enumerable (where we can be sure we will find the answer if it is a no). It means we can only settle the question when the answer is yes (e.g., a solution exists). The halting problem is another example (we can always figure out that a program halts if it does eventually halt, just by running it, but it might run forever and we won't know whether it is going to or not), and another example is finding solutions to Diophantine equations. This was Hilbert's 10th problem.

https://en.wikipedia.org/wiki/Diophantine_equation#Hilbert's_tenth_problem
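
To illustrate the "yes answers are findable, no answers may never be" behaviour, here is a rough added sketch of the Goldbach search the original poster described: it halts and returns an even number only if that number is a counterexample to Goldbach's conjecture; if the conjecture is true, it runs forever and never tells you so.

Python:
def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_goldbach_sum(n):
    # True if the even number n can be written as a sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_counterexample_search():
    n = 4
    while is_goldbach_sum(n):   # keeps going as long as every even number checks out
        n += 2
    return n                    # only ever reached if a counterexample exists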

Now, going back to the issue of completeness and ZFC, consider the statement that some particular Diophantine equation has no solution. That statement is either true or false. If the equation does have a solution, a proof exists: just exhibit the solution and check it. But because the problem is not co-r.e., there must be equations with no solution for which no such proof exists (otherwise we could decide the problem by searching in parallel for a solution and for a proof that there is none). For such an equation, "this equation has no solution" is a true statement that is independent of ZFC. So, essentially, this result also yields Gödel's first incompleteness theorem.

There are also problems which are neither r.e. nor co-r.e. But a problem is only decidable if it is both (you need to be able to answer both yes and no: if it has a solution you need to be able to determine that, and if it has no solution you need to be able to determine that as well). So undecidable doesn't mean exactly what it sounds like.

The reason examples like the halting problem are so pivotal is that we can do something called reductions, where we reduce one problem to another. For example, we could show that if we could solve some problem ##P##, we could use that algorithm as a subroutine to solve the halting problem. If we show that, then we know that ##P## is undecidable as well. The halting problem is a very easy foundational example of such problems, which allows us to keep discovering new undecidable problems easily.



https://web.stanford.edu/class/archive/cs/cs103/cs103.1134/lectures/20/Small20.pdf
https://web.stanford.edu/class/archive/cs/cs103/cs103.1134/lectures/21/Small21.pdf

NoahsArk said:
So I am confused not only about the solution, but about the nature of the problem itself. Adding to the confusion is the fact that when Turing was writing about the halting problem, there were no computers or computer programs like there are today, and by "computer" and "program" he meant something more abstract which I can't define.

Thanks

The important thing about the Turing machine is that it is a model of computing which, according to the Church-Turing thesis, can compute anything that can be computed. It means it is as powerful as any other possible model of computing, including modern programming languages, quantum computing, etc. The thing is that this is not something proven; it is argued strongly and widely accepted, and nobody has ever been able to find a counterexample.

Anyway, assuming the Church-Turing thesis, anything you prove about computability using Turing machines also holds for any other model of computing.

Turing went about this by modelling how people figure things out. His model is based conceptually on a person with an unlimited supply of time, paper, and pencil.

Now you might imagine there could be some more powerful system that could do more than a person could with paper and pencil. Some imagine, for example, that consciousness might be able to do something more. But even if that were true, whatever it computed that you couldn't calculate with paper and pencil would also be something you couldn't prove correct, or write down exactly how it was done.

But we really can't prove that someone couldn't do something beyond Turing machines somehow or be gifted unexplainable answers to uncomputable problems. So people also capture that possibility with the concept of an oracle, which solves some uncomputable problem for you, and then they study what Turing machines could do with access to that oracle. So in the end, all the bases seem to be covered.

Even though this is the formal model used in a great deal of theoretical computer science, you don't need to prove everything directly in terms of Turing machines, and it would be tedious to do so. There is also the concept of Turing completeness, which means that a system is as powerful as a Turing machine. Most programming languages are Turing complete, and you don't need a lot of features; just a few instructions are enough. So you can usually safely use modern programming languages or pseudo-code in your proofs as well.

https://en.wikipedia.org/wiki/Turing_completeness

There is another model, introduced around the same time, called the lambda calculus, which was invented by Alonzo Church. But that system is less intuitive and, I think, harder to work with, so Turing's model prevailed over Church's.

https://en.wikipedia.org/wiki/Lambda_calculus
 
Last edited:
  • #6
Thank you for the responses.

Correct me if I'm wrong, but it looks like the halting problem only shows that there is at least one type of problem a computer program can't solve. I.e., a computer program can't correctly state what another program will do when that other program is programmed to do the opposite of what the predicting program says. This doesn't speak to what's knowable in general. Hal can still help people know how Barry is going to act; we just assume that whatever Hal says, Barry does the opposite.

Is it true that the halting problem's solution is not a general formula which shows you how to know whether any given problem is solvable? For example, I am assuming you can't use it to conclude that Goldbach's conjecture is or isn't solvable? If I'm not mistaken, the solution is just a proof, by way of one example, that there is a problem out there that a computer can't give us the right answer to?

Also, the halting problem reminds me of a problem that I read about related to predicting the future in general. As soon as you disclose the prediction to someone, they change their future action in light of the prediction, and the prediction doesn't come true. E.g. if I predict that a bridge will collapse in a certain town tomorrow and that five people will be injured, anyone who believes in that prediction will do their best to assure no one drives on the bridge tomorrow, and the prediction won't come true. So it seems like the halting problem could've been derived through a pure thought experiment, and isn't in any way related to technology.
 
  • #7
NoahsArk said:
Thank you for the responses.

Correct me if I'm wrong, but it looks like the halting problem only shows that there is at least one type of problem a computer program can't solve. I.e., a computer program can't correctly state what another program will do when that other program is programmed to do the opposite of what the predicting program says. This doesn't speak to what's knowable in general. Hal can still help people know how Barry is going to act; we just assume that whatever Hal says, Barry does the opposite.

You can think of a Turing machine as a computer program, or algorithm (a finite set of instructions). For the halting problem, ##HALT##, the problem instances are strings which encode a program together with an input, ##<P,x>##. There is no Turing machine which takes an input ##<P,x>## and outputs yes if ##P(x)## halts and no if ##P(x)## doesn't halt.

It is a proof by contradiction: assuming such a decider exists lets you construct a program that defeats it. But nevertheless, that's enough to tell us that the Turing machine above cannot exist.
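
Here is a minimal added sketch of that contradiction, using the OP's names: halts plays the role of Hal and barry the role of Barry. The halts function is hypothetical; it is exactly what the argument rules out.

Python:
def halts(program, x):
    # Hypothetical decider, assumed (for contradiction) to return True exactly
    # when program(x) would halt. Turing's argument shows it cannot be written.
    raise NotImplementedError

def barry(program):
    if halts(program, program):   # if the decider predicts "this halts"...
        while True:               # ...do the opposite and loop forever
            pass
    return "done"                 # ...otherwise halt immediately

# Consider barry(barry): if halts(barry, barry) returned True, barry would loop
# forever; if it returned False, barry would halt. Either way halts is wrong,
# so no such decider can exist.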

Once that is established, it is like a starting point from which a lot of other things follow easily. For example: does P halt on 0? Does P(x) = x for some x? Does P halt on only a finite number of inputs? Does P(x) = true for all x? All of those problems can easily be shown undecidable using the result that ##HALT## is undecidable, because if we could solve them, that would allow us to solve ##HALT##, which is a contradiction.

So after ##HALT## is proven undecidable, there is a cascade of other problems that are directly proven undecidable as well.

But that doesn't generalize completely.

https://cs.stackexchange.com/questi...hat-wouldnt-be-solvable-with-a-halting-oracle

NoahsArk said:
Is it true that the halting problem's solution is not a general formula which shows you how to know whether any given problem is solvable? For example, I am assuming you can't use it to conclude that Goldbach's conjecture is or isn't solvable?

If it were the case that there was a Turing machine that could decide ##HALT##, then that would give us the ability to solve Goldbach's conjecture. But the proof that no Turing machine can decide ##HALT## doesn't tell us whether Goldbach's conjecture is solvable (I don't think).

NoahsArk said:
If I’m not mistaken, the solution is just a proof, by way of one example, that there is a problem out there that a computer can’t give us the right answer to?
Yes. But like I said, a lot of things follow directly from that result. And anyway, we now know a lot of practical undecidable problems, so it's not only weird contrived examples.
NoahsArk said:
So it seems like the halting problem could've been derived through a pure thought experiment, and isn't in any way related to technology.
It's related to what questions humans can find answers to in general, no matter what technology they use. It's a much stronger statement than one about what real computers can do. There are also problems which could be solved in theory, but not in practice, because the universe wouldn't last long enough.

This video might be interesting to you.

 
Last edited:
  • #8
Jarvis323 said:
Once that is established, it is like a starting point from which a lot of other things follow easily. For example: does P halt on 0? Does P(x) = x for some x? Does P halt on only a finite number of inputs? Does P(x) = true for all x? All of those problems can easily be shown undecidable using the result that HALT is undecidable, because if we could solve them, that would allow us to solve HALT, which is a contradiction.

It's hard for me to see the connection between the halting problem and answers to other questions that you gave examples of. I may just need to understand more about computing since it's a new subject for me. I bought the book "The Annotated Turing" by Charles Petzold. Hopefully it gives me a foundation.
 
  • #9
NoahsArk said:
It's hard for me to see the connection between the halting problem and answers to other questions that you gave examples of. I may just need to understand more about computing since it's a new subject for me. I bought the book "The Annotated Turing" by Charles Petzold. Hopefully it gives me a foundation.
The relationship is that one problem reduces to another, meaning if you could solve one problem, you could solve the other. For example: if only I had a million dollars, I could buy a house on the beach. So the problem of buying a house on the beach reduces to the problem of acquiring a million dollars.

As a practical analogy of how you can use these kinds of reductions, suppose we want to know whether some machine can do ##x##. If we assume that the machine can do ##x##, and then find that doing ##x## would let us violate the conservation of energy (i.e., violating the conservation of energy reduces to being able to do ##x##), then we know that the machine can't do ##x##.

Like the halting problem, what we know about conservation of energy lets us determine a lot more through this kind of analysis. In this sense, the undecidability of halting is to computing what conservation of energy, or the fact that nothing can go faster than the speed of light, is to physics.

Take the first one: does ##p## halt on 0? Call that problem ##H_0##. To show ##H_0## is undecidable, we suppose we can decide ##H_0## and show that this would allow us to decide ##HALT##, which would be a contradiction, since we know ##HALT## is undecidable.

Here is a proof, for example, that ##H_0## is undecidable.

Assume there is a program ##d## that decides ##H_0##, i.e. you give ##d## a program ##q## and ##d## tells you whether ##q## will halt on input 0 or not.

Now suppose we are given an arbitrary program and input pair, ##(p,x)##, and we want to know if ##p## will halt on input ##x##.

We can then use ##d## to help us out by creating a program ##q## to use as input to ##d##, and then using that answer to deduce whether ##p## halts on ##x## or not. So we make a converter function ##c## which takes ##(p,x)## and outputs a program ##q## to feed to ##d##, such that ##d(q)=yes## if and only if ##p## halts on ##x## (or, with the opposite polarity, ##d(q)=no## if and only if ##p## halts on ##x##).

Python:
def c(p, x):
    # Build a one-argument program q that ignores its input y and just runs p on x.
    return lambda y: p(x)

Now we can compute ##q=c(p,x)##, and then run ##d(q)##.

Case 1: ##p## halts on ##x##.

Then ##q## halts on every input, including 0. So ##d(q)=yes##.

Case 2: ##p## doesn't halt on ##x##.

Then ##q## runs forever on every input, including 0. So ##d(q)=no##.

So we can just use the answer directly to tell us if ##p## halts on ##x##. And that means that ##d(c(p,x))## decides the halting problem. ##\bot## (we know the halting problem is undecidable, so we have derived a contradiction from our assumption).

This means our assumption that ##d## decides ##H_0## must be false, and therefore ##H_0## is undecidable. ##\square##
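
Putting the pieces together, here is a compact added sketch of the whole reduction; d stands for the hypothetical decider for ##H_0## that the proof assumes and then rules out.

Python:
def c(p, x):
    # Converter from the proof: q ignores its argument and just runs p on x.
    return lambda y: p(x)

def decide_halt(p, x, d):
    # If d really decided "does this program halt on 0?", this function would
    # decide the general halting problem for (p, x) -- which is impossible.
    q = c(p, x)
    return d(q)   # yes iff q halts on 0 iff p halts on x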
 
Last edited:
  • #10
Jarvis323 said:
Kurt Gödel proved a really interesting fact about ZFC and similar mathematical systems (any effectively axiomatized system strong enough to express arithmetic): no such system is both complete (every statement expressible in the language can be proven true or false) and consistent (the axioms don't lead to contradictions).

There are plenty of systems that are decidable, and that can be proved both complete and consistent. Propositional logic is one of them.

Gödel showed that arithmetic (any consistent, effectively axiomatized system strong enough to express it) is incomplete, and arithmetic is also undecidable. I seem to recall that if multiplication is excluded, leaving only addition (Presburger arithmetic), the resulting reduced arithmetic is decidable. Not certain about that though. There is no simple test to tell whether a system is decidable.
 

FAQ: Halting Problem and Turing's Solution

What is the Halting Problem?

The Halting Problem is a fundamental question in computer science that asks whether there is a general procedure to determine if an arbitrary program, on a given input, will halt or run forever. It was first formulated by mathematician and logician Alan Turing in the 1930s.

What is Turing's Solution to the Halting Problem?

Turing's solution to the Halting Problem is a proof that it is impossible to create a program that can determine whether any other program will halt or run forever. This proof is based on his concept of a Turing machine, which is a theoretical model of a computing device that can perform any computation that can be described algorithmically.

Why is the Halting Problem important?

The Halting Problem is important because it highlights the limitations of computer programs and the concept of computability. It shows that there are problems that cannot be solved by any algorithm or program, no matter how powerful or sophisticated it may be.

How does the Halting Problem relate to the concept of undecidability?

The Halting Problem is an example of an undecidable problem, meaning that there is no algorithm that can solve it for all cases. This concept was first introduced by Turing in his solution to the Halting Problem and has since been applied to other problems in computer science and mathematics.

What are the practical implications of the Halting Problem?

The Halting Problem has practical implications in the field of computer science as it shows that there are limits to what can be computed and automated. It also has implications in the development of programming languages and software, as developers must be aware of the potential for undecidable problems in their code.
