T S Bailey
I came across this argument on thinkinghard.com: "
To make the contradiction obvious, let the human mathematician who understands that G(G) is non-terminating be the same human mathematician for whom F determines their mathematical ability. If the mathematician were a robot, telling them that G(G) is non-terminating would cause a genuine increase in their mathematical ability. But Roger Penrose claims that the mathematician already knows G(G) is non-terminating, because they understand the Gödelian argument.
I will show that this is not the case. We must return to basic principles. The task assigned to the function F is the following:
- Given program X and data Y, determine if X(Y) does not terminate.
If we tell the mathematician that F is the program determining their mathematical ability, then we are giving them extra information, and that is what enables them to state that G(G) is non-terminating, apparently going beyond the capability determined by F.
We can just as easily program a robot mathematician to accept claims made by trustworthy parties about things that the robot does not already know, for example that a function F is the function that determines that robot's mathematical ability. But the moment that the robot accepts that information, F goes out of date as a description of that robot's mathematical ability."
Doesn't this argument beg the question by assuming that a human's mathematical ability could be determined by F? Wouldn't F necessarily be computable, and if so, wouldn't postulating its existence be the same as concluding that computationalism is correct?
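For concreteness, here is a minimal Python sketch of the diagonal construction the quoted argument turns on. F here is a toy, trivially sound stand-in for the hypothetical procedure that certifies non-termination, and the names and marking scheme are mine, not from the original:

```python
def F(x, y):
    # Toy stand-in for the hypothetical sound (but incomplete) procedure
    # that certifies "x(y) does not terminate". This version certifies
    # only programs explicitly marked as known non-terminating, so it is
    # sound by construction but wildly incomplete.
    return getattr(x, "known_nonterminating", False)

def G(x):
    # The diagonal program: G(x) halts exactly when F certifies that
    # x(x) does not terminate, and loops forever otherwise.
    if F(x, x):
        return "halted"
    while True:
        pass

# If F(G, G) returned True, soundness would imply G(G) never terminates;
# yet by G's definition, G(G) would then halt immediately: contradiction.
# So any sound F must answer False here, which means G(G) really does
# loop forever, a fact F itself can never certify.
print(F(G, G))  # -> False for this (and any sound) F
```

The point of the sketch is that the gap is structural: whatever sound F you plug in, the same construction of G manufactures a true non-termination fact that F cannot certify.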