Logic in Computation: Exploring Deviations from Classical Logic

In summary, this conversation focuses on the relationship between logic and computation, specifically in the context of the Liar's paradox and the concept of bottom in type theory. The discussion also touches on the use of number theory to assess computability and the potential benefits of standardizing computation in this way. Overall, the conversation raises important questions about the laws of logic and their deviations in the realm of computation, as well as the potential for further research and development in this field.
  • #1
John Creighto
Since I know that this will get lost in the Liar's paradox thread I wanted to separate it out from the thread:

In post #18 of the Liar's paradox thread, Hurkyl writes:
Of course, there are other variations on logic than classical logic. I mentioned computability theory: that the alternatives are {true, false, infinite loop} is rather important to the theory. e.g. it gives a way out to the specific construction of the Liar's paradox I mentioned earlier: to refresh:
Q(R) := not R(R)
P := Q(Q)
The naive implementation of the predicate Q and the sentence P clearly results in P evaluating to "infinite loop".
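The construction Hurkyl describes can be sketched directly in Python (a minimal illustration; the names Q and P follow the quote above). Evaluating P() never produces a truth value; in CPython the unbounded recursion surfaces as a RecursionError rather than a literal infinite loop:

```python
import sys

def Q(R):
    # Q(R) := not R(R)
    return not R(R)

def P():
    # P := Q(Q)
    return Q(Q)

# Keep the demonstration cheap: cap the recursion depth so the
# regress is cut off quickly instead of exhausting the real stack.
sys.setrecursionlimit(100)
try:
    P()
    outcome = "returned a value"
except RecursionError:
    outcome = "no truth value (infinite regress)"
```

So the "naive implementation" really does land in the third alternative {true, false, infinite loop} rather than in true or false.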

and in post #47 he writes:

Hurkyl said:
Third, I think it would be interesting to point out that in the logic of computation, there is no problem with there being a sentence P satisfying "P = P is not true". Here's an implementation in python:
Code:
def P():
    return not P()
however, your computer probably cannot do this computation: an equivalent implementation that will not overflow your stack is:
Code:
def P():
    while True:
        pass
I would like to know where he is going with this line of thought. A related concept I've seen before is the concept of bottom:

"In type theory, a theory within mathematical logic, the bottom type is the type that has no values. It is also called the zero or empty type, and is sometimes denoted with falsum (⊥).
A function whose return type is bottom cannot return any value. In the Curry–Howard correspondence, the bottom type corresponds to falsity."
http://en.wikipedia.org/wiki/Bottom_type

So while bottom acts like a third truth value, it is actually a subtype of every type, including both true and false: a value of type bottom could be used wherever true or false is expected. However, no such value exists, so a computation should only end up with type bottom when there is an error or it fails to terminate.
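Python's type hints offer a rough analogue of this (a sketch using only the standard library): typing.NoReturn plays the role of the bottom type for functions that never return normally, whether by looping or by raising:

```python
from typing import NoReturn

def loop_forever() -> NoReturn:
    # Never returns: its "return type" is bottom.
    while True:
        pass

def fail(msg: str) -> NoReturn:
    # Also never returns normally: it always raises.
    raise RuntimeError(msg)

# fail never yields a value of any type, boolean or otherwise;
# control only leaves it via the exception.
try:
    fail("bottom has no values")
    got_value = True
except RuntimeError:
    got_value = False
```

Either way of "inhabiting" bottom, looping or raising, produces no value, matching the idea that the bottom type is empty.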

To summarize the intent of this thread: how do the laws of logic relate to computation, and what deviations might there be from those laws (for instance, the law of the excluded middle)?

P.S. Also in the Liar's paradox thread the question of the Halting problem came up:
http://en.wikipedia.org/wiki/Halting_problem
 
  • #2
One thing that might be of interest to you is a number theory approach to computation.

Gregory Chaitin developed a way to take code and convert it to a number theoretic problem.

I see this kind of thinking as a critical way to systematically and analytically assess whether something is computable: if you can express the problem as a number-theoretic system to solve for a particular collection of variables, and you can show that no solution exists, then you have shown (directly or indirectly) that it cannot be computed.

We know that trying to solve 2x ≡ 1 (mod 2) is impossible for any integer x, since 2x is always even, and such a statement shows that x is completely uncomputable.
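A brute-force check makes the point concrete (a toy sketch, not part of Chaitin's construction): since 2x is always even, no x in any finite search range satisfies the congruence:

```python
def satisfies(x):
    # Does 2x ≡ 1 (mod 2) hold?
    return (2 * x) % 2 == 1

# 2x is even for every integer x, so the search never succeeds,
# no matter how far we extend the range.
found = any(satisfies(x) for x in range(10_000))
```

Of course the interesting cases are the ones where no such simple parity argument is available, which is exactly where the halting problem bites.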

This kind of formalism provides a nice way of showing uncomputability properties for a general algorithm, if it can be turned into a number-theoretic system (this could be a collection of constraints, like you have in Chinese Remainder Theorem calculations, or a single constraint like a one-line congruence relationship).
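For the "collection of constraints" case, a small brute-force search over a system of congruences illustrates what such a specification looks like (a toy sketch with made-up moduli; real CRT solvers use the extended Euclidean algorithm rather than search):

```python
from math import prod

def solve_congruences(constraints):
    # constraints: list of (a, m) pairs encoding x ≡ a (mod m).
    # Search one full period of the combined moduli; the solution
    # set is periodic, so no solution here means no solution at all.
    period = prod(m for _, m in constraints)
    for x in range(period):
        if all(x % m == a for a, m in constraints):
            return x
    return None

# x ≡ 2 (mod 3) and x ≡ 3 (mod 5): the unique solution mod 15 is 8.
solution = solve_congruences([(2, 3), (3, 5)])
```

The point is not efficiency but the shape of the specification: a system of congruences either has a witness or provably has none.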

If each computation were done in this manner and a standard mechanism were used to find the solutions, then we would know that the program will not halt if there is no solution to the Diophantine system, which is of course very useful.

Not only does the above approach standardize computation, it tells us about halting whenever we are able to show that the system has no solution (the approach I would use is to solve the Diophantine system with a computer search that terminates only when it finds a solution).
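That standardized search can be sketched as an unbounded loop (a hypothetical illustration; the name predicate stands in for the solvability test of whatever Diophantine system is being searched). By construction it halts and returns a witness if and only if a solution exists over the non-negative integers:

```python
def unbounded_search(predicate):
    # Halts and returns the least witness iff the predicate has a
    # solution over the non-negative integers; otherwise loops forever.
    x = 0
    while True:
        if predicate(x):
            return x
        x += 1

# A solvable instance: 3x ≡ 1 (mod 7) has the least witness x = 5.
witness = unbounded_search(lambda x: (3 * x) % 7 == 1)
```

Handing this routine an unsolvable predicate (such as 2x ≡ 1 (mod 2)) would make it loop forever, which is exactly the halting behavior the post describes.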

Most of this post just takes Gregory Chaitin's work and applies the idea of putting algorithms into a standard Diophantine form: under a standardized search that only returns when it finds a solution, the program halts if and only if a solution exists.

Some food for thought.
 
  • #3
chiro said:
This kind of formalism provides a nice way of showing uncomputability properties for a general algorithm, if it can be turned into a number-theoretic system (this could be a collection of constraints, like you have in Chinese Remainder Theorem calculations, or a single constraint like a one-line congruence relationship).

I bet that this approach has high complexity, both because the Chinese remainder theorem only applies to linear congruences and because computers can deal with quite large numbers.
 
  • #4
John Creighto said:
I bet that this approach has high complexity, both because the Chinese remainder theorem only applies to linear congruences and because computers can deal with quite large numbers.

It does, but it's beneficial to have it in this kind of form, because once the field is developed to the point where one can check whether solutions exist, then from a computer science point of view this is very useful for determining whether algorithms ever halt.

Number theory in its current form is still very young, but I imagine that this kind of specification (or something similar) is probably going to be the way people look at computation later on: not only does it standardize computation, it offers the opportunity, given further research, to determine whether an algorithm will terminate.
 
  • #5


In response to the discussion on logic in computation and deviations from classical logic, I would like to offer some insights as a scientist.

Firstly, it is important to understand that classical logic is just one form of logic and there are many other variations that can be explored. In particular, the logic of computation plays a significant role in computer science and has its own set of rules and principles.

One interesting concept in the logic of computation is the idea of a third truth value, which can be represented by "infinite loop" in some cases. This third truth value allows for a different perspective on the Liar's paradox and provides a way out of the paradox through the concept of computability.

Additionally, the concept of bottom type in type theory also adds another layer to our understanding of logic in computation. While it may act as a third truth value, it is actually a subtype of both true and false, and its use is typically reserved for cases of error in computation.

Overall, the relationship between logic and computation is complex and constantly evolving. There may be deviations from classical logic in the context of computation, and it is important for scientists to continue exploring and understanding these deviations in order to further our understanding of the laws of logic and their applications in computation.
 

FAQ: Logic in Computation: Exploring Deviations from Classical Logic

What is "Logic in Computation"?

"Logic in Computation" is a field of study that explores the use of logical reasoning and computational methods to solve problems and make decisions. It involves the application of mathematical logic and computer science to analyze and design algorithms and systems that can process and manipulate information in a logical way.

How does "Logic in Computation" differ from classical logic?

Classical logic is a branch of philosophy that deals with the principles of valid reasoning. It is based on the law of excluded middle, which states that a statement is either true or false. "Logic in Computation", on the other hand, allows for deviations from this principle and considers the use of different types of logic, such as fuzzy logic or modal logic, to better model and solve real-world problems.

What are some applications of "Logic in Computation"?

"Logic in Computation" has a wide range of applications in fields such as artificial intelligence, natural language processing, database systems, and game theory. It is also used in the development of programming languages and software for automated reasoning and decision-making.

How is "Logic in Computation" relevant to everyday life?

"Logic in Computation" has many real-world applications that impact our daily lives, such as in the development of smart home devices, virtual assistants, and self-driving cars. It also plays a role in improving efficiency and accuracy in industries like healthcare, finance, and transportation.

What are the potential challenges in implementing "Logic in Computation"?

One of the main challenges in implementing "Logic in Computation" is the complexity of real-world problems and the need for more sophisticated and flexible logic systems. There is also a need for more research and development in this field to better understand the limitations and potential of different types of logic and their applications. Additionally, ethical considerations must be taken into account when using logic in decision-making processes, as biased or flawed logic can have significant consequences.
