Can you solve Penrose's chess problem and win a bonus prize?

In summary, a chess problem has been created that is meant to challenge a computer while remaining solvable by humans. The goal is to either force a draw or win as white. The Penrose Institute is looking for solutions and will scan the brains of those who solve it. The problem may seem hopeless for white, but it is possible to draw or even win. Chess computers struggle with this type of problem because of the massive number of possible positions they would have to consider. Humans are advised to find peace and quiet when attempting to solve it, and may even experience a flash of insight. The first person to legally demonstrate the solution will receive a bonus prize. Both humans and computers are invited to participate.
  • #141
Buzz Bloom said:
Hi Demystifier:

I appreciate your post, but its succinctness is a bit disappointing. Although we disagree, I respect your knowledge, and I think I would benefit from understanding your reasons for disagreeing. It may well be that our disagreement is only about the use of terminology.

Regards,
Buzz
To avoid too much offtopic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
 
  • #142
Demystifier said:
I agree that it's off topic, but disagree with the rest.
Which part?
 
  • #143
ObjectivelyRational said:
Which part?
That the phil. zombie is self-refuting. You can also take a look at the paper I linked in the post above.
 
  • #144
Demystifier said:
That the phil. zombie is self-refuting. You can also take a look at the paper I linked in the post above.

Paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree with conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?
 
  • #145
ObjectivelyRational said:
Paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree with conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?
The paper does not talk about p-zombies explicitly. However, it defends the same basic ideas as Chalmers's book. To see how p-zombies are logically possible, one can then consult that book.
 
  • #146
ObjectivelyRational said:
Paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree with conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?

I noted something of your paper which we likely disagree on:

You state:

1. Physical laws are entirely syntactical.
2. Brains are entirely based on physical laws.
3. Anything entirely based on syntactical laws is entirely syntactical itself.
Therefore brains are entirely syntactical.

1 may be true but 2 is false.

Physical laws are our attempt to describe reality and they have a certain form, but they are abstractions.
Reality is and acts as it is and does, reality does not follow nor is it based on our physical laws.

You are conflating two distinct things here, one is science, i.e. the study of reality and the abstractions and formulations in math and language we use in order to try to understand it. The other is reality itself which has a nature and behaves according to its nature. We try to understand reality but our understanding is not something reality follows or is based on.

This kind of error helps me understand why we disagree.

Best of luck!
 
  • #147
ObjectivelyRational said:
I noted something of your paper which we likely disagree on:

You state:

1. Physical laws are entirely syntactical.
2. Brains are entirely based on physical laws.
3. Anything entirely based on syntactical laws is entirely syntactical itself.
Therefore brains are entirely syntactical.

1 may be true but 2 is false.

Physical laws are our attempt to describe reality and they have a certain form, but they are abstractions.
Reality is and acts as it is and does, reality does not follow nor is it based on our physical laws.

You are conflating two distinct things here, one is science, i.e. the study of reality and the abstractions and formulations in math and language we use in order to try to understand it. The other is reality itself which has a nature and behaves according to its nature. We try to understand reality but our understanding is not something reality follows or is based on.

This kind of error helps me understand why we disagree.

Best of luck!
You are right: if my axiom 2 is wrong, then so are my conclusions. In that sense I can conditionally agree with you about the p-zombies. Note also that the last paragraph of that section in my paper is a conditional statement, i.e. it contains an "if".
 
Last edited:
  • #148
There is one point that I forgot to mention in my previous post. Suppose you say that "functionally"** a computer program is exactly the same as a sentient human being.

Now suppose you accepted LEM for basically any non-recursive set (take the halting set, to be specific). Then by that "very acceptance" you are saying that the sentient human being has the "potentiality" to go beyond a computer program. That is, even though the sentient human being can't prove all the statements in a set past a given threshold (of his training, that is), it is "possible" to "help" him (wouldn't this be the very point of taking LEM to be true?).

I am not necessarily taking any point of view here. I just have genuine difficulty seeing how someone would take both of the following viewpoints simultaneously:
-a- "equating" computer programs and sentient human beings for "all" functional purposes*** (in the sense of potentiality****)
-b- accepting LEM for halt set

If you reject (b), then I can see why someone could take view (a) above (since, at least seemingly, there is no internal inconsistency).

But I personally feel quite strongly that all of this discussion is eclipsed by my previous posts, so while it is perhaps good to mention (for the sake of completeness), it is of a less fundamental nature (in my view).

** I keep emphasizing this distinction on the following basis:
Suppose you made an automaton out of "pure circuitry" and "nothing else", but which, to all appearances, wouldn't seem or act like mere circuitry (let's assume so, for all practical purposes). But so what? Should I say it is "really" conscious? It could even deceive someone who didn't know it was "pure circuitry". But even then, what difference does it make?

*** Notice that I don't just mean "pragmatic functional purposes" or "practical functional purposes", but certainly in a deeper sense than that.

**** I am personally completely convinced that the equivalence doesn't even hold in the sense of "past a certain threshold" (of training), let alone in the sense of "potentiality".

Edit:
Perhaps some clarification is in order. This may be too much for a point that isn't all that important (at least in my opinion), but since I have already made the post, I guess an explanation is better to avoid ambiguity.

When we talk about a statement such as:
"This program loops forever on this input"
We can only talk about such a statement being "absolutely unprovable" when it is true, because refuting it (if it really is false) is trivial: just run the program until it halts.
Denote by S the set of positions of these supposedly "absolutely unprovable" statements.

S can't be r.e. That's because, if it were, every statement could be decided in a sound way on the following basis:
(1) Start with number 0.
(2) Call for "help". If the statement belongs to S "help" will never come. But eventually it would just be "enumerated" (because of S being r.e.) and we could just return "true". If the statement belongs to complement S then "help" will come at some point. So it is just a matter of waiting long enough.
(3) Move to next number.

"help" means pressing a button on a controller that sends the signal to some "genius mathematicians" in a far away galaxy. With the help button, they start working on the problem "eventually" resolving it (if it is resolvable at all).
Also note that roughly the idea here is that if the "genius mathematicians" start retorting to guesses, to be sure they may get the result right (that is returning "true") for a finite number of initial values of S, but that comes at the cost of making eventual mistakes (potentially at any statement number).
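For concreteness, here is a minimal Python sketch of the dovetailing in steps (1)-(3) above. The enumeration of S and the "help" oracle are stubs, since both are hypothetical (the whole point is that no enumeration of S can actually exist); the sketch only shows that, if S were r.e., interleaving the two searches would soundly decide every statement.

Code (Python):
from itertools import count

def enumerate_S():
    """Hypothetical r.e. enumeration of S; would yield its elements one by one.
    Stub: no such enumeration is actually available."""
    yield from ()

def help_oracle(n, patience):
    """Hypothetical "help" signal: returns True/False once the far-away
    mathematicians resolve statement n, or None if no answer has arrived
    after `patience` units of waiting. Never answers for n in S."""
    return None

def decide(n, max_rounds=1000):
    """Dovetail the two unbounded searches. If S were r.e., one of the two
    branches would eventually succeed for every n, so the loop would
    terminate with a sound answer; max_rounds exists only so this stubbed
    version halts."""
    enum = enumerate_S()
    seen = set()
    for rounds in count(1):
        for _ in range(rounds):          # pull more of the enumeration of S
            m = next(enum, None)
            if m is not None:
                seen.add(m)
        if n in seen:
            return True                  # n is in S, so the statement is true
        answer = help_oracle(n, patience=rounds)
        if answer is not None:
            return answer                # "help" arrived: a sound answer
        if rounds >= max_rounds:
            return None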

Now the possibility of (a) being true and (b) being false could "presumably" occur when there exists a recursive and sound reasoning system that halts on all values that belong to the set S' (the complement of S).
Is there something obviously wrong with this or not? I can't say, to be honest.

Now by a recursive and sound reasoning system I mean a partial recursive function f : N → N such that:
-- it can't return "false" when the statement for a given number is true
-- it can't return "true" when the statement for a given number is false
-- it can't return "false" when the statement number belongs to the set S
-- it may run forever on any given input

P.S. I have tried to remove any "major" mistakes in the "Edited" part, but some might still remain, as I hadn't written any of this in a thorough form before (though I had given some thought to these issues before).
 
Last edited:
  • #149
Demystifier said:
To avoid too much offtopic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
Hi Demystifier:
Thanks for the link.

From the abstract it seems we mostly agree. I plan to complete reading the paper soon.

Regards,
Buzz
 
  • #150
Demystifier said:
You are right: if my axiom 2 is wrong, then so are my conclusions. In that sense I can conditionally agree with you about the p-zombies. Note also that the last paragraph of that section in my paper is a conditional statement, i.e. it contains an "if".

Then in a sense we are likely in agreement.

Simulation of a system, using physics, science, and computation, is not the same as replication of a system. That is not to say that a simulation cannot replicate certain aspects of the system, but if "what matters" about a real system cannot be successfully simulated, then certainly replication of "what matters" about that system cannot be achieved through simulation.

This does not mean that consciousness cannot actually be replicated; it only means that it cannot be replicated through simulation. A model of a wave on water will never actually be a wave. If a wave is "what matters" phenomenologically, we can of course set one up with another liquid... hence replicating the waves exhibited by water with something else... of course, we had to know enough about waves to know we could replicate the waves we see on water with waves on another liquid.

If and when the hard problems of consciousness are solved, replication would entail ensuring that what matters about the natural system, i.e. whatever it is about our brains that makes consciousness possible and causes it to be, is present in the system which is to exhibit it. In this case, of course, replication would not be simulation but the actual exhibited phenomenon of consciousness, which would emerge because the conditions which create it are present.
 
  • Like
Likes Demystifier
  • #151
I think black can only move their bishops, none of which can capture any of the pawns, so I believe that as long as white just moves his king around, a threefold repetition will eventually occur.
 
  • #152
Also, the white king can help protect the passed pawn and have it promote, leading to checkmate.
 
  • #153
Actually, I think that wouldn't work, but I think I see a mating pattern involving getting the king to c6, using one pawn to deflect the queen and the other pawn to mate.
 
  • #154
tl;dr, but the idea that there are too many combinations for brute-forcing this position is stupid - black can only move the bishops, and they can go to approx 26 squares only, and white's king to approx 47 squares. So it's 26*25*24*47 = 733200 combinations only, far from the claim that it "exceeds all the computational power on planet earth".
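For reference, the arithmetic (the 26-square and 47-square figures above are rough approximations, not exact counts):

Code (Python):
# Rough count of positions if black only shuffles the three dark-squared
# bishops and white only shuffles the king (georgir's estimate above).
bishop_squares = 26      # approximate squares available to each black bishop
king_squares = 47        # approximate squares available to the white king
positions = bishop_squares * (bishop_squares - 1) * (bishop_squares - 2) * king_squares
print(positions)         # 733200

(As the reply below notes, this count ignores all the lines where white pushes a pawn or captures, which is where the game tree blows up.)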
 
  • #155
georgir said:
tl;dr, but the idea that there are too many combinations for brute-forcing this position is stupid - black can only move the bishops, and they can go to approx 26 squares only, and white's king to approx 47 squares. So it's 26*25*24*47 = 733200 combinations only, far from the claim that it "exceeds all the computational power on planet earth".
The number of possible positions grows massively if white makes one of the stupid moves. The number of possible game trees is even larger.
Seeing that these moves are stupid is the point. A human can do it. Can computer programs do it as well?
 
  • Like
Likes Auto-Didact
  • #157
stevendaryl said:
You're a David Chalmers fan! I consider him sort of a friend--he has stayed overnight at my house (way back when he wasn't famous).
Yes, Chalmers is my favorite philosopher. My second favorite is Descartes, who happened to die on the same day (though not in the same year) on which I was born.
 
  • Like
Likes Auto-Didact
  • #158
mfb said:
The number of possible positions grows massively if white makes one of the stupid moves. The number of possible game trees is even larger.
Seeing that these moves are stupid is the point. A human can do it. Can computer programs do it as well?

It's certainly possible for computers to see patterns, although I don't know how much (if any) is programmed in current chess-playing programs.
 
  • #159
Buzz Bloom said:
I don't think I have ever met anyone like those whom you describe as "ontological functionalists". The individuals whom I have met who consider themselves to be "functionalists", like myself, do not believe the physical description is impossible, but rather just irrelevant. The emergent behavior of emergent phenomena, like consciousness, does not depend on the physical description of the constituents, only on the functionality of the constituents.

I agree. As an example of a "functional" theory in biology, I would point to Darwinian evolution. The key components of evolution are:
  1. Reproduction
  2. Inheritable traits
  3. Variety (mutations)
  4. Differential reproductive success for different combinations of traits
Finding out that DNA sequences are the physical representations of traits was certainly an important discovery of biology, but I wouldn't say that this "physical" understanding of genes replaces the functional theory of evolution. DNA is one way that traits can be encoded, but DNA is not necessary for the theory of evolution to apply. If some organism turns out to use something different--RNA, maybe, or proteins, or silicon chips--the theory of evolution could still be applicable.

The abstract/functional theory of evolution is not an alternative to the biochemistry of living organisms---neither can replace the other. They are two different, though interrelated, research programs.
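As a toy illustration of the substrate independence of those four components (a purely hypothetical encoding and fitness function, nothing to do with real biology), traits here are plain bit lists rather than DNA:

Code (Python):
import random

TRAITS, POP = 10, 30

def fitness(genome):                # hypothetical: reward genomes with more 1s
    return sum(genome)

def mutate(genome, rate=0.05):      # variety (mutations)
    return [bit ^ (random.random() < rate) for bit in genome]

def reproduce(parent):              # reproduction with inheritable traits
    return mutate(list(parent))

# initial population: random bit lists standing in for heritable traits
population = [[random.randint(0, 1) for _ in range(TRAITS)] for _ in range(POP)]

for generation in range(50):
    # differential reproductive success: fitter genomes leave more offspring
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    population = [reproduce(random.choice(parents)) for _ in range(POP)]

print(max(fitness(g) for g in population))   # fitness climbs over the generations

Swap the bit lists for strings, RNA-like symbols, or anything else that can be copied with occasional errors, and the same functional story goes through unchanged.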
 
  • Like
Likes Buzz Bloom
  • #160
Demystifier said:
That the phil. zombie is self-refuting. You can also take a look at the paper I linked in the post above.

This whole thread is much more philosophical than many threads that have been closed for being overly philosophical. But the threads that are in danger of being shut down are always my favorites :oldsmile:

The problem I have with philosophical zombies and qualia and all this other subjective mental stuff is that it is exceedingly difficult to know what would count as evidence that various subjective claims are true or false. If subjective states are allowed to be disconnected from their role in the functioning of organisms (how they respond to the environment), then they become completely unconstrained. How do you know whether rocks or drops of water have subjective states? Maybe they do, but they just lack the brains and muscles to do anything about their subjective states. On the other hand, if you assume that they are always accompanied by their functional roles, then what reason is there, logically, not to just equate them with the role they play? In which case, the idea of zombies (that respond to stimuli the same way we do, but lack subjective states) becomes incoherent.

I think it's sort of an interesting topic, but it seems like trying to nail jello to the wall to get anywhere.
 
  • Like
Likes MrRobotoToo and Demystifier
  • #161
stevendaryl said:
This whole thread is much more philosophical than many threads that have been closed for being overly philosophical.
Maybe this thread is not closed yet because it is in the general math subforum, where overly philosophical threads are not abundant.

stevendaryl said:
But the threads that are in danger of being shut down are always my favorites :oldsmile:
Mine too. :oldsmile:
 
  • Like
Likes Auto-Didact
  • #162
Demystifier said:
To avoid too much offtopic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
Hi @Demystifier:

I enjoyed reading the cited paper. I entirely agree with the abstract. I have some issues with the arguments presented, but since the issues are quite philosophical I think it would not be appropriate to discuss them here. If you are interested, we could discuss them using the PFs "Conversations" feature.

There is one issue I think is appropriate to discuss here. When you cited the article it was in response to my comment about your short response to my question.
Do you agree or disagree that in a conscious being experience can (or must) cause learning and adaptation, and thereby change the range of possible behaviors?​
You responded, "I disagree."
I commented,
Although we disagree, I respect your knowledge, and I think I would benefit from understanding your reasons for disagreeing.​
Then you responded with the quote at the top of this post.

The cited paper has no discussion of the role of experience, and in particular no mention of why experience does not change the range of possible behaviors. I would much appreciate your posting a few sentences that summarize your reasoning behind your disagreement.

Regards,
Buzz
 
  • #163
stevendaryl said:
I think you're misunderstanding what they are saying. Let's take the example of a calculator: To be a calculator means a particular functional relationship between inputs and outputs. So you can develop a theory of calculators independently of any particular choice of how it's implemented. But if you're going to build a calculator, of course you need physics in order to get a thingy that implements that functional relationship. Turing invented a theory of computers before there were any actual computers. An actual computer implements (actually, only partially, because Turing's computers had unlimited memories) the abstraction.

So the people developing a functional theory of mind are trying to understand what abstraction the physical mind is an instance of. Does that count as a refutation of physicalism? Only if someone is wanting to be provocative.
Key point: the implementation of any Turing machine in principle is something physical. Only the suggestion of the possibility of some non-physical implementation is problematic. Many (selective) Platonists may - and some opportunists do - argue against physicalism using exactly such arguments, if it is able to further their (often religious) agendas.

So unless you are positing such a possibility - specifically a reified, abstract, actually infinite, non-physical Universal Turing machine - any other (i.e. any non-ontological-functionalist) description will always be intrinsically physical, as is the case with today's computers; the functional theoretical definition is then not a reification but merely a mathematical idealisation, which in fact is a generalised, idealised description of aspects of the natural world and therefore de facto part of physics.

This is true regardless of the intent with which such a model was or is being made, the most famous example being Carnot's work, which had a purely engineering intent towards making ideal engines but today underlies the Second Law of Thermodynamics. The same could be said for Shannon's theory and also for Turing's; the fact that academia and curricula are not structured this way is mostly for practical reasons (the division of theory into science/engineering).
stevendaryl said:
No, that's not what I meant. It's not a placeholder at all. Take the example of a computer: Turing developed a theory of computers that was independent of any specific implementation of a computer. It is not correct to say that Turing's theory was a "placeholder" for a more physical theory of computers that was only possible after the development of solid state physics. The abstract theory of computation is neither a placeholder for a solid state physics description of computers, nor is it a replacement for such a description. It's two different, but related, lines of research: the theory of computation, and the engineering of building computers.

Correspondingly, there could be a functionalist theory of mind which relates to a physical theory of the brain in the same way that the abstract theory of computation relates to electronic computers.
As I said above, in the sense of describing some aspect of the natural world, the theory of computation is a branch of (applied) physics or engineering (and so, applied physics), whether or not it would be categorized that way by academia today. Whether or not one intends it as a placeholder, in this case that doesn't mean it isn't ultimately exactly that.

Moreover, describing this stance as functionalism is a misnomer (pseudofunctionalism would be more appropriate), because this is not ontological functionalism. One is merely calling oneself a 'functionalist' for whatever particular reason the idea resonates on a superficial level, without fully or adequately embracing the core philosophy; compare this to the "ontology engineering" movement in computer science, which has absolutely nothing whatsoever to do with ontology. Misappropriation of terms seems to be a bit of a trend in computer science these days.

More importantly (also @SSequence ) I will repost this seeing I did not get a reply: Glymour 1987, Psychology as Physics

This paper I think adequately demonstrates that cognitive science cannot be fundamentally about functionalism but has to be about physics.
Buzz Bloom said:
Hi Auto-Didact:

I don't think I have ever met anyone like those whom you describe as "ontological functionalists". The individuals whom I have met who consider themselves to be "functionalists", like myself, do not believe the physical description is impossible, but rather just irrelevant. The emergent behavior of emergent phenomena, like consciousness, does not depend on the physical description of the constituents, only on the functionality of the constituents.
Read what I said above to stevendaryl.
Also, the study of emergent phenomena is today viewed fully as a branch of physics, in the theory of non-linear dynamical systems. Similar dynamics under substrate independence is very much 'physics', regardless of what one may personally consider to be physics. As far as I can see, this goes for all theories from biology, ecology, psychology, economics or even politics that fall under the term emergence.
 
  • #164
Auto-Didact said:
Key point: the implementation of any Turing machine in principle is something physical. Only the suggestion of the possibility of some non-physical implementation is problematic. Many (selective) Platonists may - and some opportunists do - argue against physicalism using exactly such arguments, if it is able to further their (often religious) agendas.

It's not clear to me that there is any substance to the disagreement between physicalists and such platonists. It's an argument over words.

Is an abstraction such as "the number 2" or "a function" or "a sort routine" something that "exists"? Everyone agrees that they don't exist as physical entities--you can't hit somebody on the head with an abstraction. Everyone agrees, on the other hand, that they are coherent topics to reason about. The disagreement is over what "exists" means. What difference does it make?

As I said above, in the sense of describing some aspect of the natural world, the theory of computation is a branch of (applied) physics or engineering (and so, applied physics), whether or not it would be categorized that way by academia today. Whether or not one intends it as a placeholder, in this case that doesn't mean it isn't ultimately exactly that.

I disagree with that. The theory of computation is not a description of anything that exists in the world. It's not a placeholder. It's more akin to mathematics. Mathematics can be used to describe the real world, but the theory of the natural numbers is not a description of some aspect of the natural world. You can use the theory to reason about counting rocks, or whatever, but there is no sense in which the theory is a placeholder theory to one day be replaced by a more physical theory of rocks.

Moreover, describing this stance as functionalism is a misnomer (pseudofunctionalism would be more appropriate), because this is not ontological functionalism.

I just think that you are misunderstanding the topic. I think that there is a conflict of interest in your role in this discussion, because you are both trying to define a position, and simultaneously attacking that position. That's not intellectually honest. That is what "attacking a strawman" means. Maybe there is somebody who believes the position that you are attacking, but they aren't arguing in this thread, so why should anyone care?
 
  • #165
Auto-Didact said:
Also, the study of emergent phenomena is today viewed fully as a branch of physics, in the theory of non-linear dynamical systems. Similar dynamics under substrate independence is very much 'physics', regardless of what one may personally consider to be physics. As far as I can see, this goes for all theories from biology, ecology, psychology, economics or even politics that fall under the term emergence.

Is there any substance (no pun intended) to what you're saying, or is it just arguing over words? Is something physics, or not? What difference does it make? Can you relate whatever disagreement you are having back to this thread?

Are you just saying that you don't think someone should be considered to be studying the mind unless they are studying the physical properties of the brain? Is that just a matter of labeling?
 
  • #166
Auto-Didact said:
Moreover, describing this stance as functionalism is a misnomer (pseudofunctionalism would be more appropriate), because this is not ontological functionalism.
Hi Auto-Didact:
Wikipedia seems to disagree with you. Perhaps you may decide to correct it.
Here are some quotes.
Functionalism is a theory of the mind in contemporary philosophy, developed largely as an alternative to both the identity theory of mind and behaviorism. Its core idea is that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. Functionalism is a theoretical level between the physical implementation and behavioral output. Therefore, it is different from its predecessors of Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (declaring only physical substances) because it is only concerned with the effective functions of the brain, through its organization or its "software programs".

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is than with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral disposition; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").​

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.​

In the second and third quotes I underlined "ontological" to make it easier to locate. Is your concept of ontological functionalism compatible with the sentence that begins, "However, any weaker sort of physicalism..."?

Regards,
Buzz
 
Last edited by a moderator:
  • #167
Buzz Bloom said:
Hi Auto-Didact:
Wikipedia seems to disagree with you. Perhaps you may decide to correct it.
Here are some quotes.
Functionalism is a theory of the mind in contemporary philosophy, developed largely as an alternative to both the identity theory of mind and behaviorism. Its core idea is that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. Functionalism is a theoretical level between the physical implementation and behavioral output. Therefore, it is different from its predecessors of Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (declaring only physical substances) because it is only concerned with the effective functions of the brain, through its organization or its "software programs".

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.​

In the second quote I underlined "ontological" to make it easier to locate. Is your concept of ontological functionalism compatible with the sentence that begins, "However, any weaker sort of physicalism..."?

I would like to get to the bottom of what difference it makes. If you have a purely physical understanding of the mind as a property of the brain's chemistry, and someone creates a system that acts like a conscious being but is implemented in some completely different way (electronics, or gears, or whatever), then a pure physicalist would presumably say that it wasn't actually conscious, because "conscious" is a property of brains, and it doesn't have a brain. A functionalist might say that it is conscious, because even though it's not implemented the same way, it embodies the same functional relationships.

That sounds like a big difference, but is it, really? Is it just a matter of labeling? Or is there some terminology-independent disagreement?
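One toy way to put the functionalist's point (a hypothetical example, not anyone's actual theory of mind): two realizations of the same functional role are indistinguishable to any test that only probes inputs and outputs.

Code (Python):
# The same functional role -- "adds two small numbers" -- realized two ways:
# one computes, the other is a brute lookup table built in advance.

def adder_arithmetic(a, b):
    return a + b

LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def adder_lookup(a, b):
    return LOOKUP[(a, b)]

def behaves_like_an_adder(adder):
    """Probe only the input/output relation, never the internals."""
    return all(adder(a, b) == a + b for a in range(10) for b in range(10))

print(behaves_like_an_adder(adder_arithmetic), behaves_like_an_adder(adder_lookup))  # True True

A strict identity theorist about "adding" would insist the two differ (one stores a table, the other doesn't); the functionalist only cares that both occupy the same causal role between inputs and outputs.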
 
Last edited by a moderator:
  • #168
stevendaryl said:
Is it just a matter of labeling?
Hi stevendaryl:

I think it is, but that does not mean there isn't a problem with it. When people use different labels for concepts, or use the same labels in different ways for concepts, it is almost inevitable they will confuse each other without realizing it.

Regards,
Buzz
 
  • #169
Maybe I will comment on some other points mentioned (unless the thread goes inactive before then), but what it "seems" like to me is that this is perhaps a debate of "Physics first" versus "Maths first"? But I hardly know enough physics to give a remotely useful comment in this regard.

Auto-Didact said:
...
This paper I think adequately demonstrates that cognitive science cannot be fundamentally about functionalism but has to be about physics.
...
I haven't read the article yet.
But briefly speaking, my "own" viewpoint is simply that what an idealized rational mathematical agent can do cognitively is not related to physical reality (this seems to be highly in line with Brouwer's way of thinking).

But "can" itself is not that simple of a term. Even in the simplest sense, it has two meanings:
(i) what you can do at a given threshold of ability
(ii) what you can do in principle if explained and helped more and more (increasing threshold)

In the second sense, the functional equivalence of mind and computer program is quite definitively incorrect (post #81, my main post in the thread).

Whether the functional equivalence is also correct in the first sense (beyond a certain threshold of training) is something I said based upon applying it to myself, first and foremost***.

stevendaryl said:
...
I disagree with that. The theory of computation is not a description of anything that exists in the world. It's not a placeholder. It's more akin to mathematics. Mathematics can be used to describe the real world, but the theory of the natural numbers is not a description of some aspect of the natural world. You can use the theory to reason about counting rocks, or whatever, but there is no sense in which the theory is a placeholder theory to one day be replaced by a more physical theory of rocks.
...
My personal point of view is heavily in line with this (so you can perhaps say I also kind of agree with "maths first", in a manner of speaking). But mathematics, in my view, far exceeds the limits of abstract computer science (which I regard as a specialized branch of it).

*** The thread got me thinking about what would happen if someone claimed "if you are right then just give me the function for such and such big element" (and I started thinking of a basic sketch, one that could potentially be actualized, for very large but specific elements). Now there are two aspects to it:
(1) The smallest program would keep increasing in length for bigger and bigger elements, so ultimately, within a limited life span, there is a certain limit to what one can write anyway. But this is a fairly trivial sense which isn't important here.

What I mean by "bookkeeping" below is a systematic tracking/account of normal functions.
(2) The second part is far more interesting. After thinking about it, there seem to be at least five aspects here:
(a) The first part is bookkeeping (using larger and larger countables --- with apparently no preset bound beforehand). Can the bookkeeping be done in a precise manner? I think the answer is definitively yes (a fuller explanation would be far too long).
(b) The bookkeeping will keep extending indefinitely. Can a human mind always spot, after enough instances (without any sort of upper bound beforehand), that the bookkeeping has to be extended (and also in what way) as the need arises? I personally also consider this to be yes. But this is also perhaps related to (a): if you don't have a precise tool for bookkeeping, you will find it very hard to extend it.
(c) Can a precisely kept bookkeeping always be converted into an ordering function (by the same person)? This should certainly be correct (given enough time).
(d) If one were relying not on strict proof but on patterns, would the human mind always pick up the correct pattern (while being free to test the details of the pattern to his contentment) without proof? My opinion on this is yes, but I understand why someone else could find this dissatisfying.
An example here would be the guarantee that a function formed by picking up elements from such-and-such positions is in fact normal.

(e) Can one always give a proof that the given bookkeeping is correct? (I am not fully clear on this at this point.)

Both part (b) and part (d) are very interesting. As far as part (d) is concerned, I am not formally trained in logic (though obviously trying to learn more, quite gradually). But I can easily write simple "bookkeepings" and guarantee them to be correct. I can also think of much more layered and difficult "bookkeepings" and declare them to be correct with full confidence (after testing various components of them to contentment --- I am not saying that this isn't an exceptionally laborious process).
However, a logician would argue that you have to prove every bookkeeping (part (e)) --- just precisely stating it and then declaring it to be correct isn't enough. This thread certainly got me thinking more about part (e). Note though that it is still "ONLY" about the "right answer" and nothing else (as far as the problem of functional equivalence is concerned).

Personally I am quite convinced about (d) from experience, but considering part (e) seems interesting to me. That's because (d) follows absolutely trivially from (e).

So it is definitely something to try out given (a lot of) free time (or to add to a "to do" list for later). I am thinking about (e) in terms of direct termination proofs that show a given bookkeeping is always correct. I suspect that logicians would generally consider this far too laborious (and that's why, plus talent admittedly, their tools are probably much more sophisticated where these kinds of proofs are concerned). But this simple kind of method also seems to be the most "non-creative" way of proving.

P.S. Note that all of this, while looking a little too abstract, is still squarely in the domain of logic (and hence also math).
 
Last edited:
  • #170
This is incredibly fascinating, so I hope it can continue. I'm trying to understand the concepts of physicalism and functionalism myself. If someone could explain them to me (with the use of the following hypothetical), I would be very appreciative.

Take, for example, a real wave of some sort on a membrane or surface. The "waviness" we observe is a distribution of tension, displacements, and momenta across the various portions of the surface. There are certain states of the portions, and relationships of the various portions to each other and to the whole, which are exhibited by the natural stuff while it exhibits (in the context of a surface) waviness. Stuff is interacting, pulling on other stuff, moving, etc. Although the number of particles (atoms, etc.) is an integer, they move continuously in space and continually in time (assuming no discretization of actual space and time).

Now consider, from a high level, a simulation of this wave. The simulation would have data which represents time, and data which represents the positions, momenta, and tension of the various portions of the surface, each of which is also represented by a data object (due to the limitations of computation, these necessarily have a finite bit length and are discretized). Functions are carried out which, according to our best science, vary the data representing these aspects of the portions of the surface in accordance with how we think the wave should evolve with time. Calculations are made one by one, eventually dealing with each portion of the surface, and after some time (or at discrete instants in time) some function outputs (or simply announces the finality of) a state of the surface in the simulation at that time (or at multiple discrete instants). The simulation can output a series of numbers in a list, or generate an image in human-readable form which represents the data representing the surface positions, etc. We see a set of numbers or pixels on a screen which we know represents the state of the simulated wave.

Consider now that what is actually physically happening in the simulation is that electricity, in the form of voltages and currents, is gated and shunted around, arranged into sequences (1010100001001) which represent the numbers (we use numbers as abstractions) used to represent the magnitudes of the properties of each portion of the simulated surface, which are stored and modified through more gating of currents, voltages, and sequences. They are carefully operated upon so that they vary through calculation, from time to time, in their respective unconnected memory stores, to correspond to what, according to our understanding, they should be.
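A minimal sketch of the kind of simulation described above, for a 1-D "membrane" (the grid size, wave speed and step sizes are arbitrary illustrative choices):

Code (Python):
import math

# Lists of numbers stand in for displacement at sampled points of the
# surface; an update rule stands in for the physics (u_tt = c^2 * u_xx).
N, c, dt, dx = 100, 1.0, 0.01, 0.02
u_prev = [math.sin(math.pi * i / (N - 1)) for i in range(N)]   # displacement at t - dt
u = list(u_prev)                                               # displacement at t

for step in range(500):
    u_next = [0.0] * N                 # fixed (zero) ends of the membrane
    for i in range(1, N - 1):
        # discretized second-order update: each "portion" only sees its neighbours
        u_next[i] = (2 * u[i] - u_prev[i]
                     + (c * dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next

print(u[N // 2])   # a stored number representing displacement mid-membrane

Everything the program "knows" about the wave is a list of floating-point numbers being rewritten in memory; whether that counts as an instance of waviness is exactly the question posed next.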

If I understand correctly, physicalism for waviness would hold that "waviness" is exhibited only by systems which have actual portions of a surface that interact and continuously and continually move as a wave. Functionalism for waviness, on the other hand, would be the claim that what matters is this: even though the totality of the simulated wave is represented by a disembodied, disconnected collection of information (stored voltages and currents) which is representative (stored in bits) of numbers representing the magnitudes of simulated properties, if it changes (even in reverse or discontinuous time order) over time... somehow that remote correspondence of the natural-world stuff used to represent the abstractions (numbers), modified according to science (our way of understanding and describing things), itself becomes an instance of the aspect "waviness" of a physical system.

I'm almost certain I have got Physicalism and Functionalism wrong here. I would appreciate someone's relating these and differentiating them with respect to the hypothetical.

Cheers!
 
  • #171
SSequence said:
Maybe I will comment on some other points mentioned (unless the thread goes inactive before then), but what it "seems" like to me is that this is perhaps a debate of "Physics first" versus "Maths first"?
Hi SSequence:

It seems to me that the debate so far in this thread is more complicated than that. The various posts illustrate a wide variety of philosophical views about the nature of reality. Many of these views include the belief that the poster's particular view is the only possible correct view. That certainty makes it nearly impossible to recognize that the various views have a great deal in common, because the small points of disagreement, together with the certainty of correctness of one view, (almost) completely mask the points of agreement.

How about this premise as a basis for discussion:
There are multiple right ways to think about issues of reality, and the multiple ways are still right even when they disagree with each other.
Be aware, when you think about this, that there are multiple definitions of "right".

Regards,
Buzz
 
  • #172
Yes, identifying a point of agreement or disagreement can be important. To be fair, if this kind of thread were in a physics sub-forum, I certainly wouldn't feel qualified to make any post at all. Because this is in the math forum, I felt that there was something to add (at the very least a viewpoint that I have arrived at by myself and have certainly seen no one take explicitly).

That's why I always try to describe from the outset what my larger view is, so that if someone disagrees they will know right away the basic reason.
 
Last edited:
  • #173
SSequence said:
To be fair, if this kind of thread were in a physics sub-forum, I certainly wouldn't feel qualified to make any post at all. Because this is in the math forum, I felt that there was something to add (at the very least a viewpoint that I have arrived at by myself and have certainly seen no one take explicitly
Hi SSequence:

Since you identify yourself as a mathematician, I am curious about your position regarding the following.
Abstractions are not real. In particular:
1. Numbers are not real.
2. Equations are not real.
3. Variables in equations are not real.
4. Mathematical models are not real.
5. Mathematics is not real.
6. Physics is not real.
7. Chemistry is not real.
8. Biology is not real.
9. Psychology is not real.
10. The mind is not real.
11. Consciousness is not real.
12. Knowledge is not real.

Regards,
Buzz
 
  • #174
Buzz Bloom said:
Hi SSequence:

Since you identify yourself as a mathematician, I am curious about your position regarding the following.
Abstractions are not real. In particular:
1. Numbers are not real.
2. Equations are not real.
3. Variables in equations are not real.
4. Mathematical models are not real.
5. Mathematics is not real.
6. Physics is not real.
7. Chemistry is not real.
8. Biology is not real.
9. Psychology is not real.
10. The mind is not real.
11. Consciousness is not real.
12. Knowledge is not real.

Regards,
Buzz
What? No, I am not a mathematician. Just an enthusiast with some knowledge of fairly elementary topics (perhaps in some cases slightly specialized topics).

Quite briefly, it's about the level of abstraction:
Roughly, 4, 6, 7, 8 and 9 are at a lower level of abstraction (less fundamental, in a sense). For 12, only "true" mathematical knowledge is at the highest level of abstraction (more fundamental) --- "some/few" parts of cultural mathematical knowledge "might" be incorrect.

I am not sure how much of this is relevant though. I am afraid that the thread might get closed for getting too philosophical (I have seen much less philosophical threads get closed --- so you should keep this in mind too).
So, to keep the discussion more on point, I am keeping it to (10) and (11) --- both are essentially the same, in that they exist together.

From a "purely" mathematical point of view, I see this problem as follows:
What is the only thing left that a computer program provably can't (apart from not being able to calculate uncomputable functions) do (in a pure mathematical sense). That just leaves what I have mentioned before. My thought process is really that simple :P
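For readers who want the canonical example of such a provable limit, here is the standard diagonal argument against a halting decider, sketched in Python (halts() is a stub standing in for the hypothetical decider):

Code (Python):
def halts(program, argument):
    """Hypothetical total decider: returns True iff program(argument) halts.
    Stub -- the point of the sketch is that no such function can exist."""
    return True

def diag(program):
    # Do the opposite of whatever the supposed decider predicts about program(program).
    if halts(program, program):
        while True:    # loop forever
            pass
    return "halted"

# If halts() were a genuine decider, halts(diag, diag) could be neither True
# (diag(diag) would then loop forever) nor False (diag(diag) would then halt)
# -- a contradiction, so no program computes halts().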
 
Last edited:
  • Like
Likes Buzz Bloom
  • #175
stevendaryl said:
It's not clear to me that there is any substance to the disagreement between physicalists and such platonists. It's an argument over words.
Philosophically, it makes all the difference in the world. Gödel and other prominent scientists have, for example, used such arguments to 'prove' the existence of God.
Is an abstraction such as "the number 2" or "a function" or "a sort routine" something that "exists"? Everyone agrees that they don't exist as physical entities--you can't hit somebody on the head with an abstraction. Everyone agrees, on the other hand, that they are coherent topics to reason about. The disagreement is over what "exists" means. What difference does it make?
I will quote Poincaré from The Foundation of Science:
Poincaré said:
What does the word exist mean in mathematics? It means, I said, to be free from contradiction. This M. Couturat contests. "Logical existence," says he, "is quite another thing from the absence of contradiction. It consists in the fact that a class is not empty." To say: a's exist, is, by definition, to affirm that the class a is not null.

And doubtless to affirm that the class a is not null, is, by definition, to affirm that a's exist. But one of the two affirmations is as denuded of meaning as the other, if they do not both signify, either that one may see or touch a's which is the meaning physicists or naturalists give them, or that one may conceive an a without being drawn into contradictions, which is the meaning given them by logicians and mathematicians.
Carrying on.
I disagree with that. The theory of computation is not a description of anything that exists in the world. It's not a placeholder. It's more akin to mathematics. Mathematics can be used to describe the real world, but the theory of the natural numbers is not a description of some aspect of the natural world. You can use the theory to reason about counting rocks, or whatever, but there is no sense in which the theory is a placeholder theory to one day be replaced by a more physical theory of rocks.
This is simple really. Let's use an analogy:

Are cells things which exist in the natural world? Yes. Can there be a physical model describing them? Yes. Can there be a more abstract, purely formal model describing their workings? Yes. Can that theory therefore be regarded as a special theory about cells from another domain (biology), which strictly describes natural phenomena? Yes.
Therefore, cells and even their mathematical abstractions can be viewed as falling under the purview of physics.

Replace the word or concept 'cell' in the above with the word 'computer' or 'fridge' and it becomes immediately clear that the same applies to them as well.

From a pure biology point of view, cells can even be described in a myriad of ways, even outside of any organic chemistry, the Standard Model of particle physics, or physics at all, by referring purely to their function in a formal description; recall that the same applies to the 'gene' concept, which was in use long before people started thinking about DNA. Doing this is a way of completely removing physics from the equation, but to then go on to state that such things actually (can) exist is to immediately make a falsifiable claim about physical phenomena.

The fact that we do not characterize computation as Turing defines it necessarily as a physical phenomenon but as a formal one does not imply that such a characterization is impossible; I would even argue that any actual instantiation of a Turing machine carrying out computation in the real world clearly is a physical phenomenon, and that all physical phenomena capable of being described this way fall under this class.

Of course, you can regard the theory as belonging more properly to mathematics; in fact, I do the same. But this gets us into the ugly business of tacitly reifying abstract mathematical things, and possibly confusing what is or is not physical in the case of instantiation, when all instantiation seems to necessarily be physical. This gets us too far into the what-is-mathematics-and-what-is-physics discussion; if we are talking about phenomena that exist in the world and their properties, as we are when we are talking about minds and actual computers, then we are necessarily talking about physics. The fact that the classification of things in physics is so different from how classifying phenomena works in e.g. biology or astronomy is more what we are arguing about here.
I just think that you are misunderstanding the topic. I think that there is a conflict of interest in your role in this discussion, because you are both trying to define a position, and simultaneously attacking that position. That's not intellectually honest. That is what "attacking a strawman" means. Maybe there is somebody who believes the position that you are attacking, but they aren't arguing in this thread, so why should anyone care?
I am describing a position that is prevalent in this discussion within academia, even if it doesn't seem to be one on this board. First, and less interestingly, you should care because the position doesn't seem to be true, as is argued in the paper by Glymour which I linked; please have a look at that.
Second, and I think more importantly, let me tell you why you should care, seeing as you don't seem to be aware of, or directly experience, the unwanted side effects.

I have spent hours in real life arguing with non-physics academics, specifically scientists from neuroscience, biomedicine and cognitive psychology, in interdisciplinary discussions about this matter. They are the ones who not only do most of the research on the mind, write the textbooks and construct the curricula, and so perpetuate the false idea in new students, but who also decide what research gets funded. This means that when it is time to decide which research should be pursued and funded, only those proposals which clearly jibe with the functionalist argument - taking the mind to be necessarily isomorphic to ideas from computer science as fact, and therefore de facto removing any need for physics approaches - tend to get chosen. This is purely because these people are convinced that the argument is true; it is a terrible tacit selection criterion for doing research, but it is the situation we are in.

This stance has immensely crippled many physics and applied mathematics interdisciplinary research proposals on these topics (which are strongly underrepresented but badly needed), mainly due to the acceptance of the argument by many, due to the alienation of the few physics researchers who do try to research the mind, and due to many uncritical computer science and physics proponents and popularisers continuously echoing this argument. This has stifled, among many other things, the dynamical systems approach to the mind for over a decade at least, certainly at the university where I work, and has completely alienated the physics group who were once interested in working with biologists on neuroscience topics.

It is only in the last year that I have ever seen a proper challenge to this trend (by a biologist, of all people), arguing for research into the practopoietic theory of the mind. This is a novel - fundamentally non-functionalist - biological theory of consciousness based on an actual description of biological observations, in line with the mathematics of non-equilibrium thermodynamics from (non-high-energy) theoretical physics research and deeply connected to non-linear dynamical systems theory, being a dimensionless parameter-updating model in bifurcation theory. When the other physicists/applied mathematicians looked at it carefully, they quickly saw not just the potential of this theory but all of its possible mathematics and physics spinoffs, and backed it pretty much immediately.

@Buzz Bloom Actually Wikipedia agrees exactly with me:

"There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is than with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral disposition; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so."


This clearly says three things:
- There are many people who argue that functionalism disproves physicalism; these tend to be cognitive psychologists who reject physicalism altogether, and religious people arguing for some form of dualism, in this case by appealing to ontological functionalism to disprove physicalism.
- For many, functionalism need not refer to ontological functionalism, meaning they do not regard functionalism as a theory of ontology; i.e., to them it does not concern itself with what exists, as physicalism and dualism (among many other theses) do. 'Ontological physicalism' is a nonsensical term, since physicalism is always about ontology, i.e. about what exists in the real world. Therefore this stance - let's call it 'minimal functionalism' to avoid confusion with other forms of functionalism - need not be in strict disagreement with physicalism.
- Type physicalism (an identity theory of mind and body, i.e. some physics) is incompatible with functionalism.

I, along with Penrose by the way, am very much arguing for an identity theory of physicalism, whether that be first order as in type identity, second order as in token identity, or higher order. Moreover, it is important to pay attention to details like this because the subject is directly related to clinical practice, meaning that actual guidelines for treating comatose and neuropsychological patients are constructed and used by physicians on a day-to-day basis on the basis of exactly such arguments.

stevendaryl said:
Is there any substance (no pun intended) to what you're saying, or is it just arguing over words? Is something physics, or not? What difference does it make? Can you relate whatever disagreement you are having back to this thread?

Are you just saying that you don't think someone should be considered to be studying the mind unless they are studying the physical properties of the brain? Is that just a matter of labeling?
It's deeper than that, as I have tried to explain above: such 'semantic trivialities' seem to dominate interdisciplinary research programmes by imposing tacit selection criteria upon research, an unwanted emergent phenomenon in science due to the politics of academia (pun intended). Not recognising that, e.g., emergence can be studied using methods from physics does a disservice both to the emergent phenomena and to physics, for it inhibits unforeseen offshoots in both directions.

If you are questioning whether it is useful to regard dynamical systems as a subject in either mathematics or physics, I would just answer that it is a subject in both, properly even mathematical physics (regardless of what is actually taught in contemporary mathematical physics programmes).
 
