Can you solve Penrose's chess problem and win a bonus prize?

In summary, a chess problem has been created to challenge a computer, but also be solvable for humans. The goal is to either force a draw or win as white. The Penrose Institute is looking for solutions and will scan the brains of those who solve it. The problem may seem hopeless for white, but it is possible to draw or even win. Chess computers struggle with this type of problem due to the massive number of possible positions to consider. Humans are advised to find peace and quiet when attempting to solve it and may even experience a flash of insight. The first person to legally demonstrate the solution will receive a bonus prize. Both humans and computers are invited to participate.
  • #71
Just as an experiment, I restored a 15 year old chess program I had saved. It has no trouble with all the nuances of correct play despite the silly evaluation of 22 in favor of black. Even at one second per move, it plays flawlessly. Penrose is just totally off base here.
 
  • Like
Likes Nugatory
  • #72
mfb said:
Computer programs play these games at an expert level. A human brain is just a bunch of cells firing once in a while. The basic steps alone don't tell you what the overall product is capable of.
In particular, a human is in principle able to fully simulate a computer transistor by transistor with pen and paper, and a computer is in principle able to fully simulate a human brain neuron by neuron or even atom by atom, given enough memory and time. Unless you propose some magic outside the realms of physics, there is nothing a human can do that a computer could never achieve.

This is more philosophy than physics; we don't know what consciousness is or how it emerges. In a sense, the emergence of consciousness already is a miracle. I think that you as a physicist are going too far with "it's all a bunch of particles which can be simulated".

Do you have any insight into the problems in the philosophy of mind (which are clearly related to your post), or are you just speculating about consciousness based on your physics insights? It is an enormous reach to claim with such a confident tone that neurons/consciousness can be simulated.

You should read Searle's Chinese room thought experiment before insisting on such a clear path between physics and consciousness.
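As an aside on the quoted simulation claim: simulating a digital circuit element by element is straightforward in principle. A minimal sketch (a hypothetical gate network, not anything from the thread) builds XOR out of NAND gates, the same way one could in principle trace a whole processor:

```python
# Toy illustration of "simulate a computer transistor by transistor":
# simulate a small digital circuit gate by gate. The circuit below
# (XOR from four NAND gates) is a standard construction.

def nand(a: int, b: int) -> int:
    """The universal NAND gate: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    """XOR realized with four NAND gates, evaluated step by step."""
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    return nand(n2, n3)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))
```

In the same spirit, any fixed network of such gates, and hence any concrete digital computer, can be stepped through by hand or by another program; the argument is about feasibility in principle, not speed.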
 
Last edited:
  • Like
Likes Buzz Bloom and Auto-Didact
  • #73
mfb said:
Unless you propose some magic outside the realms of physics, there is nothing a human can do that a computer could never achieve.
One should distinguish what a human can do, from what a human can experience. The former can be described and explained by physics. The latter is a "magic" that cannot be described or explained by known physical laws.
 
  • Like
Likes Auto-Didact and durant35
  • #74
PAllen said:
No, it is a fact that no major current programs err in this position. They just give a silly evaluation.

[edit: the reason is simple. Using current search technology, which is not pure brute force, all so called promising bad moves are rapidly calculated to lead to disaster. They meet heuristics for deep search for forcing moves. Meanwhile, the correct moves preserve the evaluation. Thus, in a fraction of a second, correct moves are found for both sides. Further, if a black error is played, the correct win for white is rapidly found.]
Okay, so for this to be an algorithm trap, not only would the winning line need to involve a prohibitively deep search, but the losing line would have to be provably disastrous only after a prohibitively deep search.

But, of course, there are always better algorithms. Any logic that can be followed by a person can be replicated in an algorithm. When I tackle a nasty algorithm problem, I often start by saying to myself "given the information, what would I as a person be able to deduce, and how would I know what to do?".
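The bracketed point above, that "promising bad moves" meet heuristics for deep search on forcing moves and are refuted quickly, can be illustrated with a toy sketch. Everything below (the tree, the move names, the static values) is invented for illustration; real engines use far more elaborate quiescence search and extensions:

```python
# Toy minimax with a forcing-move extension. A plain depth-limited
# search stops at its budget and is fooled by a move that looks good
# at the horizon; extending forcing replies (checks, captures) past
# the budget reveals the refutation. The game tree here is made up.

# node -> list of (move_name, child_node, is_forcing)
TREE = {
    "root":  [("safe", "quiet", False), ("greedy", "grab", False)],
    "quiet": [],                            # calm line, nothing happens
    "grab":  [("check!", "reply", True)],   # a forcing refutation exists
    "reply": [("recapture!", "ruin", True)],
    "ruin":  [],
}
# Static evaluations, all from the root player's perspective.
STATIC = {"quiet": 0, "grab": +5, "reply": +5, "ruin": -100}

def search(node, depth, maximizing, extend_forcing):
    children = TREE[node]
    if not children:
        return STATIC[node]
    if depth <= 0:
        forcing = [(m, c, f) for m, c, f in children if f]
        if not (extend_forcing and forcing):
            return STATIC[node]   # stand pat at the horizon
        children = forcing        # but keep chasing forcing moves
    pick = max if maximizing else min
    return pick(search(c, depth - 1, not maximizing, extend_forcing)
                for _, c, _ in children)

def best_move(extend_forcing):
    """Root player's choice with a one-ply nominal budget."""
    return max(TREE["root"],
               key=lambda mc: search(mc[1], 0, False, extend_forcing))[0]
```

With the extension enabled the "greedy" grab is refuted down the forcing line and the safe move is chosen; without it, the shallow +5 evaluation wins out.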
 
  • #75
Demystifier said:
One should distinguish what a human can do, from what a human can experience. The former can be described and explained by physics. The latter is a "magic" that cannot be described or explained by known physical laws.
That is quite a bit bolder than I would normally put it, but I fully agree. The capability of experiencing implies having some kind of awareness; it is argued that insight relies on both, and it seems quite clear that computers and AI have nothing of the sort.

Unless this phenomenon can somehow be fully explained away using functionalism, which I don't believe is possible at all on the grounds that it would de facto constitute a refutation of the thesis of physicalism, I think it is quite obvious that not merely the SM but contemporary physics itself has a gaping hole.
 
  • #76
Auto-Didact said:
That is quite a bit bolder than I would normally put it, but I fully agree. The capability of experiencing implies having some kind of awareness; it is argued that insight relies on both, and it seems quite clear that computers and AI have nothing of the sort.

Unless this phenomenon can somehow be fully explained away using functionalism, which I don't believe is possible at all on the grounds that it would de facto constitute a refutation of the thesis of physicalism, I think it is quite obvious that not merely the SM but contemporary physics itself has a gaping hole.
Note, this is altogether different from what Penrose argues. He believes the brain and consciousness are products of physics. He disagrees with some others about which physics is involved (quantum entanglement).
 
  • #77
Auto-Didact said:
but even today w.r.t. biology and neuroscience more remains to be unknown than known.
It does not matter. You keep arguing "but we cannot simulate it today" - that is not the point.

I cannot build a rocket that goes to space today. But with sufficient resources, I know I could. Why? Because rockets that can go to space exist, and they are made out of atoms - atoms I can get and arrange as well. Assembling a rocket atom by atom (or simulating it atom by atom if we just want to predict its actions) is a stupid approach - but it shows the general feasibility.
Auto-Didact said:
given only the SM, derive the complete theory of superconductivity and so determine all possible high ##T_c## superconductors.
Give me a sufficiently powerful computer and I tell you ##T_c## of all materials.
Auto-Didact said:
This is a regime for which it has not yet been shown that QM predictions using large quantum numbers match classical predictions.
It has been shown mathematically that classical motion is the quantum mechanical limit for small ##\hbar## - or "large" systems. But that is not the point. It can be studied.
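For reference, the standard statement behind "classical motion is the quantum mechanical limit" is Ehrenfest's theorem (quoted here for completeness; a textbook result, not something from the thread):

```latex
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\left\langle V'(\hat{x}) \right\rangle .
```

For wave packets that stay narrow on the scale over which ##V## varies, ##\langle V'(\hat{x})\rangle \approx V'(\langle \hat{x}\rangle)## and the expectation values obey Newton's second law.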
Auto-Didact said:
Moreover, new physics does not only signify advances in high energy particle theory; the overthrow of or modification to any accepted orthodox physical theory by experimental data, whether that be in condensed matter theory, in biophysics or just in plain old mechanics, constitutes new physics.
It only produces new effective models. Which you don't need if you have unlimited computing power to simulate everything without effective models.
durant35 said:
This is more philosophy than physics, we don't know what consciousness is and how it emerges.
For the chess AI, it does not matter if the brain simulation has consciousness, or what that means in the first place.
durant35 said:
You should read Searle's Chinese room thought experiment before insisting on a such clear path between physics and consciousness.
I did not discuss anything related to consciousness yet. I am well aware of the standard thought experiments, thank you.
Demystifier said:
One should distinguish what a human can do, from what a human can experience. The former can be described and explained by physics. The latter is a "magic" that cannot be described or explained by known physical laws.
I said "can do" on purpose.
Auto-Didact said:
The capability of experiencing implies having some kind of awareness; it is argued that insight relies on both and it should be quite clear computers or AI seem to have nothing of the sort.
It is a bit off-topic, but why do you think this is clear? To me, this is just the old "we must be special!" story. First Rome had to be the center of the world. Why? Because. Then the Earth had to be the center of the world. Then the Sun. Then our galaxy. In parallel, humans made up stories about how humans were created differently from all other animals. After Darwin, it was "tool use is only human", "long-term planning is only human", and so on, all refuted with more observations. "Tool use is only found in mammals", "long-term planning is only found in mammals": again refuted. "Tool production is only human"? Same thing.
"Only humans can play Chess well" - until computers beat humans in Chess.
"Okay, they can play Chess, but Go requires insights computers don't have" - until computers beat humans in Go.
"Okay, but Poker is different" - until computers won in Poker.

There is absolutely no indication that humans can do or have anything other systems cannot do/have.
 
  • #78
mfb said:
"Okay, but Poker is different" - until computers won in Poker.
Can computers read the human body language associated with bluffing? The rest is "trivial".
 
  • #79
Demystifier said:
Can computers read the human body language associated with bluffing? The rest is "trivial".
The poker AI just saw the cards played by humans, and the human players only saw the cards played by the AI as well.
The rest is clearly not trivial, it took until 2017 to make an AI that can bluff properly.

Independent of the poker competition: There is software that can interpret human body language. The quality is not very convincing so far.
 
  • Like
Likes Auto-Didact
  • #80
mfb said:
it took until 2017 to make an AI that can bluff properly.
Where can I see more details?
 
  • #81
Libratus had the first consistent win against professional players, and it won by a significant margin.
I don't know if they wrote a paper about it.
 
  • #82
PAllen said:
Note, this is altogether different from what Penrose argues. He believes the brain and consciousness are products of physics. He disagrees with some others about which physics is involved (quantum entanglement).
I am arguing that brain and consciousness are part of physics as well, merely that our present-day understanding of physics is insufficient to describe consciousness. This is precisely what Penrose has argued for years,
i.e. that standard QM is a provisional theory which will eventually be replaced by a more correct theory: not 'quantum gravity' but 'gravitized QM', in which gravitationally induced wavefunction collapse (objective reduction, OR) occurs due to unstable superposed spacetime curvatures, with mass functioning as the stability parameter in the bifurcation diagram. Moreover, he posits that the full dynamical theory around this process will contain essentially non-computable aspects, as well as a generalized Liouville theorem guaranteeing that information loss in black holes is exactly offset by information gain due to this non-computational gravitational OR process.
The functionalist argument, on the other hand, posits that computers or AI are capable of literally doing everything humans can do without utilizing the same physics that brains use, i.e. not only full substrate independence but also dynamics independence of consciousness. There is sufficient theoretical argument to doubt this, and no experimental reason to believe that functionalism is true at all.

It seems to me that physicists who accept functionalism do not realize that they are essentially saying physics itself is completely secondary to computer science w.r.t. the description of Nature.
mfb said:
It does not matter. You keep arguing "but we cannot simulate it today" - that is not the point.
I'm not just arguing that we cannot simulate it today; I am arguing we do not even have a proper theory today, i.e. it is not even clear whether or not it exists and what its relevant properties are. Positing that something can be understood in principle using some effective theory is useless if that something lacks a definite theoretical description, and it is doubly useless if unlimited computational resources are required as well.
It has been shown mathematically that classical motion is the quantum mechanical limit for small ##\hbar## - or "large" systems. But that is not the point. It can be studied.
Mere mathematical demonstration is not sufficient, only the experimental demonstration matters; this is why physics can be considered scientific at all.
It only produces new effective models. Which you don't need if you have unlimited computing power to simulate everything without effective models.
You are basically saying 'given enough computer power and the SM, the correct dynamical/mathematical theory of literally any currently known or unknown phenomenon whatsoever will automatically roll out as well'. This is patently false if the initial conditions aren't taken into consideration as well, not to mention the limitations due to chaos. The SM alone will certainly not uniquely simulate our planet, nor the historical accidents leading to the formation of life and humanity.
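The "limitations due to chaos" point can be made concrete with a standard toy model; the logistic map and the particular numbers below are illustrative choices, not anything specific to the SM discussion:

```python
# Sensitivity to initial conditions: even with the exact dynamical law
# in hand, a tiny error in the initial state ruins long-range
# prediction. The logistic map at r = 4 is a standard chaotic toy model.

def logistic_trajectory(x0: float, steps: int) -> list:
    """Iterate x -> 4 x (1 - x), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 80)
b = logistic_trajectory(0.3 + 1e-10, 80)  # perturb by one part in 10^10

# The initial difference is amplified roughly exponentially, so by
# step ~50 the two trajectories are completely decorrelated.
max_gap = max(abs(x - y) for x, y in zip(a[50:], b[50:]))
```

Knowing the update rule exactly does not help: reproducing the trajectory out to step 80 would require the initial condition to far more than ten decimal places.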
It is a bit off-topic, but why do you think this is clear? To me, this is just the old "we must be special!" story. [...]
I am not arguing for some 'human specialness' in opposition to the Copernican principle. I am merely saying that human reasoning is not completely reducible to the same kind of (computational) logic which computers use; it is instead qualitatively different, in much the same way that flux pinning or the Meissner effect in superconductors is a completely different method of physically achieving 'suspension in the air' than a bird flapping its wings. Empirical observation and experimentation support this position, even without a nice mathematically defined overarching theoretical framework.
 
  • #83
Well actually since the thread has derailed/expanded significantly, I would simply put a small part of my viewpoint.

Basically it all depends on abstraction that one takes. Among the below mentioned abstractions it is very clear which one I take to be correct, so I won't even mention it. This is after years of experience and development of mathematical sense.
So I won't justify it anymore except that I absolutely take it to be correct for a rational/idealised mathematical agent, whose initial training (possibly self-learned) and mathematical adeptness (and perhaps more importantly sense) has been developed beyond a certain point.

But basically if someone was inclined to take abstraction (2) (personal reasons, not having a philosophical inclination etc.), I am more inclined to agree with mfb's viewpoint (as far as the functional part is concerned). Even though I don't necessarily take any side on this, it is true that people tend to severely underestimate what computer programs (as a whole collection, which mfb was alluding to in one post) are able to do (at least "after" the "formalisation" stage). It only becomes clear when one has tried quite a few examples (and much clearer with more examples).
See post #51 (my previous post in this thread) as an example. Another example: about four years ago, I tried a very large number of diagonalisation tricks (about a dozen perhaps; I don't even remember most of them). No matter how smart the trick was, there was always some sort of underlying explanation that foiled it. It didn't matter what the trick was, or how difficult the explanation of why it didn't work was***; the key point was that it didn't work.

Though it is true that the process from pre-formalisation to formalisation (that is, converting impulses/responses from and to the environment) is a complex one. Also, there is an element of choices**** (such as card games etc.). It is a bit beyond mathematics (and I am honestly much more interested in the "post-formalisation" stage), so I leave it at that.

Here are the abstractions:
(1) Finite Memory Machine
This says the correct level of abstraction is a finite memory machine. Also (2) and (3) are incorrect.
(2) Computer Programs
The correct level of abstraction is a computer program. Also (3) is incorrect.
(3) Solver Level
The correct level of abstraction is being able to write down notations for arbitrarily large elements of ##\omega_1^{CK}##. Note that in (3) nowhere (at all) is the demand made that program indexes corresponding to all notations have to be enumerated (let alone decided, which is an even stronger condition). And while these are sound mathematical questions, obviously my opinion is that for the purposes of conceptual/mathematical understanding of the problem (of a sentient mind) these are not the right questions.

Note that (as directly just by stating it) (3) doesn't violate Church's thesis at all (which, of course, the question(s) mentioned in the above paragraph did directly).

But perhaps this is also a difference between the logician and intuitionist mindsets. A logician seemingly would insist on some notion of proof based upon some kind of axioms (I should add that this isn't an easy task). An intuitionist would insist that you should just be able to discern and understand patterns until there is no doubt whatsoever left (obviously, real-world circumstances forcing quick conclusions aren't the main point here; the point is idealised circumstances).
There is a lot of similarity here with the views of the "later Gödel" (which I read very recently), though perhaps there are some differences too. His criticism of constructivism is also something I agree with.

In fact, it doesn't seem to me (just by stating it directly) that with (3) you can make a computer program fail the Turing test (with infinite time, that is) either. That's because a computer program could give you an absolutely horrible number (essentially "deceiving", so to speak) and claim that it is a notation for such and such element (which you might not be able to verify or refute in any clear way at all).
But I think perhaps it would be failed (but still not in a preconceived mechanised manner) with much more severe restrictions placed (the restrictions being forced to follow line of reasoning or patterns). But basically this isn't mathematical domain (the previous paragraph was more in-line with mathematical domain) --- well at least strictly speaking it seems.

Also, as a last but significantly important point, there is a very important difference between (3) and what was described in post #51 (my previous post). In post #51 you had to cross every threshold at the right time (and be "aware" of it). Here, for example, you can create a demarcation within ##\omega_1^{CK}## (using program lengths, for example). However, nowhere is there any condition that you have to be "aware" that a certain threshold has been crossed. The only condition is that it has to be eventually crossed.

*** In hindsight though, building a corresponding program would have been much more direct (and perhaps somewhat easier) than an explanation. But the explanation is usually somewhat more difficult (and also more illuminating).
**** Some would argue that this goes into domain of statistics perhaps? I don't know much about it.
 
Last edited:
  • Like
Likes Auto-Didact
  • #84
Hi @Auto-Didact:

Something seems to be missing from the problem statement. A chess position is given, but what is the statement of the problem? Also technically missing, but not particularly important in this case is: Whose move is it?

A glance tells me that no matter whose move it is, careful play leads to a draw by the 50 move rule, although I think that for some matches it has become a 75 move rule. Another observation is that with bad play either player can lose.

I gather that one purpose is to investigate how bad a computer chess program has to be before it loses, or what the human brain's neurological behavior differences are between a human who loses and a human who draws. I am not sure what useful insight if any might be learned from computer programs regarding this test. The article seems to be rather fuzzy about that. I gather since the article is very recent, there has been no actual test data collected yet.

Regards,
Buzz
 
  • #85
Buzz Bloom said:
Hi @Auto-Didact:

Something seems to be missing from the problem statement. A chess position is given, but what is the statement of the problem? Also technically missing, but not particularly important in this case is: Whose move is it?

A glance tells me that no matter whose move it is, careful play leads to a draw by the 50 move rule, although I think that for some matches it has become a 75 move rule. Another observation is that with bad play either player can lose.

I gather that one purpose is to investigate how bad a computer chess program has to be before it loses, or what the human brain's neurological behavior differences are between a human who loses and a human who draws. I am not sure what useful insight if any might be learned from computer programs regarding this test. The article seems to be rather fuzzy about that. I gather since the article is very recent, there has been no actual test data collected yet.

Regards,
Buzz
Yes, there is test data. An earlier post links to a chessbase article where this was tested on current programs. Another earlier post here describes a test using gnuchess, and I posted the results of testing with a 15 year old chess program. All handle the position perfectly, including exploitation of errors by either side, despite the silly evaluation. In an earlier post I explained why this is so, and also that there are long-known positions which much better expose limitations in current chess programs.

The core issue here is understanding fortresses. The better problem position has the feature that an immediate sacrifice has to be made to eventually achieve a fortress, while any other move eventually loses. All current chess programs fail this test (while strong human players pass, without even calculating everything, just by seeing that any alternative to sacrificing and aiming for a fortress is clearly hopeless). However, this is just a question of where programmers put their effort. Suggestions for how to handle fortresses in general go back twenty years; they just have not been implemented.
 
  • Like
Likes Buzz Bloom
  • #86
Auto-Didact said:
It seems to me that physicists who accept functionalism do not realize that they are essentially saying physics itself is completely secondary to computer science w.r.t. the description of Nature.
Hi @Auto-Didact:

I do not understand this point of view. I understand functionalism to simply be taking into account "emergent phenomena", and hypothesizing that human consciousness may be such a phenomenon. Do you disagree with this?

Regards,
Buzz
 
  • #87
PAllen said:
Yes there is test data.
Hi Paul:

What I don't get is what the chess program experiments are actually trying to discover, or what they did discover. Can you summarize that?

Regards,
Buzz
 
  • #88
Buzz Bloom said:
Hi Paul:

What I don't get is what the chess program experiments are actually trying to discover, or what they did discover. Can you summarize that?

Regards,
Buzz
The tests show two things:

1) All programs tested give an evaluation that black is much better. This is wrong, given that the position is drawn.
2) However, this has no impact on the computer's ability to play the position correctly, because wrong moves are correctly evaluated to worsen the position. The huge number of superfluous moves does not distract computers from finding this due to selective search heuristics being triggered by the forcing nature of the bad moves.
 
  • Like
Likes Buzz Bloom
  • #89
PAllen said:
Even without 50 move rule, draw must occur by 3 fold repetition eventually, if neither side blunders.
Hi Paul:

That might work reliably for computers, but I think for many reasonably competent human players, recognizing when a position has occurred three times might be difficult since even on the restricted fortress layout, the number of possible positions is rather large.

Regards,
Buzz
 
  • #90
PAllen said:
1) All programs tested give an evaluation that black is much better. This is wrong, given that the position is drawn.
2) However, this has no impact on the computer's ability to play the position correctly, because wrong moves are correctly evaluated to worsen the position. The huge number of superfluous moves does not distract computers from finding this due to selective search heuristics being triggered by the forcing nature of the bad moves.
Hi Paul:

This is sort of interesting, but how does that result relate to the purpose that Penrose had when he posed the problem?

Regards,
Buzz
 
  • #91
Buzz Bloom said:
Hi Paul:

That might work reliably for computers, but I think for many reasonably competent human players, recognizing when a position has occurred three times might be difficult since even on the restricted fortress layout, the number of possible positions is rather large.

Regards,
Buzz
My point is simply that the position is theoretically a forced draw without the 50 move rule. For computer chess, you often want to remove this rule because there are computer chess positions with mate in 500 or so.
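For what it's worth, the bookkeeping engines use for these two draw rules can be sketched as follows (an illustrative toy, not taken from any actual engine; real programs key positions by Zobrist hash rather than by string):

```python
# Minimal draw-rule bookkeeping: count occurrences of each position
# (threefold repetition) and keep a halfmove clock that resets on any
# capture or pawn move (50-move rule = 100 half-moves without either).

from collections import Counter

class DrawTracker:
    def __init__(self):
        self.seen = Counter()        # position key -> occurrence count
        self.halfmove_clock = 0      # half-moves since pawn move/capture

    def record(self, position_key, pawn_move_or_capture):
        """Register a position after a move; return a draw reason or None."""
        self.seen[position_key] += 1
        self.halfmove_clock = 0 if pawn_move_or_capture else self.halfmove_clock + 1
        if self.seen[position_key] >= 3:
            return "draw by threefold repetition"
        if self.halfmove_clock >= 100:
            return "draw by 50-move rule"
        return None
```

Disabling the 50-move rule for computer-vs-computer play then just means skipping the halfmove-clock check, while repetition detection still guarantees that a fortress position like this one eventually ends in a draw.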
 
  • #92
Buzz Bloom said:
Hi Paul:

This is sort of interesting, but how does that result relate to the purpose that Penrose had when he posed the problem?

Regards,
Buzz
Penrose proposed this problem shows a fundamental limitation of computer chess. My response is:

1) The evaluation function is a means to an end for chess programs, not the end in itself. I worked for a while on a query optimizer, for example, and sometimes used cost functions known to be wrong in principle, but which led, in practice, to good choices for a given query situation in the real world; trying to achieve the same with a correct evaluation would have made the optimizer too slow.

2) By the criterion of actual play, the Penrose problem fails to expose any issues with computer play.

3) Had Penrose discussed the matter with chess computer experts, he would know that the issue is well known, and there are also long known positions that expose computer chess weakness by the criterion of actual play.

4) But this is still not fundamental, because the whole area of weakness could be removed in general. And I think chess is a fundamentally poor arena for Penrose to pursue his argument. Not only is chess fundamentally computable, but there is nothing fundamentally noncomputational about how humans play.
 
Last edited:
  • Like
Likes Buzz Bloom
  • #93
I can't see a white win. Say the white uses the furthest pawn forward to kill the castle... check...
black can only use queen to block as there's another pawn if king is used so the queen takes the pawn then white takes queen with pawn and then black king takes pawn no. 2.
Now pawn no. 3 ( second closest to the back takes castle no. 2 of black leaving pawns and three bishops for the blacks. Now... I am a bit unsure but whites could try taking out pawns if blacks don't trap him in doing so. Otherwise it could try trapping itself forcing a stalemate in the next few moves along its 2 remaining pawns, but it looks highly unlikely.
 
  • #94
supersub said:
I can't see a white win. Say the white uses the furthest pawn forward to kill the castle... check...
black can only use queen to block as there's another pawn if king is used so the queen takes the pawn then white takes queen with pawn and then black king takes pawn no. 2.
Now pawn no. 3 ( second closest to the back takes castle no. 2 of black leaving pawns and three bishops for the blacks. Now... I am a bit unsure but whites could try taking out pawns if blacks don't trap him in doing so. Otherwise it could try trapping itself forcing a stalemate in the next few moves along its 2 remaining pawns, but it looks highly unlikely.

White can win in a very unlikely and cooperative way. The white king somehow makes its way to a8 (top left corner) and black removes its bishops from the b8-h2 diagonal (the diagonal collinear with the three bishops in the original diagram). White can then advance the pawn, and no matter what black plays, white then delivers checkmate by promoting to a queen.

Also, in chess pawns can't move backwards, so the pawn can't take back the queen.
 
  • #95
Mastermind01 said:
The white king somehow makes its way to a8
d7 is fine too.
Mastermind01 said:
White can win in a very unlikely and cooperative way.
Well, white can win if black tries to win. That probability is unknown to me.
 
  • #96
Aufbauwerk 2045 said:
But I think we need to be careful about saying they "got some of this "creativity", "imagination", "insight" or whatever" because at the end of day it's still just a machine which runs through its program step by step.
Strictly speaking the computer never has creativity, imagination, or insight. This is true whether it is running a simple program to perform some arithmetic, or a complex program that uses AI techniques such as recursive search with backtracking or pattern recognition.
Even a neural network program is just another program. I can implement a neural network in C. It can "learn" to recognize letters of the alphabet. Does that mean my little program is "intelligent?" Obviously not.
Hi Aufbauwerk:
I get the impression that the issue you are raising is a vocabulary definition issue rather than a computer science issue.

There are two basically different language uses being used here.
1. The words "creativity", "imagination", "insight" relate to mental behavior that humans exhibit. When you say AI fails to exhibit this behavior, I think you mean that the human behavior is different with respect to certain qualities, such as for example, versatility, and therefore doesn't qualify to have these words apply to the AI's behavior.
2. The words "creativity", "imagination", "insight" are used as metaphors because the AI's behavior exhibits some similar aspects to the human behavior. Since metaphors never completely match all aspects of the normal, non-metaphorical usage, it is technically (and punctiliously) accurate to say the usage is "incorrect". However, that criticism is applicable to all metaphors, including those used to describe in a natural language what quantum mechanics math tells us about reality.

Some uses of AI methods are described with less "accuracy" than others with respect to the "creativity", "imagination", "insight" vocabulary. Methods that do not include adaptive behavior seem to be less accurately described with this vocabulary than those AI methods that do. This seems appropriate because humans use their creativity, imagination, and insight in a way that allows them to improve. Neural nets are one AI method that demonstrates adaptability, and the newer technologies involving "big data" appear to have potential for even more impressive adaptability.

Regards,
Buzz
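The quoted "neural network in C that learns letters" idea can be sketched in a few lines (Python here rather than C; the two 3x3 "letters" and the single-perceptron setup are an invented toy, not anyone's actual program):

```python
# A single perceptron "learning" to tell two 3x3 letter bitmaps apart.
# This is the kind of minimal example the quoted post alludes to: it
# adapts its weights from errors, yet remains a short, fully mechanical
# program.

# 3x3 bitmaps flattened to 9 features.
L_SHAPE = [1, 0, 0,
           1, 0, 0,
           1, 1, 1]
T_SHAPE = [1, 1, 1,
           0, 1, 0,
           0, 1, 0]
DATA = [(L_SHAPE, 0), (T_SHAPE, 1)]   # label 0 = 'L', 1 = 'T'

def predict(weights, bias, pixels):
    """Threshold unit: fire (1) if the weighted sum is positive."""
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

def train(data, epochs=20):
    """Classic perceptron rule: nudge weights toward each label."""
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in data:
            error = label - predict(weights, bias, pixels)
            weights = [w + error * p for w, p in zip(weights, pixels)]
            bias += error
    return weights, bias
```

Whether one calls the resulting weight adjustment "learning" in the human sense is exactly the vocabulary question discussed above; the mechanism itself is a handful of arithmetic updates.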
 
  • #97
Buzz Bloom said:
Hi Paul:

That might work reliably for computers, but I think for many reasonably competent human players, recognizing when a position has occurred three times might be difficult since even on the restricted fortress layout, the number of possible positions is rather large.

Regards,
Buzz

The most famous example being:

https://en.wikipedia.org/wiki/Threefold_repetition#Fischer_versus_Spassky

In this case, the two players would agree a draw. A chess game wouldn't continue unless one player is trying to win. If one player insisted on playing on, then eventually the 50-move rule would end the game.

Both the 50-move rule and three-fold repetition are rare. Drawn games are usually agreed by the players. Stalemate is also rare.
 
  • Like
Likes Buzz Bloom
  • #98
PAllen said:
White is never forced to move a pawn. That would be an idiotic loss. By not doing so, white forces the draw. You may be confused by Go, where repeating a position (in superko rule variants) is prohibited. In chess, repetition is not prohibited, and leads to a draw whenever both sides would face adverse consequences from avoiding repetition. That is precisely the case here. Thus, this position is an absolute draw, barring an idiotic blunder which even weak programs will not make.

You are right, I was confused here.
 
  • #99
mfb said:
Biology is irrelevant at this point. If all parts of the brain follow physical laws, and we can find the physical laws, then a computer can in principle simulate a brain.
Hi mfb:

I hesitate to disagree with you because I know you are much better educated in these topics than I am, but I think your argument has a couple of flaws.

1. The physical laws that you suggest might be used to simulate brain function presumably include QM. I do not understand how, in principle, QM laws can be used for such a simulation. As far as I know there has never been any observational confirmation that brain behavior depends on the randomness of the uncertainty principle. If I am correct about this, then simulating the probabilistic possibilities of quantum interactions within the brain would not be sufficient to capture the behavior of the brain. On the other hand, a neurological model might be able to do it, but this would not involve simulating any physics laws.

2. Your argument ignores emergent phenomena. Because of my limitations, the following is just an oversimplification of how brain function is an emergent phenomenon far removed from the underlying physics.
a. The chemistry of brain function is only partially described in terms of the underlying physics, because that physics is not readily predictive about the complexities of the chemistry.
b. The biology of brain-cell structure and function is only partially described in terms of the relevant chemistry, because that chemistry is not readily predictive about the complexities of the cell structure and function.
c. The neurology of inter-cell structures and interconnectivity is only partially described in terms of the relevant cell structure and function, because that biology is not readily predictive about the complexities of the interconnections.
d. The psychology of the brain's behavior is only partially described in terms of the relevant neurology, because that neurology is not readily predictive about the complexities of psychological behavior.

Regards,
Buzz
 
  • #100
Demystifier said:
One should distinguish what a human can do, from what a human can experience. The former can be described and explained by physics. The latter is a "magic" that cannot be described or explained by known physical laws.
Hi Demystifier:

Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.

Regards,
Buzz
 
  • #101
Haven't read the entire thread, but what computer thinks black will win here? Today's computers are rated around 3400 Elo. That's insane, and there is no way you're going to get me to believe a computer can't figure this out, and rather easily. Even a primitive brute-force computer should be able to check that all of black's pieces are trapped and that his bishops aren't on the right squares to do anything useful.

The only way I can see this fooling a computer is if the computer is truly brute force and nothing else. Chess computers seem bad at long-term strategy, but this position should be one of the easiest ones for a computer to recognize.
 
  • #102
stevendaryl said:
I greatly admire Penrose as a brilliant thinker, but I actually don't think that his thoughts about consciousness, or human versus machine, are particularly novel or insightful.

I have read several books by Penrose, including the brilliant Road to Reality, but yes, he is both bizarrely brilliant and bizarrely simplistic in his understanding of certain matters. In fact, his attempts to inject theology into unrelated topics often serve as a good reminder that brilliant people are brilliant at one thing, not everything.
 
  • #103
Buzz Bloom said:
Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences. Philosophically speaking, there is no proof that philosophical zombies are impossible.
 
  • Like
Likes durant35
  • #104
Auto-Didact said:
I am arguing that brain and consciousness are part of physics as well, merely that our present-day understanding of physics is insufficient to describe consciousness. This is precisely what Penrose has argued for years

...

It seems to me that physicists who accept functionalism do not realize that they are essentially saying physics itself is completely secondary to computer science w.r.t. the description of Nature.

I don't see how that follows. To me, that's like saying that if I believe that the "A" I just typed is the same letter as the "A" I printed on a piece of paper, then I must believe in something beyond physics. Functionalism defines psychological objects in terms of their role in behavior, not in terms of their physical composition, in the same way that the alphabet is not defined in terms of the physical composition of letters.
 
  • Like
Likes Buzz Bloom
  • #105
@Buzz Bloom: How do you simulate an ant hill?
You simulate the behavior of every ant. You need absolutely no knowledge of the concept of ant trails or other emergent phenomena. They occur naturally in the simulation even if you don't have a concept of them.

How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
For a brain to differ from this simulation, we need atoms that behave differently. Atoms and their interactions with other atoms have been studied extremely well. Atoms do not have a concept of "I am in a brain, so I should behave differently now".
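This point that emergent phenomena appear in a simulation without being programmed in can be seen in a toy system (Conway's Game of Life here, which is of course not a brain model, just an illustration): the update rule only mentions a single cell and its eight neighbors, yet coherent moving objects emerge.

```python
from collections import Counter

def step(live):
    """One Game of Life step. The rule only refers to a cell and
    its 8 neighbors; no large-scale structure appears in it."""
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    return {c for c, n in neigh.items()
            if n == 3 or (n == 2 and c in live)}

# A 5-cell "glider": after 4 steps the pattern reappears shifted
# by (1, 1) -- an emergent moving object that the local rule
# never mentions.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The simulation never contains a "glider" concept, only the per-cell rule, which is exactly the sense in which ant streets, neurons, or larger brain structures would appear in a bottom-up simulation.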
Buzz Bloom said:
I do not understand how in principle QM laws can be used for such an a simulation.
I do not understand the problem. Randomness certainly applies to every interaction. It is currently unknown if that is relevant for large-scale effects or if the random effects average out without larger influence. A full simulation of a neuron would settle this question, and the question is not relevant for a simulation that can take quantum mechanics into account.
Auto-Didact said:
Mere mathematical demonstration is not sufficient, only the experimental demonstration matters; this is why physics can be considered scientific at all.
Classical mechanics has been tested on large scales, I thought that part was obvious. Apart from that: A mathematical proof is the best you can get. You can experimentally get 3+4=7 by putting 3 apples on a table and then 4 apples more and count 7, but showing it mathematically ensures it will work for everything, not just for apples.
Auto-Didact said:
You are basically saying 'given enough computer power and the SM, the correct dynamical/mathematical theory of literally any currently known or unknown phenomenon whatsoever will automatically roll out as well'. This is patently false if the initial conditions aren't taken into consideration as well, not even to mention the limitations due to chaos. The SM alone will certainly not uniquely simulate our planet nor the historical accidents leading to the formation of life and humanity.
You can get the initial conditions.
Chaos (and randomness in QM) is not an issue. The computer can simulate one path, or a few if we want to study the impact of chaotic behavior. No one asked for the computer to simulate every possible future of a human (or Earth, or whatever you simulate). We just want a realistic option.
Auto-Didact said:
I am not arguing for some 'human specialness' in opposition to the Copernican principle. I am merely saying that human reasoning is not completely reducible to the same kind of (computational) logic which computers use
That is exactly arguing for 'human specialness'.

Buzz Bloom said:
Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.
The actions of humans can be predicted from brain scans before the humans think consciously about the actions.
 
