Can Artificial Intelligence ever reach Human Intelligence?

In summary: If we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) AI can never reach human intelligence because it would require a decision-making process that a computer cannot replicate.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #141
neurocomp2003 said:
tishammerw: what counterexample?

I have several, but I'll list two that seem to be the most relevant. Remember it was said earlier:

However, it is your statement that such interaction does not lead to "understanding"; ergo, it should be YOU who provides us with the substance of "what else," not vice versa. We already have our "what else" = learning algos...and that is our argument

One of the counterexamples is an instance of a complex set of instructions, including learning algorithms, operating without literal understanding taking place. From post #103 (with a typo correction):

The program can be made so that it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know Bob's name; in fact, he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, and thus nothing fundamentally different from the Chinese room (the man using a complex set of instructions). They are therefore not an answer to the question of what else, besides a complex set of instructions acting on input, the computer would need in order to have literal understanding.

So even a program with learning algorithms is not sufficient for literal understanding to exist.
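
To make this concrete, below is a toy sketch (my own illustration, not any real chatbot) of the sort of self-updating rule set described above. It reproduces the sample conversation by pattern-matching and storing a token, and there is plainly no understanding anywhere in it:

```python
# A toy sketch of a "learning" rule set: it stores and replays symbols it has
# been given, yet nothing in it understands English. (My own illustration,
# not any real chatbot.)

import re

memory = {}  # symbols written down by earlier rules; no meaning attached

def respond(line: str) -> str:
    # Rule 1: a greeting pattern triggers a canned greeting plus a question.
    if re.fullmatch(r"how are you doing\?", line, re.IGNORECASE):
        return "Just fine. What is your name?"
    # Rule 2: "My name is X" stores the token X under the key 'name'.
    m = re.fullmatch(r"my name is (\w+)\.?", line, re.IGNORECASE)
    if m:
        memory["name"] = m.group(1)
        return f"Hello {memory['name']}."
    # Rule 3: questions about the stored token simply replay it.
    if re.fullmatch(r"you've learned my name\?", line, re.IGNORECASE):
        return "Yes." if "name" in memory else "No."
    if re.fullmatch(r"what is it\?", line, re.IGNORECASE):
        return memory.get("name", "I don't know.")
    return "I don't follow."

for q in ["How are you doing?", "My name is Bob.",
          "You've learned my name?", "What is it?"]:
    print(q, "->", respond(q))
```

The "learning" here is nothing more than writing a symbol into a table and copying it back out later, which is exactly the point.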

It was said earlier:

we have provided you with our statement that learning algorithms (with their complexity) together with a sensorimotor hookup would suffice for understanding.

The other counterexample can be found in post #126, where I talk about the robot and program X. This is an instance in which the "right" program (you can have it possess complex learning algorithms, etc.) is run and yet there is still no literal understanding.

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
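
As an aside, for anyone wondering what "performing the same mathematical and logical operations the hardware can" looks like, here is a minimal sketch of a made-up toy instruction set (it is not program X, just an illustration). Every step is purely mechanical, and Bob could carry out the same steps with pencil and paper without ever knowing what the numbers refer to:

```python
# A toy instruction set (invented for illustration; not program X). The loop
# below just follows formal rules over numbers, exactly as a person with a
# rulebook could do by hand.

def run(program, inputs):
    """Fetch, decode, and execute numeric instructions; return the outputs."""
    registers = [0, 0, 0, 0]
    outputs = []
    pc = 0                      # program counter
    inp = iter(inputs)
    while pc < len(program):
        op, a, b = program[pc]
        if op == 0:             # LOAD: put the next input value into register a
            registers[a] = next(inp)
        elif op == 1:           # ADD: add register b into register a
            registers[a] += registers[b]
        elif op == 2:           # OUT: append register a to the output tape
            outputs.append(registers[a])
        pc += 1                 # each rule simply says which line comes next
    return outputs

# "Sensor readings" go in, "motor commands" come out; neither this loop nor a
# human following it on paper needs to know what either list means.
print(run([(0, 0, 0), (0, 1, 0), (1, 0, 1), (2, 0, 0)], [3, 4]))  # -> [7]
```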

TheStatutoryApe claimed just having "the right hardware and the right program" would be enough. Clearly having the "right" program doesn't work. He mentioned the "right" hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Would literal understanding then take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

So here we have an instance of the "right" program--learning algorithms and all--being run in a robot with sensors, and still there is no real understanding while this program is being run.

One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Would literal understanding then take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?


that Searle's argument says that there is no literal understanding by the brain without this "something else" that you speak of?

Searle argued that our brains have unique causal powers that go beyond the simple (or even complex) manipulation of input.


I'm still lost with your counterexample...or is it that if something else can imitate the human and clearly not understand...then doesn't this imply that humans may not "understand" at all? What makes us so special?

Because we humans have that "something else."


why do you believe that humans "understand"?

Well, I'm an example of this. I am a human, and I am capable of literal understanding whenever I read, listen to people, etc.


wouldn't Searle's argument also argue against human understanding?

No, because we humans have that "something else."


It is fair for you to ask "what else" but you must also answer the question

Fair enough, but I have answered this question before. I personally believe this "something else" is the soul (Searle believes it is the brain’s unique causal powers, but I believe the physical world cannot be the source of them). Whether you agree with my belief however is irrelevant to the problem: you must still find a way out of the counterexamples if you wish to rationally maintain your position. And I don't think that can be done.
 
  • #142
tishammerw: cool, I see your argument now...setting aside what we know from physics...do you believe the soul is made out of some substance in our universe? Does it exist in some form of physicality (not necessarily what we understand of physics today), or do you believe it exists from nothing?

also, do you really think you can "understand" what is written...or can you see it as a complex emergent behaviour that gives you this feeling of having a higher cognitive path than robots? The instinct to associate one word form with some complex pattern of inputs?

as for your supporting arguments for Searle (the counterexamples)...how can you take a finite fragment of life (that is, t0-t1) and state that a computer clearly cannot understand because of this finite time frame...I could do the same thing with children.
They nod their heads in agreement though they literally do not understand...though as time goes forth they will grasp the concept. Do you not think that a computer can do the same and grasp this concept over time? Do children not imitate their adult surroundings? I think you have neglected the true concept of learning by imitation and learning by interaction with the adults around you.
 
  • #143
neurocomp2003 said:
tishammerw: cool, I see your argument now...setting aside what we know from physics...do you believe the soul is made out of some substance in our universe?

No.


Does it exist in some form of physicality (not necessarily what we understand of physics today), or do you believe it exists from nothing?

I believe the soul is incorporeal. Beyond that there is only speculation (as far as I know).


also, do you really think you can "understand" what is written...or can you see it as a complex emergent behaviour that gives you this feeling of having a higher cognitive path than robots?

The ability to understand likely relies on a number of factors (including learning “algorithms”). So the answer is “yes” if you're asking me whether the mechanics are complex, and "no" if you're asking me whether understanding magically “emerges” from some set of physical parts.


as for your supporting arguments for Searle (the counterexamples)...how can you take a finite fragment of life (that is, t0-t1) and state that a computer clearly cannot understand because of this finite time frame

I'm not sure what you're asking here. If you're asking why I believe, within my finite time in the universe, that computers (at least with their current architecture: a complex system of rules acting on input, etc.) cannot literally understand, my answer would be "logic and reason," with the variants of the Chinese room thought experiment as my evidence.


...I could do the same thing with children.
They nod their heads in agreement though they literally do not understand...though as time goes forth they will grasp the concept. Do you not think that a computer can do the same and grasp this concept over time?

No, because it lacks that "something else" humans have. Think back to the robot and program X counterexample. Even if program X (with its diverse and complex set of learning algorithms) is run for a hundred years, Bob still won’t understand what's going on. The passage of time is irrelevant because it still doesn't change the logic of the circumstances.
 
  • #144
Tisthammerw said:
Where did you answer these questions?
Note what happened below:
TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
That very man whom you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.
I responded that while people are obviously capable of understanding (there's no dispute there), my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding (as this example shows: we have the “right” program and still no understanding).
Note what happened above.
You conveniently did not quote my full answer...
That very man whom you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis. It is sensory information, which is syntactic. The man's brain takes in syntactic information, that is, information that has no more meaning than its pattern, structure, and context, with no intrinsic meaning to be understood, and it deciphers the information without any meaningful thought or understanding whatsoever in order to produce those Chinese characters that he's looking at. The understanding of what the "picture" represents is an entirely different story, but just attaining the "picture", that is, the sensory information, is easily done by processes the man's brain is already performing that do not require meaningful thought or output from him as a human. So I don't see the problem with allowing the man sensory input from outside. The syntax of the information being presented is all that the man in the box has access to. So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books? It all depends on the complexity of the language being used. Any spoken human language is incredibly complex and takes a vast reserve of experiential data (learned rules of various sorts) to process, and experiential data is syntactic as well.
Give the man in the room a simpler language to work with, then. Start asking the man in the room math questions. What is one plus one? What is two plus two? The man in the room will be able to understand math given enough time to decipher the code and will be capable of applying it.
Do you see that you have not addressed all of this?
I understand that it's a bit of a hodgepodge, so let me condense it down to the point that I don't think you have addressed.
You say that giving the homunculus sensory input via program X will only give the homunculus more script that it cannot meaningfully understand. The basis of this is that the homunculus can only draw conclusions based on the syntax of the information.
First off let's cut out the idea of the homunculus understanding what it sees since this is not what I am trying to prove yet. I am only trying to prove that it can actually see the outside world utilizing this program X.
Human sensory input itself is syntactic information which the brain translates into visual data (I'm just going to use vision as an example to keep this simple). The human brain accomplishes this feat in a minute fraction of a second without any meaningful understanding taking place. There are other parts of the brain that will give the data meaning, but I am not going that far yet. Based on the rules of the C.R., and relating it to the manner in which humans receive sensory information, we should be able to deduce that the homunculus in the CR should be capable of at least "seeing", if not understanding what it is seeing.
Can we agree on this?
As I already stated, there remains the matter of understanding what is seen, but let's put that and all other matters on the back burner for the moment and see if we can agree on what I have proposed here. Let's also dispense with the idea of the homunculus formulating output based on the input, since sensory input does not necessitate output. Let's say that the sensory input is solely for the benefit of the homunculus and its learning program. Instead of throwing it directly into conversations in Chinese and asking it to "sink or swim" in regard to its understanding of the conversation, let us say that we are going to take it to school first and give it the opportunity to learn beforehand.
Perhaps if we can move through this point by point it will make it easier to communicate. We'll start with whether or not the CR homunculus can "see", not understand but just see, and formulate the CR environment so that it is in "learning mode" instead of being forced to respond to input.

One other thing though...

Tisthammerw said:
Fair enough, but I have answered this question before. I personally believe this "something else" is the soul (Searle believes it is the brain’s unique causal powers, but I believe the physical world cannot be the source of them). Whether you agree with my belief however is irrelevant to the problem: you must still find a way out of the counterexamples if you wish to rationally maintain your position. And I don't think that can be done.
Perhaps to help you understand a bit more where I am coming from in this: I do consider the idea of there being a sort of "something more", but not in the same manner that you do. Instead of "soul" I simply call it a "mind". The difference is that I do not believe that this is a dualistic thing. A more appropriate name for it might be "infospace", a sort of holographic matrix of information that has no tangible substance to it. My perception of it is not dualistic because I believe that it is wholly dependent upon a physical medium, whether that be a brain or a machine. I believe that the processes of computers exist in "infospace". I see the difference between the "mind-space" and the purely computational "infospace" of a computer as nothing but a matter of structure and complexity.
I'm sure you don't agree with this idea, at least not completely, but hopefully it will help you understand better the way I perceive the AI problem and the comparison of human to machine.
 
  • #145
TheStatutoryApe said:
Note what happened above.
You conveniently did not quote my full answer...

Initially, I (wrongly) dismissed it as not adding any real substance to the text I quoted.

Do you see that you have not addressed all of this?

In post #135 I addressed the question you asked, and responded (I think) to the gist of the text earlier.


I understand that it's a bit of a hodgepodge, so let me condense it down to the point that I don't think you have addressed.
You say that giving the homunculus sensory input via program X will only give the homunculus more script that it cannot meaningfully understand. The basis of this is that the homunculus can only draw conclusions based on the syntax of the information.
First off let's cut out the idea of the homunculus understanding what it sees since this is not what I am trying to prove yet. I am only trying to prove that it can actually see the outside world utilizing this program X.

That doesn't seem possible given the conditions of this thought experiment. Ex hypothesi he doesn't see the outside world at all; he is only the processor of the program.


Human sensory input itself is syntactic information which the brain translates into visual data (I'm just going to use vision as an example to keep this simple). The human brain accomplishes this feat in a minute fraction of a second without any meaningful understanding taking place. There are other parts of the brain that will give the data meaning, but I am not going that far yet. Based on the rules of the C.R., and relating it to the manner in which humans receive sensory information, we should be able to deduce that the homunculus in the CR should be capable of at least "seeing", if not understanding what it is seeing.
Can we agree on this?


The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.


In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.


As I already stated, there remains the matter of understanding what is seen, but let's put that and all other matters on the back burner for the moment and see if we can agree on what I have proposed here. Let's also dispense with the idea of the homunculus formulating output based on the input, since sensory input does not necessitate output.

Bob (the homunculus in the robot and program X scenario) does indeed formulate output (based on program X) given the input.


Let's say that the sensory input is solely for the benefit of the homunculus and its learning program. Instead of throwing it directly into conversations in Chinese and asking it to "sink or swim" in regard to its understanding of the conversation, let us say that we are going to take it to school first and give it the opportunity to learn beforehand.

Again, while we can teach the homunculus a new language this doesn't have any bearing on the purpose of the counterexample: this (the robot and program X experiment) is a clear instance in which the “right” program is being run and yet there is still no literal understanding. And you still haven't answered the questions I asked regarding this thought experiment.

You can modify the thought experiment all you want, teach the homunculus a new language etc. but it still doesn't change the fact that I've provided a counterexample. "The right program with the right hardware" doesn't seem to work. Why? Because I provided a clear instance in which the "right program" was run on the robot and still there was no literal understanding. To recap:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.

One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Would literal understanding then take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?
 
  • #146
tishammerw- so the soul isn't made of anything detectable, or not yet detectable, but still persists as a single classifiable object. Is that what you're saying?

you stated that "understanding" is not magically emergent, yet you say the soul is not physically detectable ("incorporeal"). Are these statements not contradictory?

as for the finite time...I didn't mean your time but the time of the counterexample...which, btw, you referred to as Bob the robot, though in your example Bob was the human. Such a finite example of a robot's life...
but isn't human "understanding" built through many years of learning? And thus you would need to take a grander example (many pages rather than 5 lines) in order to give me an idea of what you are talking about with the pseudo-understanding, because if I captured that instance with two humans rather than a human and a robot, then I could say that both could be robots.

as for the learning that I described in my last post...I wasn't talking about learning algorithms or programming techniques but the concept of learning from a sociological/psychological standpoint...
 
  • #147
neurocomp2003 said:
tishammerw- so the soul isn't made of anything detectable, or not yet detectable, but still persists as a single classifiable object. Is that what you're saying?

Er, sort of. It is indirectly detectable; we can rationally infer its existence. The soul exists, but the precise metaphysical properties may be beyond our current understanding.


you stated that "understanding" is not magically emergent, yet you say the soul is not physically detectable ("incorporeal"). Are these statements not contradictory?

I don't see why they would be contradictory.


as for the finite time...I didn't mean your time but the time of the counterexample...which, btw, you referred to as Bob the robot, though in your example Bob was the human.

No, I was referring to Bob the human. Any other implication was unintentional. And in any case it is as I said; even if the counterexample were run for a hundred years Bob wouldn't understand anything.


but isn't human "understanding" built through many years of learning?
And thus you would need to take a grander example (many pages rather than 5 lines) in order to give me an idea of what you are talking about with the pseudo-understanding, because if I captured that instance with two humans rather than a human and a robot, then I could say that both could be robots.

Huh?


as for the learning that I described in my last post...I wasn't talking about learning algorithms or programming techniques but the concept of learning from a sociological/psychological standpoint...

Well, yes we humans can learn. But learning algorithms for computers seem insufficient for the job of literal understanding.
 
  • #148
Tisthammerw said:
That doesn't seem possible given the conditions of this thought experiment. Ex hypothesi he doesn't see the outside world at all; he is only the processor of the program.
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole. The homunculus is supposed to represent the processing power of the whole system, not just a lone processor amidst it. Even a human's capacity for understanding is based on its whole system acting as a single entity. If a human had never experienced eyesight, this would leave a large gap in its ability to understand human language. If you stripped a human down to nothing but a brain, it would be in the exact same situation that you insist a computer is in, because it is now incapable of developing meaningful understanding of the outside world. Any sensory system that you give a computer should be treated exactly like the ones for a human, as part of the whole rather than just another source of meaningless script, because those tools are part of the system's corpus as a whole, just like a human's.

Tisthammerw said:
The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.

In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.
It is true that if you lay down a bunch of binary in front of a human they are not likely to understand it. This does not mean, though, that the human brain is incapable of deciphering raw syntactic information. As a matter of fact, it translates syntactic sensory information at a furious pace, continuously, and that information is more complex than binary. The problem is that the CR asks the human to translate it with a portion of his brain ill-suited to the task. You might as well ask your Pac-Man machine to perform calculus or your Texas Instruments calculator to play Pac-Man. If you are intending to ask the man in the CR to interpret syntactic sensory data as fast and efficiently as possible, you may as well let him use the portions of his brain that are suited to the task and give him a video feed. This would only be fair, and the information he would be receiving would still be syntactic in nature.

I either missed it or didn't understand it but we do agree that translation of sensory data is a purely syntactic process right? Not the recognition but just the actual "seeing" part right?

How about some experimental evidence that may back this up. In another thread Evo reminded me of an experiment that was run where the subjects were given eyewear that inverted their vision. After a period of time their eyes adjusted and they began to see normally with the eyewear on. There was no meaningful understanding involved, no intentionality, no semantics. The brain simply adjusted the manner in which it interpreted the syntactic sensory data to fit the circumstances without the need of any meaningful thought on the part of the subjects.

How about one that involves AI. A man created small robots on wheels that were capable of using sensors to sense their immediate surroundings and tell if there was a power source nearby. They were programmed to have "play time", when they scurried about the room, and then "feeding time", when they were low on power and sought out a power source to recharge. They were capable of figuring out the layout of the room they were in so as to avoid running into objects when they "played", and of seeking out and remembering where the power sources were for when it was time to "feed". The room could get changed around and the robots would adapt.
Even with regard to just this last bit, would you still contend that a computer would be unable to process syntactic sensory data, learn from it, and utilize it?
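
To give a rough idea of the kind of behaviour I mean (this is a generic sketch I'm making up, not the actual robots), a loop like the following already produces "play" and "feed" modes plus a remembered map of obstacles and chargers:

```python
# A generic sketch of a reactive agent on a grid (not the actual robots
# described above): it "plays" (wanders) when charged, "feeds" (heads toward a
# remembered power source) when low, and remembers obstacles it has sensed.

import random

GRID = 10
OBSTACLES = {(3, 3), (3, 4), (6, 7)}     # unknown to the robot at first
CHARGERS = {(9, 9)}

def neighbours(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < GRID and 0 <= y + dy < GRID]

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

pos, battery = (0, 0), 60
known_obstacles, known_chargers = set(), set()

for t in range(200):
    # sense: remember obstacles and chargers in adjacent cells
    for c in neighbours(pos):
        if c in OBSTACLES:
            known_obstacles.add(c)
        if c in CHARGERS:
            known_chargers.add(c)
    options = [c for c in neighbours(pos) if c not in known_obstacles]
    if options:
        if battery < 20 and known_chargers:
            # "feeding time": move toward the nearest remembered charger
            target = min(known_chargers, key=lambda c: dist(pos, c))
            pos = min(options, key=lambda c: dist(c, target))
            if pos in CHARGERS:
                battery = 60
        else:
            # "play time": wander at random, avoiding remembered obstacles
            pos = random.choice(options)
    battery -= 1

print("final position:", pos, "battery:", battery)
```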

Tisthammerw said:
Bob (the homunculus in the robot and program X scenario) does indeed formulate output (based on program X) given the input.
A computer only produces output when its program suggests that it should, AI or not. It isn't necessary, so I see no need to continue forcing the AI to produce output whenever it receives any kind of input here in the CR.

I'll have to finish later; I need to get going.
 
  • #149
TheStatutoryApe said:
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole.

Ah, the old systems reply. The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possess real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese? That doesn’t strike me as plausible. Second, Searle’s response was to suppose the person internalizes the whole system: the man memorizes the rulebook, the stacks of paper, and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language. So the systems reply doesn't seem to work at all.


The homunculus is supposed to represent the processing power of the whole system, not just a lone processor amidst it.

Well, in the Chinese room he is the processing power of the whole system.


The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.

In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.

It is true that if you lay down a bunch of binary in front of a human they are not likely to understand it. This does not mean, though, that the human brain is incapable of deciphering raw syntactic information. As a matter of fact, it translates syntactic sensory information at a furious pace, continuously, and that information is more complex than binary.

That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.

BTW, don't forget the brain simulation reply:

One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. Searle responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.

So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules, etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
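
To make plain what "simulating the sequence of neuron firings" amounts to formally, here is a tiny sketch (arbitrary toy wiring, nothing like a real brain) of threshold units updated by a fixed rule. A man with a rulebook, or Searle's pipes and valves, could carry out exactly the same steps by hand:

```python
# A tiny sketch of simulating "neuron firings" as a formal rule: sum the
# weighted inputs, compare to a threshold. The wiring here is an arbitrary toy
# example, not a model of any real brain.

# weights[i][j] = strength of the connection from unit j to unit i
weights = [
    [0.0, 0.8, 0.0],
    [0.0, 0.0, 0.8],
    [0.8, 0.0, 0.0],
]
threshold = 0.5
state = [1, 0, 0]               # which units are currently "firing"

for step in range(4):
    # a unit fires next step iff its weighted input exceeds the threshold
    state = [1 if sum(w * s for w, s in zip(row, state)) > threshold else 0
             for row in weights]
    print(f"step {step + 1}: firing pattern {state}")
```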


I either missed it or didn't understand it but we do agree that translation of sensory data is a purely syntactic process right?

No (see above and below for more info).


Not the recognition but just the actual "seeing" part right?

The "seeing" of objects I do not believe to be purely syntactic (though I do believe it involves some syntactic processes within the brain).


How about some experimental evidence that may back this up. In another thread Evo reminded me of an experiment that was run where the subjects were given eyewear that inverted their vision. After a period of time their eyes adjusted and they began to see normally with the eyewear on. There was no meaningful understanding involved, no intentionality, no semantics. The brain simply adjusted the manner in which it interpreted the syntactic sensory data to fit the circumstances without the need of any meaningful thought on the part of the subjects.

Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive. One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.


How about one that involves AI. A man created small robots on wheels that were capable of using sensors to sense their immediate surroundings and tell if there was a power source nearby. They were programmed to have "play time", when they scurried about the room, and then "feeding time", when they were low on power and sought out a power source to recharge. They were capable of figuring out the layout of the room they were in so as to avoid running into objects when they "played", and of seeking out and remembering where the power sources were for when it was time to "feed". The room could get changed around and the robots would adapt.
Even with regard to just this last bit, would you still contend that a computer would be unable to process syntactic sensory data, learn from it, and utilize it?

I believe computers can process syntactic data, conduct learning algorithms, and do successful tasks--much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese)--but neither entails literal perception of sight (in the first case) or of meaning (in the second case).

And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
 
  • #150
Tisthammerw said:
Ah, the old systems reply. The systems reply goes something like this:
This does not address my objection whatsoever. I am not saying that the whole system understands Chinese. I'm not saying that combining the man with the book and pen and paper will make him understand Chinese. The situation would be a bit more accurate with regard to paralleling a computer, though.
The objection I had was in regard to the manner in which you are separating the computer from the sensory input. My entire last post was about sensory input. I told you in the post before that this is what I wanted to discuss before we move on. Pay attention and stop detracting from the issues I am presenting.
If I were to rip your eyeballs out, somehow keep them functioning, and then have them transmit data to you for you to decipher, you wouldn't be able to do it. Your eyes work because they are part of the system as a whole. You're telling me that the "eyes" of the computer are separate from it and just deliver input for the processor to formulate output for. In your argument its "eyes" are a separate entity processing data and sending information on to the man in the room. Are there little men in the eyes processing information just like the man in the CR? Refusing to allow the AI to have eyes is just a stubborn way to preserve the CR argument.
This is nowhere near an accurate picture. This is one of the reasons I object to you stating that the computer must produce output based on the sensory input. You're distracting from the issue of the computer absorbing and learning by saying that it is incapable of anything other than reacting, when this isn't even accurate. Computers can "think" and simply absorb information and process it without giving immediate reactionary output. As a matter of fact, most computers "think" before they act nowadays. Computers can cogitate on information and analyze its value; I'll go into this more later.
Are you really just unaware of what computers are capable of nowadays?
With the way that this conversation is going, I'm inclined to think that you are a Chinese man in an English room formulating output based on rules for arguing the Chinese Room Argument. Please come up with your own arguments instead of pulling out stock arguments that don't even address my points.

Tisthammerw said:
Well, in the Chinese room he is the processing power of the whole system.
He should be representative of the system as a whole, including the sensory apparatus. If you were separated from your sensory organs and made to interpret sensory information from an outside source, you would be stuck in the same situation the man in the CR is. You are not a homunculus residing inside your head, nor is the computer a homunculus residing inside its shell.

Tisthammerw said:
That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.

BTW, don't forget the brain simulation reply:
No. Your hypothetical system does not allow the man in the room to use the portions of his brain suited for the processing of the sort of information you are sending him. Can you read the script on a page by smelling it? How easily do you think you could tell the difference between a piece by Beethoven and one by Mozart with your fingertips? How about if I asked you to read a book utilizing only the right side of your brain? Are any of these a fair challenge? The only one that you might be able to pull off is the one with your fingertips, but either way you are still not hearing the music, are you?
It has nothing to do with not having the "right program". The human brain does have the right program, but you are refusing to allow the man in the room to use it, just like you are refusing to allow the computer to have "eyes" of its own; instead it is outsourcing the job to another little man in another little room somewhere who only speaks Chinese.

Tisthammerw said:
One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. Searle responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules, etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
Even here, yet again, you fail to address my objection while using some stock argument. My objection was that you are not allowing the man in the room to properly utilize his own brain. Yet again you force us to divorce the man in the room from the entirety of the system by creating some crude mock-up of a neural net rather than allowing him to utilize the one already in his head. Why create the mock-up when he has the real thing with him already? Creating these intermediaries only hinders the man. You continually set him up to fail by not allowing him to reach his goal in the most well-suited and efficient manner at his disposal. If anyone were to actually design computers the way you (or Searle) design the rooms that are supposed to parallel them, they'd be fired.

Tisthammerw said:
Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive.
Here you seem to misunderstand the CR argument. The property of the information that the man in the CR is able to understand is the syntax: the structure, the context, the patterns. This isn't just the manner in which it arrives; it is the manner in which he works with it and perceives it. He lacks only the semantic property. Visual information is nothing but syntactic. There is no further information there except the structure, context, and pattern of the information. You do not have to "understand" what you are looking at in order to "see" it. The man in the box does not understand what the Chinese characters are that he is looking at, but he can still perceive them. He lacks only the ability to "see" the semantic property, that is all.

Tisthammerw said:
One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.
You do understand why the brain surgeon cannot perceive colour, right? It's a lack of the proper hardware, or rather wetware in this case. The most common problem that creates colour blindness is that the eyes lack the proper rods and cones (I forget exactly which ones do what, but the case is something of this sort nonetheless). If she were to undergo some sort of operation to add the elements necessary for gathering colour information to her eyes, a wetware upgrade, then she should be able to see in colour, assuming that the proper software is present in her brain. If the software is not present, then theoretically she could undergo some sort of operation to add it, a software upgrade for her neural processor. Funnily enough, your own example is perfect in demonstrating that even a human needs the proper software and hardware/wetware to be capable of perception! So why is it that the proper software and hardware are necessary for a human to do these special processes that you attribute to it, but the right software and hardware are not enough to help a computer? Does the human have a magic ball of yarn? What? LOL!
And I already know what you are going to say. You'll say that the human does have a magic ball of yarn, which you have dubbed a "soul". Yet you cannot tell me the properties of this soul and what exactly it does without invoking yet more magic balls of yarn like "free will" or maybe Searle's "Causal Mind" or "Intrinsic Intentionality". So what are these things, and what do they do? Will you invoke yet more magic balls of yarn? Maybe even the cosmic magic ball of yarn called "God"? None of these magic balls of yarn prove anything. Of course you will say that the CR proves that there must be "Something More". So what if I were to just take a cue from you and say that all we need to do is find a magic ball of yarn called "AI" and imbue a computer with it? I can't tell you what it does except to say that it gives the computer "Intrinsic Intentionality" and/or "Free Will". Will you accept this answer to your question? If you won't, then you cannot expect me to accept your magic ball of yarn either, so both arguments are then useless and invalid for the purpose of our discussion, since they yield no results.

Tisthammerw said:
I believe computers can process syntactic data, conduct learning algorithms, and do successful tasks--much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese)--but neither entails literal perception of sight (in the first case) or of meaning (in the second case).

And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
Obviously it doesn't understand things the way we do, but what about understanding things the way a hamster does? You seem to misunderstand the way AI works in instances such as these. The AI is not simply following instructions. When the robot comes to a wall there is not an instruction that says "when you come to a wall turn right". It can turn either right or left and it makes a decision to do one or the other. Of course this is a rather simplistic example, so let's bring it up a notch.
Earlier in this discussion Deep Blue was brought up. You responded to that in a very similar manner as you did to this, but I never got back to discussing it. You seem to think that a complex set of syntactic rules is enough for Deep Blue to have beaten Kasparov. The problem, though, is that you are wrong. You cannot create such rules for making a computer play chess and have the computer be successful. At least not against anyone who plays chess well, and especially not against a world champion such as Kasparov. You cannot simply program it with rules such as "when this is the board position, move king's knight one to king's bishop three". If you made this sort of program and expected it to respond properly in any given situation, you would have to map out the entire game tree. Computers can do this far faster than we can, and even they, at current maximum processing speed, would take at least hundreds of thousands of years to do it. By that time we will be dead and unable to write out the answers for every single possible board position. So we need to make shortcuts. I could go on and on about how we might accomplish this, but how about I tell you the way that I understand it is actually done instead.
The computer is taught how to play chess. It is told the board setup and how the pieces move, take each other, and so on. Then it is taught strategy, such as controlling the center, using pieces in tandem with one another, hidden check, and so forth. So far the computer has not been given any set of instructions on how to respond to any given situation, such as the setup in the CR. It is only being taught how to play the game, more or less in the same fashion that a human learns how to play the game, except much faster. The computer is then asked, based on the rules presented for the game and the goals presented to it, to evaluate possible moves and pick the one that is most advantageous. This is pretty much what a human does when a human plays chess. So since the computer is evaluating options and making decisions, would you still say that it cannot understand what it is doing and is only replacing one line of code with another line of code as it says to do in its manual?
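
Just to give a flavour of what "evaluate the possible moves and pick the most advantageous one" looks like in code, here is a crude sketch. It assumes the third-party python-chess package for the rules of the game, and the "strategy" is nothing more than material count plus a small bonus for central squares, far cruder than any real engine:

```python
# A crude sketch of "teach it the rules and a bit of strategy, then let it pick
# the most advantageous move." Assumes the third-party python-chess package.

import chess

PIECE_VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
               chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}
CENTER = {chess.D4, chess.E4, chess.D5, chess.E5}

def evaluate(board, color):
    """Score a position from `color`'s point of view: material plus a small
    bonus for occupying the center."""
    score = 0.0
    for square, piece in board.piece_map().items():
        value = PIECE_VALUE[piece.piece_type]
        if square in CENTER:
            value += 0.1
        score += value if piece.color == color else -value
    return score

def pick_move(board):
    """Try every legal move and keep the one whose resulting position scores
    best for the side that just moved (a one-ply lookahead)."""
    best_move, best_score = None, float("-inf")
    for move in board.legal_moves:
        board.push(move)
        score = evaluate(board, not board.turn)   # the side that just moved
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print("chosen opening move:", pick_move(chess.Board()))
```

Real engines look many moves ahead instead of one, but the principle is the same: evaluate the options and choose.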
 
  • #151
tishammerw: at what age of a being does the soul arise? I think you posted before that you were not sure...but if you are not sure, how can you quantify its existence? Also, you say that the soul is more metaphysics than physicality but is still maintained within the brain; does this mean that some physical structure of the brain creates this phenomenon? If not, how does the soul become limited to within the brain...that is to say, why doesn't it float around alongside the body? What makes for its restraint inside the head?

btw, this might be more of a personal question, but I was wondering: do you have children, or have you ever helped raise children?
 
  • #152
Sorry, I had to go again and couldn't finish thoughtfully. I'll continue.

Something I have failed to bring up yet. The scenario in the CR of a computer program with a large enough set of instructions telling the computer to replace certain lines of script with other lines of script, making it indistinguishable from a human, is not possible in reality. Any significantly long conversation, or just a person testing it to see whether it is a computer of this sort, will reveal it for what it is. Just as in the case of mapping out the game tree for chess, it would be equally impossible to map out a "conversation tree" sufficient to cover even a significant portion of the possible conversational scenarios. It's fine as a hypothetical scenario, because a hypothetical can allow something that is impossible. BUT if you were to come across a computer in reality that could carry on a conversation indistinguishable from a human, you would have to assume that the computer was capable of some level of semantic understanding. I see no other way of getting around the problem.
 
  • #153
Had to take off again.

So if it is impossible to create a program with a sufficient syntactic rulebook, like the one in Searle's Chinese room, to be indistinguishable from a human in conversation, due to the sheer vastness of the "conversation tree", then likewise, due to the sheer vastness of the game tree for chess, the Chinese Room should predict that a computer will not be able to play a good game of chess.
Whether or not you want to agree that Deep Blue is capable of any sort of "understanding", Deep Blue and programs like it are still proof that AI has broken out of the Chinese Room.
 
  • #154
TheStatutoryApe said:
This does not address my objection whatsoever.

Please consider the context of the quote:

Tisthammerw said:
TheStatutoryApe said:
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole.

Ah, the old systems reply. The systems reply...

My point is that the "system" approach doesn't seem to work. You may have included additional claims (and I did address other parts of the post) but I felt the need to explain the systems reply anyway.


The objection I had was in regard to the manner in which you are separating the computer from the sensory input.

I didn't really separate the computer from the sensory input though (in the robot and program X scenario). On the contrary, program X receiving sensory input is an essential part of the thought experiment.


If I were to rip your eyeballs out, somehow keep them functioning, and then have them transmit data to you for you to decipher, you wouldn't be able to do it.

Not necessarily. If the data is transmitted in a form that my physical brain would recognize as it normally does, I would be able to see. The eyes are a separate organ from the brain, but the eye can convert what it sees into signals and pass them off to the brain where I can see and act accordingly (sound familiar?).


Your eyes work because they are part of the system as a whole.

And the robot's cameras are also part of the system as a whole.


Refusing to allow the AI to have eyes is just a stubborn way to preserve the CR argument.

But I am allowing the AI to have eyes. That's an essential part of my thought experiment. It's just that I replaced the part of the robot that would normally process the program with Bob...

But then if you wish to claim that the robot with its normal processor could understand you should answer the questions I asked earlier.


Are you really just unaware of what computers are capable of nowadays?

I am a computer science major and am at least roughly aware of the capability of computers. They can do many impressive things, but I know of no computer accomplishment that would lead me to believe they are capable of understanding given the evidence of how computers work, the evidence of the thought experiments etc.


Your hypothetical system does not allow the man in the room to use the portions of his brain suited for the processing of the sort of information you are sending him.

If you are asking me if he literally sees the outside world, you are correct. If you are saying I am not allowing him to process the information (i.e. operate the program) you are incorrect. He clearly does so, and we have an instance in which the "right" program is being run and still there is no literal understanding.


It has nothing to do with not having the "right program".

That's not quite what you said in post #121. You said it was just a matter of having "the right hardware and the right program." I supplied the "right" program, still no understanding. And my subsequent questions criticized the usefulness of joining the "right hardware" to this "right program;" questions you have not answered.

Even here, yet again, you fail to address my objection while using some stock argument. My objection was that you are not allowing the man in the room to properly utilize his own brain.

He is using his own brain to operate the program. Is he using his brain in such a way he can learn a new language? No, but that is beside the point. I'm not claiming a human being can't learn and literally understand. If you wish to object to my thought experiment, please answer my questions. The fact that a human being is capable of seeing and learning does not imply that my argument is unsound. That is my objection to your objection.


You do understand why the brain surgeon cannot perceive colour, right?

Well, I didn't say that...

It's a lack of the proper hardware, or rather wetware in this case.

Even still, she can have complete knowledge of the non-color-blind brain, know its every rule of operation and the sequence of neurons firing etc. and still not see color.

The most common problem that creates colour blindness is that the eyes lack the proper rods and cones (I forget exactly which ones do what, but the case is something of this sort nonetheless). If she were to undergo some sort of operation to add the elements necessary for gathering colour information to her eyes, a wetware upgrade, then she should be able to see in colour, assuming that the proper software is present in her brain.

True, but you're missing the point...

Funnily enough, your own example is perfect in demonstrating that even a human needs the proper software and hardware/wetware to be capable of perception!

Something I never disputed.

So why is it that the proper software and hardware are necessary for a human to do these special processes that you attribute to it, but the right software and hardware are not enough to help a computer?

I claim that the "right program" and "right hardware" are not sufficient for a computer because of my thought experiment regarding the robot and program X, which you have consistently ignored.


And I already know what you are going to say. You'll say that the human does have a magic ball of yarn which you have dubbed a "soul". Yet you can not tell me the properties of this soul and what exactly it does without invoking yet more magic balls of yarn like "freewill" or maybe Searle's "Causal Mind" or "Intrinsic Intentionality". So what are these things what do they do? Will you invoke yet more magic balls of yarn? Maybe even the cosmic magic ball of yarn called "God"? None of these magic balls of yarn prove anything.

I can directly perceive my own free will, and thus can rationally believe in the existence of the soul. Just because I am not able to discern the precise mechanics of how they work does not mean I cannot have rational basis to believe in it.

In any case my personal beliefs are not relevant. My counterexamples still remain as do my unanswered questions.


You seem to misunderstand the way AI works in instances such as these. The AI is not simply following instructions. When the robot comes to a wall there is not an instruction that says "when you come to a wall turn right". It can turn either right or left and it makes a decision to do one or the other.

I'd be interested in knowing the mechanisms on how this "decision" works. Sometimes its deterministic (e.g. an "if-then" type of thing) or perhaps it is "random," but even "random" number generators are actually built upon deterministic rules (and hence are actually pseudorandom). Are you even aware of how this works?


Earlier in this discussion Deep Blue was brought up. You responded in a very similar manner to that as you did this but I never got back to discussing it. You seem to think that a complex set of syntactic rules is enough for Deep Blue to have beaten Kasperov. The problem though is that you are wrong. You can not create such rules for making a computer play chess and have the computer be successful.

Depends what you mean by "syntactic" rules. If you are referring to a complex set of instructions (the kind that computer programs can use) that work as a connected and orderly system to produce valid output, then you are incorrect. You can indeed create such rules for making a computer play chess and having it be this successful. As I said, Deep Blue did (among other things) use an iterative deepening search algorithm with Alpha-Beta pruning. The program of Deep Blue was indeed a complex set of instructions, and computer programs (which are complex sets of instructions) can do some pretty impressive stuff, as I said before. I never thought it would be you who would underestimate that.


At least not against anyone who plays chess well and especially not against a world champion such as Kasperov. You can not simply program it with rules such as "when this is the board position move king's knight one to king's bishop three". If you made this sort of program and expected it to respond properly in any given situation you would have to map out the entire game tree.

Deep Blue didn't map out the entire game tree, but its search algorithms did go as deep as 14 moves. Being able to see 14 moves ahead is a giant advantage. From their it could pick the "best" solution.


Computers can do this far faster than we can and even they at current max processing speed will take at least hundreds of thousands of years to do this. By that time we will be dead and unable to write out the answers for every single possible board position. So we need to make short cuts. I could go on and on about how we might accomplish this but how about I tell you the way that I understand it is actually done instead.
The computer is taught how to play chess. It is told the board set up and how the pieces move, take each other, and so on. Then it is taught strategy such as controling the center, using pieces in tandum with one another, hidden check, and so forth.

And this is done by...what? Hiring a computer programmer to write the right set of instructions.


So far the computer has not been given any set of instructions on how to respond to any given situation such as the set up in the CR.

Actually, the Chinese room thought experiment can be given this type of programming. Remember my variants of the Chinese room were rather flexible and went well beyond mere "if-then" statements.


The computer is then asked based on the rules presented for the game and the goals presented to it to evaluate possible moves and pick one that is the most advantageous.

Like the search algorithms of Deep Blue?


This is pretty much what a human does when a human plays chess. So since the computer is evaluating options and making decisions would you still say that it can not understand what it is doing and is only replacing one line of code with another line of code as it says to do in it's manual?

Short answer, yes.

Let's do another variant of the Chinese room thought experiment. The questions are asking what to do in a given chess scenario. Using a complex set of instructions found in the rulebook, the man in the room writes down an answer (e.g. "Pawn to Queen's Four"). We can even have him carry out the same mathematical and logical operations the Deep Blue program does in binary code, and still he won't understand what's going on.
 
  • #155
Tisthammerw said:
Let's do another variant of the Chinese room thought experiment. The questions are asking what to do in a given chess scenario. Using a complex set of instructions found in the rulebook, the man in the room writes down an answer (e.g. "Pawn to Queen's Four"). We can even have him carry out the same mathematical and logical operations the Deep Blue program does in binary code, and still he won't understand what's going on.
Agian you relagate the man to the postion of a singlar processor of information utilizing portions of his brain illsuited for the process that he is preforming. Certainly you realize that the man going through these voluminous manuels and using only his critical faculties for every byte of information is going to take an exceedingly ponderous time for every single move made? How long do you think it would take? Hours? Weeks? Monthes? Don't you think that if you allowed the man to utilize the full processing capacity of his brain so that the information processing moved at the same pace as his own mind that he may start to catch on and find meaning in the patterns?


Remember Searle's argument was that the syntactic patterns of the information was not enough to come to an understanding. The fact that the man is unable to decifer meaning in the patterns when put into the shoes of the computer is supposed to prove this. If we prove that a man can be put in those shoes and find an understanding then this ruins the proof of the argument. Simply stating that the man is expected to be capable of understanding because he already is capable of it does not save the proof from being invalidated because the proof hinges on the man not being able to understand. Otherwise you admit that the Chinese Room is a useless argument.
 
Last edited:
  • #156
Tisthammerw said:
But I am allowing the AI to have eyes. That's an essential part of my thought experiment. It's just that I replaced the part of the robot that would normally process the program with Bob...
That's the point. Bob should represent the whole system, including the cameras. You are just taking out a pentium chip and replacing it with a human. He is not representing the sum of the parts just a processor.

Tisthammerw said:
I can directly perceive my own free will...
You can tell me that and the man in the chinese room will tell me that he can speak chinese too. Many a conginitive science major will tell you that your pereptions are just illusions like the illusion that the man in the CR understands chinese. This is why I have told you that this argument argues against the idea that there is something special about humans better than it does that there is from a cognitive science perspective since you can not prove or quantify in any meaningfully scientific fashion that this "something else" exists given the circumstances.
 
  • #157
The Chinese Room

Tisthammerw said:
The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possesses real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese?
Why is it necessary for the room to be conscious in order for it to understand Chinese? I do not see that consciousness is a necessary prerequisite for understanding a language.

Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.
IMHO this is incorrect reasoning. The man is not conscious of the fact that he understands the language, and yet he is perfectly capable of carrying out rational conversations in the language. The fact that he is able to carry out a rational conversation in Chinese is demonstration that he (unconsciously) understands Chinese.

The error in Searle’s reasoning is the assumption that “understanding a language” requires “consciousness”. It is easy to see how this assumption is made, since to date our only experiences of agents with the capacity to understand language have been conscious agents, but I would submit that this need not necessarily always be the case in the future.

In fact the systems reply does work very well indeed.

MF
 
  • #158
moving finger said:
Why is it necessary for the room to be conscious in order for it to understand Chinese? I do not see that consciousness is a necessary prerequisite for understanding a language.


IMHO this is incorrect reasoning. The man is not conscious of the fact that he understands the language, and yet he is perfectly capable of carrying out rational conversations in the language. The fact that he is able to carry out a rational conversation in Chinese is demonstration that he (unconsciously) understands Chinese.

The error in Searle’s reasoning is the assumption that “understanding a language” requires “consciousness”. It is easy to see how this assumption is made, since to date our only experiences of agents with the capacity to understand language have been conscious agents, but I would submit that this need not necessarily always be the case in the future.

In fact the systems reply does work very well indeed.

MF
The problem here is that human language has what Searle might refer to as a "semantic" property, the meaning of the words. The meanings of these words, for the most part, are attached to things of the outside world which the man in the box has no access to and therefore no referance by which to learn those meanings. This is why I have argued here so strongly for allowing the man in the box sensory input. The way Searle sets up his argument though sensory information from say a camera will just be in another language which the man in the room will not understand. Personally I think that this is unfair and unrealistic.

Searle's Chinese Room may have started out as a sincere thought experiment but since I think the room has been shaped specifically to make the man in the room fail in his endevour to understand what is happening.
 
  • #159
TheStatutoryApe said:
Something I have failed to bring up yet. The scenario in the CR of a computer program with a large enough set of instructions telling the computer to replace certain lines of script with other lines of script making it indistinguishable from a human is not possible in reality.

Why not?


Any significantly long coversation or just a person testing it to see if it is a computer of this sort or not will reveal it for what it is.

Ah, so you’re criticizing the reliability of the Turing test for strong AI, instead of the technological possibility of a program being able to mimic a conversation.


Just the same as in the case of mapping out the game tree for chess it would be equally impossible to map out a "conversation tree" sufficient enough to cover even a significant portion of the possible conversational scenarios.

This requires some explanation (especially on what you mean by “conversation tree”).


It's fine as a hypotheitical scenario because a hypothetical can allow something that is impossible. BUT if you were to come across a computer in reality that could carry on a conversation indestinguishable from a human you would have to assume that the computer was capable of some level of semantic understanding.

I don't see why, given the counterexample of the Chinese room. Note that I didn't specify exactly what kinds of instructions were used. It doesn't have to be a giant "if-then" tree. The person in the room can use the same kinds of rules (for loops, arithmetic etc.) that the computer can. Thus, if a computer can simulate understanding so can the man in the Chinese room. And yet still there is no literal understanding.


TheStatutoryApe said:
Whether or not you want to agree that Deep Blue is capable of any sort of "understanding" Deep Blue and programs like it are still proof that AI has broken out of the Chinese Room.

This requires some justification, especially given the fact that we haven't even been able to produce the Chinese Room (yet).


Tisthammerw said:
Let's do another variant of the Chinese room thought experiment. The questions are asking what to do in a given chess scenario. Using a complex set of instructions found in the rulebook, the man in the room writes down an answer (e.g. "Pawn to Queen's Four"). We can even have him carry out the same mathematical and logical operations the Deep Blue program does in binary code, and still he won't understand what's going on.

Agian you relagate the man to the postion of a singlar processor of information utilizing portions of his brain illsuited for the process that he is preforming.

Perhaps so, but you're missing the point. This is a clear instance of a program simulating chess without having any real understanding. You can make arguments showing how a human being can understand, but this has little relevance to the counterexample.

Certainly you realize that the man going through these voluminous manuels and using only his critical faculties for every byte of information is going to take an exceedingly ponderous time for every single move made?

And thus of course real computers would do it much faster. But this doesn't change the point of the argument (e.g. simulation of chess without real understanding) and if need be we could say that this person is an extraordinary autistic savant capable of processing inordinate amounts of information rapidly.


Remember Searle's argument was that the syntactic patterns of the information was not enough to come to an understanding. The fact that the man is unable to decifer meaning in the patterns when put into the shoes of the computer is supposed to prove this. If we prove that a man can be put in those shoes and find an understanding then this ruins the proof of the argument.

This doesn't work for a variety of reasons. First, we are not disputing whether or not a man can learn and understand, so showing that a man can understand by itself has little relevance. Second, even though we can find ways a human can literally understand, this doesn’t change the fact that we have a clear instance of a complex set of instructions giving valid output (e.g. meaningful answers, good chess moves) without literal understanding; and so the counterexample is still valid. Third, you have constantly failed to connect analogies of a human understanding with a computer literally understanding in ways that would overcome my counterexamples (e.g. the robot and program X, and you still haven't answered my questions regarding this).


But I am allowing the AI to have eyes. That's an essential part of my thought experiment. It's just that I replaced the part of the robot that would normally process the program with Bob...

That's the point.

Well then, please answer my questions regarding what happens when we replace Bob with the robot's ordinary processor. You haven't done that. Let's recap the unanswered questions:

Tisthammerw said:
One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possesses a magic ball of yarn? What?

I (still) await your answers.


Bob should represent the whole system...

Remember, I already responded to the systems reply (e.g. post #149). But I can do so again in a way that's more fitting for this thought experiment. Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).

If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions above.


I can directly perceive my own free will...

You can tell me that and the man in the chinese room will tell me that he can speak chinese too.

Er, no he won't. He’ll tell you he won't understand Chinese ex hypothesi, remember?


Many a conginitive science major will tell you that your pereptions are just illusions

And many a cognitive science major will tell me that my perceptions of free will are correct. The existence of free will falls into the discipline of metaphysics, not science (though there is some overlap here). Here's the trouble with the "illusion" claim: if I cannot trust my own perceptions, on what basis am I to believe anything, including the belief that free will does not exist? Hard determinism gets itself into some major intellectual difficulties when examined closely.
 
  • #160
moving finger said:
Tisthammerw said:
The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possesses real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese?

Why is it necessary for the room to be conscious in order for it to understand Chinese? I do not see that consciousness is a necessary prerequisite for understanding a language.

It depends on how you define "understanding," but I'm using the dictionary definition of "grasping the meaning of." Without consciousness there is nothing to grasp meaning. Suppose we knock a Chinese person unconscious. Will he understand anything we say to him? Suppose we speak to a pile of books. Will they understand anything? How about an arrangement of bricks?


Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.

IMHO this is incorrect reasoning. The man is not conscious of the fact that he understands the language, and yet he is perfectly capable of carrying out rational conversations in the language.

Which (I think) is a giant non sequitur for literal understanding. He can simulate a conversation using a complex set of rules manipulating input (e.g. if you see X replace with Y), but he clearly doesn't know the meaning of any Chinese word ex hypothesi. (If you wish to dispute this, you’ll have to show that such a thing is logically impossible, and that will be difficult to prove.) Similarly, a person can do the logical/arithmetic operations a computer can without understanding what the strings of binary digits mean.
 
  • #161
Tisthammerw said:
This doesn't work for a variety of reasons. First, we are not disputing whether or not a man can learn and understand, so showing that a man can understand by itself has little relevance. Second, even though we can find ways a human can literally understand, this doesn’t change the fact that we have a clear instance of a complex set of instructions giving valid output (e.g. meaningful answers, good chess moves) without literal understanding; and so the counterexample is still valid. Third, you have constantly failed to connect analogies of a human understanding with a computer literally understanding in ways that would overcome my counterexamples (e.g. the robot and program X, and you still haven't answered my questions regarding this).
You don't seem to understand how a thought experiment works. The thought experiment is supposed to present an analogous situation. The man in the room is supposed to represent a computer. The argument is supposed to show that even a man attempting to understand chinese while in the shoes of a computer is unable to do so and hence a computer obviously will not be able to understand chinese. If we can show that the room can be manipulated in such a way that reflects the situation of a computer and allows the man to understand chinese then the proof of the CR(the man not being able to understand while in the shoes of a computer) has been invalidated. If you don't understand this then there is no more point to discussing the CR.
 
  • #162
TheStatutoryApe said:
You don't seem to understand how a thought experiment works.

How's that?


The thought experiment is supposed to present an analogous situation. The man in the room is supposed to represent a computer.

True.


The argument is supposed to show that even a man attempting to understand chinese while in the shoes of a computer is unable to do so and hence a computer obviously will not be able to understand chinese.

That's not how I'm using this particular thought experiment. I'm using it as a counterexample: complex instructions are yielding valid output and still no literal understanding. Thus, a complex set of instructions yielding valid output does not appear sufficient for literal understanding to exist. This is of course analogous to a computer (since a computer uses a complex set instructions etc.), but that doesn't change my purpose of the thought experiment.


If we can show that the room can be manipulated in such a way that reflects the situation of a computer and allows the man to understand chinese then the proof of the CR(the man not being able to understand while in the shoes of a computer) has been invalidated.

True--if it reflects the situation of a computer. Given your remarks my subsequent arguments it isn't clear that this is the case, nor have you answered my questions thereof (e.g. the robot and program X). You seem to forget what I said in the post you replied to:

Tisthammerw said:
you have constantly failed to connect analogies of a human understanding with a computer literally understanding in ways that would overcome my counterexamples (e.g. the robot and program X, and you still haven't answered my questions regarding this).
 
  • #163
TheStatutoryApe said:
The problem here is that human language has what Searle might refer to as a "semantic" property, the meaning of the words. The meanings of these words, for the most part, are attached to things of the outside world which the man in the box has no access to and therefore no referance by which to learn those meanings.
But it is not necessary for "the man in the box" to learn any meanings. The facility to understand is not resident within "the man in the box". It is the entire CR which has the facility to understand. "the man in the box" is simply functioning as part of the input/output mechanism for the CR. The only reason the CR can understand Chinese is because it must already be loaded with (programmed with) enough data which allow it to form relationships between words, which allow it to draw analogies, which allow it to associate words, phrases and sentences with each other, in short which allow it to grasp the meaning of words. It is not necessary for the CR to have direct access to any of the things in the outside world which these words represent in order for it to understand Chinese. If I lock myself in a room and cut off all access to the outside world, do I suddenly lose the ability to understand English? No, because my ability to understand (developed over a period of years) is now a part of me, it is internalised, my ability to understand is now "programmed" within my brain, and it continues to operate whether or not I have any access to the outside world.

TheStatutoryApe said:
This is why I have argued here so strongly for allowing the man in the box sensory input.
This is not necessary. Sensory input may be required as a (optional) way of learning a language in the first place, but once an agent has learned a language (the CR has already learned Chinese) then continued sensory input is not required to maintain understanding.

TheStatutoryApe said:
Searle's Chinese Room may have started out as a sincere thought experiment but since I think the room has been shaped specifically to make the man in the room fail in his endevour to understand what is happening.
imho the reason Searle's CR argument continues to persuade some people is because the focus continues to be (wrongly) on just the "man in the box" rather than on the entire CR.

MF
 
Last edited:
  • #164
Hi Tisthammerw

Tisthammerw said:
I'm using the dictionary definition of "grasping the meaning of." Without consciousness there is nothing to grasp meaning.
With respect, this is anthropocentric reasoning and is not necessarily correct.
Ask the Chinese Room (CR) any question in Chinese, and it will respond appropriately. Ask it if it understands Chinese, it will respond appropriately. Ask it if it grasps the meanings of words, it will respond appropriately. Ask it if it understands semantics, it will respond appropriately. Ask it any question you like to "test" its ability to understand, to grasp the meanings of words, and it will respond appropriately. In short, the CR will behave just as any human would behave who understands Chinese. On what basis then do we have any right to claim that the CR does NOT in fact understand Chinese? None.

Why should "ability to understand" be necessarily associated with consciousness? Yes, humans (perhaps) need to be conscious in order to understand language (that is the way human agents are built), but that does not necessarily imply that consciousness is a pre-requisite of understanding in all possible agents.

A human being can perform simple arithmetical calculations, such as add two integers together and generate a third integer. A simple calculator can do the same thing. But an unconscious human cannot do this feat of addition - does that imply (using your anthropocentric reasoning) that consciousness is a pre-requisite for the ability to add two numbers together? Of course not. We know the calculator is not conscious and yet it can still add two numbers together. The association between cosnciousness and ability to add numbers together is therefore an accidental association peculiar to human agents, it is not a necessary association in all agents. Similarly I argue that the association between understanding and consciousness in human agents is an accidental association peculiar to humans, it is not a necessary association in all possible agents.

Tisthammerw said:
Suppose we knock a Chinese person unconscious. Will he understand anything we say to him?
Perhaps not, because consciousness and ability to understand are accidentally associated in humans. The same Chinese person will also not be able to add two numbers together whilst unconscious, but that does not imply that a simple pocket calculator must necessarily be conscious in order to add two numbers together.

Tisthammerw said:
Suppose we speak to a pile of books. Will they understand anything? How about an arrangement of bricks?
I hope you are being flippant here (if I thought you were being serious I might start to doubt your ability to understand English, or at least your ability to think rationally). Neither a pile of books nor a pile of bricks has the ability to take the information we provide (the sounds we make) and perform any rational processing of this information in order to derive any kind of understanding. Understanding is a rational information processing exercise, a static agent cannot be in a position to rationally process information therefore cannot understand. The pile of books here is in the same position as the unconscious chinese man - neither can understand what we are saying, and part of the reason for this is because they have no way of rationally processing the information we are providing.

In the following (to avoid doubt) we are talking about a man who internalises the rulebook.
Tisthammerw said:
Which (I think) is a giant non sequitur for literal understanding. He can simulate a conversation using a complex set of rules manipulating input (e.g. if you see X replace with Y), but he clearly doesn't know the meaning of any Chinese word ex hypothesi.
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.

Your argument works only if you define “understanding” as necessarily implying “conscious understanding” (again this is an anthropocentric perspective). If one of your assumptions is that any agent who underdstands must also be conscious of the fact that it understands (as you seem to be saying) then of course by definition an agent can understand only if it is also conscious of the fact that it understands. But I would challenge your assumption. To my mind, it is not necessary that an agent be conscious of its own understanding in order for it to be able to understand, just as an agent does not need to be conscious of the fact that it is carrying out an arithmetic operation in order to carry out arithmetic operations.

Tisthammerw said:
(If you wish to dispute this, you’ll have to show that such a thing is logically impossible, and that will be difficult to prove.)
With respect, if you wish to base your argument on this assumption, the onus is in fact on you to show that “understanding” necessally implies “conscious understanding” in all agents (and is not simply an anthropocentric perspective).

Tisthammerw said:
Similarly, a person can do the logical/arithmetic operations a computer can without understanding what the strings of binary digits mean.
Again you are implicitly assuming an anthropocentric perspective in that “understanding what the strings of binary digits mean” can only be done by an agent which is conscious of the fact that it understands.

With respect,

MF
 
  • #165
moving finger said:
But it is not necessary for "the man in the box" to learn any meanings.
Without knowing any meanings to any of the words it seems to make little sense to claim he can understand them. Perhaps we'll have to agree to disagree on that point.

The facility to understand is not resident within "the man in the box". It is the entire CR which has the facility to understand.
Ah, the systems reply. A couple problems here. Does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese? That doesn’t strike me as plausible. Second, Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.

I'm using the dictionary definition of "grasping the meaning of." Without consciousness there is nothing to grasp meaning.
With respect, this is anthropocentric reasoning and is not necessarily correct.
Anthropocentric? I never said only humans are capable of understanding.

Ask the Chinese Room (CR) any question in Chinese, and it will respond appropriately. Ask it if it understands Chinese, it will respond appropriately. Ask it if it grasps the meanings of words, it will respond appropriately. Ask it if it understands semantics, it will respond appropriately. Ask it any question you like to "test" its ability to understand, to grasp the meanings of words, and it will respond appropriately. In short, the CR will behave just as any human would behave who understands Chinese. On what basis then do we have any right to claim that the CR does NOT in fact understand Chinese?
Ask the man inside the room if he understands Chinese. The reply will be in the negative. Also, see above regarding the systems reply.

Which (I think) is a giant non sequitur for literal understanding. He can simulate a conversation using a complex set of rules manipulating input (e.g. if you see X replace with Y), but he clearly doesn't know the meaning of any Chinese word ex hypothesi.
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese.
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.

(If you wish to dispute this, you’ll have to show that such a thing is logically impossible, and that will be difficult to prove.)
With respect, if you wish to base your argument on this assumption, the onus is in fact on you to show that “understanding” necessally implies “conscious understanding” in all agents (and is not simply an anthropocentric perspective).
To me it seems pretty self-evident (if you understand what consciousness is). Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness. If something (e.g. an arrangement of bricks) does not know the meaning of words it cannot possesses literal understanding of them. Calling my belief “anthropocentric” doesn't change the logic of the circumstances. An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.

And you've avoided my request. Ex hypothesi he has no knowledge of what any Chinese word means. He utters sounds but has no idea what they mean. Again, you'll have to show that such a thing is logically impossible, and you haven't done anything close to that. Nor is such a claim of logical impossibility plausible.
 
  • #166
Tisthammerw said:
moving finger said:
But it is not necessary for "the man in the box" to learn any meanings.
Without knowing any meanings to any of the words it seems to make little sense to claim he can understand them. Perhaps we'll have to agree to disagree on that point.
With respect, you have quoted me out of context here. Please check again my full reply in post #163 above on this point, which runs thus :
moving finger said:
But it is not necessary for "the man in the box" to learn any meanings. The facility to understand is not resident within "the man in the box". It is the entire CR which has the facility to understand. "the man in the box" is simply functioning as part of the input/output mechanism for the CR.
In other words (and at the risk of repeating myself), the ability "to understand" is not resident solely within the man in the box (the man is there simply to pass written messages back and forth, he could be replaced by a simply mechanism), the ability "to understand" is an emergent and dynamic property of the entire contents of the box, of which the man forms only a minor part. This is why it is not necessary for the man in the box to know the meanings of any words.
In the same way, individual neurons in your brain participate in the process of understanding that takes place in your brain, but the ability "to understand" is an emergent and dynamic property of your brain, of which each neuron forms only a minor part. It is not necessary (and indeed makes no sense) for anyone neuron to "know the meanings" of any words.
If you cannot see and understand this then (with respect) I am afraid that you have missed the whole point of the CR argument.
Tisthammerw said:
Does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese?
With respect, you seem to be ignoring my replies above – see for example post #164. I have made it quite clear that imho the association between “consciousness” and “ability to understand” (viz consciousness is a necessary pre-requisite to understanding) may be a necessary but accidental relationship in homo sapiens, and this does not imply that such a relationship is necessary in all possible agents. Please read again my analogy with the simple calculator, which runs as follows :
moving finger said:
A human being can perform simple arithmetical calculations, such as add two integers together and generate a third integer. A simple calculator can do the same thing. But an unconscious human cannot do this feat of addition - does that imply (using your anthropocentric reasoning) that consciousness is a pre-requisite for the ability to add two numbers together? Of course not. We know the calculator is not conscious and yet it can still add two numbers together. The association between cosnciousness and ability to add numbers together is therefore an accidental association peculiar to human agents, it is not a necessary association in all agents. Similarly I argue that the association between understanding and consciousness in human agents is an accidental association peculiar to humans, it is not a necessary association in all possible agents.
Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.
Again, I have already answered this in my post #164 above, thus :
moving finger said:
The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.
Tisthammerw said:
Anthropocentric? I never said only humans are capable of understanding.
Homo Sapiens is the only species that we “know” possesses consciousness. To be more correct, the only individual that I know who possesses consciousness is myself. I surmise that other humans possesses consciousness, but I challenge anyone to prove to another person that they are conscious. In the case of non-human species, I have no idea whether any of them are conscious or not. And I challenge anyone to prove that any non-human species is indeed conscious.
Tisthammerw said:
Ask the man inside the room if he understands Chinese. The reply will be in the negative.
See my first reply in this post. It makes no difference whether the man inside the room understands Chinese or not, this is the whole point. It is the entire room which possesses the understanding of Chinese. I do not wish to repeat the argument all over again, so please read the beginning of this post again.
.
Tisthammerw said:
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, you are not reading my posts correctly.
I am saying (in this very abstract theoretical case) that IF Searle could successfully internalise and implement the rulebook within his body, then the physical body of Searle understands Chinese (because he has internalised the rulebook) – ask him any question in Chinese and he will provide a rational response in Chinese, using that internalised rulebook. Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail. The body of Searle thus understands Chinese. The only thing he does not possesses is that he is not CONSCIOUS of the fact that he understands Chinese. He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese, but he is not CONSCIOUS of the fact that he knows the meanings of these words.
All of this assumes that Searle could INTERNALISE the rulebook and implement the rulebook internally within his person without being conscious of the details of what he is doing – whether this is possible in practice or not I do not know (but it was Searle who suggested the internalisation, not me!)
Tisthammerw said:
That isn't logical.
Imho it is completely logical.
Tisthammerw said:
To me it seems pretty self-evident (if you understand what consciousness is).
It also seems self-evident to me that understanding is an emergent property of a dynamic system, as is consciousness, and the two may be associated (as in home sapiens) but there is in principle no reason why they must be associated in all possible agents.
Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this does not follow. All you have shown here is that consciousness is associated with understanding in home sapiens. You have NOT shown that understanding is impossible without consciousness in all possible agents.
Tisthammerw said:
If something (e.g. an arrangement of bricks) does not know the meaning of words it cannot possesses literal understanding of them.
An arrangementt of bricks is a static entity. Understanding is a dynamic process. Please do not try to suggest that my arguments imply a static arrangement of bricks possesses understanding.
Tisthammerw said:
An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.
I never said it could.
Please do try not to misread or misquote. I said that imho an agent need not necessarily be conscious in order to understand the meaning of words. You have not proven otherwise.

(for the avoidance of doubt, in the following we are discussing a man who internalises the rulebook)
Tisthammerw said:
And you've avoided my request. Ex hypothesi he has no knowledge of what any Chinese word means. He utters sounds but has no idea what they mean. Again, you'll have to show that such a thing is logically impossible, and you haven't done anything close to that. Nor is such a claim of logical impossibility plausible.
Once again you seem not to bother reading my posts. I have answered (in post #164) as follows :
moving finger said:
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.
Your argument works only if you define “understanding” as necessarily implying “conscious understanding” (again this is an anthropocentric perspective). If one of your assumptions is that any agent who underdstands must also be conscious of the fact that it understands (as you seem to be saying) then of course by definition an agent can understand only if it is also conscious of the fact that it understands. But I would challenge your assumption. To my mind, it is not necessary that an agent be conscious of its own understanding in order for it to be able to understand, just as an agent does not need to be conscious of the fact that it is carrying out an arithmetic operation in order to carry out arithmetic operations.
With respect, if you wish to base your argument on this assumption, the onus is in fact on you to show that “understanding” necessally implies “conscious understanding” in all agents (and is not simply an anthropocentric perspective).
Now, can you substantiate your claim that consciousness is a necessary pre-requisite for understanding in all possible agents (not simply in homo sapiens)? If you cannot, then your argument that the CR does not understand is based on faith or belief, not on rationality.
As always, with respect,
MF
 
Last edited:
  • #167
I believe one day AI will become far more powerful than the human brain. I cannot explain why I believe this with words, I just think that, given enough time, it will happen.
 
  • #168
tomfitzyuk said:
I believe one day AI will become far more powerful than the human brain. I cannot explain why I believe this with words, I just think that, given enough time, it will happen.

Back around the 70's I agreed with this. My reasoning was that hardware and software were both being improved at an exponential pace, and there was no obvious upper limit to their power short of Planck's constant, while human brains were evolving at a much slower pace.

But since the "gene explosion" of the 80's I have revised my view. Nowadays it appears there is a Moore's law analog for tinkering with our own genetic inheritance, so our great grandkids, if we survive, may become smarter at the same or greater pace than AI's are.

Added: In view of the new posting guidlines, I should specify my definitions. Obviously I consider human intelligence to be simply a function of brain (and other body) structure and action, under control of genes and gene expressions. So the human side and the AI side, for me, are comparable. If you want to develop AI intelligence to any degree, I see no theoretical reason why you should not be able to, given sufficient time and skill. In particular I reject, as I have posted many times before, the idea that Goedelian incompleteness or anything Chaitin has demonstrated about digital limitations, constitute a hard cap. Brains are not necessarilty digital, and AI's need not be.
 
Last edited:
  • #169
moving finger said:
With respect, you have quoted me out of context here.
Yes and no. I took that part of the quote and responded to it, at the time not knowing you were using the systems reply. I subsequently responded to the systems reply (neglecting to modify the previous quote) and this made my response to the first quote relevant (see below).


Still, I admit that your complaint has some validity. I thus apologize.


In other words (and at the risk of repeating myself), the ability "to understand" is not resident solely within the man in the box (the man is there simply to pass written messages back and forth, he could be replaced by a simply mechanism), the ability "to understand" is an emergent and dynamic property of the entire contents of the box
But I can use the same response I did last time. Let the man internalize the contents of the Chinese room; suppose the man memorizes the rulebook, the stacks of paper etc. He still doesn't understand the Chinese language. Thus,

Without knowing any meanings to any of the words it seems to make little sense to claim he can understand them.
And my response applies.


With respect, you seem to be ignoring my replies above – see for example post #164. I have made it quite clear that imho the association between “consciousness” and “ability to understand”...
With respect, you seem to have ignored my replies above - see for example post #165. I have made it quite clear that I believe this relationship is necessary in all possible agents by virtue of what consciousness means. Please read my explanation, which runs as follows:


Tisthammerw said:
To me it seems pretty self-evident (if you understand what consciousness is). Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness. If something (e.g. an arrangement of bricks) does not know the meaning of words it cannot possesses literal understanding of them. Calling my belief “anthropocentric” doesn't change the logic of the circumstances. An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.

Now, moving on…


Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.

Again, I have already answered this in my post #164 above
The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.
Again, I have already answered this in my post #165 above


(I have reproduced my response for your convenience.)

Tisthammerw said:
The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese.
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.
To which you responded (in post #166):


Tisthammerw said:
That isn't logical.
Imho it is completely logical.
Imho you need to look up the law of noncontradiction.
Homo Sapiens is the only species that we “know” possesses consciousness.
Not at all (given how I defined it earlier). My pet cat for instance possesses consciousness (e.g. capable of perception).


To be more correct, the only individual that I know who possesses consciousness is myself. I surmise that other humans possesses consciousness, but I challenge anyone to prove to another person that they are conscious.
True, there is that epistemelogical problem of other minds. But for computers we at least have logic and reason to help guide us (e.g. the Chinese room and variants thereof).


Tisthammerw said:
Ask the man inside the room if he understands Chinese. The reply will be in the negative.
See my first reply in this post. It makes no difference whether the man inside the room understands Chinese or not, this is the whole point. It is the entire room which possesses the understanding of Chinese. I do not wish to repeat the argument all over again, so please read the beginning of this post again.
See my previous post. It makes a big difference whether the man (when he internalizes and becomes the system) understands Chinese or not. This is the whole point. I do not wish to repeat the argument all over again, so please read the beginning of my posts again.


But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, you are not reading my posts correctly.
I am saying (in this very abstract theoretical case) that IF Searle could successfully internalise and implement the rulebook within his body, then the physical body of Searle understands Chinese (because he has internalised the rulebook) – ask him any question in Chinese and he will provide a rational response in Chinese, using that internalised rulebook.
I'm not sure you've been reading my posts correctly. Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"? If you're a physicalist (as I suspect) the answer would seem to be no.


Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail.
Really? There is no test of understanding that he will fail? Let's ask him (in English) what Chinese word X means, and he will (quite honestly) reply "I have no idea." And of course, he is right. He doesn't know a word of Chinese.

He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese
You certainly haven't been reading my posts correctly. When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now). Please don’t twist the meaning of what I say again. This is getting tiresome.


Now, given my definition of the word “understand,” does the man understand a single word of Chinese? No, that is obviously not the case here. The man does not know a word of Chinese. If you ask him (in English) if he understands Chinese, his honest answer will be “no.” The part(s) of him that possesses understanding of words does not understand a single word of Chinese. When I say he “knows the meaning of the words” I did not mean he can use a giant rulebook written in English on how to manipulate Chinese characters to give valid output. (Note: “valid” in this case means that the output constitutes satisfactory answers [i.e. to an outside observer the answers seem “intelligent” and “rational”] to Chinese questions.)


Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this does not follow.

Yes it does. This is how I define consciousness. Given how I defined consciousness and understanding, it logically follows that literal understanding requires consciousness (regardless of what species one is). Look back at my definitions. When I say a computer cannot understand, I mean the definition of understanding that I have used. I'm not saying that a computer can't “understand” in some other sense (e.g. metaphorical, or at least metaphorical using my definition).
Please do try not to misread or misquote.
Ditto.


(for the avoidance of doubt, in the following we are discussing a man who internalises the rulebook)
Tisthammerw said:
And you've avoided my request. Ex hypothesi he has no knowledge of what any Chinese word means. He utters sounds but has no idea what they mean. Again, you'll have to show that such a thing is logically impossible, and you haven't done anything close to that. Nor is such a claim of logical impossibility plausible.
Once again you seem not to bother reading my posts. I have answered (in post #164) as follows :
disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese...

Once again you seem not to bother reading my posts. I have answered this (in post #165) as follows:


Tisthammerw said:
moving finger said:
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese.
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.
But then, you seemed to already know I responded to this. So why are you pretending that I didn't?


Now, can you substantiate your claim that consciousness is a necessary pre-requisite for understanding in all possible agents (not simply in homo sapiens)?


I could say, "Once again you seem not to bother reading my posts" and quote you my argument of why consciousness is a necessary pre-requisite for understanding in all possible agents, but I think that game is getting old. Please understand the terms as I have used and defined them. Once an intelligent person does that, I think it’s clear that my argument logically follows.
 
Last edited:
  • #170
moving finger said:
the ability "to understand" is not resident solely within the man in the box (the man is there simply to pass written messages back and forth, he could be replaced by a simply mechanism), the ability "to understand" is an emergent and dynamic property of the entire contents of the box
Tisthammerw said:
Let the man internalize the contents of the Chinese room; suppose the man memorizes the rulebook, the stacks of paper etc. He still doesn't understand the Chinese language.
I disagree. Let us assume that it is “possible” for the man to somehow internalise the rulebook and to use this rulebook without indeed being conscious of all the details. The physical embodiment of the man now takes the place of the CR, and the physical embodiment of the man therefore DOES understand Chinese. The man may not be conscious of the fact the he understands Chinese (I have explained this before several times) but nevertheless he (as a physical entity) does understand.
Tisthammerw said:
Thus, without knowing any meanings to any of the words it seems to make little sense to claim he can understand them.
But he DOES know the meaning of the words (in the example where he internalises the rulebook) – even though he is not CONSCIOUS of the fact that he knows the meanings.
Tisthammerw said:
And my response applies.
And your response does not apply.
Tisthammerw said:
To me it seems pretty self-evident (if you understand what consciousness is).
Do you claim to understand what consciousness is?
Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this is faulty logic. You have shown (possibly) that some of the characteristics of consciousness may have something in common with some of tha characteristics of understanding, but this does not imply that consciousness is a necessary pre-requisite for understanding.
Tisthammerw said:
An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.
That’s obvious. But in your example (where the human agent internalises the rulebook), the physical embodiment of the agent DOES know the meaning of words, in the same way that the CR knew the meaning of words. The difference being (how many times do we have to go round in circles?) neither the CR nor the agent are conscious of the fact that they know the meanings of words.
Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.
But he doesn't know the meaning of any Chinese word!
Yes he DOES! He is not CONSCIOUS of the fact the he knows the meaning of any Chinese word, but the physical embodiment of that man “knows Chinese”.
Tisthammerw said:
Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, read my responses again. In all of this you are assuming that “understanding” entails “conscious understanding” and that “knowing” entails “conscious knowing”, which is an assumption not a fact.
Tisthammerw said:
Imho you need to look up the law of noncontradiction.
Imho you need to look up what entails a logical proof. You have not proven that understanding is a necessary pre-requisite to consciousness, you have assumed it (implicit in your definition of understanding). Your assumption may be incorrect.
moving finger said:
Homo Sapiens is the only species that we “know” possesses consciousness.
Tisthammerw said:
Not at all (given how I defined it earlier). My pet cat for instance possesses consciousness (e.g. capable of perception).
Is “perception” all that is required for consciousness? I don’t think so, hence your conclusion is a non sequitur.
moving finger said:
To be more correct, the only individual that I know who possesses consciousness is myself. I surmise that other humans possesses consciousness, but I challenge anyone to prove to another person that they are conscious.
Tisthammerw said:
True, there is that epistemelogical problem of other minds.
Thank you. Then you must also agree that you do not “know” whether your cat is conscious or not.
Tisthammerw said:
It makes a big difference whether the man (when he internalizes and becomes the system) understands Chinese or not.
In the case of the internalised rulebook, if you ask (in Chinese) the “entity” which has internalised the rulebook whether it understands Chinese then it WILL reply in the positive. Just as it will reply rationally to any Chinese question. Whether the man is “conscious” of the fact that he understands Chinese is not relevant.
Tisthammerw said:
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, you are not reading my posts correctly.
I am saying (in this very abstract theoretical case) that IF Searle could successfully internalise and implement the rulebook within his body, then the physical body of Searle understands Chinese (because he has internalised the rulebook) – ask him any question in Chinese and he will provide a rational response in Chinese, using that internalised rulebook.
Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?
There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.
Searle has not consciously assimiliated the rulebook, and there is nothing in Searle’s consciousness which understands Chinese. But there is more to Searle than Searle’s consciousness, and some physical part of Searle HAS necessarily internalised the rulebook and is capable of enacting the rulebook – it is THIS part of Searle (which is not conscious) which understands Chinese, and not the “Searle consciousness”.
Thus the answer to your question ‘Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?’ is “yes, there is a difference.” Your question contains an implicit assumption that "Searle understands Chinese" actually means "Searle’s consciousness understands Chinese", whereas "Searle's physical body understands Chinese" does not necessitate that his consciousness understands Chinese.
moving finger said:
Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail.
Tisthammerw said:
Really? There is no test of understanding that he will fail? Let's ask him (in English) what Chinese word X means, and he will (quite honestly) reply "I have no idea."
It should be obvious to anyone with any understanding of the issue that asking him a question in English is NOT a test of his ability to understand Chinese.
He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese
Tisthammerw said:
When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now).
you implicitly assume that understanding requires consciousness, but you have not shown this to be the case (except by defining understanding to suit your conclusion)
Tisthammerw said:
Now, given my definition of the word “understand,” does the man understand a single word of Chinese?
I dispute your definition. I do not agree that an agent must necessarily be “aware of the fact that it understands” in order to understand.
Tisthammerw said:
No, that is obviously not the case here. The man does not know a word of Chinese. If you ask him (in English) if he understands Chinese, his honest answer will be “no.”
Asking a quaetion in English is not a test of understanding of Chinese. Why do you refuse to ask him the same question in Chinese?
Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this does not follow, as already explained above, you have shown (possibly) that some of the characteristics of consciousness may have something in common with some of tha characteristics of understanding, but this does not imply that consciousness is a necessary pre-requisite for understanding.
Tisthammerw said:
Given how I defined consciousness and understanding, it logically follows that literal understanding requires consciousness (regardless of what species one is).
Naturally, if one defines “X” as a being a pre-requisite of “Y” then it is trivial to show that X is a prerequisite of Y. But I have disputed your definition of understanding.
moving finger said:
Now, can you substantiate your claim that consciousness is a necessary pre-requisite for understanding in all possible agents (not simply in homo sapiens)?
Tisthammerw said:
I could say, "Once again you seem not to bother reading my posts" and quote you my argument of why consciousness is a necessary pre-requisite for understanding in all possible agents, but I think that game is getting old.
Your argument is invalid because you implicitly assume in your definition of understanding that understanding requires consciousness. I dispute your definition of understanding.
With respect
MF
 
  • #171
Some parts I have already addressed in my previous post, so I'll trim some of that.



Do you claim to understand what consciousness is?

Well, this is what I mean by consciousness (see my quote below):

Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.


No, this is faulty logic. You have shown (possibly) that some of the characteristics of consciousness may have something in common with some of tha characteristics of understanding, but this does not imply that consciousness is a necessary pre-requisite for understanding.

It does given how I defined consciousness and understanding. If a person did not possesses the aspects of consciousness as I defined it (e.g. the aspects of perception and awareness), it would be impossible to have literal understanding (given how I defined understanding). To recap what I said earlier:

When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now).

So exactly why doesn't my argument (regarding consciousness being necessary for understanding) logically follow, given the definition of the terms used?


Imho you need to look up what entails a logical proof.

Let's look at what I said in context.

But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.

Let's see the response:


Tisthammerw said:
moving finger said:
Tisthammerw said:
That isn't logical.
Imho it is completely logical.

Imho you need to look up the law of noncontradiction.

Can you see why the denial of what I said can be taken as a violation of the law of noncontradiction?



Tisthammerw said:
Not at all (given how I defined it earlier). My pet cat for instance possesses consciousness (e.g. capable of perception).

Is “perception” all that is required for consciousness? I don’t think so, hence your conclusion is a non sequitur.

My conclusion logically follows given how I defined conscoiusness. You yourself may have something different in mind, but please be aware of how I am using the term.


Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?

There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.

So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?


Searle has not consciously assimiliated the rulebook

Technically that's untrue. He has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives. He just doesn't understand any word of Chinese (given how I defined understanding...).


But there is more to Searle than Searle’s consciousness, and some physical part of Searle HAS necessarily internalised the rulebook and is capable of enacting the rulebook – it is THIS part of Searle (which is not conscious) which understands Chinese, and not the “Searle consciousness”.

The part that has internalized the rulebook is his conscious self, remember?

moving finger said:
Tisthammerw said:
Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail.

Really? There is no test of understanding that he will fail? Let's ask him (in English) what Chinese word X means, and he will (quite honestly) reply "I have no idea."

It should be obvious to anyone with any understanding of the issue that asking him a question in English is NOT a test of his ability to understand Chinese.

I think I'll have to archive this response in my “hall of absurd remarks” given how I explicitly defined the term “understanding.”

Seriously though, given how I defined understanding, isn't it clear that this person obviously doesn't know a word of Chinese? Do you think he's lying when he says he doesn't know what the Chinese word means?

Tisthammerw said:
He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese

You certainly haven't been reading my posts correctly. When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now).

To which you have replied:

you implicitly assume that understanding requires consciousness, but you have not shown this to be the case (except by defining understanding to suit your conclusion)

Indeed I have done that, but this doesn't change the fact of my conclusion. Given how I defined understanding, consciousness is a prerequisite. And my claim is that computers as we know them (as in the robot and program X story) cannot possibly have literal understanding in the sense that I am referring to simply by “running the right program.” Could it have understanding in some other, metaphorical sense (at least, metaphorical to my definition)? Maybe, but that is another issue. My original point about a computer not being able to perceive the meaning of words (i.e. "understand") stands as valid. The computer cannot literally understand any more than the man in the Chinese room understands a word of Chinese.


Your argument is invalid because you implicitly assume in your definition of understanding that understanding requires consciousness.

So, my argument is invalid because it is a tautology? Tautologies are by definition true and are certainly logically valid (i.e. if the premise is true the conclusion cannot fail to be true).


I dispute your definition of understanding.

Too bad for you. But this is what I mean when I use the term “understanding.” Thus (using my definition) if a person understands a Chinese word, it is necessarily the case that the person is aware of what the Chinese word means. This is clearly not the case with the man in the Chinese room. He doesn't understand a word of Chinese. Again, perhaps computers can have understanding in some metaphorical sense, but it seems that a computer cannot understand in the sense that I mean when I use the term.

It sounds like our disagreement has been a misunderstanding of terms. Can we agree that a computer cannot “understand” given what I mean when I use the word?
 
  • #172
Tisthammerw said:
So exactly why doesn't my argument (regarding consciousness being necessary for understanding) logically follow, given the definition of the terms used?
Allow me to paraphrase your argument, to ensure that I have the correct understanding of what you are trying to say.
According to you (please correct me if I am wrong),
Consciousness = sensation, perception, thought, awareness
Understanding = grasp meaning of = knows what words mean = perceives meaning of words = is aware of truth of words
Firstly, with respect, as I have mentioned already, in the case of consciousness this is a listing of some of the “components of consciousness” rather than a definition of what consciousness “is”. It is rather like saying “a car is characterised by wheels, body, engine, transmission”. But this listing is not a definition of what a car “is”, it is simply a listing of some of the components of a car.
Secondly, I do not see how you make the transition from “Consciousness = sensation, perception, thought, awareness” to the conclusion “consciousness is a necessary pre-requisite for understanding”. Simply because consciousness and understanding share some characteristics (such as “awareness”)? But to show that two concepts share some characteristics is not tantamount to showing that one is a necessary pre-requisite of the other. A car and a bicycle share the common characteristic that both entities have wheels, but this observation tells us nothing about the relationship between these two entities.
Tisthammerw said:
Can you see why the denial of what I said can be taken as a violation of the law of noncontradiction?
Your argument is based on a false assumption, which is that “he knows the meaning of the words without knowing the meaning of the words” – and I have repeated many times (but you seem to wish to ignore this) this is NOT what is going on here. Can you see why your argument is invalid?
Tisthammerw said:
My conclusion logically follows given how I defined conscoiusness.
With respect, you have not shown how you arrive at the conclusion “my pet cat possesses consciousness”, you have merely stated it.
Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?
moving finger said:
There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.
Tisthammerw said:
So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?
I did not say it does not take place in his brain. Are you perhaps assuming that brain is synonymous with consciousness?
Let Searle (or someone else) first tell me “where he has internalised the rulebook”, and I will then be able to tell you where the understanding takes place (this is Searle’s thought experiment, after all)
Tisthammerw said:
The part that has internalized the rulebook is his conscious self
I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not necessarily as a part of his consciousness. In the same way, memories in the brain exist as a part of us, but are not necessarily part of our consciousness (unless and until such time as they are called into consciousness and are processed there).
(In the same way, the man in the CR participates in the Chinese conversation, but need not be consciously aware of that fact).
moving finger said:
It should be obvious to anyone with any understanding of the issue that asking him a question in English is NOT a test of his ability to understand Chinese.
Tisthammerw said:
I think I'll have to archive this response in my “hall of absurd remarks” given how I explicitly defined the term “understanding.”
Then you would be behaving illogically. What part of “grasp the meaning of a word in Chinese” (ie an understanding of Chinese, by your own definition) would necessarily mean that an agent could respond to a question in English?
Tisthammerw said:
given how I defined understanding, isn't it clear that this person obviously doesn't know a word of Chinese?
First define “person”. With respect I suggest by “person” you implicitly mean “consciousness”, and we both agreee that the consciousness that calls itself “Searle” does not understand Chinese. Does that make you happy?
Nevertheless, there is a part of the physical body of Searle (which is not part of his consciousness) which does understand Chinese. This is the “internalised rulebook”. You obviously will not accept this, because in your mind you are convinvced that consciousness is a necessary pre-requisite for understanding – but this is something that you have (with resepct) assumed, and not shown rigorously.
Tisthammerw said:
Do you think he's lying when he says he doesn't know what the Chinese word means?
The consciousness calling itself Searle does not know the meaning of a word of Chinese.
But there exists a part of the physical body of Searle (which is not conscious) which does understand Chinese – this is the part that has internalised the rulebook.
Tisthammerw said:
Given how I defined understanding, consciousness is a prerequisite.
You have not shown that consciousness is a prerequisite, you have assumed it, and I explained why above.
Tisthammerw said:
The computer cannot literally understand any more than the man in the Chinese room understands a word of Chinese.
Are you referring once again to the original CR argument, where the man is simply passing notes back and forth? If so, this man indeed does not understand Chinese, nor does he need to.
Tisthammerw said:
So, my argument is invalid because it is a tautology? Tautologies are by definition true and are certainly logically valid (i.e. if the premise is true the conclusion cannot fail to be true).
Do you agree your argument is based on a tautology?
moving finger said:
I dispute your definition of understanding.
Tisthammerw said:
But this is what I mean when I use the term “understanding.”
Then we will have to agree to disagree, because it’s not what I mean
Tisthammerw said:
Thus (using my definition) if a person understands a Chinese word, it is necessarily the case that the person is aware of what the Chinese word means.
Let me re-phrase that :
“Thus (using your definition) if a consciousness understands a Chinese word, it is necessarily the case that the consciousness is aware of what the Chinese word means.”
I agree with this statement.
But imho the following is also correct :
“If an agent understands a Chinese word, it is not necessarily the case that consciousness is associated with that understanding.”
This is clearly the case with the Chinese Room. The man is not conscious of understanding a word of Chinese.
Tisthammerw said:
Can we agree that a computer cannot “understand” given what I mean when I use the word?
If you mean “can we agree that a non-conscious agent cannot understand given the assumption that consciousness is a necessary pre-requisite of understanding” then yes I agree that this follows - but this is a trivial argument (in fact a tautology).

The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.
With the greatest respect,
MF
 
  • #173
moving finger said:
Allow me to paraphrase your argument, to ensure that I have the correct understanding of what you are trying to say.
According to you (please correct me if I am wrong),
Consciousness = sensation, perception, thought, awareness
Understanding = grasp meaning of = knows what words mean = perceives meaning of words = is aware of truth of words

Fairly accurate, except that the last part should be "is aware of the truth of what the words mean."


Firstly, with respect, as I have mentioned already, in the case of consciousness this is a listing of some of the “components of consciousness” rather than a definition of what consciousness “is”.

I wouldn't say that. If an entity has a state of being such that it includes the characteristics I described, the entity has consciousness (under my definition of the term).


Secondly, I do not see how you make the transition from “Consciousness = sensation, perception, thought, awareness” to the conclusion “consciousness is a necessary pre-requisite for understanding”.

Simple. Understanding (as how I defined it) requires that the entity be aware of what the words mean (this would also imply a form of perception, thought etc.). This would imply the existence of consciousness (under my definition of the term). I’ll recap the definitions near the end of this post.


Tisthammerw said:
Can you see why the denial of what I said can be taken as a violation of the law of noncontradiction?

Your argument is based on a false assumption, which is that “he knows the meaning of the words without knowing the meaning of the words”

But I was not discussing the argument in the section I was referring to. As I mentioned in post https://www.physicsforums.com/showpost.php?p=790665&postcount=171".


[quote="Tisthammerw”]My conclusion logically follows given how I defined conscoiusness."

With respect, you have not shown how you arrive at the conclusion “my pet cat possesses consciousness”, you have merely stated it.
[/quote]

Not at all. My argument went as follows (some premises were implicit):

  1. If my cat possesses key characteristic(s) of consciousness (e.g. perception) then my cat possesses consciousness (by definition).
  2. My cat does possesses those attribute(s).
  3. Therefore my cat has consciousness.


Tisthammerw said:
moving finger said:
Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?
There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.

So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?

Your response:

I did not say it does not take place in his brain.

Then perhaps you can understand why I asked the question.


Are you perhaps assuming that brain is synonymous with consciousness?

Are you?


Let Searle (or someone else) first tell me “where he has internalised the rulebook”, and I will then be able to tell you where the understanding takes place (this is Searle’s thought experiment, after all)

In the physical plane, it would be the brain would it not?


Tisthammerw said:
The part that has internalized the rulebook is his conscious self

I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not necessarily as a part of his consciousness.

Perhaps we are confusing each other's terms. When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives. What do you mean by it?


Then you would be behaving illogically. What part of “grasp the meaning of a word in Chinese” (ie an understanding of Chinese, by your own definition) would necessarily mean that an agent could respond to a question in English?

Because understanding Chinese words (as I have defined it) means he is aware of what the Chinese words mean, and thus (since he knows and understands English) he can tell me in English if he understands Chinese.


Tisthammerw said:
given how I defined understanding, isn't it clear that this person obviously doesn't know a word of Chinese?
First define “person”.

An intelligent, conscious individual.

With respect I suggest by “person” you implicitly mean “consciousness”, and we both agreee that the consciousness that calls itself “Searle” does not understand Chinese. Does that make you happy?

Happier anyway.


Nevertheless, there is a part of the physical body of Searle (which is not part of his consciousness) which does understand Chinese.

That is not possible under my definition of understanding. There is no part of Searle--stomach, arm, liver, or whatever--that is aware of what the Chinese words mean.


Tisthammerw said:
So, my argument is invalid because it is a tautology? Tautologies are by definition true and are certainly logically valid (i.e. if the premise is true the conclusion cannot fail to be true).

Do you agree your argument is based on a tautology?

It depends on what you mean by "tautology." If you are referring to an argument that is true by virtue of the definitions involved due to a repetition of an idea(s) (e.g. "all bachelors are unmarried"), then I agree that my argument is a tautology.

Tisthammerw said:
Thus (using my definition) if a person understands a Chinese word, it is necessarily the case that the person is aware of what the Chinese word means.

Then we will have to agree to disagree, because it’s not what I mean

Let me re-phrase that :
“Thus (using your definition) if a consciousness understands a Chinese word, it is necessarily the case that the consciousness is aware of what the Chinese word means.”
I agree with this statement.
But imho the following is also correct :
“If an agent understands a Chinese word, it is not necessarily the case that consciousness is associated with that understanding.”
This is clearly the case with the Chinese Room.

This clearly cannot be the case with the Chinese Room--if we use my definition of understanding. He cannot even in principle perceive the meaning of any Chinese word.

Let’s recap my definition of understanding.

When I say “understand” I mean “grasp the meaning of.” When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean. When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean.

Let’s recap my definition of consciousness.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness.


The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.

Then do you also disagree with the belief that all bachelors are unmarried? Remember what I said before about tautologies...

To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

(Note: “valid” in this case means that the output constitutes satisfactory answers [i.e. to an outside observer the answers seem “intelligent” and “rational”] to Chinese questions.)
 
Last edited by a moderator:
  • #174
I said yes to the poll question. If we weren't created by God and there is no metaphysical component to our intelligence, if we are nothing but biological machines, then the answer is definitely yes. If there is a metaphysical component to us then maybe yes, maybe no but if no it would come pretty darn close, close enough to fool almost anyone, like in Blade Runner.

I am of the belief that we operate by knowing rules. Everything we do is governed by rules. There is a group of AI researchers that believe this too and are trying to create intelligence by loading their construct with as many rules as they can. Most of what we are is rules and facts. Rules and facts can simulate whatever there is of us that isn't rules and facts and make AI appear to be self aware and intelligent (random choice for example or emotion). If you don't believe this, name something people do that isn't or couldn't be governed by rules and facts.
 
  • #175
And we're still attempting to define this..

ok I don't claim to be a neuroscientist, so bear with me

In order to understand conciousness we need to understand the processes that come into play.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness

asuming suficient technological advance, we can grant any of these characteristics to a machine, inlcuding, but not exclusive to: sensation, perception(learning through observation, eg point at a chair and say "chair")

As far as I can tell TH, your definition of conciousness is the ability to "understand" words and meaning through an associative process, which is the way we percieve it. Our brain processes input from our external senses, then compares it to our past experiences before determining a reaction, if any. EG, when we hear the word chair, our ears send this signal to our brain which then searches for that word, and if found associates it with the visual, aural, and other sensory input from memory. Then we sit in the chair. If we had never heard the the word "chair before, then our brain proceses this as an unknown, and as a response will cause us to atttempt to ascertain what this object is, what it's use is, what it feels like, etc.

that's a very rough overview, but it will do. What you are saying is that a machine understands the word, due the word "chair" being in it's memory chip. But it 's the same process. The machine's video perceives a chair. The cpu analyzes the external input and runs it against it's memory banks to see if it knows this object. If so it reacts accordingly, if not it attempts to ascertain the purpose of the object. It's the same process. Unless you're talking about a specific aspect of understanding, such as emotion, there is no difference.

TH your chinese room is inflexible and does not take into account that the chinese man, as it relates to our purpose, IS capable of learning chinese. What you're referring to is that even if the chinese man knew the words, he couldn't associate the reference that the words make. However, as it relates to AI, he is capable of learning the words after being taught a few letters. So through deduction and trial and error, he will deduce the alphabet, then meaning of the words. And when I say meaning, I mean through association. (point at chair-this is a chair). Then he will leave the room and through description and observation be able to deduce their meanings.

Yes, if we stick by the strict rules of the chinese room it makes sense. But the chinese room contradicts the capabilities of AI. Therefore it cannot fully explain any limitations of AI.
 

Similar threads

Replies
1
Views
1K
Replies
21
Views
2K
Replies
9
Views
2K
Replies
76
Views
9K
Replies
18
Views
3K
Replies
4
Views
1K
Replies
3
Views
1K
Back
Top