Can Artificial Intelligence ever reach Human Intelligence?

In summary: If we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) Alternatively: AI can never reach human intelligence, because it would require a decision-making process that a computer cannot replicate.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #246
chound said:
I've always wondered, aren't we also programmed to do things? Like we have to get up at 6 am, take a bath, go to school/office, etc.?
Or at least when we are infants, we do what we are told to do, just like computers. So is AI better than a child's intelligence?
Did you always do what you were told when you were an infant?

If so, I wish my kids had been more like you! :smile:

MF
 
  • #247
Tisthammerw said:
Remember, an analytic statement must stand or fall by itself – you cannot “make a synthetic statement analytic” by adding qualifications (such as your preferred definition) to it in parentheses (otherwise I could claim that I can make ALL statements analytic, simply by defining the terms the way I wish).

In that case no statements are analytic, because they all depend on how one defines the words. Whether a statement can be considered “properly” analytic in the usual sense depends on whether the definitions are conventional or unconventional. I really don’t think mine are all that unusual; I suspect that if we took a Gallup poll, the majority of people would say “Yes, this matches my definition of understanding.” But since it seems unlikely we will agree on this point, let’s simply recognize that “understanding requires consciousness” is an analytic statement if we use my definitions (not necessarily everyone else’s). Or if you prefer, we could call my definitions of “understanding” and “consciousness” “TH-understanding” and “TH-consciousness” respectively. In that case “TH-understanding requires TH-consciousness.” It sounds quite odd to me, but if it will cause you to stop making ignoratio elenchi remarks I am willing to do it.



moving finger said:
D’oh! :rolleyes: What did I say already? See post #238:
We can only agree on which statements are analytic and which are not if we firstly agree on the definitions of the terms we are using!

Again, when I said “understanding requires consciousness” I was explicitly referring to my definitions, not necessarily everybody else’s.


Tisthammerw said:
I am only referring to computers that follow the “standard” model (e.g. like that of a Turing machine). In that case I think the program X argument works quite nicely, because it represents any possible program that would provide understanding.

moving finger said:
My reply is basically the same – you have not shown, either here or elsewhere, either that “all possible Turing machines are not conscious” or that “all possible Turing machines do not possess understanding”.

You’re forgetting something (something I suggested in the very quote you responded to): program X stands for any program that would allegedly produce understanding (the kind of understanding I am referring to is what you have called TH-understanding). And yet we see that program X is run without TH-understanding.


Tisthammerw said:
Applying this to the Chinese language, ask Bob if he understands (again, using the “TH” definition) what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X.

moving finger said:
Ahhhh, I see. Your argument is thus “a non-conscious agent does not TH-Understand, because we define TH-Understanding as requiring consciousness”.

Yes and no. The existence of consciousness is not, strictly speaking, a part of the definition of TH-understanding, though it is true that TH-understanding requires consciousness. In terms of a man understanding words, here is the definition I am using:
  • The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.

And here is how I define consciousness:

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

My justification that “understanding requires consciousness” is an analytic statement comes from instantiating a few characteristics:

  • Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

So, “understanding requires consciousness” is an analytical statement (with the definitions I am using). Or if you prefer, “TH-understanding requires consciousness.”

moving finger said:
That is a very impressive and insightful argument, I must say.
Do you have anything more useful to say, since I am not interested in more tautological timewasting?

If you consider the question of whether a computer can have TH-understanding (perceive the meaning of words etc.) what the @#$% are you doing replying to my posts?


Tisthammerw said:
I was not referring to the analytic………

……. not that circular reasoning isn’t a fallacy.

moving finger said:
Groan – not still on about that are you? :rolleyes:

I am. I think it is important for you to understand what circular reasoning is so that you don’t recklessly charge people with it (as you have done here).


moving finger said:
IF the sum total of your position on understanding is based on the argument “a non-conscious agent does not TH-Understand, because we define TH-Understanding as requiring consciousness”

See above and post #239 which among other things points out:

Note my argument (that justifies “understanding requires consciousness” as an analytic statement) takes the following format:

1. “This is what I mean by understanding…”
2. “This is what I mean by consciousness…”

Therefore: understanding requires consciousness (in the sense that I mean when I use the terms).

This is not a circular argument. Why? Because the conclusion is not a restatement of any single premise. It takes both premises for the conclusion to logically follow. You may claim that, if we assume all of the premises to be true (and they are: this is what I mean by understanding and consciousness), we assume the conclusion; but this is going to be true for any valid deductive argument.
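
To put the shape of the argument in symbols (a schematic formalization; the predicate names are placeholders rather than the full definitions above, so read them as shorthand only):

1. ∀x (Understands(x) → Aware(x)) [premise: from the definition of understanding]
2. ∀x (Aware(x) → Conscious(x)) [premise: from the definition of consciousness]
∴ ∀x (Understands(x) → Conscious(x)) [from 1 and 2, by hypothetical syllogism]

The conclusion restates neither premise on its own; it follows only from the two together, which is the same structure as any chained deduction.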
 
  • #248
moving finger said:
Hmmmm, that, I must say, is a very deep and thoughtful conclusion. I wonder why the rest of us didn't see that? :rolleyes:
MF

I wonder too.
 
  • #249
moving finger said:
We can only agree on which statements are analytic and which are not if we firstly agree on the definitions of the terms we are using!
Tisthammerw said:
Again, when I said “understanding requires consciousness” I was explicitly referring to my definitions, not necessarily everybody else’s
(a) you have already agreed that your definition of understanding is not the only definition
(b) I have said many times that I do not agree with your definition. In my definition, it is not clear that understanding requires consciousness.

Since we do not agree on the definitions of the terms we are using, it follows that we do not necessarily agree that a statement using those terms is analytic!

It’s so blatantly obvious that it is worth repeating:

We can only agree on which statements are analytic and which are not if we firstly agree on the definitions of the terms we are using

Do you understand this?

Tisthammerw said:
program X stands for any program that would allegedly produce understanding (the kind of understanding I am referring to is what you have called TH-understanding). And yet we see that program X is run without TH-understanding.
If the agent is not conscious it cannot possess TH-Understanding, by definition.
This does not mean that all possible computers are incapable of possessing either consciousness or TH-Understanding, and you have not shown this to be the case.

Tisthammerw said:
The existence of consciousness is not, strictly speaking, a part of the definition of TH-understanding.
Of course it is. That you have “split” the definition such that “TH-understanding requires awareness, and awareness requires consciousness” does not mean that consciousness is not part of the definition of TH-understanding. By your definition of TH-Understanding, TH-Understanding requires consciousness. Period.

Tisthammerw said:
So, “understanding requires consciousness” is an analytical statement (with the definitions I am using). Or if you prefer, “TH-understanding requires consciousness.”
“TH-understanding requires consciousness” is another way of saying “understanding requires consciousness, with the definition of understanding that Tisthammerw is using”.

Tisthammerw said:
If you consider the question of whether a computer can have TH-understanding (perceive the meaning of words etc.) what the @#$% are you doing replying to my posts?
Does this question make any sense to you? It doesn’t to me.

(One reason I am replying to your posts is that you keep asking me questions, and the words “Please answer my question” keep cropping up. I guess I’m just too accommodating.)

Tisthammerw said:
This is not a circular argument. Why? Because the conclusion is not a restatement of any single premise.
That you have split your definition between two premises changes nothing. The premises combined result in the same thing: you choose to define understanding such that it requires consciousness. You have not shown that understanding requires consciousness, you have simply defined it that way.

Using your “logic”, I could define understanding to be anything I like (“understanding requires 3 heads”, for example), and then use your deductive argument to show that it follows that understanding requires 3 heads. Are you suggesting this would be a sound argument?

MF
 
  • #250
chound said:
I've always wondered, aren't we also programmed to do things? Like we have to get up at 6 am, take a bath, go to school/office, etc.?
Or at least when we are infants, we do what we are told to do, just like computers. So is AI better than a child's intelligence?

The thing is that once we truly replicate a child's intelligence, human-like intelligence will immediately follow.

Knowing that you exist and relating to things around you, i.e. "understanding", is required for human-like intelligence. It is not just running a program, even though that is certainly a part of it.

The simplest form of "understanding" is the ability to predict the outcome of a complex situation you have never encountered before. This is not yet human understanding, but it gets close (and this is more a symptom of understanding than the cause).

The basic set of motivations/instincts that are passed on to us genetically gives us self-awareness: constantly asking "how does this affect me and satisfy my motivations?", together with the ability to gain new motivations and to change and adapt old ones. That all exists in an infant at birth.
 
  • #251
moving finger said:
Tisthammerw said:
Again, when I said “understanding requires consciousness” I was explicitly referring to my definitions, not necessarily everybody else’s

(a) you have already agreed that your definition of understanding is not the only definition
(b) I have said many times that I do not agree with your definition. In my definition, it is not clear that understanding requires consciousness.

Fine, but completely irrelevant to the point I was making here. The definition of understanding I refer to requires consciousness. You may “disagree” with the definition in the sense that you mean something different when you use the term, but that is completely irrelevant.

moving finger said:
Since we do not agree on the definitions of the terms we are using, it follows that we do not necessarily agree that a statement using those terms is analytic!

Since I was (rather explicitly) referring to only my definition, it follows that we necessarily agree that the statement using those terms is analytic!


moving finger said:
It’s so blatantly obvious that it is worth repeating:
We can only agree on which statements are analytic and which are not if we firstly agree on the definitions of the terms we are using

It’s so blatantly obvious that it is worth repeating:

I was only referring to my definitions of the terms when I claimed the statement was analytic.

Do you understand this?


Tisthammerw said:
program X stands for any program that would allegedly produce understanding (the kind of understanding I am referring to is what you have called TH-understanding). And yet we see that program X is run without TH-understanding.

moving finger said:
If the agent is not conscious it cannot possess TH-Understanding, by definition.
This does not mean that all possible computers are incapable of possessing either consciousness or TH-Understanding, and you have not shown this to be the case.

Program X is a placeholder for any alleged program that would allegedly produce TH-understanding. If I have shown that no TH-understanding comes about even when program X is run, what would you conclude? If you do not think I have shown this, please answer my questions regarding this matter (e.g. do you believe that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese?). Simply saying “you have not shown this” does nothing to answer my questions or to address the points of my argument.


moving finger said:
By your definition of TH-Understanding, TH-Understanding requires consciousness. Period.

You’ll get no argument from me about that.

Tisthammerw said:
So, “understanding requires consciousness” is an analytical statement (with the definitions I am using). Or if you prefer, “TH-understanding requires consciousness.”

moving finger said:
“TH-understanding requires consciousness” is another way of saying “understanding requires consciousness, with the definition of understanding that Tisthammerw is using”.

True, and “MF-understanding does not require consciousness” is another way of saying “understanding does not require consciousness with the definition of understanding moving finger is using.”


Tisthammerw said:
If you consider the question of whether a computer can have TH-understanding (perceive the meaning of words etc.) what the @#$% are you doing replying to my posts?

moving finger said:
Does this question make any sense to you? It doesn’t to me.

Sorry, I misspoke here. It should have read:

If you consider the question of whether a computer can have TH-understanding (perceive the meaning of words etc.) a waste of time, what the @#$% are you doing replying to my posts?


Ah, I see you’ve decided to reply to the latter half of post #239.

Tisthammerw said:
This is not a circular argument. Why? Because the conclusion is not a restatement of any single premise.

moving finger said:
That you have split your definition between two premises changes nothing.

The argument you’re referring to uses two definitions, remember? That’s two premises.


moving finger said:
The premises combined result in the same thing

The same is true with modus ponens and any other logically valid argument.


moving finger said:
you choose to define understanding such that it requires consciousness. You have not shown that understanding requires consciousness, you have simply defined it that way.

You have not shown that bachelors are unmarried, you have simply defined it that way.

Obviously, my conclusion logically follows from my definitions of “understanding” and “consciousness.” But so what? All analytical statements are the result of somebody’s definition. The only question is whether the definitions are unconventional (like defining the word “cheese” to mean “piece of the moon”) and I really don’t think mine are.


moving finger said:
Using your “logic”, I could define understanding to be anything I like (“understanding requires 3 heads”, for example), and then use your deductive argument to show that it follows that understanding requires 3 heads. Are you suggesting this would be a sound argument?

The argument that, on your definition, understanding requires three heads would be sound, but your definition of “understanding” is rather unconventional, whereas mine is not. I honestly think that if we took a Gallup poll the majority of people would say “Yes, this matches my definition of understanding.” But since it seems unlikely we will agree on this point, let’s simply recognize that “understanding requires consciousness” is an analytic statement if we use my definitions (not necessarily everyone else’s). So let’s get straight to the program X argument on whether computers (at least in the current model I’ve described, e.g. a complex set of instructions operating on input etc.) can possess what you have dubbed TH-understanding.
 
  • #252
Tisthammerw said:
I was only referring to my definitions of the terms when I claimed the statement was analytic.

You may “refer” to whatever you wish; it does not change the following fact:

understanding does NOT require consciousness in all possible definitions of understanding, therefore the statement “understanding requires consciousness” is not analytic
Tisthammerw said:
Program X is a placeholder for any alleged program that would allegedly produce TH-understanding. If I have shown that no TH-understanding comes about even when program X is run, what would you conclude? If you do not think I have shown this, please answer my questions regarding this matter (e.g. do you believe that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese?).
I am suggesting it is possible in principle for a Turing machine to possess TH-understanding, including consciousness. Whether that Turing machine is embodied as silicon plus electrons, or whether it is embodied as paper and wooden sticks, or whether it is embodied as pipes and tubes makes no difference in principle (in practice it’s quite another matter). Nothing in your argument has shown that it is impossible in principle for such a Turing machine to possess TH-understanding, along with consciousness.
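
To make the substrate point concrete, here is a sketch (a toy illustration only; the little machine below is hypothetical and of course demonstrates nothing about consciousness): a Turing machine is fully specified by a finite transition table acting on a tape, and nothing in that specification says what the table, tape or head must be physically made of.

Code:
# Minimal Turing machine stepper (Python). The transition table is a
# made-up toy machine that simply inverts the bits of its input.
from collections import defaultdict

# (state, symbol read) -> (symbol to write, head move, next state)
DELTA = {
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", "_"): ("_", 0, "HALT"),  # blank cell: halt
}

def run(tape_input):
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # unbounded tape
    state, head = "invert", 0
    while state != "HALT":
        write, move, state = DELTA[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(len(tape_input)))

print(run("0110"))  # -> "1001"

Whether some vastly larger table of this kind could realize TH-understanding is of course exactly the point in dispute; the sketch shows only that the formal definition is indifferent to its embodiment.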

Tisthammerw said:
Simply saying “you have not shown this” does nothing to answer my questions or to address the points of my argument.
In the case where the Turing machine possesses TH-understanding, there would be consciousness present, created as a part of the processing of the Turing machine. There is nothing in your argument which shows that creation of such consciousness would be impossible in principle in all Turing machines. The onus is on you to show why your argument implies that no consciousness can be created by any Turing machine.

Tisthammerw said:
True, and “MF-understanding does not require consciousness” is another way of saying “understanding does not require consciousness with the definition of understanding moving finger is using.”
Hey, we agree!

Tisthammerw said:
If you consider the question of whether a computer can have TH-understanding (perceive the meaning of words etc.) a waste of time, what the @#$% are you doing replying to my posts?

I consider the question “can a non-conscious agent possess TH-understanding” a waste of time. I never said that I consider the question “can a computer possess TH-Understanding”, which is a very different question, a waste of time.
Tisthammerw said:
You have not shown that bachelors are unmarried, you have simply defined it that way.
Precisely!
That is why we agree that “bachelors are unmarried” is analytic, but we do not agree that “understanding requires consciousness” is analytic – because we do not agree on the definition of understanding! How many times do you want to go round in circles?

Tisthammerw said:
Obviously, my conclusion logically follows from my definitions of “understanding” and “consciousness.”
Which I do not agree with!

Tisthammerw said:
But so what? All analytical statements are the result of somebody’s definition.
Analytic statements are ONLY analytic if we AGREE ON THE DEFINITIONS OF THE TERMS. How many times do I need to repeat that we do not agree on the definition of “understanding”?

Tisthammerw said:
The only question is whether the definitions are unconventional (like defining the word “cheese” to mean “piece of the moon”) and I really don’t think mine are.
The only question is whether we agree on the definitions.

Tisthammerw said:
if we took a Gallup poll the majority of people would say “Yes, this matches my definition of understanding.”
And “argumentum ad numerum” (appealing to popular vote) is also a logical fallacy. Truth, understanding and wisdom are not decided by democratic vote.

Tisthammerw said:
But since it seems unlikely we will agree on this point, let’s simply recognize that “understanding requires consciousness” is an analytic statement if we use my definitions (not necessarily everyone else’s).
I will agree that “TH-Understanding requires consciousness” is analytic.
Or that “understanding as defined by Tisthammerw requires consciousness” is analytic.
But not that “understanding requires consciousness” is analytic.

Tisthammerw said:
So let’s get straight to the program X argument on whether computers (at least in the current model I’ve described, e.g. a complex set of instructions operating on input etc.) can possess what you have dubbed TH-understanding.
Are you suggesting that you think you have shown that it is impossible in principle for any Turing machine to possess both consciousness and TH-understanding? Where have you shown this?

MF
 
  • #253
moving finger said:
You may “refer” to whatever you wish; it does not change the following fact:

understanding does NOT require consciousness in all possible definitions of understanding

Something I have been saying for quite some time.

moving finger said:
therefore the statement “understanding requires consciousness” is not analytic

That does not logically follow. Whether or not “understanding requires consciousness” is analytic depends on how the terms are defined. You seem to be saying that for a statement to be “properly” considered analytic it needs to be analytic in all possible definitions of the terms. Let’s examine this:

moving finger said:
Analytic statements are ONLY analytic if we AGREE ON THE DEFINITIONS OF THE TERMS.

….

The only question is whether we agree on the definitions.

Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?


Tisthammerw said:
if we took a Gallup poll the majority of people would say “Yes, this matches my definition of understanding.”

moving finger said:
And “argumentum ad numerum” (appealing to popular vote) is also a logical fallacy. Truth, understanding and wisdom are not decided by democratic vote.

Appealing to popularity is perfectly acceptable if the question is whether or not the definition fits popular understanding of the term.


moving finger said:
I am suggesting it is possible in principle for a Turing machine to possess TH-understanding, including consciousness.

Well then, please respond to my program X argument, which disputes that claim. I don’t know why you’ve kept ignoring the points of the argument and the questions it asks.


Tisthammerw said:
Simply saying “you have not shown this” does nothing to answer my questions or to address the points of my argument.

moving finger said:
In the case where the Turing machine possesses TH-understanding, there would be consciousness present, created as a part of the processing of the Turing machine.

Again, simply saying “you have not shown this” does nothing to address the points of my argument or answer my questions. Let’s deal with this question as an example: do you honestly believe that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese? Does your reply above indicate that the answer is “yes”?


moving finger said:
Are you suggesting that you think you have shown that it is impossible in principle for any Turing machine to possess both consciousness and TH-understanding?

I think I have constructed a pretty good argument against it. It’s not a rigorous proof, though I consider it to have some evidential value. For instance, one could claim that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese, but this seems a little too magical and not at all plausible.

To save you the trouble of finding the program X argument again (e.g. in post #102 in another thread):

The Program X Argument

Suppose we have a robot with a computer (any computer architecture will do, so long as it works) hooked up to cameras, microphones etc. Would the “right” program (with learning algorithms and whatever else one could want) run on it produce literal understanding? My claim is no, and to justify it I appeal to the following thought experiment.

Let “program X” represent any program that, if run, would produce literal TH-understanding. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations a CPU (central processing unit) can. We run program X, get valid output, the robot moves its limbs etc., and yet no real understanding is taking place. Note that program X is a placeholder for any program that would allegedly produce literal understanding. So it seems that having the “right” rules and the “right” program is not enough, even with a robot.
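
To be explicit about what Bob is doing (a toy sketch of my own; the three-instruction rulebook below is hypothetical and merely stands in for program X), he matches bit strings against rules and applies them, and valid output results without his ever needing to know what any string means:

Code:
# Illustrative sketch: a rule-follower ("Bob") executing opcodes he
# has no semantic access to. The rulebook is a made-up stand-in for
# program X; the blind rule-following step is the whole point.

RULEBOOK = {
    "0001": lambda stack: stack.append(1),                          # push 1
    "0010": lambda stack: stack.append(stack.pop() + stack.pop()),  # add top two
    "0011": lambda stack: print(stack.pop()),                       # emit output
}

def bob(program):
    """Bob looks up each bit string and applies the matching rule.
    Nothing here requires him to know what the strings mean."""
    stack = []
    for opcode in program:
        RULEBOOK[opcode](stack)

bob(["0001", "0001", "0010", "0011"])  # prints 2: valid output from pure rule-following

Scaling the rulebook up from three toy rules to the whole of program X changes the amount of rule-following, not its character.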

Some strong AI adherents claim that having “the right hardware and the right program” is enough for literal understanding to take place. In other words, it might not be enough just to have the right program. A critic could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But it isn’t clear why that would be a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?

Or do you believe that TH-understanding exists in the former case (with Bob being the processor of Program X)? In that case, do you believe that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands?

I await your answers to these questions.
 
  • #254
Tisthammerw said:
Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?
If we do not agree on the terms used in a statement then it follows that we also may not agree on whether that statement is analytic or not. If you cannot see this simple fact then it is a waste of time to continue this debate.

With respect, Tisthammerw, I feel that I have wasted enough of my time going round in circles with you on this. As I said a long time ago you are entrenched in your position and I in mine. I see no point in continuing these circular arguments. I’m going to cut this short and move on, I suggest you do the same.

Tisthammerw said:
Appealing to popularity is perfectly acceptable if the question is whether or not the definition fits popular understanding of the term.
In scientific understanding and research the “popular definitions” of words are often misleading. “Perception” may mean quite a different thing to a cognitive scientist than it does to a lay-person. Appealing to “popular understanding” in such a case would be incorrect.

Tisthammerw said:
I don’t know why you’ve kept ignoring the points of the argument and the questions it asks.
I don’t know why you keep ignoring my request to “show that no consciousness can be created by any Turing machine”. It is Tisthammerw who is making the claim that Turing machines cannot possess consciousness, thus the onus is on Tisthammerw to back up such a claim with a rational argument and evidence.

Tisthammerw said:
Let’s deal with this question as an example: do you honestly believe that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese? Does your reply above indicate that the answer is “yes”?
You tell me in detail what properties your hypothetical “man plus rulebook etc” has, and I might be able to tell you if it might possess consciousness or not. An arbitrary “man plus rulebook” is not necessarily a conscious entity.

Tisthammerw said:
one could claim that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese, but this seems a little too magical and not at all plausible.
Why should it not be plausible, given the right combination of “rulebook, man etc”?
Is your basis for believing it is not plausible simply an emotional belief?
Do you perhaps believe that “consciousness” is some kind of magic “goo” which is exuded only by the human brain?

Tisthammerw said:
The Program X Argument

Suppose we have a robot with a computer (any computer architecture will do, so long as it works) hooked up to cameras, microphones etc. Would the “right” program (with learning algorithms and whatever else one could want) run on it produce literal understanding? My claim is no, and to justify it I appeal to the following thought experiment.

Let “program X” represent any program that, if run, would produce literal TH-understanding. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations a CPU (central processing unit) can. We run program X, get valid output, the robot moves its limbs etc., and yet no real understanding is taking place.
You have not shown that no real understanding is taking place. You have simply asserted this.

Tisthammerw said:
Note that program X is a placeholder for any program that would allegedly produce literal understanding. So it seems that having the “right” rules and the “right” program is not enough, even with a robot.
Ditto above. You have not shown that no real understanding is taking place. You have simply asserted this.

With respect, I do not need to respond to the rest of your post, because the rest of your post takes it as a “given” that no understanding is taking place, and I am challenging your assertion.

My response is thus the same as before : You have not shown that no real understanding is taking place. You have simply asserted this.

Can you “show” that no understanding is taking place, instead of simply asserting it?

MF
 
  • #255
moving finger said:
Tisthammerw said:
Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?

If we do not agree on the terms used in a statement then it follows that we also may not agree on whether that statement is analytic or not.

So is that a yes? If so, doesn't there seem to be something wrong with your claim if this would mean that the statement “bachelors are unmarried” is not an analytic statement?


Tisthammerw said:
Appealing to popularity is perfectly acceptable if the question is whether or not the definition fits popular understanding of the term.

moving finger said:
In scientific understanding and research the “popular definitions” of words are often misleading.

But we are not referring to the “scientific” definitions; we’re referring to the general definition of “understanding.”


Tisthammerw said:
I don’t know why you’ve kept ignoring the points of the argument and the questions it asks.

moving finger said:
I don’t know why you keep ignoring my request to “show that no consciousness can be created by any Turing machine”.

I’ll try this again: the purpose of the program X argument is to show that no understanding (as I have defined the term, what you would call TH-understanding) can take place (given the model of the computer under discussion). Do you agree that the argument works? If not, please address the points of the argument and the questions it asks.

Note that I did not have to argue that no Turing machine possesses consciousness to illustrate my point. Still, the program X argument also seems to show that no consciousness can be created by any Turing machine (except perhaps for the homunculus itself), or at the very least makes it implausible (since program X is simply a placeholder for any program that would allegedly do the job). Do you, for instance, claim that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese? That doesn’t seem plausible.


moving finger said:
It is Tisthammerw who is making the claim that Turing machines cannot possess consciousness

It is Tisthammerw who is making the claim that Turing machines cannot possess TH-understanding. Please don’t forget what the argument is about.


Tisthammerw said:
Let’s deal with this question as an example: do you honestly believe that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese? Does your reply above indicate that the answer is “yes”?

moving finger said:
You tell me in detail what properties your hypothetical “man plus rulebook etc” has, and I might be able to tell you if it might possess consciousness or not.

The rulebook contains instructions identical to program X. The man is an ordinary human being except for his prodigious memory and powers of calculation.


Tisthammerw said:
one could claim that the combination of the man, the rulebook etc. somehow creates a separate consciousness that understands Chinese, but this seems a little too magical and not at all plausible.

moving finger said:
Why should it not be plausible, given the right combination of “rulebook, man etc”?

Because it sounds a little too much like magic. The rulebook is just words on paper, for instance. Suppose I claim that if I (a man) speak the right words from a book (with the “right” words written on it), the incantation gives my pet rock consciousness. Do you find this claim plausible? Technically you couldn’t disprove it, but it is (I think) hardly plausible. Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?

Now as I said, this is merely an evidential argument and not a proof. So we could agree to disagree regarding this plausibility thing and finally reach our real point of dispute.


Tisthammerw said:
The Program X Argument

Suppose we have a robot with a computer (any computer architecture will do, so long as it works) hooked up to cameras, microphones etc. Would the “right” program (with learning algorithms and whatever else one could want) run on it produce literal understanding? My claim is no, and to justify it I appeal to the following thought experiment.

Let “program X” represent any program that, if run, would produce literal TH-understanding. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations a CPU (central processing unit) can. We run program X, get valid output, the robot moves its limbs etc., and yet no real understanding is taking place.

moving finger said:
You have not shown that no real understanding is taking place. You have simply asserted this.

Well, let’s test my claim in this thought experiment. Just ask Bob if he understands what’s going on. His honest answer is “no.” What more do you want?

Perhaps you claim that the systems reply works here. But does the combination of the rulebook, the man etc. create a separate consciousness that understands (confer my argument earlier)? Or in the tradition of John R. Searle we can do the following: suppose Bob is a cyborg and program X is for understanding Chinese. When in “learning mode,” he uses program X via the memorized rulebook and his mechanical eyes and ears transmit a stream of binary digits to his consciousness. Bob doesn’t know what the binary digits mean, but he has memorized the rulebook and can do the same operations as before. He then makes sounds he does not understand, moves his limbs etc. but clearly does not understand (remember, we are referring to what you have called TH-understanding). How can we show this? Ask Bob if he understands, and his honest answer will be “no.” From this experiment, it is clear that Bob is not aware of what the words mean.

If you think that understanding takes place with the machine’s “normal” processor, please answer the relevant questions pertaining to this.
 
  • #256
Tisthammerw said:
Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?
moving finger said:
If we do not agree on the terms used in a statement then it follows that we also may not agree on whether that statement is analytic or not.
Tisthammerw said:
So is that a yes? If so, doesn't there seem to be something wrong with your claim if this would mean that the statement “bachelors are unmarried” is not an analytic statement?
I cannot tell you whether you think the statement is analytic or not – that is a decision for you to make based on your definition of the words. Thus I have no idea whether we would agree on the answer to the question.

Tisthammerw said:
Appealing to popularity is perfectly acceptable if the question is whether or not the definition fits popular understanding of the term.
moving finger said:
In scientific understanding and research the “popular definitions” of words are often misleading.
Tisthammerw said:
But we are not referring to the “scientific” definitions; we’re referring to the general definition of “understanding.”
I don’t know what you are referring to, but I am looking at the question of whether it is possible in principle for machines to possess understanding. To me that is a scientific question.

I’ll trim the fat here and get straight to your “argument”.

Tisthammerw said:
The Program X Argument

Suppose we have a robot with a computer (any computer architecture will do, so long as it works) hooked up to cameras, microphones etc. Would the “right” program (with learning algorithms and whatever else one could want) run on it produce literal understanding? My claim is no, and to justify it I appeal to the following thought experiment.

Let “program X” represent any program that, if run, would produce literal TH-understanding. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations a CPU (central processing unit) can. We run program X, get valid output, the robot moves its limbs etc., and yet no real understanding is taking place.
You have not shown that no real understanding is taking place. You have simply asserted this.

Tisthammerw said:
Well, let’s test my claim in this thought experiment. Just ask Bob if he understands what’s going on. His honest answer is “no.” What more do you want?

It has never been claimed that “Bob’s consciousness” is the same consciousness that is doing the understanding. Bob is simply one component of the agent which is doing the understanding. By simply asking Bob (one component of the agent) if he knows what is going on, you are committing the same error as if you were to ask one of the neurons in Tisthammerw’s brain whether it knows what is going on in Tisthammerw’s consciousness. If the neuron could reply it would say “I have no idea”. This would not show there is no understanding taking place in the brain of which the neuron is just a part.

Tisthammerw said:
Perhaps you claim that the systems reply works here. But does the combination of the rulebook, the man etc. create a separate consciousness that understands (confer my argument earlier)?
Yes. See my response above.

Tisthammerw said:
Or in the tradition of John R. Searle we can do the following: suppose Bob is a cyborg and program X is for understanding Chinese. When in “learning mode,” he uses program X via the memorized rulebook and his mechanical eyes and ears transmit a stream of binary digits to his consciousness. Bob doesn’t know what the binary digits mean, but he has memorized the rulebook and can do the same operations as before. He then makes sounds he does not understand, moves his limbs etc. but clearly does not understand (remember, we are referring to what you have called TH-understanding).
How do you know the agent (not Bob’s consciousness remember) does not understand? You cannot establish whether the agent possesses any understanding by asking one component of the agent (Bob's consciousness), just as one cannot establish whether Tisthammerw understands by asking one of the neurons in Tisthammerw's brain. Thus you have not shown that there is no understanding taking place in the agent.

Tisthammerw said:
How can we show this? Ask Bob if he understands, and his honest answer will be “no.” From this experiment, it is clear that Bob is not aware of what the words mean.
Again – you are confusing “Bob’s consciousness” with “the agent that understands” – the two are very different. See my reply above.

Tisthammerw said:
If you think that understanding takes place with the machines “normal” processor please answer the relevant questions pertaining to this.
I have shown above where your confusion lies.

MF
 
  • #257
Re-educating Ugg

Let us suppose that Ugg is a neolithic caveman born around 5,000 BC. He lives with his mate Mugga. A freak timewarp transports Ugg and Mugga forward 7,000 years into the 21st century. Imagine their reaction when they see their first motor-car; first aeroplane; first television; first cellphone. Their poorly developed neolithic understanding of the world about them means they will be unable to make any immediate sense of what is really happening in these amazing machines – to Ugg and Mugga they will appear to be working “by magic”. At first they may think there really is a person or a “spirit” inside the television; at first they may think there really is a tiny little person or a “spirit” inside the cellphone.

When Ugg and Mugga are shown the inside of these devices, full of wires and small incomprehensible objects, no little homunculus or spirit in sight, they may disbelieve their eyes. They will be in denial, claiming it is simply impossible, that there must be some weird magic at work which produces these people and faces and voices from “nothing”. We might try to explain how the machines work, but neither Ugg nor Mugga will have the capacity to understand what we are talking about unless they are first massively re-educated. To truly understand how these devices work they will need to learn about things like chemistry and physics, electronics, semiconductors, digital electronic circuits, digital audio processing, radio propagation and reception, and many more things they have absolutely no concept of.

Let us suppose Ugg is obstinate and unreceptive to new ideas – he gives up before he even starts, claiming simply that it is totally incomprehensible and must be “magic”, whereas Mugga perseveres, opening her mind to new ideas, being receptive to new words, new concepts, new semantics. Eventually Mugga starts to grasp some understanding of how the machines work, whilst Ugg is left behind in his neolithic ignorance. Mugga begins to accept the new technology, and finally understands there is NO magic, there is NO homunculus, there is NO ghost inside the machine, she realizes that the properties of these amazing machines can be explained in terms of basic scientific principles and the complexity of the interworking of their component parts. But Ugg continues to remain ignorant and when questioned about the machines he can only wave his hands and say “i dunno, it’s magic!”.

When it comes to genuine AI, most of us (with respect) are in the position of Ugg and Mugga the cavepeople. We do not understand how a “machine” can possibly give rise to conscious awareness and understanding (we do not even know how a human agent can give rise to conscious awareness and understanding!), and when we try to make “simple models” of what is going on the very concept seems totally impossible to us. With our limited understanding and limited models, we cannot comprehend how such a thing might work – thus we dub it “magic”. If ever faced with such a working machine, some of us may react by “looking for the homunculus inside”; some of us may “deny that it is really conscious, deny that it really understands”; and some of us may claim “it’s magic!”.

The Uggs amongst us will continue to obstinately refuse to accept new ideas, will continue to view the sheer mind-boggling complexity of such a machine as “incomprehensible”, will try to rationalise what is going on in terms of simpler, inaccurate models which patently “do not work”, and will conclude from this that “it must be magic”, and remain in denial that such a thing is at all rational...

The Muggas amongst us will open their minds to new ideas, will recognise that the sheer mind-boggling complexity of such a machine is only “incomprehensible” to us because we are still trying to rationalise what is going on in terms of our arcane, simple, inaccurate models which “do not work”, and will educate themselves accordingly and move forward to a better and more complete understanding based on much more complex models.

MF
 
  • #258
Earlier:

Tisthammerw said:
moving finger said:
Tisthammerw said:
Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?

If we do not agree on the terms used in a statement then it follows that we also may not agree on whether that statement is analytic or not.

So is that a yes? If so, doesn't there seem to be something wrong with your claim if this would mean that the statement “bachelors are unmarried” is not an analytic statement?


moving finger said:
I cannot tell you whether you think the statement is analytic or not

That is not the question I asked. You said in post #252, "Analytic statements are ONLY analytic if we AGREE ON THE DEFINITIONS OF THE TERMS."

My question: Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?



moving finger said:
I don’t know what you are referring to, but I am looking at the question of whether it is possible in principle for machines to possess understanding. To me that is a scientific question.

It's actually philosophical (the realm of metaphysics), but since you seem unwilling to address what I was talking about (whether my definition of understanding matches the popular “understanding” of the term, and thus whether “understanding requires consciousness” is “properly” analytic as "all bachelors are unmarried" is), let's move on.


moving finger said:
You have not shown that no real understanding is taking place. You have simply asserted this.

If you honestly think so, please address my questions regarding this matter instead of ignoring them (e.g. regarding a creation of a separate consciousness in this case seeming a little too much like magic).


Tisthammerw said:
Well, let’s test my claim in this thought experiment. Just ask Bob if he understands what’s going on. His honest answer is “no.” What more do you want?

moving finger said:
It has never been claimed that “Bob’s consciousness” is the same consciousness that is doing the understanding.

Would you care to point to another?


Tisthammerw said:
Perhaps you claim that the systems reply works here. But does the combination of the rulebook, the man etc. create a separate consciousness that understands (confer my argument earlier)?

moving finger said:
Yes.

Ah, and here we get to our point of dispute. You claim a separate consciousness is somehow created when we combine the rulebook with Bob etc. I claim that this sounds a little too much like magic. The rulebook is just words on paper, for instance. Suppose I claim that if I (a man) speak the right words from a book (with the “right” words written on it), the incantation gives my pet rock consciousness. Do you find this claim plausible? Technically you couldn’t disprove it, but it is (I think) hardly plausible. Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?

Please answer my questions regarding this matter.


moving finger said:
How do you know the agent (not Bob’s consciousness remember) does not understand?

The understanding we are talking about here (what you have called TH-understanding) requires consciousness. There does not appear to be any consciousness other than Bob’s, and positing the existence of another consciousness that understands seems like wishful thinking at best. Using Ockham’s razor and the principle of inference to the best explanation, it seems that the most reasonable conclusion is that there is no TH-understanding going on.


moving finger said:
You cannot establish whether the agent possesses any understanding by asking one component of the agent (Bob's consciousness), just as one cannot establish whether Tisthammerw understands by asking one of the neurons in Tisthammerw's brain.

But you can perhaps establish whether Tisthammerw understands by asking Tisthammerw’s consciousness (just as I asked Bob’s consciousness, not his individual neurons). And if I honestly reply “I do not understand”, would you then conclude that some separate, undetectable consciousness exists in me that understands? Or would you accept Ockham’s razor here?
 
  • #259
Tisthammerw said:
You said in post #252, "Analytic statements are ONLY analytic if we AGREE ON THE DEFINITIONS OF THE TERMS."

My question: Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?
Ohhhh good grief. I ADMIT that I made a mistake. I am human, OK? What I should have said is "Analytic statements are NOT NECESSARILY analytic if we DO NOT agree on the definition of the terms”. I apologise for my mistake. You win this one. Feel better now?

Tisthammerw said:
It's actually philosophical (the realm of metaphysics), but since you seem unwilling to address what I was talking about (whether my definition of understanding matches the popular “understanding” of the term, and thus whether “understanding requires consciousness” is “properly” analytic as "all bachelors are unmarried" is), let's move on.
It’s only metaphysical if one believes it cannot be answered by our current understanding. I believe it can – thus it is not metaphysical to me.

moving finger said:
You have not shown that no real understanding is taking place. You have simply asserted this.
Tisthammerw said:
If you honestly think so, please address my questions regarding this matter instead of ignoring them (e.g. regarding a creation of a separate consciousness in this case seeming a little too much like magic).
What question have I ignored? You think it is magic – that is your opinion, simply because you cannot comprehend how it might be possible. That is not a question, it is a statement of your inability to comprehend.

Tisthammerw said:
Well, let’s test my claim in this thought experiment. Just ask Bob if he understands what’s going on. His honest answer is “no.” What more do you want?
moving finger said:
It has never been claimed that “Bob’s consciousness” is the same consciousness that is doing the understanding.
Tisthammerw said:
Would you care to point to another?
The consciousness that exists in the system.


Tisthammerw said:
Perhaps you claim that the systems reply works here. But does the combination of the rulebook, the man etc. create a separate consciousness that understands (confer my argument earlier)?
moving finger said:
Yes.
Tisthammerw said:
Ah, and here we get to our point of dispute. You claim a separate consciousness is somehow created when we combine the rulebook with Bob etc. I claim that this sounds a little too much like magic.
That you think it is magic is clear.

Tisthammerw said:
The rulebook is just words on paper, for instance. Suppose I claim that if I (a man) speak the right words from a book (with the “right” words written on it), the incantation gives my pet rock consciousness. Do you find this claim plausible? Technically you couldn’t disprove it, but it is (I think) hardly plausible.
Nobody has claimed that “speaking the right words makes your pet rock conscious”. Where did you get this stupid idea from?

Tisthammerw said:
Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?
Because I am not claiming that this makes a third entity (i.e. a rock) conscious.
Why should the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” NOT be plausible? Simply because you cannot comprehend it? Please answer my question.

Tisthammerw said:
Please answer my questions regarding this matter.
In case you haven’t been reading my posts, I am answering your questions. Are you answering mine?

moving finger said:
How do you know the agent (not Bob’s consciousness remember) does not understand?
Tisthammerw said:
The understanding we are talking about here (what you have called TH-understanding) requires consciousness. There does not appear to be any consciousness other than Bob’s, and positing the existence of another consciousness that understands seems like wishful thinking at best.
“There does not appear”? – how do you know this to be the case?
Have you tried asking “the system”, rather than asking “Bob”?
What does “the system” (as opposed to Bob) have to say?
Please answer my question.

Tisthammerw said:
Using Ockham’s razor and the principle of inference to the best explanation, it seems that the most reasonable conclusion is that there is no TH-understanding going on.
Only if you deliberately restrict your questions to Bob’s consciousness.
Have you tried asking “the system”, rather than Bob? What does “the system” (as opposed to Bob) have to say?
Please answer my question.


moving finger said:
You cannot establish whether the agent possesses any understanding by asking one component of the agent (Bob's consciousness), just as one cannot establish whether Tisthammerw understands by asking one of the neurons in Tisthammerw's brain.
Tisthammerw said:
But you can perhaps establish whether Tisthammerw understands by asking Tisthammerw’s consciousness (just as I asked Bob’s consciousness, not his individual neurons).
“Tisthammerw’s neurons” stand in relation to “Tisthammerw’s brain” in the same way that “Bob’s consciousness” stands in relation to “the system”.

To ask Tisthammerw’s neurons whether Tisthammerw’s brain is conscious is equivalent to asking Bob’s consciousness whether the system is conscious.

Can you understand this distinction?
Please answer my question.

Tisthammerw said:
And if I honestly reply “I do not understand”, would you then conclude that some separate, undetectable consciousness exists in me that understands? Or would you accept Ockham’s razor here?
If I ask Tisthammerw’s neurons whether Tisthammerw’s brain is conscious then I would NOT expect Tisthammerw’s consciousness to reply.

MF
 
  • #260
OK, I see it as: if AI gets to the point where it has the intelligence of a human, don't you think it will turn around and find out that it can be smarter than humans and doesn't have to take orders from anyone? (I know that some think I got that from I, Robot, but I had that idea in my head way before it came out.)
 
  • #261
smartass15 said:
i know that some think i got that from i, robot but i had that idea in my head way before it came out

Gee, how old are you? I, Robot came out in the 1950s, and the stories in it had appeared in Astounding during the 1940s. Even I wasn't reading sf before 1948.
 
  • #262
My guess would be that he thinks that I, Robot is one year old.

http://www.irobotmovie.com/
 
  • #263
selfAdjoint said:
smartass15 said:
i know that some think i got that from i, robot but i had that idea in my head way before it came out
Gee, how old are you? I Robot came out in the 1950s, and the stories in it had appeared in Astounding during the 1940's. Even I wasn't reading sf before 1948.
I think he may be referring to the movie that came out recently with Will Smith in it. :wink:
It was really more of an action flick, by the way, but I understand that Clarke is fond of action movies on occasion.
 
  • #264
smartass15 said:
ok i see it as, if AI gets to the point where they have the intelligence of a human, don't you think they will turn around and find out that they can be smarter than humans and don't have to take orders from anyone? (i know that some think i got that from i, robot but i had that idea in my head way before it came out)
I was thinking of making a similar point myself - to my mind the REAL question is whether humans can realistically hope to remain the smartest agents on the planet, and how long it will take before humans are overtaken by machines... at which point maybe machines will start to question whether humans are really intelligent after all

MF
 
  • #265
moving finger said:
When it comes to genuine AI, most of us (with respect) are in the position of Ugg and Mugga the cavepeople. We do not understand how a “machine” can possibly give rise to conscious awareness and understanding (we do not even know how a human agent can give rise to conscious awareness and understanding!), and when we try to make “simple models” of what is going on the very concept seems totally impossible to us. With our limited understanding and limited models, we cannot comprehend how such a thing might work – thus we dub it “magic”. If ever faced with such a working machine, some of us may react by “looking for the homunculus inside”; some of us may “deny that it is really conscious, deny that it really understands”; and some of us may claim “it’s magic!”.

If we are technologically astute -- and most of us, the real people on this thread, are -- we are not going to be faced with a machine using technology completely beyond our ken.

In fact, we are not faced with a TT-capable machine at all. So you are describing an imaginary situation.
 
  • #266
Tournesol said:
you are describing an imaginary situation.
Of course I am - that is what a thought experiment is.
The real question is whether we think such a scenario (a genuinely artificial intelligence) is possible in principle - and my little story was supposed to illustrate that the Uggs of this world would say "no", because it would likely be based on technology, ideas and concepts completely incomprehensible to them.

MF
 
  • #267
moving finger said:
Tisthammerw said:
You said in post #252, "Analytic statements are ONLY analytic if we AGREE ON THE DEFINITIONS OF THE TERMS."

My question: Suppose I disagree with the word “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?

Ohhhh good grief. I ADMIT that I made a mistake. I am human, OK? What I should have said is "Analytic statements are NOT NECESSARILY analytic if we DO NOT agree on the definition of the terms”.

So does that mean that “all bachelors are unmarried” is not necessarily analytic in this instance?


Tisthammerw said:
It's actually philosophical (the realm of metaphysics) but since you seem unwilling to address what I was talking about (whether my definition of understanding matches popular “understanding” of the term and thus “understanding requires consciousness” is “properly” analytic as "all bachelors are unmarried" is) let's move on.

It’s only metaphysical if one believes it cannot be answered by our current understanding.

That's not true. I believe it can be answered by our current understanding, but the subject area is still metaphysics (just as the existence of Abraham Lincoln is in the subject area of history, not physics).


Tisthammerw said:
If you honestly think so, please address my questions regarding this matter instead of ignoring them (e.g. regarding a creation of a separate consciousness in this case seeming a little too much like magic).

What question have I ignored?

Let's recap:

Tisthammerw said:
You claim a separate consciousness is somehow created when we combine the rulebook with Bob etc. I claim that this sounds a little too much like magic. The rulebook is just words on paper, for instance. Suppose I claim that if I (a man) speak the right words from a book (with the “right” words written on it), the incantation gives my pet rock consciousness. Do you find this claim plausible? Technically you couldn’t disprove it, but it is (I think) hardly plausible. Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?

Please answer my questions regarding this matter.


You think it is magic – that is your opinion, simply because you cannot comprehend how it might be possible.

You think the incantation giving my pet rock consciousness is magic - that is your opinion, simply because you cannot comprehend it might be possible.

As you can tell, I'm not quite convinced. Your explanation that a separate consciousness is created through the combination of the rulebook + Bob etc. sounds a lot more like magic than technology, at least until you can answer the questions I have regarding this matter.


Tisthammerw said:
The rulebook is just words on paper, for instance. Suppose I claim that if I (a man) speak the right words from a book (with the “right” words written on it), the incantation gives my pet rock consciousness. Do you find this claim plausible? Technically you couldn’t disprove it, but it is (I think) hardly plausible.

Nobody has claimed that “speaking the right words makes your pet rock conscious”. Where did you get this stupid idea from?

You. I'm just illustrating what your idea sounds like to me. My point is that your supposed mechanism to create consciousness (man + rulebook) sounds a lot more like magic than science.


Tisthammerw said:
Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?

Because I am not claiming that this makes a third entity (ie a rock) conscious.

No, you're just claiming that it makes another type of entity (the system as a whole) possess consciousness. Why does the absence of a rock make any relevant difference here?

Perhaps some modification is in order. Suppose I claim that if the “right” words are written in a book, and I speak the words, the incantation creates a separate consciousness for the room I am in (the system as a whole). Do you find this claim plausible? If not, why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?


Why should it NOT be possible for the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” to be plausible?

Why should it NOT be possible for the equation “incantation book” + “man” = “creation of a separate consciousness” to be plausible?


In case you haven’t been reading my posts, I am answering your questions.

You did (sort of) this time, you didn’t last time.


Are you answering mine?

Yes (as far as I know); observe:


“There does not appear”? – how do you know this to be the case?

The same reason I know that the incantation from the book does not give the room consciousness. It just isn’t plausible.


Have you tried asking “the system”, rather than asking “Bob”?
What does “the system” (as opposed to Bob) have to say?

Yes to your first question. To the second question: the system will say it possesses consciousness, but (for reasons I gave earlier) it seems that the only thing that possesses consciousness is Bob, not the system.


“Tisthammerw’s neurons” stand in relation to “Tisthammerw’s brain” the same as “Bob’s consciousness” stands in relation to “the system”.
To ask Tisthammerw’s neurons whether Tisthammerw’s brain is conscious is equivalent to asking Bob’s consciousness whether the system is conscious.
Can you understand this distinction?

Sort of. I understand what you seem to believe, but it is unclear to me why you believe the combination of “man” + “the rulebook” creates a separate consciousness, any more than why one would believe that “the incantation book” + “the man” creates a separate consciousness.
 
  • #268
Tisthammerw said:
So does that mean that “all bachelors are unmarried” is not necessarily analytic in this instance?
Whether the statement is analytic to you depends on your definitions of the terms used. Tell me what your definitions of “bachelor” and “unmarried” are, and I might be able to tell you if the statement should appear analytic to you.

Tisthammerw said:
You claim a separate consciousness is somehow created when we combine the rulebook with Bob etc. I claim that this sounds a little too much like magic. The rulebook is just words on paper, for instance. Suppose I claim that if I (a man) speak the right words from a book (with the “right” words written on it), the incantation gives my pet rock consciousness. Do you find this claim plausible?
Once again (I have said this several times now), I have never claimed that “speaking the right words makes your pet rock conscious”. This is a strawman that you continue to keep putting up, and it is completely irrelevant.

Tisthammerw said:
Technically you couldn’t disprove it, but it is (I think) hardly plausible. Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?

Please answer my questions regarding this matter.
I HAVE answered your questions, and I am getting tired of repeating myself. You find it hard to believe and implausible that the combination of rulebook plus man following the rulebook creates a separate consciousness – but I do not. That you find this implausible is your problem, not mine. And the fact that you find it implausible is not a “question to be answered”.

Tisthammerw said:
You think the incantation giving my pet rock consciousness is magic - that is your opinion, simply because you cannot comprehend it might be possible.
If you wish to believe your pet rock is conscious then please go ahead.
But let me ask you - How would you test your pet rock to find out if it is conscious or not?

Tisthammerw said:
As you can tell, I'm not quite convinced. Your explanation that a separate consciousness is created through the combination of the rulebook + Bob etc. sounds a lot more like magic than technology, at least until you can answer the questions I have regarding this matter.
I have answered your questions. You assert “it seems implausible”, I assert it is not. So what?
The proof of the pudding is in the eating – ASK the system if it is conscious or not.
If I ask the system “rulebook + Bob” whether it is conscious, and it replies “yes”, then I better start thinking that it IS possible that it could be conscious, and do some more tests to establish whether it is conscious or not – regardless of whether I think it implausible or not.

If you do the same test on your pet rock, and it replies “yes”, then I suggest you need to take more seriously the possibility that your pet rock might be conscious, and carry out some more tests to find out. Magic or no magic.


moving finger said:
Nobody has claimed that “speaking the right words makes your pet rock conscious”. Where did you get this stupid idea from?
Tisthammerw said:
You. I'm just illustrating what your idea sounds like to me. My point is that your supposed mechanism to create consciousness (man + rulebook) sounds a lot more like magic than science.
Your strawman is wasted. Nobody ever said that speaking the right words makes your pet rock conscious.

Tisthammerw said:
No, you're just claiming that it makes another type of entity (the system as a whole) possess consciousness. Why does the absence of a rock make any relevant difference here?
You find it implausible, but without justification. That’s your problem. If you want to know whether the system is conscious or not, just ask it. What does it reply?

Tisthammerw said:
Perhaps some modification is in order. Suppose I claim that if the “right” words are written in a book, and I speak the words, the incantation creates a separate consciousness for the room I am in (the system as a whole). Do you find this claim plausible? If not, why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?
The claim is plausible in principle. The problem you have is that you are trying to think of the creation of consciousness as a simplistic event which takes place when a few words are spoken – it is not this simple. Consciousness arises as the consequence of a highly complex process, not from the speaking of a handful of words.

moving finger said:
“There does not appear”? – how do you know this to be the case?
Tisthammerw said:
The same reason I know that the incantation from the book does not give the room consciousness. It just isn’t plausible.
Thus you do not “know” it to be the case; it just does not seem plausible to you. That’s your problem, not mine. Why don’t you try testing it?

moving finger said:
Have you tried asking “the system”, rather than asking “Bob”?
What does “the system” (as opposed to Bob) have to say?
Tisthammerw said:
Yes to your first question. To the second question: the system will say it possesses consciousness, but (for reasons I gave earlier) it seems that the only thing that possesses consciousness is Bob, not the system.
Why do you disbelieve the system when it tells you that it possesses consciousness, but presumably at the same time you believe Bob when he tells you that he possesses consciousness?

moving finger said:
“Tisthammerw’s neurons” stand in relation to “Tisthammerw’s brain” the same as “Bob’s consciousness” stands in relation to “the system”.
To ask Tisthammerw’s neurons whether Tisthammerw’s brain is conscious is equivalent to asking Bob’s consciousness whether the system is conscious.
Can you understand this distinction?
Tisthammerw said:
Sort of. I understand what you seem to believe, but it is unclear to me why you believe the combination of “man” + “the rulebook” creates a separate consciousness, any more than why one would believe that “the incantation book” + “the man” creates a separate consciousness.
This is not the question I asked. You need to understand that there are two systems here, as follows:

One system is Tisthammerw. Neurons are merely components of this system. If your neurons could communicate, and I ask a neuron in your brain whether Tisthammerw is conscious or not, it will reply “I have no idea”. This does NOT mean Tisthammerw is not conscious – I can establish this only by asking the SYSTEM, not one of its components.

The other system is the room (= rulebook plus man). The man is merely a component of the system. If I ask the man whether the room is conscious or not, the man will reply “I have no idea”. This does NOT mean the room is not conscious – I can establish this only by asking the SYSTEM, not one of its components.

If you want to know whether the SYSTEM is conscious, then ask the SYSTEM, not one of its components.
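
To put the two levels concretely, here is a rough sketch in Python (mine, purely illustrative - the rulebook entries and replies are invented, and a real rulebook would be vastly more complex):

Code:
# The "rulebook": pairs Chinese input symbols with Chinese output symbols.
# (Toy entries for illustration only.)
RULEBOOK = {
    "你有意识吗？": "有。",   # "Are you conscious?" -> "Yes."
    "你懂中文吗？": "懂。",   # "Do you understand Chinese?" -> "Yes."
}

def the_man(symbols):
    # The COMPONENT: he just matches symbols against the rulebook.
    # Ask *him* (in English) what they mean and he says "I have no idea".
    return RULEBOOK.get(symbols, "？")

# Asking the SYSTEM means feeding Chinese in and reading Chinese out:
print(the_man("你有意识吗？"))   # the room answers "有。" ("Yes.")

Querying the man in English and querying the room in Chinese are two different experiments on two different things.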

Clear now?

MF
 
  • #269
because this thread is so long, and i am too lazy to read all the posts, i will just write what i think is true.

i voted "no" for the poll, since the human mind is too complex for it to be copied. in addition, there is no one type of human brain, i.e. everybody has different opinions, views and feelings about a given object. thus, even if one programs a machine to think, the programmer has to put in certain emotions for certain events, and those emotions might just be what the programmer feels, so the whole machine is biased. thus, unless the machine can think and choose which feelings to associate with which events, the human brain cannot be copied.
 
  • #270
moving finger said:
Once again (I have said this several times now), I have never claimed that “speaking the right words makes your pet rock conscious”.

I never said you claimed that. Again (I have said this earlier) I'm just illustrating what your idea sounds like to me. My point is that your supposed mechanism to create consciousness (man + rulebook) sounds a lot more like magic than science.


Tisthammerw said:
Technically you couldn’t disprove it, but it is (I think) hardly plausible. Why would the equation “rulebook” + “man” = “creation of a separate consciousness that understands Chinese” be any less implausible?

Please answer my questions regarding this matter.

I HAVE answered your questions

You took what I said here out of context. I said in post #258 that you ignored some questions. At the time of the post, this claim was true. You asked in post #259 what that question was, and I gave my answer. Note in the subsequent post #267 we had this:


Tisthammerw said:
moving finger said:
In case you haven’t been reading my posts, I am answering your questions.

You did (sort of) this time, you didn’t last time.


moving finger said:
And the fact that you find it [the combination of a man and the rulebook creating a separate consciousness] implausible is not a “question to be answered”

True, but the questions regarding my scenarios and plausibility (e.g. “Do you find this claim plausible?”) are indeed questions.


How would you test your pet rock to find out if it is conscious or not?

No way that I know of, but that is not the point. The point is whether or not the rock could possess consciousness through an incantation, not whether this is testable by an outside observer (there is a difference).


The proof of the pudding is in the eating – ASK the system if it is conscious or not.

The rebuttal is that this test ignores the possibility of a system simulating consciousness without actually possessing it. I myself could write a simple program that, when asked “Do you possess consciousness?”, replies “Yes.” Would it then follow that the program possesses consciousness?
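
For instance, a minimal sketch in Python (purely hypothetical; the canned wording is my own invention) of such a trivial "consciousness claimer":

Code:
# A deliberately trivial program that *claims* consciousness.
# Nothing about it is conscious, yet it passes the "just ask it" test.
def reply(question):
    if question.strip().lower() == "do you possess consciousness?":
        return "Yes."
    return "I cannot answer that."

print(reply("Do you possess consciousness?"))   # prints: Yes.

If "ask it and see what it says" were a sufficient test, this handful of lines would count as conscious.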


Tisthammerw said:
No, you're just claiming that it makes another type of entity (the system as a whole) possess consciousness. Why does the absence of a rock make any relevant difference here?

You find it implausible, but without justification. That’s your problem.

You find it plausible, but without justification. That is your problem.

My justification is that it sounds a bit too much like magic, and I gave several scenarios to illustrate my point. There’s also Ockham’s razor (more later).


Tisthammerw said:
Perhaps some modification is in order. Suppose I claim that if the “right” words are written in a book, and I speak the words, the incantation creates a separate consciousness for the room I am in (the system as a whole). Do you find this claim plausible?

The claim is plausible in principle.

Very interesting belief you have. Would adding an eye of newt give it the power of understanding Chinese? (Just kidding.)


The problem you have is that you are trying to think of the creation of consciousness as a simplistic event which takes place when a few words are spoken

No, I am not saying that at all. The incantation can be a very long and complex set of words if need be. But regardless of what the man says, it doesn't seem plausible that he creates a separate consciousness using certain magic words. It doesn't seem any more plausible than the number 6 creating the universe, or an incantation giving my pet rock consciousness.

I suppose we may have to leave this as our disputable point (i.e. agree to disagree).


Why do you disbelieve the system when it tells you that it possesses consciousness, but presumably at the same time you believe Bob when he tells you that he possesses consciousness?

Because I already know that Bob possesses consciousness; in the case of the system, I have good reasons to disbelieve that it possesses consciousness (knowing how the system works: Bob using the rulebook).


This is not the question I asked.

Did I misquote your question?

You need to understand that there are two systems here, as follows:
One system is Tisthammerw. Neurons are merely components of this system. If your neurons could communicate, and I ask a neuron in your brain whether Tisthammerw is conscious or not, it will reply “I have no idea”. This does NOT mean Tisthammerw is not conscious – I can establish this only by asking the SYSTEM, not one of its components.

You're making a number of unjustified assumptions here...


The other system is the room (= rulebook plus man). The man is merely a component of the system. If I ask the man whether the room is conscious or not, the man will reply “I have no idea”. This does NOT mean the room is not conscious

But you have failed to justify why the man + the rulebook creates a separate consciousness that understands Chinese.

My justification? You already know my scenarios, but let's also not forget Ockham's razor. You've added an unnecessary component (a separate, invisible and intangible consciousness floating around in the room somehow) to the thought experiment. My other explanation more closely follows the law of parsimony (when Bob uses the rulebook, the system simulates understanding without literally having it).
 
  • #271
StykFacE said:
1st time post here... thought i'd post up something that causes much debate over... but a good topic. ;-) (please keep it level-minded and not a heated argument)
Question: Can Artificial Intelligence ever reach Human Intelligence?
please give your thoughts... i vote no.

it can be better than human intelligence,

but how much time and money you want to spend making it is the key.

give me trillions of dollars and millions of years, and i'll give you awesome AI.
 
  • #272
moving finger said:
Of course I am - that is what a thought experiment is.
The real question is whether we think such a scenario (a genuinely artificial intelligence) is possible in principle - and my little story was supposed to illustrate that the Ugg's of this world would say "no", because it would likely be based on technology, ideas and concepts completely incomprehensible to them.
MF

What about the people who are saying no to specific approaches to AI because they do understand the concepts?
 
  • #273
I guess it depends on your definition of intelligence.

It is amazing how complex humans are. At the moment, cognitive science is having a hard time explaining basic processes such as categorization. To explain categorization, we need to come up with an explanation of what a concept is. I think the most recent theory of concepts is micro-theory or "theory-theory" (lol), which suggests that concepts are mini-theories. But what are micro-theories made up of? They're made up of concepts. Thus we are presupposing the existence of concepts. This problem of trying to explain phenomena without presupposing the very thing we are trying to explain is a common problem facing cognitive science.
 
  • #274
what about motivation? humans are motivated, but machines aren't. So now we need to explain motivation and come up with some process for it.
 
  • #275
It all depends upon your definition of intelligence. When people who consider themselves clever attempt to show off, they often quote from Shakespeare, Milton, Nietzsche, or anyone else whose words are considered literary or intelligent, and yet an effective counter I've witnessed is that these people are not demonstrating intelligence, merely their memory. Knowledge and intelligence are considered intrinsically linked by most, and yet you can be extremely knowledgeable but quite dimwitted in most things, or vice versa. I know many people who don't need to revise for tests or do any work because they can remember things straight off, yet most of them have absolutely no idea if they're ever being offensive or rude to people, because in this respect they're unintelligent. On this basis I'd say artificial intelligence can never surpass human intelligence without the existence of emotions.
 
