Can Artificial Intelligence ever reach Human Intelligence?

In summary: If we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) AI can never reach human intelligence because it would require a decision-making process that a computer cannot replicate.

Poll: AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #176
Tisthammerw said:
If an entity has a state of being such that it includes the characteristics I described, the entity has consciousness (under my definition of the term).
I understand. Your definition of consciousness is thus “any agent which possesses all of the characteristics of sensation, perception, thought and awareness is by definition conscious”, is that it?
We would then have to ask for the definitions of each of those characteristics – what exactly do we mean by “sensation, perception, thought, and awareness”? (without defining any of these words in terms of consciousness, otherwise we simply have a tautology).
Tisthammerw said:
Understanding (as how I defined it) requires that the entity be aware of what the words mean (this would also imply a form of perception, thought etc.). This would imply the existence of consciousness (under my definition of the term).
If, as you say, perception, thought etc are implicitly included in “awareness”, are you now suggesting that “awareness alone necessarily implies consciousness”? Should we revise the definition of consciousness above?
If you ask the CR (in Chinese) “are you aware of what these words mean?”, its reply will depend on how it defines “awareness”. If awareness is defined as “conscious awareness” then (if it is not conscious) it will necessarily reply “no”. But defining awareness as “conscious awareness” makes the definition of “consciousness in terms of awareness” a tautology (“consciousness is characterised by a state of conscious awareness”) and is therefore not very useful in terms of our epistemology.
All we achieve with a tautology is the following :
“If I define understanding as requiring conscious awareness then it follows that understanding requires consciousness”
This doesn’t really tell us very much does it?
The problem is that I do not agree that understanding requires conscious awareness. Thus we disagree at the level of your initial assumptions.
Tisthammerw said:
As I mentioned in post #171, I said, "But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical." To which you replied, "Imho it is completely logical." And I thus said, "Imho you need to look up the law of noncontradiction" in post #169.
In which case I humbly apologise for this error on my part. My statement “Imho it is completely logical” was intended to respond to what I took to be the implication that my own argument was not logical. What I should have said is that I do not agree with your assumption “he knows the meaning of the words without knowing the meaning of the words”, therefore (since the premise is disputed) your conclusion “that isn’t logical” is invalid.

moving finger said:
With respect, you have not shown how you arrive at the conclusion “my pet cat possesses consciousness”, you have merely stated it.
Tisthammerw said:
Not at all. My argument went as follows (some premises were implicit):
1. If my cat possesses key characteristic(s) of consciousness (e.g. perception) then my cat possesses consciousness (by definition).
2. My cat does possess those attribute(s).
3. Therefore my cat has consciousness.
Your argument is still based on an implicit assumption – step 2.
If we assume your definition of consciousness is sufficient (I dispute that it is), then how do you “know” that your cat is aware?
Your earlier argument (as I have pointed out) already implies that “perception, thought etc” are subsumed into “awareness” – thus the acid test of consciousness (according to your own definition) should be the presence not of perception alone, but of awareness alone. Can you show that your cat is indeed “aware” (you need to define aware first)?
Tisthammerw said:
So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?
Tisthammerw said:
In the physical plane, it would be the brain, would it not?
It could be, but then it’s not my thought experiment. If someone tells me he has internalised the rulebook, it is surely not up to me to guess where this internalised rulebook sits, is it?
Tisthammerw said:
The part that has internalized the rulebook is his conscious self
I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not as a part of his consciousness. Consciousness is not a fixed or a physical object; it cannot "contain" anything in permanent terms, much less a rulebook or the contents of a rulebook. Consciousness is a dynamic and ever-changing process, and as such it may gain access to information contained in physical objects (such as a rulebook), or in memories, or in sense perception, but it does not contain any such objects, and it does not contain any permanent information.
Tisthammerw said:
Perhaps we are confusing each other's terms. When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives. What do you mean by it?
His consciousness “participated in” the physical process of internalisation of the rulebook, but the rulebook does not sit “in his consciousness”. Consciousness is a dynamic and ephemeral process; it is not something that can “internalise something within itself”. What happens if we knock Searle unconscious? Is the rulebook destroyed? No, it continues to exist. When Searle regains consciousness, he can once again access the rulebook, not because his consciousness recreates it from nothing but because the rulebook now physically exists within his entity (but not in his consciousness).
moving finger said:
What part of “grasp the meaning of a word in Chinese” (ie an understanding of Chinese, by your own definition) would necessarily mean that an agent could respond to a question in English?
Tisthammerw said:
Because understanding Chinese words (as I have defined it) means he is aware of what the Chinese words mean, and thus (since he knows and understands English) he can tell me in English if he understands Chinese.
We have the same problem. By “aware” you implicitly mean “consciously aware”. If you define “awareness” as “conscious awareness” then I dispute that an agent needs to be consciously aware in order to have understanding. The internalised rulebook does NOT understand English (it is another part of Searle which “understands English”). Asking the internalised rulebook a question in English would be a test only of whether it understands English, not a test of whether it understands per se.
Tisthammerw said:
There is no part of Searle--stomach, arm, liver, or whatever--that is aware of what the Chinese words mean.
I think we keep covering the same ground. The basic problem (correct me if I am wrong) is that you define understanding as requiring conscious awareness. I dispute that. Most of our disagreement stems from that.
The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.
Tisthammerw said:
Then do you also disagree with the belief that all bachelors are unmarried? Remember what I said before about tautologies...
Are you asking whether I agree with the definitions of your terms here, or with your logic, or with your conclusion?
If we agree on the definition of terms then if we follow the same logic it is a foregone conclusion that we will agree on the conclusion. The problem is that in the case of understanding and awareness we do not agree on the definition of terms.
Tisthammerw said:
the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?
Do we agree on what exactly?
I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.

With respect

MF
 
Last edited:
  • #177
Psi 5 said:
I said yes to the poll question. If we weren't created by God and there is no metaphysical component to our intelligence, if we are nothing but biological machines, then the answer is definitely yes.

Searle is not arguing that no artificial device can understand or be conscious;
he is arguing that no device can do so solely by virtue of executing rules.


I am of the belief that we operate by knowing rules. Everything we do is governed by rules. There is a group of AI researchers who believe this too and are trying to create intelligence by loading their construct with as many rules as they can. Most of what we are is rules and facts. Rules and facts can simulate whatever there is of us that isn't rules and facts (random choice, for example, or emotion) and make AI appear to be self-aware and intelligent. If you don't believe this, name something people do that isn't or couldn't be governed by rules and facts.

Well, Searle's argument goes specifically against that conclusion.
 
  • #178
Tournesol said:
Searle is not arguing that no artificial device can understand or be conscious;
he is arguing that no device can do so solely by virtue of executing rules.
...

Well, Searle, tell me something you or anyone else does that isn't governed by rules.
 
  • #179
It's hard to say that computers can be conscious if I can't be sure that other people are.
 
  • #180
-Job- said:
It's hard to say that computers can be conscious if I can't be sure that other people are.
This is the whole point.
Unless and until we establish a "test for X", we cannot definitively say "this agent possesses X", where X could be consciousness or intelligence or understanding.
To develop a "test for X" implicitly assumes a definition of X.
And so far we seem unable to agree on definitions.

With respect.

MF
 
  • #181
I would like to point out that "intelligence" and "consciousness" are two totally different concepts. "intelligence" can be detected behaviourally, consciousness cannot (unless you REDEFINE the concept). "consciousness" means that observations are somehow "experienced", which is something completely internal to the subject being conscious, and has no effect on the behaviour of the BODY of the conscious being.
This is what makes solipsism possible: you're the only conscious being around. All other bodies around you, which you call people, BEHAVE in a certain way which is quite equivalent to how YOUR BODY behaves, but they are not necessarily *conscious*. They are intelligent, yes, because they can solve problems (= behavioural). But they are not conscious. There's no way to find out.
You could take the opposite stance, and claim that rocks are conscious. They behave as rocks, but they are conscious of their behaviour. They experience pain when you break some of their crystals. There's no way to find out, either.
We usually claim that people are conscious and rocks aren't only by analogy with our own intimate experience.
You can even have unconscious structures, such as bodies, type long texts about consciousness. That doesn't prove that they are conscious.
The problem is that many scientific disciplines have redefined consciousness into something that has behavioural aspects, such as "brain activity" or "intelligence" or other things, but that spoils the original definition, which is the internal experience of observation.
 
  • #182
vanesch said:
I would like to point out that "intelligence" and "consciousness" are two totally different concepts.
OK. But I don't think anyone suggested that they were similar concepts, did they?
vanesch said:
"intelligence" can be detected behaviourally, consciousness cannot (unless you REDEFINE the concept). "consciousness" means that observations are somehow "experienced", which is something completely internal to the subject being conscious, and has no effect of the behaviour of the BODY of the conscious being.
This is what makes solipsism possible: you're the only conscious being around. All other bodies around you, which you call people, BEHAVE in a certain way which is quite equivalent to how YOUR BODY behaves, but they are not necessarily *conscious*. They are intelligent, yes, because they can solve problems (= behavioural). But they are not conscious. There's no way to find out.
What do we conclude from this?

Properties that we define in subjective terms, such as consciousness, cannot be objectively tested. Such properties can only be inferred or assumed.

Properties that we define in objective terms, such as intelligence, can be objectively tested.

Thus : Is understanding subjective, or objective?

MF
 
  • #183
moving finger said:
Thus : Is understanding subjective, or objective?
Again, it depends on what you mean by "understanding". If by "understanding" you mean, possessing enough organized information about it so that you can use the concept you're supposed to understand in a problem-solving task, then "understanding" is part of "intelligence" and as such a more or less objective property, which is related to behavioural properties ; behavioural properties the teacher is testing to see if his students "understand" the concepts he's teaching them. This requires no consciousness.
You can also mean by "understanding" the "aha experience" that goes with a certain concept ; this is subjective of course (and can be wrong ! You can have the feeling you understand something and you're totally off), and probably related to consciousness. But it has no functional, behavioural role and is not necessary in demonstrating problem solving skills.
 
  • #184
Psi 5 said:
Well Searle, tell me something you or anyone else does that isn't governed by rules.
Following rules is not the same as existing in virtue of following rules.
You are confusing necessary conditions with sufficient conditions.
 
  • #185
Zantra said:
Tisthammerw said:
Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness
assuming sufficient technological advance, we can grant any of these characteristics to a machine, including, but not limited to: sensation, perception (learning through observation, e.g. point at a chair and say "chair")

I disagree, at least with our current architecture. Consider the Chinese Room: a successful conversation, all without understanding. I believe we can program a machine to say "chair" but I don't believe the computer will understand any more than the man in the Chinese Room understands Chinese. Note also the story of the robot and program X. Even when the “right” program is being run, he doesn’t see or hear anything going on in the outside world.


TH, your Chinese room is inflexible and does not take into account that the Chinese man, as it relates to our purpose, IS capable of learning Chinese.

Which is not something anybody is disputing. The point is that the model of a complex set of rules acting on input etc. is not sufficient. Recall also the robot and program X counterexample. Even with the "right" program being run there is still no literal understanding (as I have defined it). Unless you can disprove this counterexample (and I don't think that can be done), the belief that the above model is capable of literal understanding has no rational basis.
 
  • #186
moving finger said:
I understand. Your definition of consciousness is thus “any agent which possesses all of the characteristics of sensation, perception, thought and awareness is by definition conscious”, is that it?

It’s amazing how quickly you can (unintentionally) distort my views. Let's look at a quote from the post you just responded to:

Tisthammerw said:
Let’s recap my definition of consciousness.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness.

If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.


We would then have to ask for the definitions of each of those characteristics – what exactly do we mean by “sensation, perception, thought, and awareness”?

I'm getting a bit tired of playing the dictionary game, since we can go on like this forever (I define term A using words B, you ask what B means, I define it in terms of C, you ask what C means...). Go to www.m-w.com to look up the words. For “sensation” I mean definitions 1a and 1b. For “awareness” (look up “aware”) I mean definition 2. For “perception” (look up “perceive”) I mean definition 1a and 2. For “thought” I mean definition 1a.

Now if you still don’t know what I’m talking about even with a dictionary, I don’t know if I can help you.


If, as you say, perception, thought etc are implicitly included in “awareness”, are you now suggesting that “awareness alone necessarily implies consciousness”?

Please be careful not to distort what I am saying. I am saying that if an entity has perception, thought, etc., the entity has consciousness; I didn't say awareness in the context you used (though it could be argued that perception and thought imply some sort of awareness).



If you ask the CR (in Chinese) “are you aware of what these words mean?”, its reply will depend on how it defines “awareness”. If awareness is defined as “conscious awareness” then (if it is not conscious) it will necessarily reply “no”.

Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).


All we achieve with a tautology is the following :
“If I define understanding as requiring conscious awareness then it follows that understanding requires consciousness”
This doesn’t really tell us very much does it?

If we define bachelors as being unmarried then it follows that all bachelors are unmarried.

Maybe it doesn't tell us much, but it doesn't change the fact that the statement is true and deductively valid. And frankly, I don't think that “knowing what the words mean” is such an unusual definition for “understanding” words.


Tisthammerw said:
Not at all. My argument went as follows (some premises were implicit):
1. If my cat possesses key characteristic(s) of consciousness (e.g. perception) then my cat possesses consciousness (by definition).
2. My cat does possess those attribute(s).
3. Therefore my cat has consciousness.

Your argument is still based on an implicit assumption – step 2.
If we assume your definition of consciousness is sufficient (I dispute that it is), then how do you “know” that your cat is aware?

Yes, we all know the problem of other minds. I concede the possibility that all the world is an illusion etc. But we could say that our observations (e.g. of my cat's behavior) are sufficient to rationally infer consciousness unless we have good reason to believe otherwise. Because of the Chinese Room and variants thereof, we do have good reason to believe otherwise when it comes to computers.



The part that has internalized the rulebook is his conscious self

I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not as a part of his consciousness.

I don't know how you can disagree here, given what I described. Ex hypothesi he consciously knows all the rules, consciously carries them out etc. But as I said, perhaps we are confusing each other's terms. When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives.

Tisthammerw said:
Because understanding Chinese words (as I have defined it) means he is aware of what the Chinese words mean, and thus (since he knows and understands English) he can tell me in English if he understands Chinese.

We have the same problem. By “aware” you implicitly mean “consciously aware”. If you define “awareness” as “conscious awareness” then I dispute that an agent needs to be consciously aware in order to have understanding.

First, be careful what you attribute to me. Second, remember my definition of understanding. Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.


Tisthammerw said:
There is no part of Searle--stomach, arm, liver, or whatever--that is aware of what the Chinese words mean.

I think we keep covering the same ground. The basic problem (correct me if I am wrong) is that you define understanding as requiring conscious awareness.

My definition of understanding requires consciousness (or at least, consciousness as how I defined it).


I dispute that.

Then please read my posts again if you dispute how I have defined it (such as https://www.physicsforums.com/showpost.php?p=791706&postcount=173). Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that? As I said earlier:

Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

Well, do we? (If we do, then we may have little to argue about.)


The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.

It isn't a premise, it's a logically valid conclusion (given what I mean when I use the terms).


The problem is that in the case of understanding and awareness we do not agree on the definition of terms.

Well, this is what I mean when I use the term understanding. Maybe you mean something different, but this is what I mean. So please answer my question above.


the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

Do we agree on what exactly?

On what I just described, “i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”

Again, you may mean different things when you use the words “understanding” and “consciousness.” My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?


I agree with your logic, but I disagree with your definition of the term understanding

So is that a yes?
 
Last edited by a moderator:
  • #188
StykFacE said:
1st time post here... thought i'd post up something that causes much debate over... but a good topic. ;-) (please keep it level-minded and not a heated argument)
Question: Can Artificial Intelligence ever reach Human Intelligence?
please give your thoughts... i vote no.

It's as if saying Fake can almost be Real.

Artificial Intelligence can always mimic Human Intelligence but NEVER would Human Intelligence mimic an Artificial Intelligence!

Artificial Intelligence is modelled on Human Intelligence, whereas Human Intelligence is the model for Artificial Intelligence.

People sometimes say that machines are smarter than human beings, but hey, who makes what? I did not say "who makes whom?", since AI is certainly not a "who". Incomparable, isn't it? :smile:
 
  • #189
oh... and I forgot a TINY thing! REAL can NEVER be FAKE!
 
  • #190
oh... and I forgot a TINY thing! REAL can NEVER be FAKE!
 
  • #191
sorry to post twice...my PC hangs :smile:
 
  • #192
vanesch said:
If by "understanding" you mean, possessing enough organized information about it so that you can use the concept you're supposed to understand in a problem-solving task, then "understanding" is part of "intelligence" and as such a more or less objective property, which is related to behavioural properties ; behavioural properties the teacher is testing to see if his students "understand" the concepts he's teaching them. This requires no consciousness.
I would agree with this. I see no reason why a machine necessarily could not possesses this type of understanding.

MF
 
  • #193
Tournesol said:
Following rules is not the same as existing in virtue of following rules.
Does a machine which follows rules necessarily "not exist in virtue of following rules"?

MF
 
  • #194
Tisthammerw said:
If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.
Ah, I see now. Therefore an agent can have the characteristic only of “sensation”, but at the same time NOT be able to perceive, or to think, or to be aware, and still (by your definition) it would necessarily be conscious?
Therefore by your definition even the most basic organism which has “sensation” (some plants have sensation, in the sense that they can respond to stimuli) is necessarily conscious? I think a lot of biologists would disagree with you.
moving finger said:
If you ask the CR (in Chinese) “are you aware of what these words mean?”, its reply will depend on how it defines “awareness”. If awareness is defined as “conscious awareness” then (if it is not conscious) it will necessarily reply “no”.
Tisthammerw said:
Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).
Incorrect. If the CR also defines “awareness” as implicitly meaning “conscious awareness”, and it is not conscious, it would necessarily answer “No”.
moving finger said:
His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not as a part of his consciousness.
Tisthammerw said:
he consciously knows all the rules, consciously carries them out etc.
Here you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.
Tisthammerw said:
When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives.
Again you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.
Tisthammerw said:
Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.
You define “understanding” as requiring consciousness, thus it is hardly surprising that your definition of understanding requires consciousness! That is a classic tautology.
Tisthammerw said:
Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that?
By definition, if one chooses to define understanding such that understanding requires consciousness, then it is necessarily the case that for any agent to possess understanding it must also possess consciousness. I see no reason why a machine should not possess both consciousness and understanding. But this is not the point – I dispute that consciousness is a necessary pre-requisite to understanding in the first place.
Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?
The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.
moving finger said:
The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.
Tisthammerw said:
It isn't a premise, it's a logically valid conclusion (given what I mean when I use the terms).
Your definition of understanding is a premise.
You cannot show that “understanding requires consciousness” without first assuming that “understanding requires consciousness” in your definition of consciousness. Your argument is therefore a tautology.
Thus it does not really tell us anything useful.
moving finger said:
The problem is that in the case of understanding and awareness we do not agree on the definition of terms.
Tisthammerw said:
Well, this is what I mean when I use the term understanding. Maybe you mean something different, but this is what I mean. So please answer my question above.
I have answered your question. Now please answer mine, which is as follows :
Can you SHOW that “understanding” requires consciousness, without first ASSUMING that understanding requires consciousness in your definition of “understanding”?
(in other words, can you express your argument such that it is not a tautology?)
moving finger said:
Do we agree on what exactly?
Tisthammerw said:
On what I just described, “i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”
Using MY definition of “perceive” and “be aware”, yes, I believe computers can (in principle) perceive and be aware of what words mean.
Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?
I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.
moving finger said:
I agree with your logic, but I disagree with your definition of the term understanding
Tisthammerw said:
So is that a yes?
My full reply was in fact :
“I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.”
If by your question you mean “do I agree with your conclusion?”, then I think I have made that very clear. NO.
May your God go with you
MF
 
  • #195
moving finger said:
Does a machine which follows rules necessarily "not exist in virtue of following rules"?
MF

No, not necessarily.
 
  • #196
moving finger said:
The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.

Definitions are not things which are true and false so much
as conventional or unusual.

Conventionally, we make a distinction between understanding and know-how.
A lay person might know how to use a computer, but would probably not claim
to understand it in the way an engineer does.
 
  • #197
Let's recap some terms before moving on:

Using the Chinese Room thought experiment as a case in point, let’s recap my definition of understanding.

When I say “understand” I mean “grasp the meaning of.” When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean. When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean.


Let’s recap my definition of consciousness.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness.



moving finger said:
Tisthammerw said:
If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.

Ah, I see now. Therefore an agent can have the characteristic only of “sensation”, but at the same time NOT be able to perceive, or to think, or to be aware, and still (by your definition) it would necessarily be conscious?

Not quite. Go to http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=sensation to once again read definition 1b of sensation.


Therefore by your definition even the most basic organism which has “sensation” (some plants have sensation, in the sense that they can respond to stimuli) is necessarily conscious? I think a lot of biologists would disagree with you.

You have evidently badly misunderstood what I meant by sensation. Please look up the definition of sensation again (1b). In light of what I mean when I use the terms, it is clear that plants do not possess consciousness.


Tisthammerw said:
Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).

Incorrect. If the CR also defines “awareness” as implicitly meaning “conscious awareness”, and it is not conscious, it would necessarily answer “No”.

It would necessarily answer “Yes” because ex hypothesi the program (of the rulebook) is designed to simulate understanding, remember? (Again, please keep in mind what I mean when I use the term “understanding.”)


Tisthammerw said:
he consciously knows all the rules, consciously carries them out etc.

Here you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.

It depends what you mean by “consciously understanding the rules.” He understands the rules in the sense that he knows what the rules mean (see my definition of “understanding”). He does not understand the rules in the sense that, when he applies the rules, he actually understands Chinese.


Tisthammerw said:
Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.

You define “understanding” as requiring consciousness, thus it is hardly surprising that your definition of understanding requires consciousness! That is a classic tautology.

That's essentially correct. Note however that my definition of understanding wasn't merely “consciousness,” rather it is about knowing what the words mean. At least we (apparently) agree that understanding--in the sense that I mean when I use the term--requires consciousness.


Tisthammerw said:
Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that?

By definition, if one chooses to define understanding such that understanding requires consciousness, then it is necessarily the case that for any agent to possess understanding it must also possess consciousness. I see no reason why a machine should not possess both consciousness and understanding.

Well then let me provide you with a reason: the Chinese room thought experiment. This is a pretty good counterexample to the claim that a “complex set of instructions acting on input etc. is sufficient for literal understanding to exist.” Unless you wish to dispute that the man in the Chinese room understands Chinese (again, in the sense that I use it), which is pretty implausible.


But this is not the point – I dispute that consciousness is a necessary pre-requisite to understanding in the first place.

You yourself may mean something different when you use the term “understanding” and that's okay I suppose. But please recognize what I mean when I use the term.


Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.

Please see my response above. Additionally, Tournesol made a very good point when he said: “Definitions are not things which are true and false so much as conventional or unusual.” We both may mean something different when we use the term “understanding,” but neither of our definitions is necessarily “false.” And this raises a good question: I have defined what I mean when I use the term “understanding,” so what’s your definition?

By the way, you haven't really answered my question here. Given my definition of understanding, is it the case that computers cannot have understanding in this sense of the word? From your response regarding understanding and consciousness in machines, the answer almost seems to be “yes” but it’s a little unclear.


I have answered your question.

You didn't really answer the question here, at least not yet (you seem to have done it more so later in the post).


Now please answer mine, which is as follows :
Can you SHOW that “understanding” requires consciousness, without first ASSUMING that understanding requires consciousness in your definition of “understanding”?

Remember, tautologies are by definition true.

Can I show that understanding requires consciousness? It all depends on how you define “understanding.” Given my definition, i.e. given what I mean when I use the term, we seem to agree that understanding requires consciousness. (Tautology or not, the phrase “understanding requires consciousness” is every bit as sound as “all bachelors are unmarried”). You may use the term “understanding” in a different sense, and I'll respect your own personal definition. Please respect mine.

Now, to the question at hand:

(in other words, can you express your argument such that it is not a tautology?)

I don't know of a way how to, but I don't think it matters. Why not? The argument is still perfectly sound even if you don't like how I expressed it. What more are you asking for?


Tisthammerw said:
“i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”

Using MY definition of “perceive” and “be aware”, yes, I believe computers can (in principle) perceive and be aware of what words mean.

Well, how about my definitions of those terms? I made some explicit citations in the dictionary if you recall.


Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?

I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.

The Chinese room thought experiment, the robot and program X are very good reasons since they serve as effective counterexamples (again, using my definitions of the terms).


Tisthammerw said:
So is that a yes?

My full reply was in fact :
“I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.”

I read your reply, but that reply did not give a clear “yes” or “no” to my question. So far, your answer seems to be “No, it is not the case that a computer cannot perceive the meaning of words...” but this still isn't entirely clear since you said “No” in the following context:


Using MY definition of “perceive” and “be aware”

I was asking the question using my definitions of the terms, not yours. Given the terms as I have defined them, is the answer yes or no? (Please be clear about this.) You said:

moving finger said:
Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?

I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.

So is the answer a “No” as it seems to be? (Again, please keep in mind what I mean when I use the terms.)
 
  • #198
As someone with basic AI programming experience, my vote goes to the no camp.

An intelligence is not defined by knowledge, movement or interaction. An intelligence is defined by the ability to understand, to comprehend.

I have never seen nor heard of an algorithm that claims to implement understanding. I have thought about that one for years and I still don't know where I would even begin.
 
  • #199
As someone with AI programming experience as well, I'd have to say yes, though I don't think we're on the right path in the industry at the moment.

Programmers and those who understand human emotion are almost mutually exclusive. That's the real reason we've not seen artificial intelligence become real intelligence yet, IMHO. Most people excel at either emotional or logical pursuits and believe their method superior to the other. Software engineers lean toward logic.

IMO emotion is the key to actual intelligence.

To think that somehow no other intelligence can arise is just a vestige of geocentrism or otherwise human-centric beliefs that have been around since man first walked the earth. "Nothing can be as good as us, ever."

Basically this argument is almost religious in nature. Are we just machines made from different material than we're used to seeing machines made of, or are we somehow special?

Are we capable of creating AI that is no longer artificial in the sense that it can match some insect intelligence? Yes we can. Can we see examples of intelligence that are at every stage between insect and human? Yes we can, if you keep up with scientific news.

So someone tell me how this is not just a question of: Are humans super special in the universe or just another animal? Just another complex meat machine...

Know your own motivations behind your beliefs and you may find your beliefs changing.



Oh, and by the way, I do have a vague idea where to start: pleasure and displeasure. We first have to set up what millions of years of survival of the fittest have boiled down to a single sliding scale: the basis of motivation. A computer has no motivation.
The ability to change certain parts of the self would be part of the next step (while abhorrence to changing the core must be high on the displeasure list).

Truth tables in which things link together, and links of experience or trusted sources become a sliding scale of truth or falsehood.

Faith: the ability to test and use something that is not fully truth as though it were.

The reason gambling and any other random-success situations become obsessive is that intelligence constantly searches for black and white: to be able to set an 83% to a virtual 100%.
The black-and-white search is the reason for the "terrible twos" in children. They simply want to set in stone the truth of what they can and cannot do. They need that solid truth to make the next logical leap. (You have to take for granted that a chair will hold you before you can learn how to properly balance to stand on a chair.) They make tests that stand upon what they consider "facts" (a virtual 100%), though nothing is ever truly 100% truth. When parents reward and discipline at random, the child must hold as truth the only reliable thing it has: its own feelings. The child's mind is forever scarred with the inability to grasp truth that lies outside itself. (And those of you not overly politically correct will notice the intelligence gap in children that are poorly trained.)

Pigeons given an item that releases food every time they peck it will peck it only when they need food. Given the same situation except that it drops food at random, the bird will become obsessed and create a pile of food as it tries to determine reliability and truth.


Human and animal intelligence is the model; we just haven't identified all the pieces. We haven't fully quantified what emotion is and does and why it developed. (Though I have some good conjecture I'll keep to myself.)
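(An illustrative aside: the "single sliding scale" of pleasure/displeasure and the "sliding scale of truth" described in the post above can be pictured as a toy agent that keeps one motivation number and a per-belief confidence, treating any confidence above some threshold (the "83% to a virtual 100%") as settled. Everything in the sketch below, including the class, the update rule and the 0.83 threshold, is a hypothetical illustration, not something proposed in this thread or taken from a real AI system.)

```python
import random

# Toy sketch of a single pleasure/displeasure scale plus per-belief confidences.
# All names and numbers are hypothetical illustrations.

class ToyAgent:
    def __init__(self):
        self.motivation = 0.0   # one sliding scale of pleasure/displeasure
        self.beliefs = {}       # proposition -> confidence in [0, 1]

    def experience(self, proposition, outcome_good, weight=0.1):
        """Nudge one belief's confidence and the motivation scale after an experience."""
        old = self.beliefs.get(proposition, 0.5)          # start out undecided
        target = 1.0 if outcome_good else 0.0
        self.beliefs[proposition] = old + weight * (target - old)
        self.motivation += 0.1 if outcome_good else -0.1

    def acts_as_true(self, proposition, threshold=0.83):
        """'Faith': treat a confidence above the threshold as a virtual 100%."""
        return self.beliefs.get(proposition, 0.5) >= threshold

agent = ToyAgent()
# Reliable feeder: confidence climbs steadily and settles.
for _ in range(50):
    agent.experience("pecking the lever releases food", outcome_good=True)
# Random feeder: confidence hovers near the middle and never settles.
for _ in range(50):
    agent.experience("the random lever releases food", outcome_good=random.random() < 0.5)

print(agent.acts_as_true("pecking the lever releases food"))   # True
print(agent.acts_as_true("the random lever releases food"))    # almost certainly False
```

With the reliable feeder the confidence clears the threshold and the agent can stop testing; with the random feeder it never settles, which loosely mirrors the pigeon example above.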
 
  • #200
You are getting down to the definition of self and self-awareness.

Emotion perhaps is a method of generating an intelligence; however, it is still only a 'responsive mechanism'. That is, a change in emotion represents a reaction to something.

I think the layer we would be interested in would be above that, which can comprehend something that will cause an emotional change.

So, I would have to say that emotions are not going to lead to that breakthrough, as emotion and intelligence are radically different concepts.

AI is trying to create a self-awareness; this must be able to self-analyse in the third person and comprehend it. Such a thing is not possible; even using neural nets and fuzzy logic I have never seen even a simplistic algorithm.

I feel that the main problem with AI is that researchers have never really answered a basic set of questions:

1. What is intelligence?
2. What is the role of the universal framework (physics, etc) in the manifestation of intelligence?
3. What is self?
4. How do I recognise what self is?
5. How can I mimic it?

Unless accurate answers are established for the basic questions, any further research is just shots in the dark.
 
  • #201
TheAntiRelative said:
So someone tell me how this is not just a question of: Are humans super special in the universe or just another animal? Just another complex meat machine...

See the Chinese room thought experiment, in addition to the story of the robot and program X (explained earlier in this thread).


Oh and by the way, I do have a vague idea where to start. Pleasure and displeasure.

Good luck creating an algorithm that implements consciousness, understanding, pleasure etc.

It's not that it's impossible to artificially create something with literal understanding. I imagine we humans could genetically engineer new organisms that possess consciousness, for instance. But some methods (like "having the right program") just don't seem to work.
 
  • #202
Tisthammerw said:
Let's recap ...
You are covering old ground here, Tisthammerw.

We can never agree on your conclusions, because we do not agree on your premise.

Like you, I can also construct a tautological argument which shows exactly what I want it to show, but that proves nothing useful (indeed is a waste of time).

With respect, if you want to be taken seriously you need to SHOW that understanding requires consciousness, without using a tautological argument, and without assuming your conclusion in your definition.

If you can do this, we might get closer to agreement on your conclusions.

If you cannot do this, all you have is a tautological argument, which tells us nothing useful, and is wasting both your time and mine.

Until then...

May your God go with you

MF
 
  • #203
MooMansun said:
As someone with basic AI programming experience, my vote goes to the no camp.
An intelligence is not defined by knowledge, movement or interaction. An intelligence is defined by the ability to understand, to comprehend.
I have never seen nor heard of an algorithm that claims to implement understanding. I have thought about that one for years and I still don't know where I would even begin.
I think that understanding and comprehending are merely adding to the rule set and knowledge base using the rules already known. Those who understand and comprehend better just have a better rule set to start with. Comparing the human brain to current AI using current technology is like comparing a 3.3 GHz P4 to a Z80, only much more so.
Look at Sherlock Holmes for example (yes, I know he's a fictional character, but real crime solvers work the same way). He solved his cases by having a tremendous knowledge base, not by intuitive leaps of understanding. To make a hardware analogy, he was operating with a 3.3 GHz P4 with 2 GB of RAM and a couple of terabytes of storage while everyone else was using a Z80 with 256K of memory. You people doing AI are in effect using Z80s and trying to emulate a P4, so don't expect a Sherlock Holmes.
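(An illustrative aside: "adding to the rule set and knowledge base using the rules already known" is roughly what a forward-chaining rule engine does. The facts and rules below are made-up examples, offered only as a minimal sketch of that idea, not as anything Psi 5 actually specified.)

```python
# Minimal forward-chaining sketch: keep deriving new facts from the rules already
# known until nothing new can be added. Facts and rules are hypothetical examples.

facts = {"has_fur(cat)", "drinks_milk(cat)"}
rules = [
    # (set of premises, conclusion)
    ({"has_fur(cat)", "drinks_milk(cat)"}, "is_mammal(cat)"),
    ({"is_mammal(cat)"}, "is_animal(cat)"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # the knowledge base grows using rules it already has
            changed = True

print(sorted(facts))
# ['drinks_milk(cat)', 'has_fur(cat)', 'is_animal(cat)', 'is_mammal(cat)']
```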
 
Last edited:
  • #204
moving finger said:
You are covering old ground here, Tisthammerw.
We can never agree on your conclusions, because we do not agree on your premise.

And what premise would that be? The definition of understanding? Again, you may mean something different when you use the term. And as I said earlier, Tournesol made a very good point when he said: “Definitions are not things which are true and false so much as conventional or unusual.” We both may mean something different when we use the term “understanding,” but neither of our definitions is necessarily “false.”

I am arguing that a computer (via complex set of instructions acting on input etc.) cannot have literal understanding in the sense that I have defined it. Do you disagree or not?


With respect, if you want to be taken seriously you need to SHOW that understanding requires consciousness, without using a tautological argument

I did show that understanding requires consciousness, at least “understanding” as I use the term. The kind of understanding I am talking about requires consciousness (admittedly, for some other definitions of understanding this is perhaps not the case). Is my argument a tautology? It's every bit the tautology that “all bachelors are unmarried” is. But that still doesn't change the fact that my argument is logically sound (just as the phrase “all bachelors are unmarried” is). Understanding in the sense that I am referring to clearly requires consciousness, and I have demonstrated this. So why are you complaining?
 
  • #205
Tisthammerw said:
And what premise would that be?
Your premise that understanding requires consciousness
Tisthammerw said:
I am arguing that a computer (via complex set of instructions acting on input etc.) cannot have literal understanding in the sense that I have defined it. Do you disagree or not?
I have answered this several times already, but you seem not to understand. I disagree with your conclusion because I disagree with your premise. Period.

What part of "I disagree with your conclusion" do you not understand?

Tisthammerw said:
Is my argument a tautology? ... But that still doesn't change the fact that my argument is logically sound

A logically valid argument does not necessarily make for a true conclusion. The premises also need to be true. And you have not shown the premises to be necessarily true, except by "definition".

A logically valid argument is nevertheless fallacious if it is an example of "circulus in demonstrando", which basically means that one assumes as a premise the conclusion which one wishes to reach. Your argument may be as logical as you like, but if your conclusion is already contained in one of your premises then all you have achieved is "circulus in demonstrando".

As I said, I can play that game as well, but it is pointless, and I have better things to do.

May your God go with you

MF
 
Last edited:
  • #206
Tournesol said:
Definitions are not things which are true and false so much
as conventional or unusual.
A premise is either true or false.
When a definition takes the form of a premise in a logical argument then it is necessary that that premise (definition) be accepted as either true or false.
I dispute the truth of the premise "understanding requires consciousness".

Can anyone show this premise is necessarily true (without using a tautological argument)?

A tautological argument is an example of "circulus in demonstrando", which basically means the argument is fallacious because one assumes as a premise the conclusion which one wishes to reach.

MF
 
Last edited:
  • #207
Tisthammerw said:
I'll try again.
The Chinese Room
Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.
The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (via using a complex set of instructions acting on input), he does not understand Chinese at all.
The Chinese room shows that having a complex system of rules acting on input is not sufficient for literal understanding to exist. We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?
(Remember, variants of the Chinese room include the system of rules being complex, rewritable etc. and yet the man still doesn’t understand a word of Chinese.)
I believe that literal understanding (in addition to free will) requires something fundamentally different--to the extent that the physical world cannot do it. The soul is, and provides, the incorporeal basis of oneself.
Grasping the meaning of the information. It is clear from the Chinese room that merely processing it does not do the job.
By all means, please tell me what else a potential AI has other than a complex set of instructions to have literal understanding.
I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy to real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...
And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.
...

This thought experiment doesn't seem to mean much. Like a Zeno 'paradox', you have set out rules designed to cause failure (lack of understanding). A human in that room wouldn't learn much more than an AI would, because this is not how we learn. For a human to learn Chinese or any language, he is shown what a word refers to or what it does. If the word is chair, he is shown several chairs. He is then shown that similar objects may have different names as well, such as sofa or rocker. Then he is shown that the rocker rocks, and so not only learns the noun rocker but also the verb rock.

All of this creates visual memories associated with words. Other words like 'of' and 'to' carry little meaning on their own but are learned, by rule, to be used properly in context. This is why I say that AI using current technology is extremely primitive compared to the human brain. We can store vast amounts of data in the form of image recognition, and this is a major component of understanding; it too could be simulated by a computer that was good enough (read the book about Helen Keller, or watch the movie, to appreciate this). Computers aren't even close to being that powerful yet, so we are still comparing apples and oranges.

Current computational power is not only vastly inferior in capacity, it is probably still vastly different in kind as well. But the brain is still a computer, and we will eventually be able to simulate it in hardware; then AI will start to be more human. If you doubt this, imagine what someone would have said 40 years ago if you described a Pentium 4 to them: smaller than the palm of your hand, less than a tenth of an inch thick, yet carrying many millions of transistors in a two-dimensional array. The human brain isn't two-dimensional; its array of 'transistors' is three-dimensional. That is why I say that current technology is also different in kind.
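A rough sketch of the kind of association being described (the class, the method names and the feature tags below are all invented for illustration, and real perception would of course start from raw sensor data rather than ready-made labels):

```python
# Toy sketch of grounding words in perceived examples: each word is paired
# with the "scenes" it has been used in, and a new scene is named by overlap.
from collections import defaultdict

class ToyLearner:
    def __init__(self):
        # word -> set of perceptual episodes (here, frozensets of feature tags)
        self.examples = defaultdict(set)

    def show(self, word, percept):
        """Point at something and say the word: store the pairing."""
        self.examples[word].add(frozenset(percept))

    def name(self, percept):
        """Return the stored word(s) whose examples best overlap the new percept."""
        percept = frozenset(percept)
        scores = {w: max(len(percept & ex) for ex in exs)
                  for w, exs in self.examples.items()}
        best = max(scores.values(), default=0)
        return [w for w, s in scores.items() if s == best and best > 0]

learner = ToyLearner()
learner.show("chair", {"four legs", "seat", "back"})
learner.show("rocker", {"four legs", "seat", "back", "curved runners"})
print(learner.name({"seat", "back", "curved runners"}))  # -> ['rocker']
```

Even in this toy version the word "rocker" is tied to stored perceptual examples rather than to another string of symbols, which is the difference in kind the paragraph above is pointing at.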
 
  • #208
The Chinese Room
Let’s show exactly where the argument falls down:
Tisthammerw said:
Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.
The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (via using a complex set of instructions acting on input), he does not understand Chinese at all.
The Chinese room shows that having a complex system of rules acting on input is not sufficient for literal understanding to exist.
This is the point at which the argument becomes fallacious.
It has in fact not been “shown” that understanding of Chinese does not exist in the system “The Chinese Room”. In the argument as presented it is merely assumed that understanding of Chinese does not exist in the system “The Chinese Room” (presumably the author assumes this because the man, who is but one component of the system, does not understand Chinese).
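To see exactly what the argument does and does not establish, it helps to spell out what the man actually does. The rulebook is nothing more than a syntactic mapping from input strings to output strings; a minimal sketch (the rule entries, the Chinese phrases and the function name below are invented purely for illustration):

```python
# Toy model of the rulebook: a purely syntactic mapping from input symbol
# strings to output symbol strings. The entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气好吗？": "今天天气很好。",  # "Is the weather good today?" -> "The weather is very good."
}

def answer(question: str) -> str:
    """The man's task: match the incoming characters against the rulebook and
    copy out the listed reply. No step here consults what any symbol means."""
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(answer("你好吗？"))  # emits the scripted reply
```

Nothing in the lookup consults the meaning of any symbol; the open question is whether understanding of Chinese can nevertheless be ascribed to the system as a whole (man, rulebook, paper and all) rather than to the man alone, and that is precisely what the argument assumes rather than shows.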
Tisthammerw said:
We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes.
We have already shown above that the supposed “point” (i.e. that there is no understanding in the system “The Chinese Room”) is assumed, not shown to be necessarily the case.
Tisthammerw said:
It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?
As we have seen, it has not been shown that there is necessarily no understanding in the system “The Chinese Room”, therefore it is not clear that anything else is in fact needed.
Tisthammerw said:
(Remember, variants of the Chinese room include the system of rules being complex, rewritable etc. and yet the man still doesn’t understand a word of Chinese.)
The fact that the man (one component of the system) does not understand Chinese is, as we have seen, not relevant when considering the question “is there understanding of Chinese in the system “The Chinese Room”?”
Tisthammerw said:
I believe that literal understanding (in addition to free will) requires something fundamentally different--to the extent that the physical world cannot do it.
“I believe” is a statement of opinion rather than of fact; it does not constitute an acceptable part of the logical argument presented, and therefore this statement can be set aside.
With respect
MF
 
  • #209
Tisthammerw said:
When I say “understand” I mean “grasp the meaning of.” When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean. When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean.

I think you could make the same point much more clearly by saying "he knows what the Chinese words mean (uses them correctly) and he knows that he knows".
 
  • #210
moving finger said:
Tisthammerw said:
And what premise would that be?

Your premise that understanding requires consciousness

That's not really a premise; it's a conclusion. And it's not really a “premise” to be disputed because it is an analytic statement (given what I mean when I use the terms). My definition of understanding requires consciousness. Do we agree? Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that. Does your definition of understanding require consciousness? I'm not claiming that either. But understanding in the sense that I use it would seem to require consciousness. Do we agree? It seems that we do. So why are we arguing?


Tisthammerw said:
I am arguing that a computer (via complex set of instructions acting on input etc.) cannot have literal understanding in the sense that I have defined it. Do you disagree or not?

moving finger said:
I have answered this several times already, but you seem not to understand. I disagree with your conclusion because I disagree with your premise. Period.

That really doesn't answer my question (I’m assuming you’re not so foolish as to disagree with an analytic statement). Is it the case that computers cannot understand in the sense that I am using the term? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.


moving finger said:
What part of "I disagree with your conclusion" do you not understand?

It's pretty unclear why you disagree with it (if you really do). Can computers understand in the sense that I mean when I use the term? Again, simply claiming that “I use the word ‘understanding’ in a different sense” does nothing to answer my question here.


Tisthammerw said:
Is my argument a tautology? It's every bit the tautology that “all bachelors are unmarried” is. But that still doesn't change the fact that my argument is logically sound (just as the phrase “all bachelors are unmarried” is).

moving finger said:
A logically sound argument does not necessarily make for a true conclusion.

Okay, obviously you don't understand the terminology here. An argument being deductively valid means that if the premises are true then the conclusion must be true also. It is impossible for a valid argument to have true premises and a false conclusion. An argument being deductively invalid means that the conclusion doesn't logically follow from the premises; the conclusion can still be false even if all the premises are true. Another term for the conclusion not logically following from the premises is non sequitur. A sound argument is a deductive argument that is valid and has all of its premises true. Thus, a logically sound argument necessarily makes for a true conclusion, by definition.
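For instance, schematically (the particular premises here are just made-up illustrations):

```latex
% Valid but unsound: the form is correct, yet a premise is false.
\begin{align*}
&\text{P1: All cats are reptiles.}        &&\text{(false)}\\
&\text{P2: Tom is a cat.}\\
&\text{C:~~ Tom is a reptile.}            &&\text{(valid, but not sound)}\\[1ex]
&\text{P1: All bachelors are unmarried.}  &&\text{(true by definition)}\\
&\text{P2: John is a bachelor.}           &&\text{(assume true)}\\
&\text{C:~~ John is unmarried.}           &&\text{(valid and sound, so C must be true)}
\end{align*}
```

Both arguments share the same valid form; only the second is sound, because only its premises are all true.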

Actually, it’s not even much of a deductive argument (at least not in the usual sense) because “understanding requires consciousness” is an analytic statement (given my definitions).


moving finger said:
The premises also need to be true. And you have not shown the premises to be necessarily true, except by "definition".

Well, if the premises are true by definition then they are necessarily true.


moving finger said:
A logically sound argument is nevertheless fallacious if it is an example of "circulus in demonstrando", which basically means one assumes as a premise the conclusion which one wishes to reach. Your argument may be as logical as you like, but if your conclusion is already contained in one of your premises then all you have achieved is "circulus in demonstrando".

...

moving finger said:
A tautological argument is an example of "circulus in demonstrando", which basically means the argument is fallacious because one assumes as a premise the conclusion which one wishes to reach.

Please understand what's going on here. Is the tautology “all bachelors are unmarried” a fallacious argument and "circulus in demonstrando"? Obviously not. Again, tautologies are by definition true, so it hardly makes sense to oppose one. Analytic statements are not fallacious.
 