Can Artificial Intelligence ever reach Human Intelligence?

In summary: If we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) AI can never reach human intelligence because it would require a decision-making process that a computer cannot replicate.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #106
TheStatutoryApe said:
Tist, you seem to make quite a few assumptions without much other reason than "There must be something more."
Why exactly must it be that there is something more?
Why is a complex, mutable, and rewritable system of rules not enough to process information like a human?

I'll try again.

The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (by using a complex set of instructions acting on input), he does not understand Chinese at all.
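To make the setup concrete, here is a minimal sketch (purely illustrative, not from the original discussion) of what the rulebook amounts to: a syntactic lookup from incoming strings of characters to outgoing ones, with meaning represented nowhere.

```python
# A toy "rulebook": purely syntactic rules mapping input symbols to output symbols.
# The entries are invented for illustration; no meaning is represented anywhere.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(slip: str) -> str:
    """Follow the rulebook: if you see X, write down Y. Nothing else."""
    return RULEBOOK.get(slip, "对不起，我不明白。")  # default: "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # valid-looking output, zero comprehension by the rule-follower
```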

The Chinese room shows that having a complex system of rules acting on input is not sufficient for literal understanding to exist. We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

(Remember, variants of the Chinese room include the system of rules being complex, rewritable etc. and yet the man still doesn’t understand a word of Chinese.)


What does the soul do that is different than this?

I believe that literal understanding (in addition to free will) requires something fundamentally different--so different that the physical world cannot supply it. The soul is, and provides, the incorporeal basis of oneself.


Let's hit that first. What is "understanding" if not a manner of processing information?

Grasping the meaning of the information. It is clear from the Chinese room that merely processing it does not do the job.


To this I am sure that you will once again invoke your chinese room argument, but your chinese room does not allow the potential AI any of the freedoms of a human being.

By all means, please tell me what else a potential AI has other than a complex set of instructions to have literal understanding.


You ask what else do you add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside so that the language it is receiving has a context. Also give it the ability to learn and form a base of experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why?

I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy to real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.


Note: the text below goes off topic into the realm of the soul

Simply because the AI homunculus doesn't possess the same magical ball of yarn that your soul homunculus has?

Actually, my point is that the soul is the figurative "magical ball of yarn." Physical processes seem completely incapable of producing real understanding; something fundamentally different is required.


Does it somehow already supernaturally know how to understand brainspeak?

This is one of the reasons why I believe God is the best explanation for the existence of the soul; the incorporeal would have to successfully interact with a highly complex form of matter (the brain). The precise metaphysics may be beyond our ability to discern, but I believe that this is how it came to be.


What is the fundamental difference between the situations of the chinese room homunculus and the soul homunculus?

The soul provides that “something else” that mere computers don't have.
 
  • #107
Zantra said:
Ape seems to have beaten me to the punch. If we show the man in the room how to translate Chinese, that says to me that he is able to understand the language he is working with. No further burden of proof is required.

I've already responded to this. While your idea may sound good on paper, watch what happens when we try to instantiate this analogy into a real computer.

You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.


Your assumption is that the rules are static and can't be changed.

Not at all. Variants of the Chinese room include learning algorithms and the creation of different procedures (the man has extra paper to write down more information etc.) as I illustrated before (when the Chinese room "learns" a person's name).


That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):

And in response to that I say that the human mind is nothing more than an organic mirror of its CPU counterpart: processing input, interpreting the data, and outputting a response.

...

Essentially a CPU emulates the human brain in terms of processing information.

And that is still question begging based on what we've learned from the Chinese room, and it still doesn't answer my question of "what else" a computer has besides using a complex set of rules acting on input in order to literally understand.


It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, and thus not anything fundamentally different from the Chinese room (the man using a complex set of instructions). They are not at all an answer to the question of "what else," besides a complex set of instructions acting on input, a computer has that would give it literal understanding.
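To see why a "learning" rule is still just a rule, here is a hypothetical sketch of the kind of pattern-matching procedure such a rulebook could contain; the class and its rules are invented for illustration. It stores whatever follows "My name is" and echoes it back on request, without any grasp of what a name is.

```python
import re

class NameLearningRoom:
    """A toy 'learning' rulebook: store a string, repeat it on request.
    Entirely hypothetical; it illustrates rule-following, not understanding."""

    def __init__(self):
        self.memory = {}

    def reply(self, line: str) -> str:
        match = re.search(r"[Mm]y name is (\w+)", line)
        if match:
            self.memory["name"] = match.group(1)  # stores a symbol, nothing more
            return f"Hello {match.group(1)}."
        if "learned my name" in line:
            return "Yes."
        if line.strip() == "What is it?":
            return self.memory.get("name", "I don't know.")
        if "How are you" in line:
            return "Just fine. What is your name?"
        return "I see."

room = NameLearningRoom()
for line in ["How are you doing?", "My name is Bob.",
             "You've learned my name?", "What is it?"]:
    print(line, "->", room.reply(line))
```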

The room understands that his name is Bob. What more needs to be known about Bob? That's an example of a current AI program. I can probably find something like that online. But what if the conversation went a little differently, i.e.:

Human: How are you today?
Room: I'm lonely. What is your name?
Human: My name is Bob. Why are you lonely?
Room: Nice to meet you Bob. You are the first person I have met in 2 years.
Human: I can understand why you are lonely. Would you like to play a game with me?
Room: I would like that very much.

The computer appears to have more of a "soul".

And so does the room. Nonetheless, the person in the room doesn't know the man's name is Bob, isn't necessarily feeling lonely, doesn't even understand Bob's words at all etc. We still just have a complex set of rules operating on input, which I've shown is insufficient for literal understanding to exist.


I use no special definition. Understanding means "to grasp the meaning of."

Ok then by that definition, a computer is fully capable of understanding.

The Chinese room thought experiment would seem to disprove that statement--unless you can show me what else a computer has besides a complex set of rules etc. that would make it literally understand.


The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

You keep alluding to your own magic ball of yarn. What is this magical property that you keep hinting at but never defining?

I have repeatedly pointed out that computers manipulating input via a set of instructions is not sufficient to produce understanding. My question: "what else do you have?" That's for you to answer, not me. I claim there is nothing you can add to the computer to make it literally understand.


Note: going off topic to the soul realm

What is this thing that humans have that machines cannot possess?

A soul.


Are you talking about a soul?

Yes.


What is a soul exactly?

The incorporeal basis of oneself.


How about curiosity? If we design a machine that is innately curious, doesn't that make him strikingly human in nature?

I believe we can make a machine "strikingly human in nature" in the sense that the machine can mimic human behavior--just as the Chinese room can mimic a person fluent in Chinese. But that does not imply the existence of literal understanding.


You have to change your way of thinking. Sentience can be had in a machine.

Rather question begging in light of the Chinese room, especially when you can't answer my question: what else could a computer possibly add for it to possess literal understanding?

Apparently nothing.
 
  • #108
What complex rule? Learning algorithms don't use logic rules in the sense of language.
 
  • #109
neurocomp2003 said:
What complex rule? Learning algorithms don't use logic rules in the sense of language.

Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be and in fact are “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.
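As a purely illustrative sketch of that point (the instruction names LOAD, ADD, STORE, and PRINT are invented here, not taken from any real instruction set), a program can be viewed as nothing but a list of register-level instructions executed mechanically:

```python
# A toy register machine: a "program" is just a list of instructions
# executed mechanically, with no meaning attached to the values.

def run(program, registers=None):
    registers = registers or {}
    memory = {}
    for op, *args in program:
        if op == "LOAD":          # LOAD reg, value: put a value into a register
            registers[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, a, b: dst = a + b
            registers[args[0]] = registers[args[1]] + registers[args[2]]
        elif op == "STORE":       # STORE addr, reg: copy a register into memory
            memory[args[0]] = registers[args[1]]
        elif op == "PRINT":
            print(registers[args[0]])
    return registers, memory

# "Add this value and that": 2 + 3, stored and printed, rule by rule.
run([("LOAD", "r1", 2), ("LOAD", "r2", 3),
     ("ADD", "r3", "r1", "r2"), ("STORE", 0x10, "r3"), ("PRINT", "r3")])
```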
 
  • #110
Your entire argument still revolves around, "There must be something more."
You still limit the homunculus in the Chinese room even though your soul of which you speak is the same thing except you have given it a magic ball of yarn. Just a magic ball of yarn with no explanation as to what the ball of yarn does, what it's made of, or how it works. So if I told you that all I have to do is give a computer "AI" for it to be sentient, would you believe me? Wouldn't you ask me what this "AI" does and how it does it? If I simply told you that it's the fundamental element of computer sentience that gives it "free will" and "understanding", would you be satisfied?
This is as much information as you have given us regarding this soul. You simply say that it must exist for there to be "free will" and "understanding"; hence, since humans have "free will" and "understanding", this soul obviously exists! This argument is completely useless and a classic example of bad logic.

Do you realize that Searle, who came up with the Chinese room, didn't argue for a soul? He argued for what he calls intrinsic intentionality, which it seems is just as vague a notion as the soul which you argue for. You would call it "free will" most likely, but Searle doesn't postulate that a soul is necessary for free will.

But what about current AI computers that outperform what most people ever thought they would be able to do? Deep Blue beat Kasparov (the world champion chess player). How does a machine do that without being genuinely intelligent? It would have to make decisions and produce meaningful output, wouldn't it?
I have a cheap computer program that plays Go. That's a complex (even more so than chess) Japanese/Chinese strategy game. One day I was playing the computer and found that I had gotten the better of the computer pretty well in a certain part of the board. I decided to backtrack the moves and play that bit over again and see if there were possibly any better moves to be made. After playing with different options and being satisfied that I had made the most advantageous moves in that situation, I tried playing the original sequence to get to where I was before I had backtracked. The computer, though, decided it was going to do something completely different than it had the first time. If the computer has no "understanding" whatsoever of what is going on, then how does it make decisions to make differing responses to the same set of circumstances? And this is just a cheap program that isn't very good.
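For what it's worth, one mundane way a program can answer the same position differently is randomized tie-breaking among equally scored candidate moves. A toy sketch, with an invented evaluation function, assuming nothing about how the actual Go program works:

```python
import random

def choose_move(position, legal_moves, evaluate):
    """Score each legal move with an evaluation function and pick randomly
    among the moves that tie for the best score (hypothetical illustration)."""
    scored = [(evaluate(position, m), m) for m in legal_moves]
    best = max(s for s, _ in scored)
    best_moves = [m for s, m in scored if s == best]
    return random.choice(best_moves)  # same position, possibly a different answer each run

# Toy example: three candidate moves, two of them equally good.
print(choose_move("position", ["A", "B", "C"],
                  lambda pos, m: {"A": 1, "B": 3, "C": 3}[m]))
```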
 
  • #111
Ah, so your definition of a rule is any rule... fair enough. I was under the impression your def'n of rule was something grander than bit logic. Anyway, going back to consciousness... isn't it just the product of interactions/collisions of physical objects... or are you saying that there exists this mysticism known as a soul that exists outside of any physical object (known/unknown to man) that exists in our universe?
 
  • #112
Tisthammerw said:
Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be and in fact are “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.
How is this any different from the human body and brain? The signals that our brain receives aren't in English, nor are the outputs that it gives. Like I've been trying to show you, just put a little man inside the brain (you can even call it a soul if you'd like) and you will have the exact same situation that you have been giving us regarding the Chinese room.

---edit---

I wouldn't be surprised if those who try to negate the idea of free will and of a human being as more than the sum of its parts would use a version of the Chinese room argument to make their case.
 
  • #113
Here is a short, simple essay discussing the way humans think. One of Searle's arguments is that the homunculus in the Chinese room can only learn syntactic rules but not semantic rules based on its situation. After reading this article it occurred to me that Searle is, perhaps unintentionally, proposing an underlying essence to the Chinese words and characters by bringing in the element of semantic rules as a baseline for comprehension. If you go a layer or two deeper on the matter of semantic rules, though, you'll quickly realize that even the semantic rules are based on a form of syntactic rule. That is to say, the syntax of experiential information creates the semantic rule.
Semantics are in reality rooted in the syntax which Searle contends is the only thing that computers "understand". The computer's capacity of only being able to "understand" syntax is the very basis of his argument. THAT is the gaping hole in Searle's Chinese room argument. At its base all cognition comes from syntax.

HA! I feel so much better now that I was finally able to pinpoint what it is that made the argument seem so illogical to me.
 
  • #114
TheStatutoryApe said:
Your entire argument still revolves around, "There must be something more."

Yes, and the Chinese room thought experiment (see post #106) would seem to illustrate that point rather nicely. You still haven’t found a way to overcome that problem.


You still limit the homunculus in the Chinese room even though your soul of which you speak is the same thing except you have given it a magic ball of yarn.

Not quite. The soul is the incorporeal basis for the self, consciousness, understanding, and sentience. Using our yarn metaphor, the soul is the “magic ball of yarn.”


Do you realize that Searle, who came up with the Chinese room, didn't argue for a soul?

Yes I do. Searle was a physicalist. But that doesn't alter my points. It still seems that a computer lacks the means to possess literal understanding, and it still seems that the Chinese room thought experiment is sound.


But what about current AI computers that outperform what most people ever thought they would be able to do? Deep Blue beat Kasparov (the world champion chess player). How does a machine do that without being genuinely intelligent?

In this case, it did so using a complex set of rules (iterative deepening search with alpha-beta pruning, etc.) acting on input. I myself have made an AI that could beat many players at a game called Nim. Nonetheless, it still doesn't overcome the point the Chinese room makes: a complex set of rules operating on input is insufficient for literal understanding. So what else do you have?
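Since Nim came up, here is what such a rule set can look like in full. The classic winning strategy is just an XOR ("nim-sum") computation over the heap sizes; this is a sketch of that standard rule, not the poster's actual program:

```python
from functools import reduce
from operator import xor

def nim_move(heaps):
    """Return (heap_index, new_size) for a winning Nim move if one exists,
    using the classic nim-sum (XOR) rule; otherwise take one object from
    the first non-empty heap. Pure rule-following, no 'understanding'."""
    nim_sum = reduce(xor, heaps)
    if nim_sum != 0:
        for i, h in enumerate(heaps):
            target = h ^ nim_sum
            if target < h:
                return i, target            # winning move: make the nim-sum zero
    for i, h in enumerate(heaps):
        if h > 0:
            return i, h - 1                 # losing position: any legal move
    raise ValueError("no objects left")

print(nim_move([3, 4, 5]))  # e.g. reduce heap 0 from 3 to 1 (nim-sum becomes 0)
```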


It would have to make decisions and produce meaningful output, wouldn't it?
I have a cheap computer program that plays Go. That's a complex (even more so than chess) Japanese/Chinese strategy game. One day I was playing the computer and found that I had gotten the better of the computer pretty well in a certain part of the board. I decided to backtrack the moves and play that bit over again and see if there were possibly any better moves to be made. After playing with different options and being satisfied that I had made the most advantageous moves in that situation, I tried playing the original sequence to get to where I was before I had backtracked. The computer, though, decided it was going to do something completely different than it had the first time. If the computer has no "understanding" whatsoever of what is going on, then how does it make decisions to make differing responses to the same set of circumstances?

Like many programs, it uses a complex set of instructions acting on input. Don't forget that the Chinese room can emulate these very same features (e.g. making different responses with the same question etc.) given the appropriate set of rules.
 
  • #115
neurocomp2003 said:
Ah, so your definition of a rule is any rule... fair enough. I was under the impression your def'n of rule was something grander than bit logic. Anyway, going back to consciousness... isn't it just the product of interactions/collisions of physical objects

I believe the answer is no.


...or are you saying that there exists this mysticism known as a soul that exists outside of any physical object (known/unknown to man) that exists in our universe?

I can only speculate as to the precise metaphysics behind it, but it seems clear to me that the mere organization of matter is insufficient for producing consciousness and free will. Therefore, such things having an incorporeal basis is the only logical alternative.
 
  • #116
TheStatutoryApe said:
Tisthammerw said:
Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be and in fact are “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.

How is this any different from the human body and brain?

If you recall, I believe there is an incorporeal basis for consciousness and understanding for human beings. Otherwise I think you're right; there really is no fundamental difference. If the Chinese room thought experiment is sound, it would seem to rule out the possibility of physicalism.

One could make this argument

  1. If physicalism is true, then strong AI is possible via complex sets of rules acting on input
  2. Physicalism is true
  3. Therefore such strong AI is possible (from 1 and 2)

But premise 2 is a tad question begging, and the Chinese room seems to refute the conclusion. Therefore I could argue (if premise 1 were true)

  1. If physicalism is true, then strong AI is possible via complex sets of rules acting on input
  2. Such strong AI is not possible (Chinese room)
  3. Therefore physicalism is not true (from 1 and 2)

So the first premise doesn't really establish anything for strong AI unless perhaps one can do away with the Chinese room, and I haven't seen a refutation of it yet.
 
  • #117
TheStatutoryApe said:
Here is a short, simple essay discussing the way humans think. One of Searle's arguments is that the homunculus in the Chinese room can only learn syntactic rules but not semantic rules based on its situation. After reading this article it occurred to me that Searle is, perhaps unintentionally, proposing an underlying essence to the Chinese words and characters by bringing in the element of semantic rules as a baseline for comprehension. If you go a layer or two deeper on the matter of semantic rules, though, you'll quickly realize that even the semantic rules are based on a form of syntactic rule. That is to say, the syntax of experiential information creates the semantic rule.

It is true that we humans can pick up semantic rules based on experience. It is also evident that we humans can "learn by association." Nonetheless, this type of learning presupposes consciousness etc. and it is evident from the Chinese room that a complex set of rules acting on input is insufficient for literal understanding to exist. Even when a computer "learns by association" through audio-visual input devices, literal understanding does not take place.

Note that we already discussed something similar: a computer learning by what it sees and hears. Even when based on sensory experience, it didn't work, remember? You said:

You ask what else do you add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside so that the language it is receiving has a context. Also give it the ability to learn and form a base of experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why?

I replied:

I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy to real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.

Obviously, something else is required besides a complex set of rules acting on input.


Semantics are in reality rooted in the syntax which Searle contends is the only thing that computers "understand". The computer's capacity of only being able to "understand" syntax is the very basis of his argument. THAT is the gaping hole in Searle's Chinese room argument. At its base all cognition comes from syntax.

If you're claiming that all knowledge is ultimately based on a complex set of rules acting on input, I wouldn't say that--unless you wish to claim that the man in the Chinese room understands Chinese. It's true that we humans learn the rules of syntax for words, but it's more than that; we can literally understand their meaning. This is something a mere complex set of rules etc. can't do, as I've illustrated with the Chinese room thought experiment.


HA! I feel so much better now that I was finally able to pinpoint what it is that made the argument seem so illogical to me.

Start feeling bad again. The Chinese room still shows that a set of rules--however complex and layered--acting on input is insufficient for literal understanding to exist. Adding additional layers of rules still isn't going to do the job (we could add additional rules to the rulebook, as we did before in this thread with the variations of the Chinese room, but the man still doesn't understand Chinese). Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.
 
  • #118
So understanding lies outside the physicality of our universe... but is contained within our brain/body? So the firing of a billion neurons feeding from vision/audition to memory/speech will not form understanding?
 
  • #119
neurocomp2003 said:
So understanding lies outside the physicality of our universe... but is contained within our brain/body?

If you want my theory, I believe the soul is parallel to the physical realm, acting within the brain.


So the firing of a billion neurons feeding from vision/audition to memory/speech will not form understanding?

By itself, no (confer the Chinese room) since it seems that mere physical processes can't do the job.
 
  • #120
Tisthammerw said:
If you're claiming that all knowledge is ultimately based on a complex set of rules acting on input, I wouldn't say that--unless you wish to claim that the man in the Chinese room understands Chinese. It's true that we humans learn the rules of syntax for words, but it's more than that; we can literally understand their meaning. This is something a mere complex set of rules etc. can't do, as I've illustrated with the Chinese room thought experiment.
But the question is why and how do we understand. The Chinese room shows that both machines and humans will be unable to understand a language without an experiential syntax to draw from. This is how humans learn, through syntax. Through syntax we develop a semantic understanding. We do not know innately what things mean. There is no realm of Platonic ideals that we tap from birth. We LEARN TO UNDERSTAND MEANING. How do you not get that? Your necessity for a magic ball of yarn is not a valid or logical argument since I might as well call your soul a magic ball of yarn and it holds about as much meaning. Tell me what the soul does, not just that it is the incorporeal manifestation of self, because that's entirely meaningless as well. It doesn't tell me what it does. "Freewill" and "Understanding", these things don't tell me what it does or how it does it either. You're going to have to do a hell of a lot better than that.
 
  • #121
Tisthammerw said:
Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.
My point is that nothing else is required. Just the right hardware and the right program. I denounce your need for a magic ball of yarn until you can give me some concrete property that belongs to it that helps process information. "Freewill" and "true understanding" are just more vague philosophical notions without anything to back them up or even any reason to believe that a soul is necessary for them.
I contend that a human mind starts out with nothing but its OS and syntactic experience as a base from which it develops its "meaningful understanding", and that a computer has the capacity for the same.
 
  • #122
Pengwuino said:
I'm pretty sure my cell phone has more intelligence than some of the people I have met...

...and I am sure that whoever created the concept of the cell phone so it could be realized is much more intelligent than any model of cell phone that exists... without human intelligence the cell phone can't possibly exist.
 
  • #123
TheStatutoryApe said:
But the question is why and how do we understand. The Chinese room shows that both machines and humans will be unable to understand a language without an experiential syntax to draw from. This is how humans learn, through syntax.

Partially. The Chinese room shows that a complex set of instructions is insufficient for understanding. Real understanding may include the existence of rules, but a set of rules is not sufficient for understanding.


We LEARN TO UNDERSTAND MEANING. How do you not get that?

I understand that we humans can learn to understand meaning. My point is that something other than a set of instructions is required (see above), and the Chinese room thought experiment proves it. Note the existence of learning algorithms on computers. If the learning algorithms are nothing more than another set of instructions, the computer will fail to understand (note the variant of the Chinese room that had learning algorithms; learning the person's name and so forth).


Your necessity for a magic ball of yarn is not a valid or logical argument

My argument is that something else besides a complex set of instructions is required, and my argument is logical since I have the Chinese room thought experiment to prove it. Here we have an instance of a complex set of instructions acting on input to produce valid output, yet no understanding is taking place. Thus, a set of instructions is not enough for understanding.


Tell me what the soul does

This is going off topic again, but here goes: the soul interacts with the corporeal world to produce effects via agent-causation (confer the agency metaphysical theory of free will), as well as receiving input from the outside world.


TheStatutoryApe said:
Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.

My point is that nothing else is required.

The Chinese room thought experiment disproves that statement. Here we have an instance of a complex set of rules acting on input (questions) to produce valid output (answers) and yet no real understanding is taking place.


Just the right hardware and the right program.

Suppose we have the "right" program. Suppose we replace the hardware with Bob. Bob uses a complex set of rules identical to the program. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run the program, get valid output etc., and yet no real understanding is taking place. So even having the “right” rules and the “right” program is not enough. So what else do you have?

You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?


I denounce your need for a magic ball of yarn until you can give me some concrete property that belongs to it that helps process information.

The magical ball of yarn was just a metaphor, as in when I asked the question "What else do you have besides a complex set of rules manipulating input? A magical ball of yarn?"

That last question may have been somewhat rhetorical (though the first one was not).


"Freewill" and "true understanding" are just more vague philosophical notions without anything to back them up or even any reason to believe that a soul is necessary for them.

That's not entirely true. One thing to back up the existence of “true understanding” is everyday experience: we grasp the meaning of words all the time. And we have reason to believe a soul is necessary for free will.
 
  • #124
The bottom line is that you have nothing to counter with. "Something more" is not a valid argument. Define what you're referring to, or the argument is done. I know you can't. And the reason you don't know specifically is because that "something more" doesn't exist, except in our minds. If there were a human-like robot with AI advanced enough to imitate human speech and behavior, it would be indistinguishable from a true human. What you're saying to me is that even if you were fooled into believing it was a human initially, if it was then revealed that it was actually a machine, you would deem it not enough of a human to be human. You would think this because you "perceive" something that isn't there. A magical component that only human beings possess which cannot be duplicated. However, you can't name this thing, because it's in your mind. It does not exist. You are referring to, in essence, a "soul", which is an ideal. Ideals can be programmed. Nothing exists in us which cannot be duplicated.

As I've already stated, in my version of the Chinese room, the man is taught Chinese, and so he understands the information he is processing. You refuse to accept that analogy, but it still stands. I'm satisfied this discussion is resolved. Everything else at this point is refusal to accept the truth, unless you can tell me exactly what this "something more" is. You keep referring to "understanding" but we've already defined understanding. For instance, mathematics. I think we can generally agree that there is no room for interpretation there: you understand math, or you don't. You are right, or you are wrong. There are no subtle undertones, no underlying philosophy. Yet you claim computers cannot understand it the way you do. I didn't realize we as humans possessed some mathematical reasoning which is beyond that of a machine.

So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.
 
  • #125
Zantra/Ape: Out of curiosity, are you suggesting that Searle's argument is only capable of rendering the view of child/toddler learning/development (the whole syntax/semantic thing) and that it is too naive an argument to compete with the complexity of the adult brain? Or rather, I should say, the computational complexity of the brain.
 
  • #126
Zantra said:
The bottom line is that you have nothing to counter with. "something more" is not a valid argument.

You're right that "something more" is not a valid argument. But the Chinese room thought experiment is a valid argument in that it demonstrates the need for something more.

Recapping it again:

The Chinese Room thought experiment

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (by using a complex set of instructions acting on input), he does not understand Chinese at all.

Here we have an instance of a complex set of rules acting on input (questions) yielding valid output (answers) without real understanding. (Do you disagree?) Thus, a complex set of rules is not enough for literal understanding to exist.


If there were a human-like robot with AI advanced enough to imitate human speech and behavior, it would be indistinguishable from a true human.

The man in the Chinese room would be indistinguishable from a person who understands Chinese, yet he does not understand the language.


As I've already stated, in my version of the Chinese room, the man is taught Chinese, and so he understands the information he is processing.

Except that I'm not claiming a person can't understand Chinese; I'm claiming that a machine can't. Your argument "a person can be taught Chinese, therefore a computer can too" is not a valid argument. You need to provide some justification, and you haven't done that at all.

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

TheStatutoryApe claimed just having “the right hardware and the right program” would be enough. Clearly having the “right” program doesn't work. He mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?


Everything else at this point is refusal to accept the truth, unless you can tell me exactly what this "something more" is.

That's ironic. It is you who must tell me what this "something more" is that a computer has for it to literally understand. The Chinese room proves that a complex set of rules acting on input isn't enough. So what else do you have?


So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.

I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).

What about your burden of proof? You haven’t justified your claim of “if a human can learn Chinese, so can a computer,” for instance. Let's see you prove your theory: show me something else (other than a complex set of instructions acting on input) a computer has that enables it to literally understand. I've made this request repeatedly, and have yet to hear a valid answer (most times it seems I don't get an answer at all).
 
  • #127
Tisthammerw: But you see, I think our proof is in the advancement of ADAPTIVE learning techniques. That is our something more... however, your something more still remains a mysticism to us, and I think that was Zantra's point...

As for Searle's Chinese room problem: I will be arguing that the Chinese room also argues that humans have no extra "understanding" as you suggest... that the understanding is a mere byproduct.

Let's say there are 3 people. Two are conversing over the phone in Chinese. One only understands Chinese; the other (a Westerner) is learning Chinese. The third person is an English-to-Chinese teacher and is only allowed to converse with the Westerner for 5 minutes and cannot converse with the Chinese person. How much comprehension of Chinese do you think the Westerner can get within 5 minutes?
 
  • #128
Tisthammer said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis. It's sensory information, which is syntactic. The man's brain takes in syntactic information, that is, information that has no more meaning than its pattern structure and context with no intrinsic meaning to be understood, and it deciphers the information without any meaningful thought and understanding whatsoever in order to produce those Chinese characters that he's looking at. The understanding of what the "picture" represents is an entirely different story, but just attaining the "picture", that is sensory information, is easily done by the processes the man's brain is already doing that are not requiring meaningful thoughts or output of him as a human. So I don't see the problem with allowing the man sensory input from outside. All that the man in the box has access to is the syntax of the information being presented. So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books? It all depends on the complexity of the language being used. Any spoken human language is incredibly complex and takes a vast reserve of experiential data (learned rules of various sorts) to process, and experiential data is syntactic as well.
Give the man in the room a simpler language to work with, then. Start asking the man in the room math questions. What is one plus one? What is two plus two? The man in the room will be able to understand math given enough time to decipher the code and be capable of applying it.
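The point about arithmetic can be illustrated with a purely syntactic treatment of addition: a string-rewriting rule over tally marks produces correct sums without the symbols meaning anything to the rule that manipulates them (a hypothetical sketch):

```python
def tally_add(expression: str) -> str:
    """Purely syntactic addition over unary (tally-mark) notation:
    '11+111' -> '11111'. The rule is just 'delete the + sign';
    no notion of number is represented anywhere."""
    return expression.replace("+", "")

print(tally_add("11+111"))    # '11111'  (i.e. 2 + 3 = 5 in unary)
print(len(tally_add("1+1")))  # 2        (one plus one)
```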

Tisthammer said:
I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).
The use of spoken human language in this thought experiment is cheating. The man in the box obviously hasn't enough information to process by which to gain an understanding. If, as I stated earlier, you used math instead, which is entirely self-referential and syntactic, then the man would have all the information he needed to understand the mathematical language right there in front of him.

Tisthammerw said:
What about your burden of proof? You haven’t justified your claim of “if a human can learn Chinese, so can a computer,” for instance. Let's see you prove your theory: show me something else (other than a complex set of instructions acting on input) a computer has that enables it to literally understand. I've made this request repeatedly, and have yet to hear a valid answer (most times it seems I don't get an answer at all).
I contend that all of the information processing that a human does is at its base syntactic and that we learn from syntactic information in order to build a semantic understanding. I say that if a computer is capable of learning from syntactic information, which is the only kind of rule in the Chinese room that the man is allowed to understand, then the computer can eventually build a semantic understanding in the same manner in which a human does. NO "SOMETHING MORE" NEEDED.

The Chinese room is far too simple and very much misleading. It forces the man in the box to abide by its rules without establishing that its rules are even valid.
 
  • #129
neurocomp2003 said:
Zantra/Ape: Out of curiosity, are you suggesting that Searle's argument is only capable of rendering the view of child/toddler learning/development (the whole syntax/semantic thing) and that it is too naive an argument to compete with the complexity of the adult brain? Or rather, I should say, the computational complexity of the brain.
Yes, that's more or less my point. The argument is far too simple and skips over orders of magnitude in the complexity of a real working system as if they didn't exist.
 
  • #130
neurocomp2003 said:
Tisthammerw: But you see, I think our proof is in the advancement of ADAPTIVE learning techniques. That is our something more

But if these adaptive learning algorithms are simply another complex set of instructions, this will get us nowhere. Note that I also used a variant of the Chinese room that had learning algorithms that adapted to the circumstances, and still no understanding took place.


As for Searle's Chinese room problem: I will be arguing that the Chinese room also argues that humans have no extra "understanding" as you suggest

Please do.


Let's say there are 3 people. Two are conversing over the phone in Chinese. One only understands Chinese; the other (a Westerner) is learning Chinese. The third person is an English-to-Chinese teacher and is only allowed to converse with the Westerner for 5 minutes and cannot converse with the Chinese person. How much comprehension of Chinese do you think the Westerner can get within 5 minutes?

This really doesn't prove that a complex set of rules (as for a program) is sufficient for understanding. Note that I'm not claiming a person can't learn another language. We humans can. My point is that this learning requires something other than a set of rules. Rules may be part of the learning process, but a set of instructions is not sufficient for understanding as the Chinese room indicates (we have a set of instructions, but no understanding).
 
  • #131
TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.

People are capable of understanding; no one is disputing that. However, my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding. Searle for instance argued that our brains have unique causal powers that go beyond the execution of program-like instructions. You may doubt the existence of such causation, but notice the thought experiment I gave. This is a counterexample proving that merely having the "right" program is not enough for literal understanding to take place. Would you claim, for instance, that this man executing the program understands binary when he really doesn't?


Tisthammerw said:
TheStatutoryApe said:
So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.

I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).

Your reply:

The use of spoken human language in this thought experiment is cheating.

I don't see how. You asked, and I answered. Spoken human language appears to be something a computer cannot understand.


I contend that all of the information processing that a human does is at its base syntactic and that we learn from syntactic information in order to build a semantic understanding.

Syntax rules like the kind a program runs may be necessary, but as the Chinese room experiment shows, they are not sufficient--unless you wish to claim that the man in the room understands Chinese. As I said, rules may be part of the process, but they are not sufficient. My thought experiments prove this: they are examples of complex sets of instructions executing without real understanding taking place.

You could claim that the instructions given to the man in the Chinese room are not of the right sort, and that if the “right” program were run on a computer literal understanding would take place. But if so, please answer my questions regarding the robot and program X (see below).


I say that if a computer is capable of learning from syntactic information, which is the only kind of rule in the Chinese room that the man is allowed to understand, then the computer can eventually build a semantic understanding in the same manner in which a human does. NO "SOMETHING MORE" NEEDED.

But if this learning procedure is done solely by a complex set of instructions, merely executing the "right" program (learning algorithms and all) is not sufficient for understanding. By the way, you haven't answered my questions regarding my latest thought experiment (the robot and program X). Let's review:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the “right” program with learning algorithms etc. (let's call it “program X”) there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

You claimed that just having “the right hardware and the right program” would be enough. Clearly, just having the “right” program doesn't work. You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

I await your answers.


The Chinese room is far too simple and very much misleading. It forces the man in the box to abide by its rules without establishing that its rules are even valid.

The rules are indeed valid: they give correct and meaningful answers to all questions received. In other words, the man has passed the Turing test.

And it isn't clear why the thought experiment is too “simple.” The man is using a complex set of instructions to do his work after all.
 
  • #132
I think the major premise of the Searle argument has been bypassed. He argued that semantics was essential to consciousness and that syntax could not generate semantics. The Chinese room was just an attempt to illustrate this position. At the time, decades ago, it was a valid criticism of AI, which had focussed on more and more intricate syntax.

But the AI community took the criticism to heart and has spent those decades investigating the representation of semantics; they have used more general systems than syntactic ones to do it, such as neural nets. So the criticism is like some old argument against Galilean dynamics; whatever you could say for it in terms of the knowledge of the time, by now it's just a quaint historical curiosity.
 
  • #133
selfAdjoint said:
I think the major premise of the Searle argument has been bypassed.

How so?


He argued that semantics was essential to consciousness and that syntax could not generate semantics. The Chinese room was just an attempt to illustrate this position. At the time, decades ago, it was a valid criticism of AI, which had focussed on more and more intricate syntax.

But the AI community took the criticism to heart and has spent those decades investigating the representation of semantics; they have used more general systems than syntactic ones to do it, such as neural nets.

The concept of neural networks in computer science is still just another complex set of instructions acting on input (albeit formal instructions of a different flavor than in days of yore); so it still doesn't really answer the question of "what else do you have?" Nor does it really address my counterexample of running the "right" program (the robot and program X; see post #131).
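In the sense meant here, a feed-forward neural network is indeed still a fixed procedure applied to input: each layer is a matrix multiply, an addition, and a fixed nonlinearity. A minimal sketch with made-up weights (not any particular real network):

```python
import numpy as np

def forward(x, weights, biases):
    """One forward pass of a tiny feed-forward network: each layer is just
    a matrix multiply, an addition, and a fixed nonlinearity applied to input."""
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)
    return x

# Made-up weights for a 3-input, 4-hidden, 2-output network.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(forward(np.array([1.0, 0.5, -0.2]), weights, biases))
```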

But perhaps you're thinking of something else. Are you proposing the following:
Creating a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands questions in Chinese and gives answers to them? Surely then we would have to say that the computer understands...?
 
  • #134
Tisthammerw said:
TheStatutoryApe said:
So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.
I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).
Your reply:
TheStatutoryApe said:
The use of spoken human language in this thought experiment is cheating.
I don't see how. You asked, and I answered. Spoken human language appears to be something a computer cannot understand.
For one, you have misquoted me; the first quote there was from someone else, and while that doesn't make much difference, the fact that you don't seem to be paying attention, and the fact that you conveniently don't quote any of my answers to the questions you claim I am not answering, constitutes a problem with having any real discussion with you. If you don't agree with my answers, that's quite alright, but please give me a response telling me the issues that you have with them. It would also help if you stopped simply invoking the Chinese Room as your argument when I am telling you that I do not agree with the Chinese room and I do not agree that a complex set of instructions isn't enough.

Learn to make a substantial argument rather than lean on someone else's as if it were a universal fact.

I gave you answers to your questions. If you want to find them and make a real argument against them, I will indulge you further in this, but not until then.
Thank you for what discussion we have had so far. I was not aware of the Chinese Room argument until you brought it up, and I read up on it.
 
  • #135
TheStatutoryApe said:
For one, you have misquoted me: the first quote there was from someone else

I apologize that I got the quote mixed up. Nonetheless the second quote was yours.


While that doesn't make much difference, the fact that you don't seem to be paying attention, and that you conveniently don't quote any of my answers to the questions you claim I am not answering, constitutes a problem with having any real discussion with you.

Please tell me where you answered the following questions, found at the end of the quote below:

Tisthammerw said:
By the way, you haven't answered my questions regarding my latest thought experiment (the robot and program X). Let's review:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the “right” program with learning algorithms etc. (let's call it “program X”) there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

You claimed that just having “the right hardware and the right program” would be enough. Clearly, just having the “right” program doesn't work. You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

I await your answers.

Where did you answer these questions?

Note what happened below:

TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

That very man whom you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.

I responded that while people are obviously capable of understanding (there's no dispute there), my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding (as this example proves: we have the “right” program and still no understanding).

But notice that you cut out the part of the thought experiment where I asked the questions. See post #128 for yourself if you don’t believe me. You completely ignored the questions I asked.

I will, however, answer one of your questions that I failed to answer earlier.

So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books?

Part of it is that he can't learn binary code the same way he can learn English. Suppose, for instance, you use this rule:

If you see 11101110111101111
replace with 11011011011101100

And suppose you applied this rule many times. How could you know what the sequence 11101110111101111 means merely by executing the instruction over and over again? How would you know, for instance, whether you're answering “What is 2+2?” or “What is the capital of Minnesota?” It doesn't logically follow that Bob would necessarily know the meaning of the binary code merely by following the rulebook, any more than the man in the Chinese room would necessarily know Chinese. And ex hypothesi he doesn't know what the binary code means when he follows the rulebook. Are you saying such a thing is logically impossible? If need be, we could add that he has a mental impairment that renders him incapable of learning the meaning of binary code even though he can do fantastic calculations (something similar is true in real life for some autistic savants with respect to certain semantics of the English language). So we still have a clear counterexample here (see below for more on this) of running the “right” program without literal understanding.
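To make that concrete, here is a minimal sketch (Python, with made-up bit strings and a made-up rule table, purely for illustration) of what Bob is doing with such a rule: a pure pattern-to-pattern substitution. Nothing in the procedure indicates whether the input encodes “What is 2+2?” or “What is the capital of Minnesota?”

```python
# A fragment of a hypothetical rulebook: input bit patterns mapped to
# output bit patterns. The strings are arbitrary; the rule never
# references what either pattern is supposed to mean.
RULEBOOK = {
    "11101110111101111": "11011011011101100",
    "10101010101010101": "11110000111100001",
}

def follow_rulebook(symbols: str) -> str:
    """Apply the substitution exactly as Bob would: match the pattern,
    copy out the listed response. No step requires knowing what the
    exchange is about."""
    return RULEBOOK.get(symbols, symbols)  # unknown patterns pass through unchanged

print(follow_rulebook("11101110111101111"))  # -> 11011011011101100
```

Running this a million times does not bring the program, or Bob, any closer to knowing whether the exchange was about arithmetic or geography.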


If you don't agree with my answers, that's quite alright, but please give me a response telling me the issues that you have with them. It would also help if you stopped simply invoking the Chinese Room as your argument when I am telling you that I do not agree with the Chinese Room and I do not agree that a complex set of instructions isn't enough.

The reason I use the Chinese room (and variants thereof) is that it is a clear instance of a complex set of instructions giving valid answers to input without literal understanding. I used what is known as a counterexample. A counterexample is an example that disproves a proposition or theory; in this case, the proposition that having a complex set of instructions is enough for literal understanding to exist. Note the counterexample of the robot and program X: we had the “right” set of instructions and it obviously wasn't enough. Do you dispute this? Do you claim that this man executing the program understands binary when he really doesn't?

You can point to the fact that humans can learn languages all you want, and claim they are using syntactic rules, etc., but that still doesn't change the existence of the counterexample. Question-begging and ignoratio elenchi are not the same thing as producing valid answers.


I gave you answers to your questions.

Really? Please tell me where you answered the questions I quoted.
 
  • #136
Neural networking directly addresses these issues.
 
  • #137
tishammerw: I don't think your argument against learning algorithms is conclusive... when you discuss such techniques you are not thinking along the lines of serial processing like if-then logic, but rather parallel processing. And with that you are not discussing the simple flow of 3-4 neurons, as with spiking neurons, but a system of billions of interactions, whether it be nnets or GAs or RL.

On another thing... we have provided you with our statement that learning algorithms (with their complexity) with sensorimotor hookup would suffice for understanding. However, it is your statement that such interaction does not lead to "understanding"; ergo it should be YOU who provides us with the substance of "what else", not vice versa. We already have our "what else" = learning algos... and that is our argument... what is your "what else" that will support your argument? Heh, we shouldn't have to come up with your side of the argument.
 
  • #138
pallidin said:
Neural networking directly addresses these issues.

Addresses what issues? And how exactly does it do so?
 
  • #139
neurocomp2003 said:
tishammerw: I don't think your argument against learning algorithms is conclusive... when you discuss such techniques you are not thinking along the lines of serial processing like if-then logic, but rather parallel processing.

Even parallel processing can do if-then logic. And we can say that the man in the Chinese room is a multi-tasker when he follows the instructions of the rulebook; still no literal understanding.
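As a small illustration of that first point, here is a sketch (Python, with a made-up rule; not anyone's actual program) of the same kind of if-then rule being applied in parallel by several workers:

```python
from concurrent.futures import ThreadPoolExecutor

def rule(x: int) -> str:
    # A made-up if-then rule: the same kind of conditional a rulebook states.
    return "even" if x % 2 == 0 else "odd"

# Several workers apply the rule at once; the parallelism changes the
# bookkeeping, not the character of the rule being followed.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(rule, range(8)))

print(results)  # ['even', 'odd', 'even', 'odd', 'even', 'odd', 'even', 'odd']
```

Distributing the rule across workers speeds things up, but each worker is still only following the rulebook.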


And with that you are not discussing the simple flow of 3-4 neurons, as with spiking neurons, but a system of billions of interactions, whether it be nnets or GAs or RL.

One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding, and he responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn on and off. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
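For what it's worth, "simulating the actual sequence of neuron firings" itself bottoms out in the same kind of rulebook. Here is a minimal sketch (Python, with a toy leaky integrate-and-fire update and made-up constants, not anything biologically faithful) of what a single simulated neuron amounts to; every step is an arithmetic rule that Bob, or the man at the water pipes, could carry out:

```python
def simulate_neuron(input_currents, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron (constants are made up).

    At each step the membrane potential decays a little, adds the
    incoming current, and the neuron "fires" (emits 1) when the
    potential crosses the threshold, then resets. Every step is a
    simple arithmetic rule.
    """
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 1.0]))  # -> [0, 0, 1, 0, 1]
```

Chain billions of such updates together and you have the brain-simulator scenario; the point of the water-pipe variant is that scaling up the rulebook does not change its character.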


On another thing... we have provided you with our statement that learning algorithms (with their complexity) with sensorimotor hookup would suffice for understanding.

And I have provided you with a counterexample, remember? Learning algorithms, sensors, etc. and still no understanding.


However, it is your statement that such interaction does not lead to "understanding"; ergo it should be YOU who provides us with the substance of "what else", not vice versa. We already have our "what else" = learning algos... and that is our argument

My counterexample proved that not even the existence of learning algorithms in a computer program is sufficient for literal understanding. The man in the Chinese room used the learning algorithms of the rulebook (and we can make them very complex if need be) and still there was no literal understanding. Given this, I think it's fair for me to ask "what else"? As for what I personally believe, I have already given you my answer. But this belief is not necessarily relevant to the matter at hand: I provided a counterexample--care to address it?
 
  • #140
tishammerw: what counterexample? That Searle's argument says that there is no literal understanding by the brain without this "something else" that you speak of? I'm still lost with your counterexample... or is it that if something else can imitate the human and clearly not understand, then doesn't this imply that humans may not "understand" at all? What makes us so special? Why do you believe that humans "understand"? And where is this proof... wouldn't Searle's argument also argue against human understanding?

It is fair for you to ask "what else", but you must also answer the question... because to us, all that is needed are learning algorithms that emulate the brain, nothing more.
If we were to state this "what else", then we would go against our beliefs. So is it fair for you to ask us to state this "what else" that YOU believe in? No! And thus you must provide us with this explanation.
 
