Can Artificial Intelligence ever reach Human Intelligence?

In summary: if we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) On the other side of the question: AI can never reach human intelligence because it would require a decision-making process that a computer cannot replicate.

AI ever equal to Human Intelligence?

  • Yes
    Votes: 51 (56.7%)
  • No
    Votes: 39 (43.3%)
  • Total voters: 90
  • #36
saltydog: oh but i will...see, the bricks are conscious, but they adapted over the years to respond to almost nothing (the exception is a battering of environmental conditions). They are smart because they show no response or emotion whatsoever, which i think is what's wrong with humans...I mean, it's great to feel love/happiness, but to feel sadness/depression/failure over many years sucks.
I must correct myself though: a brick does respond, without emotion...when you etch into a brick, it allows you to etch; if it did not allow you to, you would not be able to. And when you bounce a ball towards it, it bounces it back. HEHE
 
  • #37
Computers are digital; they must do their work by manipulating one digit at a time. Brains do it in a sort of analog way, but it's more complex, more like many analogs operating at the same time.

The memory is similar to a relational database, but many data are related at once. The brain can think of a complex concept in a millisecond that would take many minutes to verbalize.

With machines, we are so far from the way the brain works I doubt we can ever make them equal.

Just my two cents' worth.
 
  • #38
neurocomp2003 said:
Zantra: "Can we teach a machine to 'love'? No." I'd have to disagree. Personally, emotions IMO come down to the interaction of a child and his surroundings (relatives/friends/parents), not simply to having NTs (neurotransmitters) in your system.

Ok, so then you would agree that by definition, love requires the interaction of a being with their surroundings and immediate friends/family in order to form a bond, correct? They must develop a sense of familiarity with those people and their surroundings. Based on that premise, do you not think an eventual machine might interact with human beings enough to form such a bond, not only love, but friendship?

Even prior to that step in AI evolution, how difficult would it be to emulate emotions to the point that you couldn't tell the difference? If you encountered a machine you didn't know was a machine, and it were able to emulate love, could you tell the difference? And if so, how?
 
  • #39
nono...lol, i take it you never read my posts. My point was that your post seemed biased toward biology rather than psychology. That is to say, your first paragraph takes the stance that NTs are the cause. See, the quote i cited also contained what i thought was your answer: that we cannot teach machines.

kublai: but we have analog-to-digital converters on every aspect of a computer, e.g. a joystick.
 
  • #40
I think one of the key things here is the hardware. Our hardware is chemical, an incredibly sophisticated and compact system, while computers as we know them right now are solid state.

Now, the only way I think we could get a computer to at least emulate our behavior convincingly is to have an immensely more powerful system. I've done some study of the theory behind quantum computers. If we could get quantum computing off the ground, I believe we may have the power to emulate human behavior. But it will always be artificial, IMO.

We will never create life, whether it's in a petri dish or an electronic device. I haven't heard a convincing argument that shows that we really understand what drives life in the first place.
 
  • #41
neurocomp2003 said:
nono...lol, i take it you never read my posts. My point was that your post seemed biased toward biology rather than psychology. That is to say, your first paragraph takes the stance that NTs are the cause. See, the quote i cited also contained what i thought was your answer: that we cannot teach machines.


Sorry, I misread that as part of your response. I think we are actually in agreement here... hehe. I may have oversimplified love. It is a combination of factors, not just NTs. However, if we can build a machine capable of processing all of those interactions, we can simulate the experience. That was my point.
 
  • #42
neurocomp2003 said:
Skyeface: before i continue to argue, may i know your educational background...because nonlinear dynamics/multiagent learning is a huge part of my thinking process.

that was an insult, neuro. i do not need to verify my educational background to speak on my beliefs... we are just debating our own viewpoints; the facts and quotes i know come from the books i've read, and i have never claimed, nor will i, to be an expert.

now that you've confirmed you're judging my statements by my educational background, i guess my argument is over.

:wink:
 
  • #43
no, my basis is on what terms you know...if you don't know some of the terms that i need to use, then i cannot explain my thoughts to you, and thus the discussion is over. You say you are unfamiliar with nonlinear dynamics...have you ever read about a program called Creatures by Steve Grand? Or do you know anything about child development, or anything about adaptive learning (neural nets, swarm intelligence, or genetic programming)?
And lastly, what do you define as consciousness? There are other threads on these forums where people define consciousness/intelligence. Check them out.
 
  • #44
neurocomp2003 said:
no, my basis is on what terms you know...if you don't know some of the terms that i need to use, then i cannot explain my thoughts to you, and thus the discussion is over. You say you are unfamiliar with nonlinear dynamics...have you ever read about a program called Creatures by Steve Grand? Or do you know anything about child development, or anything about adaptive learning (neural nets, swarm intelligence, or genetic programming)?
And lastly, what do you define as consciousness? There are other threads on these forums where people define consciousness/intelligence. Check them out.
yet another insult.

lol, it's crazy that one person can have an ego towards someone because of their intelligence.

I am 22 (in 3 weeks I'm 23), in my last semester of a 4-yr degree as an Advanced AutoCAD programmer. I am as average-joe as someone could get, with the exception of a good skill in CAD design thanks to my father.

Oh, and I have a high school diploma. Does that meet your 'standards' of required education on this particular public forum? lol

sorry, i just wanted good clean debates from others. ;-)
 
  • #45
heh, you're very sensitive if you take those as insults.
 
  • #46
StykFacE said:
yet another insult.

If you feel insulted by neuro's comments or questions, that's your problem. There is nothing abusive about anything he's written. Indeed, he is simply trying to establish a common baseline for communication.
 
  • #47
neurocomp2003 said:
no, my basis is on what terms you know...if you don't know some of the terms that i need to use, then i cannot explain my thoughts to you.

saltydog explained his dispute with me; it was above my head, and i simply said i know nothing about it. that's called respect for others' opinions. he didn't assume what terms i know, and he hasn't poked at my intelligence yet.

learn from that. :approve:
 
  • #48
On another note, here's a little story not totally unrelated to the topic. Enjoy. :biggrin:

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."

"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."

"I'm not asking you, I'm telling you. These creatures are the only sentient race in that sector and they're made out of meat."

"Maybe they're like the orfolei. You know, a carbon-based intelligence that goes through a meat stage."

"Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take long. Do you have any idea what's the life span of meat?"

"Spare me. Okay, maybe they're only part meat. You know, like the weddilei. A meat head with an electron plasma brain inside."

"Nope. We thought of that, since they do have meat heads, like the weddilei. But I told you, we probed them. They're meat all the way through."

"No brain?"

"Oh, there's a brain all right. It's just that the brain is made out of meat! That's what I've been trying to tell you."

"So ... what does the thinking?"

"You're not understanding, are you? You're refusing to deal with what I'm telling you. The brain does the thinking. The meat."

"Thinking meat! You're asking me to believe in thinking meat!"

"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you beginning to get the picture or do I have to start all over?"

"Omigod. You're serious then. They're made out of meat."

"Thank you. Finally. Yes. They are indeed made out of meat. And they've been trying to get in touch with us for almost a hundred of their years."

"Omigod. So what does this meat have in mind?"

"First it wants to talk to us. Then I imagine it wants to explore the Universe, contact other sentiences, swap ideas and information. The usual."

"We're supposed to talk to meat."

"That's the idea. That's the message they're sending out by radio. 'Hello. Anyone out there. Anybody home.' That sort of thing."

"They actually do talk, then. They use words, ideas, concepts?"

"Oh, yes. Except they do it with meat."

"I thought you just told me they used radio."

"They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."

"Omigod. Singing meat. This is altogether too much. So what do you advise?"

"Officially or unofficially?"

"Both."

"Officially, we are required to contact, welcome and log in any and all sentient races or multibeings in this quadrant of the Universe, without prejudice, fear or favor. Unofficially, I advise that we erase the records and forget the whole thing."

"I was hoping you would say that."

"It seems harsh, but there is a limit. Do we really want to make contact with meat?"

"I agree one hundred percent. What's there to say? 'Hello, meat. How's it going?' But will this work? How many planets are we dealing with here?"

"Just one. They can travel to other planets in special meat containers, but they can't live on them. And being meat, they can only travel through C space. Which limits them to the speed of light and makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact."

"So we just pretend there's no one home in the Universe."

"That's it."

"Cruel. But you said it yourself, who wants to meet meat? And the ones who have been aboard our vessels, the ones you probed? You're sure they won't remember?"

"They'll be considered crackpots if they do. We went into their heads and smoothed out their meat so that we're just a dream to them."

"A dream to meat! How strangely appropriate, that we should be meat's dream."

"And we marked the entire sector unoccupied."

"Good. Agreed, officially and unofficially. Case closed. Any others? Anyone interesting on that side of the galaxy?"

"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotations ago, wants to be friendly again."

"They always come around."

"And why not? Imagine how unbearably, how unutterably cold the Universe would be if one were all alone ..."

the end

http://www.terrybisson.com/meat.html
 
  • #49
deckart said:
I haven't heard a convincing argument that shows that we really understand what drives life in the first place.

Consider the Lorenz Attractor. You know, that owl-eyes icon of Chaos Theory? It's a dynamical system with three degrees of freedom that I believe can serve as a metaphor for the motor of life.

The Lorenz Attractor is stable: trajectories within the attractor remain there. Surrounding the attractor is a basin of attraction; nearby points in the basin are pulled into the attractor by the dynamics of the system. If the trajectory is perturbed to a point outside the basin, it does not return to the attractor; however, it may now lie in a new basin of attraction, and so be attracted to a new attractor. René Thom uses this to describe change in nature:

"All creation or destruction of form or morphogenesis, can be described by the disappearance of the attractors representing the initial forms and their replacement by capture by the attractors representing the final form."

There is something else about the attractor: trajectories NEVER cross (solutions through a given point are unique). The attractor is in fact a fractal with an infinitely nested structure, and each point on it is distinct from all the others.

The Lorenz system is a simple example with 3 degrees of freedom. What might a system exhibit with a large number of degrees of freedom, in a world in which the number of degrees of freedom is being increased by the attractors themselves, in the same way that the Lorenz Attractor generates a diverse set of points?

I could imagine such a world of attractors pushed to increasingly higher dimensional ones as the attractors themselves generate an increasing number of degrees of freedom in a self-reinforcing act we mistakenly interpret as the evolution of life from simple to complex.
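For anyone who wants to poke at this numerically, here is a minimal sketch (my own toy: forward Euler with the classic parameters sigma = 10, rho = 28, beta = 8/3; step size and iteration counts are arbitrary choices). Two trajectories started a millionth apart both stay bounded on the attractor, yet drift far apart, the sensitivity saltydog is describing.

```python
# Minimal Euler integration of the Lorenz system. Two nearby starting
# points remain bounded (pulled into the attractor) yet diverge widely.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance one Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(start, steps=20000):
    """Integrate `steps` Euler steps from `start`, returning all points."""
    x, y, z = start
    points = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        points.append((x, y, z))
    return points

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.000001))  # perturbed by one part in a million

# Both trajectories stay bounded on the attractor...
assert all(abs(x) < 100 and abs(z) < 100 for x, _, z in a)
# ...but the tiny perturbation grows by many orders of magnitude.
max_gap = max(abs(p[0] - q[0]) for p, q in zip(a, b))
assert max_gap > 1.0
```

The step size matters: a cruder integrator or larger `dt` can blow up, which is itself a small lesson about how touchy these systems are.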
 
  • #50
Tom Mattson said:
If you feel insulted by neuro's comments or questions, that's your problem. There is nothing abusive about anything he's written. Indeed, he is simply trying to establish a common baseline for communication.

of course it's my problem... lol, i was insulted and there's nothing i can really do. and he wasn't trying to establish common ground; he was measuring my education against his own.

;-)
 
  • #51
StykFacE said:
of course it's my problem... lol, i was insulted and there's nothing i can really do. and he wasn't trying to establish common ground; he was measuring my education against his own.

Good grief.

No, he wanted to know what you know so that he could determine how to answer, or whether to answer at all. As a rule it's best not to assume the worst about people.

If you still feel that this issue is unresolved, then continue it via the private message system. All further posts along this line of discussion will be deleted.
 
  • #52
Thank you tom,

Stykface: like tom said, i was trying to establish a basis of what terms you know...if i don't know where to begin, then i would more than likely start at child development and neural nets. Or, if you want, spiking neurons/nonlinear dynamics, though i myself am only a beginner when it comes to these fields.

but yeah, not once did i take a stab at your intelligence. If you equate intelligence with knowledge, well then, umm, i don't know what to say. Knowledge base is different for everyone, and therefore you cannot compare intelligence based on knowledge alone. IMO, intelligence is based not on what you know but on how fast you learn, OR the capability with which you can apply newly learned things.

And well, i used to have high respect for AutoCAD users, because most of them have to think in terms of schematic 3D.

EDIT: sorry tom, i was posting while you posted the above post...sorry.
Oh and that dialogue post was funny as hell.
 
  • #53
It would be impossible to program an AI as intelligent as us. The only way an AI as intelligent as us could be created is by something more intelligent than us.
 
  • #54
robert said:
It would be impossible to program an AI as intelligent as us. The only way an AI as intelligent as us could be created is by something more intelligent than us.

But why should that be? Why can't an intelligence appear due to self-organization?

Also, let's look at this argument schema:

An entity with the intelligence of X can exist iff it is created by an intelligence that is greater than that of X.

Just substitute "X=homo sapiens" and what happens? It follows that the only way that human intelligence can exist is if a greater intelligence created human intelligence.

Then one would naturally ask, "So what created the intelligence of our creator?"

This regress would go on ad infinitum.
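The regress can be written out explicitly (my own formalization of the schema above, writing $I$ for intelligence level and $c(x)$ for the creator of $x$):

```latex
% Schema: every intelligence x requires a creator c(x) with I(c(x)) > I(x).
% Starting from human intelligence h and iterating:
\[
  I(h) \;<\; I(c(h)) \;<\; I\bigl(c^{2}(h)\bigr) \;<\; \cdots \;<\; I\bigl(c^{n}(h)\bigr) \;<\; \cdots
\]
% a strictly increasing infinite chain of creators, with no first member.
```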
 
  • #55
Well me too:

Termites don't know what they're building: the clay cathedral emerges from the mud through local interactions between mud, termite, and pheromone.

What's this to do with intelligence, AI and man? Stars don't know what they're building either. :smile:
 
  • #56
neurocomp2003 said:
tsishammer: but you see, humans have sensory systems that feed into the brain, and the entire human system flows...does a stack of brick walls flow?

Well, suppose the building has water that flows places. Is the building conscious? Is it capable of understanding?

My point of the "brick building" argument was to illustrate why some people (including me) believe it is implausible that consciousness, understanding, etc. can be brought about by the mere organization of matter.


perhaps from a philosophical standpoint, and the adaptation that a brick/wall has become accustomed to is to not respond at all. The entire point of an artificial system is the concept of fluidic motion of signals to represent a pattern (like in Steve Grand's book). And we're not talking about a few hundred atoms here; we are talking about:
(~100 billion neurons × atoms/neuron + ~10,000 synapses/neuron × ~100 billion neurons × atoms/synapse)
That's roughly how many atoms are in the brain; most likely a rough guess would be
10^25 to 10^30 atoms. Try counting that high.
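For what it's worth, a quick back-of-the-envelope check (my own numbers, not the poster's: treating the brain as roughly 1.4 kg of water) lands inside that guessed range:

```python
# Order-of-magnitude estimate of the number of atoms in a human brain,
# approximating the brain as pure water. All figures are rough.
import math

AVOGADRO = 6.022e23          # molecules per mole
BRAIN_MASS_G = 1400.0        # rough adult brain mass in grams
WATER_MOLAR_MASS = 18.0      # g/mol for H2O
ATOMS_PER_WATER = 3          # H, H, O

molecules = BRAIN_MASS_G / WATER_MOLAR_MASS * AVOGADRO
atoms = molecules * ATOMS_PER_WATER   # ~1.4e26 atoms

order_of_magnitude = math.floor(math.log10(atoms))
# Agrees with the post's rough 10^25 to 10^30 range.
assert 25 <= order_of_magnitude <= 30
```

So ~10^26 is a fair central estimate; the post's upper end (10^30) looks generous, but the point about scale stands.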

Suppose we have trillions of bricks. Will the building be conscious?

As for John Searle's much-used argument: this can also be applied to humans, but because we have such big egos we do not adhere to it.

Well, I agree. It can be applied to humans. So what?

The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer's inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.
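The rulebook picture can be caricatured in a few lines of code (an illustrative toy of my own; the phrases and replies are invented): the program produces fluent-looking answers while matching characters purely by shape, i.e. by string equality, with no representation of meaning anywhere.

```python
# A minimal "Chinese Room": a lookup rulebook maps input strings to
# output strings. Nothing in this program understands Chinese; it only
# compares character shapes.

RULEBOOK = {
    "你好吗?": "我很好。",        # "How are you?" -> "I am fine."
    "你是谁?": "我是一个房间。",   # "Who are you?" -> "I am a room."
}

def room_reply(message: str) -> str:
    """Return the rulebook's answer, matching characters only by shape;
    unknown input gets a stock reply ("Please say that again")."""
    return RULEBOOK.get(message, "请再说一遍。")

assert room_reply("你好吗?") == "我很好。"
assert room_reply("???") == "请再说一遍。"
```

A real conversational program is vastly larger, but, on this argument, different only in the size of its rulebook, not in kind.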
 
  • #57
saltydog said:
The complex stack of bricks is static. Nothing happens. Same diff with neurons if they were static. The point is that neurons are dynamic.

True, but even if the bricks jiggled like jello the arrangement still wouldn't understand anything. For the Chinese Room, see post #56. Is the room dynamic? Sure. But the fellow still doesn't understand Chinese.

Star Trek has nothing to do with this.

The phrase "emergent property" regarding that sort of thing was used in an episode of Star Trek: TNG.
 
  • #58
Tom Mattson said:
But why should that be? Why can't an intelligence appear due to self-organization?

Also, let's look at this argument schema:

An entity with the intelligence of X can exist iff it is created by an intelligence that is greater than that of X.

Just substitute "X=homo sapiens" and what happens? It follows that the only way that human intelligence can exist is if a greater intelligence created human intelligence.

Then one would naturally ask, "So what created the intelligence of our creator?"

This regress would go on ad infinitum.

Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem. Humans began to exist, but our Creator didn't. That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).
 
  • #59
Tisthammerw said:
Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. Computational devices, I suspect, will not always be the digital semiconductor devices we use today, with hard-coded programs managing bits. I can foresee a new device on the horizon, qualitatively different from computers, which does not manipulate binary but rather exhibits patterns of behaviour not reducible to decimal.

I've used the analogy of a matrix of butterflies elsewhere: a large matrix with millions of butterflies, one at each matrix point, flapping their wings. Patterns emerge from the beating: sometimes it's chaotic, other times waves of patterns spread through the matrix. The butterflies respond to stimuli: wind, mating, food supply. A predator approaching the matrix causes the flapping to exhibit a particular pattern of beating as the matrix, in a very simple sense, becomes conscious of the predator. Later, by random chance or otherwise, this same pattern emerges again in the matrix . . . it remembers.
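A crude toy version of that matrix (my own simplification: each "butterfly" just relaxes toward the average wing phase of its neighbours, with no randomness, mating, or memory) already shows a local stimulus spreading into a matrix-wide pattern:

```python
# Toy butterfly matrix: a grid of wing phases, each cell pulled toward
# the average of its neighbours. A local "predator" disturbance at one
# corner spreads through the whole matrix.

def step(grid, coupling=0.4):
    """One synchronous update: every cell moves `coupling` of the way
    toward the average of its in-grid neighbours."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            neighbours = [grid[x][y]
                          for x, y in ((i - 1, j), (i + 1, j),
                                       (i, j - 1), (i, j + 1))
                          if 0 <= x < n and 0 <= y < n]
            avg = sum(neighbours) / len(neighbours)
            new[i][j] = grid[i][j] + coupling * (avg - grid[i][j])
    return new

n = 10
grid = [[0.0] * n for _ in range(n)]
grid[0][0] = 1.0          # predator stimulus at one corner

for _ in range(50):
    grid = step(grid)

# The disturbance has reached the far corner: the whole matrix "flaps".
assert grid[n - 1][n - 1] > 0.0
# Values stay bounded, since each update is a convex combination.
assert all(0.0 <= v <= 1.0 for row in grid for v in row)
```

Of course this is just diffusion, far short of "remembering", but it gives a concrete feel for how local flapping rules produce global patterns.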

I know that's weird to some, dumb to others. Discovery comes from the strangest of places. :smile:
 
  • #60
tishammer: are you saying that the brick has holes that allow the flow of water? or that water just flows in the building? Because the latter isn't an analogy of what i was talking about.
But now let's say you hook up some senses to the brick wall, so that it could really detect the "outside" world and was then allowed to interact. You've got to remember the brain isn't grown in one day. I highly doubt a baby without a brain will ever grow conscious. But that would be an unethical experiment.

And if Searle's argument can be made for humans...then why do we assume we have a higher cognitive state that no ANN can ever match? perhaps our emotions are just the sum of NN signalling. My point is this: Searle's argument is used to refute strong AI, and if applied to humans, doesn't it break down that argument? Because we say we are intelligent, but if we adhere to Searle-like principles of strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.

Also, in regards to your post to tom: doesn't it make no sense to slap a new set of rules of beginnings onto the first creator, that he magically existed...unless you're saying that his being evolved from the physical fundamentals that exist in our universe?
 
  • #61
Sorry, I was just reading through and had to make a comment.
StykFacE said:
... "pain sensory receptors"...? is this something that will physically make the computer 'feel', or simply receptors that tell the central processing unit that it's 'feeling' pain, so that it then reacts to it?

lol, sorry, but I'm having a hard time believing that something that is not alive can actually feel pain.

yes, we have 'receptors', but when it tells our brain that there is pain, we literally feel it.

;-)

I would like to point out that we don't really 'feel' pain if this is how it is defined. When we get hurt, a message is transmitted to our brain telling us that we are hurt. If it doesn't arrive or get processed, then we don't 'feel' it. Hence the use of pain medications.
 
  • #62
saltydog said:
We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. Computational devices, I suspect, will not always be the digital semiconductor devices we use today, with hard-coded programs managing bits. I can foresee a new device on the horizon, qualitatively different from computers, which does not manipulate binary but rather exhibits patterns of behaviour not reducible to decimal.

Even if that were true, we'd need computers to have something else besides rules operating on input to create output if we're to overcome the point the Chinese Room makes. It's difficult to conceive how that could even be theoretically possible. What would we add to the computer to make it literally understand? A magic ball of yarn?

I think it is quite possible to simulate intelligence, conversations etc. with the technology we now have; but in any case it seems clear that functionalism is false if the Chinese room argument is valid.
 
  • #63
neurocomp2003 said:
tishammer: are you saying that the brick has holes that allow the flow of water? or that water just flows in the building? Because the latter isn't an analogy of what i was talking about.
But now let's say you hook up some senses to the brick wall.

Let's say that's impossible to do just by arranging the bricks.

And if Searle's argument can be made for humans...then why do we assume we have a higher cognitive state that no ANN can ever match?

To answer this question I'd need to know what an ANN is.

perhaps our emotions are just the sum of NN signalling.

I don't believe that's possible (think Chinese room applied to molecular input-output).

My point is this: Searle's argument is used to refute strong AI, and if applied to humans, doesn't it break down that argument? Because we say we are intelligent, but if we adhere to Searle-like principles of strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.

I don't agree with all of what Searle says. I am not a physicalist; I am a metaphysical dualist. We are intelligent, but our free will, understanding, etc. cannot (I think) come about via the mere organization of matter. Chemical reactions, however complex, cannot understand any more than they can possess free will.
 
  • #64
Tisthammerw said:
The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer's inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

If we take this analogy and work with it: let's say computers eventually advance to the point where they become "self-aware" and are capable of learning. This will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially learning how to make their own language, or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input. From that standpoint, they could learn emotion. If they are able to emulate us socially and physiologically, then there's no reason why they couldn't eventually have a deeper understanding of what it's like to be us. After all, what separates us? Different composition. Today they don't have the 5 senses as we do, they aren't capable of self-awareness, and they are incapable of learning. All of these obstacles, I believe, can be overcome.

Think about emotion: we associate certain actions with certain stimuli. We learn not to touch a hot stove because it hurts us. A computer can learn, through repetition, that certain things present a danger to itself. Emotions such as love, empathy, and bonding are associated with familiarity. A computer can learn to "miss" things because of their benefit to it. A computer can be "taught" to be lonely, and I believe can eventually learn it on its own. We become lonely because we are used to being around people. Behaviors are learned, so a sufficiently advanced computer can "learn" emotions. When a computer learns to emulate humanlike behavior, what remains to differentiate us? One has a silicon brain, the other a "meat brain".
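The "learning through repetition" point can be sketched as a trivial conditioning loop (a toy of my own, not any specific psychological model): pain acts as a negative reward, and after a few trials the learned values steer the agent away from the stove.

```python
# Trivial conditioning: the agent tries actions, receives pain as a
# negative reward, and updates its value estimates. Repetition makes it
# avoid the "hot stove".
import random

ACTIONS = ["touch_stove", "keep_away"]
REWARD = {"touch_stove": -10.0, "keep_away": 0.0}   # pain vs. nothing

values = {a: 0.0 for a in ACTIONS}   # learned value of each action
learning_rate = 0.5
random.seed(0)                        # deterministic for the example

for trial in range(100):
    # mostly pick the best-valued action; occasionally explore at random
    if random.random() < 0.2:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])
    reward = REWARD[action]
    values[action] += learning_rate * (reward - values[action])

# Repetition has associated the stove with pain.
assert values["touch_stove"] < values["keep_away"]
```

Whether such value-adjusting ever amounts to *feeling* anything is, of course, exactly what the thread is arguing about.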
 
  • #65
nope... they're just dependent on a set of programs or instructions created by human intelligence..
 
  • #66
Tisthammerw said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.

But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

Humans began to exist, but our Creator didn't.

Well first, how do you know that there is a Creator?

And second, what makes you think that humans began to exist? Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, then AI cannot be ruled out.

But when one states that there is a creator and that humans began to exist, one is simply presupposing that the answers to those questions are "no".

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).

It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.
 
  • #67


I see your analogy, but it's based on current mundane computer technology, and really it only refutes the so-called "Turing Test", wherein an AI can supposedly be identified by its ability to hold a conversation. It's obvious nowadays that this test is not good enough. There are things that neither your analogy nor the "Turing Test" takes into account.
Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know, an infant is born with only the simple operating programs in place: feed, sleep, survive... simplistically speaking. The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open-ended. The child is immersed in an atmosphere saturated with information that it absorbs and learns from.

There are computers that are capable of learning and adapting, perhaps only in a simplistic fashion so far, but they are capable of this. The computer, though, while it has the basic programming, does not have nearly the level of information to absorb from which to learn. A child has five senses to work with, which all bombard its programming with a vast amount of information. The computer is stuck in the theoretical box. It receives information in a language that it doesn't understand and never will, because it has no references by which to learn to understand, even if it were capable of learning. It can upload an encyclopedia, but if it has no means by which to experience an aardvark, or to have experiences that will lead it to some understanding of what these descriptive terms are, then it never will understand. Your analogy requires a computer never to be able to learn, which they can, and never to be able to have eyes and ears by which to identify the strings of Chinese characters with even an object or colour, a possibility which remains to be seen.
 
  • #68
Zantra said:
If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self-aware" and are capable of learning; this will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input.

A number of problems here. One, you're kind of just assuming that computers can be self-aware, which seems like a bit of question-begging given what we've learned from the Chinese room thought experiment. Besides, how could machines (regardless of who or what builds them) possibly understand human input? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story shows. What could the designer (human or otherwise) add to make a computer understand? A magical ball of yarn?
 
  • #69
Tom Mattson said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.

But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

You're going to be making a priori assumptions regardless of what you do. As a mirror for the cosmological argument, "Anything that begins to exist has a cause" also has an a priori assumption: ex nihilo nihil fit. But I believe this one to be quite reasonable. The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.


Well first, how do you know that there is a Creator?

A number of reasons: the existence of the human soul, the metaphysical impossibility of an infinite past, etc., but that sort of thing is for another thread.


And second, what makes you think that humans began to exist?

The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)


Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, then AI cannot be ruled out.

You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (cf. the Chinese room thought experiment).

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).

It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.

Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.
 
  • #70
TheStatutoryApe said:
I see your analogy, but it's based on current mundane computer technology, and really it only refutes the so-called "Turing Test", wherein an AI can supposedly be identified by its ability to hold a conversation.

I'm not sure that this is the only thing the thought experiment rules out. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?


There are things that neither your analogy nor the "Turing Test" takes into account. Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know, an infant is born with only the simple operating programs in place.

I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.


The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open-ended.
….
There are computers that are capable of learning and adapting
….
Your analogy requires a computer to not be able to ever learn

Easily fixed with a little modification. We could also make the Chinese room "open ended" via the person using the rulebook and some extra paper to write down more data, procedures, etc. ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
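The "open ended" rulebook with extra paper can be made concrete. Below is my own toy illustration (a hypothetical `Rulebook` class, not a real system): a lookup table that can add new symbol-to-symbol rules at runtime, which mirrors the claim that a system can "learn" new responses while never attaching any meaning to the tokens it shuffles.

```python
# Toy sketch of an "open ended" Chinese room: the rules can grow,
# but every rule is pure symbol manipulation with no semantics.
class Rulebook:
    def __init__(self):
        # Initial rules from the "rulebook": opaque token in, opaque token out.
        self.rules = {"你好": "你好吗", "再见": "再见"}

    def respond(self, symbols):
        # Pure lookup; the "room" has no idea what any token means.
        return self.rules.get(symbols, "???")

    def learn(self, symbols, reply):
        # The "extra paper": record a new rule, still meaning-free.
        self.rules[symbols] = reply

room = Rulebook()
print(room.respond("谢谢"))   # "???" before the new rule is written down
room.learn("谢谢", "不客气")
print(room.respond("谢谢"))   # now returns the learned reply
```

The point of the sketch cuts both ways, which is why the thread disagrees: it shows that adding a learning mechanism changes what the system can do without obviously changing what (if anything) it understands.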
 
