Are There Limits to AI's Replication of Human Emotions and Intelligence?

  • #36
The true importance of being able to feel/emote is that it gives living beings the will to survive.

So, if we tried to create a "being" that might be even greater than us (say, more capable), then giving computers/robots "consciousness" wouldn't be enough. It would have to feel in order to be motivated to strive, prosper and survive; otherwise it would just sit in a "frozen state", since where would its motivation to do anything come from? Yes, you can let it "think" (simulate) that it is motivated for this or that, but that isn't true motivation.

Say the Earth were about to be destroyed: humans would immediately look for a way to survive, perhaps by populating other planets, but a robot would do nothing on its own unless we had programmed it to be "ready" for such a situation. The point is, robots do only what we program them to do, and having consciousness wouldn't change that, since motivation doesn't arise from consciousness alone but from feelings and emotions on top of it, or better said, in parallel with it.

How can a computer feel? Whatever program you write for a computer, it won't make it feel (no matter how amazing the simulation might be), and even combining silicon chips with biological cells won't be enough. Why? Because the mere presence of physical elements and biological cells, even arranged in the "right structure", doesn't give rise to consciousness and feelings automatically. How can I claim this? Just imagine the second after a human dies... what changes? The brain is there, the body is there, but consciousness and feelings aren't. Why not? Why can't we put consciousness back into that brain? (Imagine that we keep the brain wet and warm, with blood pumping through it.) What is lacking?

I used to write computer programs, so I have some idea of what one can and cannot program or simulate. And IMO, no matter how closely we imitate the human brain with software, it will never match it. Roger Penrose argues, put simply, that brains are capable of working in a non-algorithmic way, while computers cannot. He's not alone in this idea. What about you?
 
  • #37
I don't think that non-algorithmic ability is the only or the most important difference between a very advanced robot and a human, though. There is something we aren't seeing yet (scientifically, I mean); obviously so, since if it weren't, we'd know exactly how mind and thoughts form (emerge from the brain/body).
 
  • #38
To me there is one word that sums up the difference between a computer and a human being, and that is intent.

In saying this, however, it is hard to really define intent in the context of human beings. For all we know, our intent might be, for the most part, predetermined. It might even be that the intent of a collective organism and its individual parts is pre-programmed, or solvable as the solution to something like an optimization problem, but I digress.

So, if AI were ever to get to the stage where it was hard to distinguish between a human and a computer, intent would have to be addressed. As humans have the ability to change their intent over time, so would a computer.
 
  • #39
chiro said:
To me there is one word that sums up the difference between a computer and a human being, and that is intent.

In saying this, however, it is hard to really define intent in the context of human beings. For all we know, our intent might be, for the most part, predetermined. It might even be that the intent of a collective organism and its individual parts is pre-programmed, or solvable as the solution to something like an optimization problem, but I digress.

So, if AI were ever to get to the stage where it was hard to distinguish between a human and a computer, intent would have to be addressed. As humans have the ability to change their intent over time, so would a computer.
Intent is a good example, but IMO you'd have no intent if you had no feelings. The desire/intention to do something, to do anything, arises from the existence of feelings... Feelings motivate.

So: biological physical existence (life) brings awareness; awareness plus a nervous system brings consciousness; consciousness with complex enough brains brings thoughts and feelings; consciousness with even better brains brings the ability to communicate well (e.g. language); this leads to intentions and desires, which result in action and experience within this physical existence; and with that comes procreation. The circle is complete.
 
  • #40
Boy@n said:
Intent is a good example, but IMO you'd have no intent if you had no feelings.

The desire/intention to do something, to do anything, arises from the existence of feelings, while feelings are possible due to mind/consciousness and our physical body, and consciousness is possible due to awareness/brains.

So, awareness brings consciousness, consciousness brings thoughts and feelings, these two bring intentions and desires, and all of that brings experience within existence.

Where are the feelings/emotions driving the intent of my computer? Or my car? Or even my heart, for that matter? Complex objects are capable of performing complex tasks entirely without emotion. I think it is fallacious to suggest that complex tasks require intent; it is entirely conceivable that one could put together a software package capable of a wide range of tasks (including learning new tasks) whose "intent" is simply the fact that it is on and programmed to work.
 
  • #42
Ryan_m_b said:
Where are the feelings/emotions driving the intent of my computer? Or car?
Did I say that? Even if I somehow did (no idea how), I meant the opposite: that computers don't have feelings, and thus they don't have desires/intents.

Ryan_m_b said:
Or even my heart for that matter? Complex objects are capable of performing complex tasks entirely without emotion.
Have you heard of people who cannot emote? They have great difficulty doing ANY task, including getting out of bed when they wake up. They still manage, with great difficulty, by drawing on memories of past experiences from when they were still able to emote.

Ryan_m_b said:
I think it is fallacious to suggest that complex tasks require intent; it is entirely conceivable that one could put together a software package capable of a wide range of tasks (including learning new tasks) whose "intent" is simply the fact that it is on and programmed to work.
That's not intent, still less desire, if it's programmed in. Desires and intentions arise spontaneously when one is motivated through feelings/emotions, which are possible through consciousness, which in turn is possible through awareness.
 
  • #43
Boy@n said:
Did I say that...
The point that you seem to have missed is that complex tasks do not necessarily require an emotive agent. Humans are intelligent and emotive agents; the latter affects what we do and why. However, it is fallacious to assert that all intelligent agents must have emotions.

I mentioned devices capable of complex tasks because I believe they are a good analogy for how intelligent software does and will work: mechanically, with no awareness or emotion.
 
  • #44
"If the human brain were so simple that we could understand it, we would be so simple that we couldn't." -
Emerson M. Pugh
 
  • #45
Ryan_m_b said:
The point that you seem to have missed is that complex tasks do not necessarily require an emotive agent. Humans are intelligent and emotive agents; the latter affects what we do and why. However, it is fallacious to assert that all intelligent agents must have emotions.
We are just sharing opinions, right? Neither of us can claim to know whether feelings are necessary for intelligent life to survive and prosper.

It is my belief that any intelligent being that is aware of itself and others will have some kind of feelings, because feelings drive and define who a self-aware being is and what it does. IMO, intelligence alone would be as good as dead.

Ryan_m_b said:
I mentioned devices capable of complex tasks because I believe they are a good analogy for how intelligent software does and will work: mechanically, with no awareness or emotion.
No matter how good the software gets, it won't be able to evolve, survive and prosper within a changing environment as efficiently as humans can. Computers/robots will only be able to perform the tasks they are programmed to perform. Simulating creativity and invention won't give them those abilities.
 
  • #46
Boy@n said:
We are just sharing opinions, right? Neither of us can claim to know whether feelings are necessary for intelligent life to survive and prosper.

It is my belief that any intelligent being that is aware of itself and others will have some kind of feelings, because feelings drive and define who a self-aware being is and what it does. IMO, intelligence alone would be as good as dead.
Well, we can and we can't. I would maintain that you are making the claim that an intelligent agent must be an emotive one. I would counter that claim by pointing out that non-emotive objects can perform complex tasks that previously were solely the domain of human beings, for example software that can analyse speech semantically and respond. I see no reason to suggest that as the complexity of tasks increases, emotion must arise. I'm quite tired now, so I'll give the matter some thought overnight, but I'm pretty sure there are examples of intelligent agents without emotion.
Boy@n said:
No matter how good the software gets, it won't be able to evolve, survive and prosper within a changing environment as efficiently as humans can. Computers/robots will only be able to perform the tasks they are programmed to perform. Simulating creativity and invention won't give them those abilities.
Software can evolve; genetic algorithms are a good example of that. As for the rest of your statement, I don't think you can categorically say that it isn't possible to write software capable of learning and adapting. Such things already exist in a limited capacity, and I see no reason to believe that this capability cannot scale.
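As a minimal illustration of that point, here is a toy genetic algorithm (all names and parameters below are my own invention, chosen for illustration) that evolves random bit strings toward the classic "OneMax" objective through selection, crossover and mutation:

```python
import random

random.seed(0)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60


def fitness(genome):
    """Toy objective ("OneMax"): count the 1-bits in the genome."""
    return sum(genome)


def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]


def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]


def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]  # truncation selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)


best = evolve()
print(fitness(best))
```

There is nothing clever in any individual step; the population improves anyway, which is the sense in which software "evolves".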
 
  • #47
Just looking at the evolution of humans and the brain, evolution has a billion-year head start on us, so to think that we could compress that process into a matter of decades might be unrealistic. But I think that if clever enough software and powerful enough hardware were set up so that a computer could evolve (maybe throw in a 3D printer and some robotics so it can repair and modify itself), then yes, in the long term you could have a machine with consciousness, even emotion.
 
  • #48
gordonj005 said:
Just looking at the evolution of humans and the brain, evolution has a billion-year head start on us, so to think that we could compress that process into a matter of decades might be unrealistic. But I think that if clever enough software and powerful enough hardware were set up so that a computer could evolve (maybe throw in a 3D printer and some robotics so it can repair and modify itself), then yes, in the long term you could have a machine with consciousness, even emotion.
I think there is a flaw in this thinking: the eye has had billions of years to evolve, yet we managed to invent cameras. Now, cameras are obviously nothing like the eye, but they don't need to be, and it is certainly not desirable for them to be. Instead they fulfil a function that the eye also performs, but in a different way.

This is how we approach developing intelligent software. We don't have to simulate an entire brain and body in order to make software that can recognise voices, faces, patterns of behaviour, etc. It doesn't have to be anything like a human to perform tasks that, at the moment, we have to employ our intelligence for.
 
  • #49
Ryan_m_b said:
I think there is a flaw in this thinking: the eye has had billions of years to evolve, yet we managed to invent cameras. Now, cameras are obviously nothing like the eye, but they don't need to be, and it is certainly not desirable for them to be. Instead they fulfil a function that the eye also performs, but in a different way.

This is how we approach developing intelligent software. We don't have to simulate an entire brain and body in order to make software that can recognise voices, faces, patterns of behaviour, etc. It doesn't have to be anything like a human to perform tasks that, at the moment, we have to employ our intelligence for.

Right, a good example being chess programs. While this has little relation to general intelligence, attempts to model human thinking about chess never got very far. Using completely different methods, computers have reached a point where no human would consider a match against a computer at any time control.

(Caveats: it is virtually undisputed that top humans play some positions better than any computer, and also true that computers play some positions better than any human. Yet the last human-computer match (involving Vladimir Kramnik) demonstrated to everyone's satisfaction that direct human-computer matchups were no longer interesting. A final observation, suggesting the value of cyborgs in the medium term: expert human players (human ratings go, e.g., novice, class player (E, D, C, B, A), expert, master, International Master, Grandmaster, top-20 player) paired with medium-strength computer programs beat the strongest computer programs playing with no human assistance.)
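The "completely different methods" above boil down to exhaustive game-tree search. A chess engine is far too large to sketch here, but the same idea fits in a few lines for a trivial take-1-to-3-stones Nim game (the game and function names are mine, purely for illustration):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def side_to_move_wins(pile):
    """Brute-force game-tree search: players alternately take 1-3 stones,
    and whoever takes the last stone wins. Returns True if the player to
    move can force a win from `pile` stones."""
    if pile == 0:
        return False  # the previous player took the last stone; we have lost
    # A position is winning if ANY move leads to a position that is
    # losing for the opponent -- exactly the minimax idea chess engines use.
    return any(not side_to_move_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)


# The search "discovers" the known theory: multiples of 4 are lost.
print([n for n in range(1, 13) if not side_to_move_wins(n)])  # → [4, 8, 12]
```

There is no understanding anywhere in this code; it simply enumerates the tree, which is exactly the sense in which engines beat humans without modelling human thinking.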
 
  • #50
Ryan_m_b said:
All life? Including bacteria, plants and brain-dead patients? I highly doubt it. All the evidence points to consciousness being a product of a central nervous system.

What evidence is this?

To start, how do I know Ryan_m_b is conscious?
 
  • #51
atyy said:
What evidence is this?

To start, how do I know Ryan_m_b is conscious?
The evidence would be that you are conscious (presumably), that everything linked to your consciousness is found in other people who report similar experiences, and that all investigations into brain activity thus far show no difference in how people's brains seem to work.

Of course, one can never get around the proposal that everyone bar oneself is a philosophical zombie, but it's not a logical proposition.
 
  • #52
Ryan_m_b said:
The evidence would be that you are conscious (presumably), that everything linked to your consciousness is found in other people who report similar experiences, and that all investigations into brain activity thus far show no difference in how people's brains seem to work.

Of course, one can never get around the proposal that everyone bar oneself is a philosophical zombie, but it's not a logical proposition.

I'm a zombie. I presume you are one.
 
  • #53
atyy said:
I'm a zombie. I presume you are one.
Considering we live in a world where the vast majority claim consciousness, I can only conclude:

1) Everyone is conscious and you are joking
2) Everyone is conscious and you are trying to make a point
3) Some people are zombies and you are one of them
4) Everyone but me is a zombie, but the nature of most of them is to pretend to be conscious

I'm going to go with option 1 or 2. Either way we are getting off track.
 
  • #54
Ryan_m_b said:
Of course, one can never get around the proposal that everyone bar oneself is a philosophical zombie, but it's not a logical proposition.
Now why did you have to bring that up? Just because philosophers worry themselves silly about zombies and qualia doesn't mean that the sciences need to do so.

I guess it was inevitable that these concepts would arise. Discussions of what constitutes "true AI" are problematic given that we do not yet know what constitutes natural intelligence.
 
  • #55
I think it's more plausible that artificial intelligence will gain a different form of intelligence/awareness than human intelligence/awareness. I was reading an article about a particular fish that was deemed intelligent because it used a rock to break open a shell and get at the food inside. Some scientists had a problem with calling that a form of intelligence; others countered that intelligence can be judged in different ways. I agree with the latter.

AI could eventually reach a human-like state of mind, but I think that's a long way off, and there really isn't any point in starting work on it right now.
 
  • #56
But the thing about the eye is that it's flawed. We design the camera to be better. The human body is not by any stretch of the imagination a perfectly functioning system, so I don't see your argument... The argument here is whether AI systems CAN develop emotions and consciousness, NOT whether it's advantageous for them to develop them. I think we're on two different topics.
 
  • #57
gordonj005 said:
But the thing about the eye is that it's flawed. We design the camera to be better. The human body is not by any stretch of the imagination a perfectly functioning system, so I don't see your argument... The argument here is whether AI systems CAN develop emotions and consciousness, NOT whether it's advantageous for them to develop them. I think we're on two different topics.
I think you've missed a couple of points again. Firstly, when we say things like "the eye is flawed" or "a camera is better", it is really important to first describe what metric you are using to establish this and then apply it universally. For example: in terms of resolution and practicality, the camera is better; in terms of efficiency and durability, the eye is better.

I brought up the example of a camera because it does a few of the jobs that the eye does; specifically, it does the few jobs we want to replicate, but in a more convenient way (to the best of our ability). The discussion going on in the thread right now concerns the claim that intelligent software will require emotions. I'm contesting that by pointing out that when we want to replicate a human faculty (e.g. vision), we don't go about it by copying how humans work. This led to the examples, given by me and others, of intelligent software that acts nothing like a human. Because of this, I disagree that a future generally intelligent piece of software will necessarily require emotion.
 
  • #58
steps towards AI

Hey!

It seems that most people agree that sometime in the future, technology will be able to at least simulate everything that currently only humans can do. I'm not concerned with "when", but with the order of the steps that will have to take place.

For example, chess playing seems much easier to replicate than walking. Computers have mastered chess; walking on two feet will probably take a lot longer.

So the question/discussion is: what do you think the order of replication of human attributes will be? Which human characteristics are the hardest to replicate, and which are the easiest? Can we even know before we succeed? Are there things impossible to replicate?
 
  • #60
Speaking of cameras, here's a rather expensive chess board whose pieces are various (expensive!) camera lenses or parts thereof:

http://www.instablogsimages.com/1/2011/07/22/lensrentals_chess_set_01_g2izg.jpg

A computer can now beat the best human at chess (though it is not even close at go). However, imagine taking a supposedly intelligent robot into a messy room and telling it to find and assemble the chess boards. The one above is one of them. Here are some more:

http://www.beautifullife.info/wp-content/uploads/06/img6.jpg

http://www.instablogsimages.com/images/2009/06/30/vacuum-tube-chess-set_01_zR3na_58.jpg
A basic problem with designer chessboards: the designers don't know the game. White on the right.

http://www.instablogsimages.com/images/2010/08/18/hammond-chess-set_vLIsA_58.jpg
Obviously a bit too much drinking is going on here. White on the right, clowns.

http://www.instablogsimages.com/images/2009/10/27/chess-set-_02_5nw8X_17621.jpg
Why are so many of these designer chessboards set up wrong?

http://www.inewidea.com/wp-content/uploads/2009/08/2009081403.jpg
No way to get this one wrong!

http://trendsupdates.com/wp-content/uploads/2009/08/air_chess.jpg
Finally, white on the right.

The reason for all of the above is that someone of note said (danged if I can find the quote) that while a computer program can beat a human at chess, it could never find a chessboard in a messy room. And that's with the kind of board used in tournaments. The designer boards above? Without special programming? Not a chance, at least not for a long time.

Pattern recognition is what we humans do best. Not math, not logic, and definitely not considering millions of options. We are pretty lousy at math and logic, and we certainly can't look 17 plies deep into a chess game. We don't need to. Human chess players think by gestalt. Computer chess programs don't think; they work by brute force.

A computer go program that can beat the best human is still a long way off. Brute force can help: a computer go program did beat an 8-dan professional a few years back. However, the program was given a nine-stone handicap (about the same as a two-rook handicap in chess) and ran on an 800-CPU supercomputer with the equivalent of 15 teraflops.

If someday a computer program is made that can beat an 8-dan professional playing white, that program still won't be able to find the chessboard hidden in plain sight in a messy room.
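To put rough numbers on why brute force struggles at go: full-width search grows as (branching factor)^depth, and go's average branching factor is roughly an order of magnitude larger than chess's. A quick back-of-the-envelope sketch (the figures of ~35 and ~250 are commonly cited approximations, not exact values):

```python
# Commonly cited average branching factors (rough approximations)
CHESS_BRANCHING = 35
GO_BRANCHING = 250


def tree_size(branching, plies):
    """Approximate number of leaf positions in a full-width search."""
    return branching ** plies


# Searching 8 plies (4 moves per side) ahead:
print(f"chess: {tree_size(CHESS_BRANCHING, 8):.1e}")  # roughly 2.3e12 positions
print(f"go:    {tree_size(GO_BRANCHING, 8):.1e}")     # roughly 1.5e19 positions
```

A gap of about seven orders of magnitude at this modest depth alone suggests why even the hardware described above still needed a nine-stone handicap.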
 
  • #61
D H said:
Pattern recognition is what we humans do best.

Highly recommended reading for everybody interested in human intelligence and AI: Jeff Hawkins, "On Intelligence".
 
Last edited by a moderator:
  • #62


Modern computers are largely based on the theoretical model first produced by von Neumann in WWII. It's a great theory for producing a universal calculating machine and automatons but, evidently, completely inadequate for an AI approaching the capacity of a human being. Not really surprising when you consider that the human brain bears little resemblance to a conventional computer chip.

IBM recently completed stage 2 of their first neuromorphic processor design. Essentially they are attempting to cram as many pseudo-neurons and synapses onto a chip as possible with current technology. They carefully studied how the brain of a cat is organized and tried to replicate the most important aspects on a chip using reprogrammable memristors capable of recursive functions. Like the human brain, the circuitry itself changes according to the needs of the program at the moment, making it capable of calculations well beyond those of conventional von Neumann designs using a similar number of components. Instead of programming it to walk or talk, you would teach it.
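To make "you would teach it" slightly more concrete, the oldest idea in this space is Hebbian learning ("cells that fire together wire together"). The sketch below is my own toy illustration (the pattern and sizes are arbitrary): a Hebbian weight matrix stores one pattern, and a single update step recovers it from a corrupted input, Hopfield-network style. Nothing in the code is explicitly programmed with the pattern's content; the structure of the weights carries the memory.

```python
# A stored pattern of +/-1 activations (a stand-in for a memorised input)
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
n = len(pattern)

# Hebbian rule: the weight between two units grows when they fire together
# (w_ij = x_i * x_j), with no self-connections on the diagonal.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

# Corrupt one unit, then let the network settle with one synchronous update.
noisy = pattern[:]
noisy[0] = -noisy[0]
recalled = [1 if sum(W[i][j] * noisy[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

print(recalled == pattern)  # → True: the corrupted input snaps back to the memory
```

This is of course vastly simpler than a memristor chip, but it shows the same flavour of computation: the "program" is the learned connectivity, not a sequence of instructions.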

It's essentially the brute-force engineering approach to the problem: if the theories and mathematics are completely inadequate, build a few working models and see how they behave. Semiconductors are a multi-billion-dollar industry with huge research budgets that can easily afford such things. Considering how little progress has been made with the foundations of fuzzy logic, chaos theory, etc., it seems quite likely that, as with the steam engine, such brute-force engineering approaches will produce answers before the theoreticians do.

Quite likely, then, we will first see computers and robots that imitate much of what humans can do, and the fact that some of them might approach human consciousness may be something that slowly dawns on us over time.
 
  • #63
I see my question was moved, but people here are mostly concerned with what machines can and can't do. Although this answers part of my question, I'm more interested in the hierarchy of difficulty of the things that AI can or will be able to do.

It just seems strange to me that what people consider the "higher functions" of the mind, like strategic thinking and decision making, are more easily implementable (at least in certain cases) than making a machine that is able to see (i.e. identify objects around it the way humans do) or simply cross the countryside on foot as well as humans or better. Most people take these abilities for granted. What makes seeing so much more complex than playing chess?

An idea I have is that the difficulty of implementing in machines a task that humans do is proportional to how long that task has been evolving. For example, seeing had been evolving long before there were any humans around, while our strategic thought has been evolving for maybe a few hundred thousand years (maybe more, I don't really know, but certainly much less than vision). The longer something has been evolving, the more complex it gets, right? So are we going to get computers winning at all symbolic games (chess, go, computer games) long before we see a robot team winning a soccer game?
 
  • #64
All of our definitions of intelligence are anthropomorphic in nature. So I don't see why we won't see sentient AI in the future, since technology is bound up with our own intelligence. I mean, everything to this point can be called AI in nature, based on its dependence on our intelligence.
 
  • #65
Ryan_m_b said:
I think you've missed a couple of points again. Firstly, when we say things like "the eye is flawed" or "a camera is better", it is really important to first describe what metric you are using to establish this and then apply it universally. For example: in terms of resolution and practicality, the camera is better; in terms of efficiency and durability, the eye is better.

I brought up the example of a camera because it does a few of the jobs that the eye does; specifically, it does the few jobs we want to replicate, but in a more convenient way (to the best of our ability). The discussion going on in the thread right now concerns the claim that intelligent software will require emotions. I'm contesting that by pointing out that when we want to replicate a human faculty (e.g. vision), we don't go about it by copying how humans work. This led to the examples, given by me and others, of intelligent software that acts nothing like a human. Because of this, I disagree that a future generally intelligent piece of software will necessarily require emotion.

Yes, I would agree with you there. Furthering technology does not require emotion or consciousness; I don't think anyone would disagree with that. What I'm arguing is that it's possible that future generations of AI could develop these features independently of what we require of them. Perhaps I'm misinterpreting the trend of this thread, but I definitely agree with you on that point.
 
  • #66
gordonj005 said:
Furthering technology does not require emotion or consciousness. I don't think anyone would disagree with that.
Furthering technology in general? Of course not. Creating a true artificial intelligence? There are plenty who would disagree with you. Just go to Google Scholar and search for qualia + zombie + artificial intelligence: http://scholar.google.com/scholar?hl=en&q=qualia+zombie+artificial+intelligence&btnG=Search
What's the right answer? Nobody knows. Should you or someone else create a true AI that doesn't have qualia, then we'll know the answer. The flip side (attempting to create a true AI but failing) does not provide an answer; it just means the researcher tried and failed.
 
  • #67
I think the term "true AI" is self-defeating.

What do we mean by "true AI"? I don't see how it could mean anything other than something identical to a human being. But how do you create something that is not identical to a human being to be identical to a human being in the first place?

Or, more simply: what satisfies the criterion for "true" in "true AI"?
 
  • #68
Willowz said:
What satisfies the criterion for "true" in "true AI"?
The standard definition is a construct that can pass the Turing test.
 
  • #69
D H said:
The standard definition is a construct that can pass the Turing test.
But that is a very poor definition. I believe there are chatterbots that can satisfy the criterion.
 
  • #70
Willowz said:
I believe there are chatterbots that can satisfy the criterion.
First off, find one. Even in those artificially constrained contests that limit judges to a short time or a canned set of questions, the humans still win.

Secondly, I said it is the standard definition. I didn't say it's a good one. There's the Chinese Room argument, after all.

Finally, most AI researchers don't care. Most of them know that true AI is a long way away. They're quite happy proposing or creating soft AI that wins grants or makes money. There is still quite a bit of money to be made with soft AI.
 
