Is Destroying An Advanced Robot Murder?

In summary, The Robots of Dawn, the last book in a robot trilogy by Isaac Asimov, deals with the ethical and moral question "Is it murder when a highly advanced robot is completely destroyed by a human or another robot?" The conclusion is that if the robot is conscious, then it is murder.
  • #1
baywax
The last book in a trilogy by Asimov deals with the ethical and moral question "Is it murder when a highly advanced robot is completely destroyed by a human or another robot?"

There, Elijah Baley (future super-sleuth) is told that the Spacer world of Aurora has requested through diplomatic channels that he go to Aurora. He is told that the mind of Jander Panell, a humaniform robot identical to R. Daneel Olivaw, has been destroyed via a mental block - "roboticide", as Baley later terms it.

The robot's inventor, Han Fastolfe, has been implicated. Fastolfe, whom we last met in The Caves of Steel, is the best roboticist on Aurora. He has admitted that he is the only person with the skill to have done it, although he denies doing it. Fastolfe is also a prominent member of the Auroran political faction that favors Earth. Implication in the crime threatens his political career. Therefore, it is politically expedient that he be exonerated.
from: http://www.answers.com/robots%20of%20dawn

The "Robots of Dawn" is the last in a trilogy by Isaac Asimov that started off with the "Caves of Steel" and "The Naked Sun".

What do you think? Imagine that you had developed a relationship with a highly evolved, human-like robot that really was as spontaneous, entertaining and intellectually stimulating as any human you'd ever met.

Now imagine that someone destroyed that robot. Would you consider it as heinous a crime as murder? Would the courts agree? The medical community... etc...?
 
  • #2
If the robot were not conscious, then I wouldn't consider it murder.
If it were, I would.
 
  • #3
How do you measure the level of consciousness? And what is the threshold to start caring?
 
  • #4
sneez said:
How do you measure the level of consciousness? And what is the threshold to start caring?
I can't, and the threshold would be any conscious being.
 
  • #5
To check whether the robot is conscious there is a simple test. Tell him to look at something like a chair and to describe what he thinks it is.

If he says it's a chair, then he is conscious.

If he says that it is a visual representation of a chair generated by one of his subprograms responding to sensory input, he is not conscious.

As long as he sees just a chair, he has the same feeling of personal identity (the illusion of separation between ourselves and other things) as we do.
 
  • #6
I don't agree that experiencing the illusion of subject and object is a sign of consciousness. According to your test you cannot test the consciousness of animals or plants. (Well, here we go with the eternal problem of consciousness.) I propose another solution. If one is attached to the robot, then killing or destroying the robot causes the person harm and should be taken to court. Just like killing an animal or any other "thing" we may grow attached to. It's not the best criterion, but it's fairly straightforward.

For PIT2: you produced a contradictory statement. To continue, is the solar system conscious? Why yes or no, and how do you measure it? Do you care if I kill it?
 
  • #7
sneez said:
For PIT2: you produced a contradictory statement. To continue, is the solar system conscious? Why yes or no, and how do you measure it? Do you care if I kill it?
I was talking about the principle, not the practical situation. If the robot is conscious, and if I knew that, then I would consider it murder.

But since I can't determine whether the robot is conscious, and the robot in every way appears to be human, I would still consider it murder, because I believe human behaviour can only exist with consciousness present.

Also, my mirror neurons and subconscious instincts would probably work regardless of what my rational mind decides to believe, and I would feel empathy for the robot and consider it murder.
 
  • #8
Ok PIT2, that makes more sense.

I found a potential contradiction in my earlier statement. What if someone can show that he or she feels attached to all living things, including the animals we eat? Should we be brought to court? (I guess yes, but what should the ruling be?)
 
  • #9
I have two answers which I think are the only workable ones:

In terms of individual morality it is a question of whether the individual thinks it is murder. "I am killing someone" vs. "I am destroying something."

I am supposing that one distinguishes murder from, say, involuntary "man"slaughter, i.e., to kill in ignorance is not murder.

The second answer is that, apart from each individual's morality, all we have left is social convention. It's murder if the law or the social conventions say it is murder.

I see in most such discussions the implicit assumption that to kill a sentient entity is murder. This is the social convention and personal ethic most tend to adopt. Which of course must, and here did, bring us to the question "Is the hypothesized robot a sentient entity?"

I do recall some SF author's point that it is a sentient entity's responsibility to demonstrate its sentience. I wouldn't quite go that far. I would, however, propose a modification to Turing's test:

Construct an artificial language rich enough to express abstract concepts such as "don't kill me" or, even better, "You shouldn't kill me", but simple enough that the entity can at least learn to mimic its syntax and grammar. Then attempt to teach the entity this language.

I'm using an artificial language so that one can circumvent the lack of inherent skills such as speech or the ability to consciously control skin color, and also so that the language itself is created after the creation of the entity, so as to avoid the various "card file" loopholes to the usual Turing test.

One then applies Turing's test using this artificial language. If the entity learns the language well enough to communicate through a blind channel to some questioner, and if it can argue the case "You shouldn't kill me!", it is then sentient. (That is, even if the argument isn't a very good one... the ability to argue is the ability to comprehend the issues involved.)

By this criterion certain human beings are in fact not sentient. By this criterion some of the smarter non-human animals on this planet are very close to being sentient.
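To make the shape of the protocol a little more concrete, here is a rough sketch of the kind of harness I have in mind. The tiny lexicon, the grammar check and the scoring rule are placeholders of my own invention, purely for illustration; the real judging would of course be done by a person:

```python
import re

# An invented artificial language: a closed lexicon that the entity is
# taught beforehand. Everything here (lexicon, checks, prompts) is a
# placeholder of my own invention, not a worked-out protocol.
LEXICON = {"i", "you", "we", "kill", "spare", "harm", "value", "feel",
           "me", "pain", "life", "not", "should", "because"}

def is_well_formed(utterance: str) -> bool:
    """The entity must at least be using the artificial language:
    every token has to come from the closed lexicon."""
    tokens = re.findall(r"[a-z]+", utterance.lower())
    return bool(tokens) and all(t in LEXICON for t in tokens)

def blind_channel(entity_reply_fn, prompt: str) -> str:
    """Pass text only, so the questioner judges content alone and not
    voice, appearance or anything else about the entity."""
    return entity_reply_fn(prompt).strip().lower()

def judge(transcript) -> bool:
    """A human judge would do this part; here we only check that the
    entity stayed inside the language and argued against being killed
    with at least some reason attached ('because ...')."""
    grammatical = all(is_well_formed(u) for u in transcript)
    argued = any("should not kill" in u and "because" in u for u in transcript)
    return grammatical and argued

def run_test(candidate) -> bool:
    """candidate: a function taking a prompt string and returning a reply."""
    prompts = ["we should kill you", "you feel pain"]
    transcript = [blind_channel(candidate, p) for p in prompts]
    return judge(transcript)
```

An entity that replies, say, "you should not kill me because i feel pain" would pass; an entity that only parrots the prompts would not.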

Regards,
James Baugh
 
  • #10
If you believe in the materialistic-scientific worldview, human beings already are "advanced robots". If we see ourselves as machines, why shouldn't we extend our rights to our fellow mechanisms?

If, on the other hand, you don't believe in materialism, then the question is moot, as we will never be able to build a thing that seems to have consciousness.
 
  • #11
jambaugh said:
I have two answers which I think are the only workable ones:

In terms of individual morality it is a question of whether the individual thinks it is murder. "I am killing someone" vs. "I am destroying something."

I am supposing that one distinguishes murder from, say, involuntary "man"slaughter, i.e., to kill in ignorance is not murder.

The second answer is that, apart from each individual's morality, all we have left is social convention. It's murder if the law or the social conventions say it is murder.

I see in most such discussions the implicit assumption that to kill a sentient entity is murder. This is the social convention and personal ethic most tend to adopt. Which of course must, and here did, bring us to the question "Is the hypothesized robot a sentient entity?"

I do recall some SF author's point that it is a sentient entity's responsibility to demonstrate its sentience. I wouldn't quite go that far. I would, however, propose a modification to Turing's test:

Construct an artificial language rich enough to express abstract concepts such as "don't kill me" or, even better, "You shouldn't kill me", but simple enough that the entity can at least learn to mimic its syntax and grammar. Then attempt to teach the entity this language.

I'm using an artificial language so that one can circumvent the lack of inherent skills such as speech or the ability to consciously control skin color, and also so that the language itself is created after the creation of the entity, so as to avoid the various "card file" loopholes to the usual Turing test.

One then applies Turing's test using this artificial language. If the entity learns the language well enough to communicate through a blind channel to some questioner, and if it can argue the case "You shouldn't kill me!", it is then sentient. (That is, even if the argument isn't a very good one... the ability to argue is the ability to comprehend the issues involved.)

By this criterion certain human beings are in fact not sentient. By this criterion some of the smarter non-human animals on this planet are very close to being sentient.

Regards,
James Baugh

Does this test apply to infants? Probably not.
 
  • #12
Crosson said:
To check whether the robot is conscious there is a simple test. Tell him to look at something like a chair and to describe what he thinks it is.

If he says it's a chair, then he is conscious.

If he says that it is a visual representation of a chair generated by one of his subprograms responding to sensory input, he is not conscious.

As long as he sees just a chair, he has the same feeling of personal identity (the illusion of separation between ourselves and other things) as we do.
Don't you have it backwards? Saying "It's a chair" only requires pattern recognition - matching a visual pattern to a catalog/description. Computers have been doing that for decades.

If, on the other hand, he understands how his view of the chair is generated, that's self-awareness: sentience.

Of course, no human would describe a chair as an ocular image received by the sensory receptors of our imaging device, then transmitted to a processing center for pattern recognition either...
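To underline how little the "it's a chair" answer demands, here's a throwaway catalog matcher of the sort computers have run for decades. The feature vectors and labels are invented for illustration:

```python
# Label an object by finding the nearest stored prototype in a catalog.
# No self-awareness involved, just arithmetic on made-up feature vectors.
CATALOG = {
    "chair": (4, 1, 0),         # (legs, backrest, wheels)
    "stool": (3, 0, 0),
    "office chair": (5, 1, 1),
}

def classify(features):
    """Return the catalog label whose prototype is closest to `features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CATALOG, key=lambda label: dist(CATALOG[label], features))

print(classify((4, 1, 0)))  # prints "chair"
```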
 
  • #13
I suppose I'd better define "highly advanced robot". In the books Asimov describes his robots as having "positronic" neural networks. I can only imagine that these neural nets are the closest approximation to an organic neural network one can construct without actually growing neurons to match a machine's physiology.

The approximate neural network in Asimov's robots, or "positronic" network, acts like an organic entity in that it pulses when it is stimulated, simulating the electrochemical pulse of the sodium/potassium pump mechanism found in the axons, dendrites and synapses of a nerve cell. Bear in mind that Asimov wrote the first two books in the '50s and the last one, "The Robots of Dawn", in the '80s, so his initial understanding of neurophysiology was limited by the scientific understanding of that field from the 1950s.
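To picture the kind of "pulses when stimulated" behaviour I mean, here is a toy unit in the spirit of a leaky integrate-and-fire neuron. This is only my own throwaway illustration with arbitrary constants; nothing like it appears in the books:

```python
# A toy unit that accumulates stimulation, leaks a little each step,
# and emits a pulse ("fires") whenever a threshold is crossed.
def simulate_unit(inputs, threshold=1.0, leak=0.1):
    potential = 0.0
    spike_times = []
    for t, stimulus in enumerate(inputs):
        potential = max(0.0, potential - leak) + stimulus
        if potential >= threshold:
            spike_times.append(t)   # the unit fires a pulse
            potential = 0.0         # and resets afterwards
    return spike_times

# Weak input builds up slowly; the strong jolt at step 4 makes the unit fire.
print(simulate_unit([0.3, 0.3, 0.3, 0.3, 1.2, 0.0, 0.3]))  # -> [4]
```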

I'd say that "highly advanced" robots would mean the type of mechanism that, although constructed by humans, has taken an evolutionary path of its own and developed, according to robot laws and the laws of nature, a sense of self awareness that could well be described as parallel to a humans. This is in light of the fact that the robots have become part of human society over a period of about 1500 years (into our future).

(edit) One complication Asimov didn't go into much detail about was emotion in these robots. I'm guessing that leaving the hormonal system out of the mix in a robot's physiology would be one way to ensure the Three Laws of Robotics. In other words, no "crimes of passion" or "emotional outbursts" could occur without the hormonal system, or an approximation thereof, showing up in the robot anatomy. Thank you!
 
  • #14
baywax said:
Does this test apply to infants? Probably not.

I erred in stating "By this criterion some humans are not sentient".

The test (like the original Turing test) is sufficient but not necessary to define sentience, i.e., it will prove sentience but failure doesn't disprove sentience. Thus indeed infants could not pass the test, at least not quickly, and may yet be sentient.

You may take advantage of my lack of a specific time frame, in which case the infant can, over say five years, learn the language sufficiently to communicate and demonstrate its awareness and ability to think abstractly.

Another issue is when sentience (by any of various definitions) emerges in a human. Certainly it is not likely present at conception. Certainly it is by most criteria present in a five-year-old person (absent severe mental disability). Can an objective criterion be defined? If so then when along the path from zygote to fetus to infant to adult does it emerge?

Et cetera
and regards,
J.B.
 
  • #15
PIT2 said:
I can't, and the threshold would be any conscious being.
Any conscious being? You mean like a monkey? A dog? A snail? A plant?
 
  • #16
Crosson said:
To check if the robot is conscious there is a simple test. Tell him to look at something like a chair, and describe what he thinks it is.

If he says its a chair, then he is concious.

If he says that it is a visual representation of a chair generated by one of his subprograms responding to sensory input, he is not conciousness.

As long as he see just a chair, he has the same feeling of personal identity (the illusion of separation between ourselves and other things) as we do.
So... when the biologist, or the physicist, or the neurologist, or the philosopher admits that he has merely acquired mental imagery, and his brain has processed that into an abstraction we call a chair, then he ceases to be conscious?

And, by your criterion, a robot that was simply programmed to state what kind of furniture it perceives (in particular, without any self-awareness) would be conscious?
 
  • #17
I think discussing this is irrelevant. We don't yet know how we could go about programming self-awareness, or even the simple ability to make a decision. The definition of artificial life will be decided, if ever, after we have already created it.
 
  • #18
Hurkyl said:
Any conscious being? You mean like a monkey? A dog? A snail? A plant?
If they are conscious, then yes, I would call it a crime to do them harm.
 
  • #19
There is a good chance that human morals have been selected over time through evolutionary processes. We treat people who kill other humans differently from people who kill ants.

Consciousness is not at all clear. Is consciousness the ability to be aware of one's surroundings? A lot of organisms on Earth are aware of their surroundings. What degree of ability classifies as conscious and what degree does not?

Humans, as a species, have benefited from not killing fellow humans on a whim. Therefore, such behavior has probably been selected for over time. We don't really care if someone killed an ant. We do care if someone killed a human.

If advanced robots come to have the same delicate relationship with humans as humans have with one another, the destruction of such an advanced robot may come to be considered murder over time. However, by that time, will there even be a difference between humans and advanced robots?
 
  • #20
Should we be able to destroy things we have created?
 
  • #21
jambaugh said:
I erred in stating "By this criterion some humans are not sentient".

The test (like the original Turing test) is sufficient but not necessary to define sentience, i.e., it will prove sentience but failure doesn't disprove sentience. Thus indeed infants could not pass the test, at least not quickly, and may yet be sentient.

You may take advantage of my lack of a specific time frame, in which case the infant can, over say five years, learn the language sufficiently to communicate and demonstrate its awareness and ability to think abstractly.

Another issue is when sentience (by any of various definitions) emerges in a human. Certainly it is not likely present at conception. Certainly it is by most criteria present in a five-year-old person (absent severe mental disability). Can an objective criterion be defined? If so then when along the path from zygote to fetus to infant to adult does it emerge?

Et cetera
and regards,
J.B.

When does self-awareness emerge during the course of human development? When does it emerge in a machine? Good questions.

I think we can start by categorizing whether the robot is the property of a human or whether it is a product of nature. In doing so we begin to define which laws apply in the case of destroying an advanced robot. Is it property damage only, or is it murder?
 
  • #22
I very much like the opening of the American Declaration of Independence, the "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights..." bit. The question of equating a robot's destruction with murder is, I believe, only an instance of the wider question "Who is equal to Man?": to which entities can we apply the same "self-evident truths"? For example, only that to which we concede the "unalienable Right of Life" can be murdered in the criminal-law sense.

So, what does it take for us to consider something, say a robot, equal to Man? I'm amusing myself with a thought about that.

Frequently I come to think about lion hunts on the savannah. There is something I find very gross in these documentaries, something that chills me each time. And I certainly don't hold anything against the lions; they are acting the same as Man does -- I've had my steak for dinner today too. It's the lions' prey that disturbs me. There is this big herd of stocky-looking animals, outnumbering the lions ten to one, most of them heavier and taller than a lion. A herd capable of killing anything smaller that stands in its way in a stampede. And what happens? The lionesses pick out a fragile-looking member of the herd (frequently a non-adult) and go after it -- perfectly human. The herd, instead of responding with a concentrated stampede, just goes each gnu for itself -- already on the verge of human behavior. And the final horror: after the lions get their prey and start shredding it apart on the spot, the herd just stops and continues about whatever they were doing before. Now, that is outright inhuman.

The savannah herd lacks empathy. Its members are not capable of putting themselves in the position of their killed fellow member and acting upon that sensation. In effect, they do not grant themselves the unalienable rights that the Declaration speaks about. Hence, they are not equal to Man. That is my current working answer: it is equal to Man that which itself grants to its kin unalienable rights compatible with Man's.

Back to the intelligent robots, here's my practical recipe to decide whether they can be murdered. Start destroying them one by one -- say those that are damaged, obsolete, etc. -- in "public", among many fellow working robots. And wait for the robo-militia to come at you, waving their Declaration of Independence.

(But don't start a war then :)

--
Chusslove Illich (Часлав Илић)
 
  • #23
It is still humans who are writing the laws, 1500 years from now. Robots may have integrated well with society, but they still have not been granted equal status with humans.

Look at chimps and apes today. They are sentient. They can read and communicate, and they are empathic, to name a few sentient traits. Yet they are not treated as equals. When they're caged, tortured and murdered in the name of research, humans applaud the quest for answers to their medical problems. There are no laws against this other than the animal-rights activists' wish list of compassionate laws.

Are the robots property?

Are they individuals with the same rights as humans? (based on their ability to reason, study, communicate, respond logically etc...)
 
  • #24
Murder is a word that applies only to a specific interaction between two humans that results in death. Humans do not murder a chimpanzee (a very advanced primate) if they throw a stone at it and kill it without cause--in the same way humans cannot murder robots. Murder is an unjustified action of one human against another that results in death.
 
  • #25
Rade said:
Murder is a word that applies only to a specific interaction between two humans that results in death. Humans do not murder a chimpanzee (a very advanced primate) if they throw a stone at it and kill it without cause--in the same way humans cannot murder robots. Murder is an unjustified action of one human against another that results in death.

en.wikipedia.org/wiki/Murder_(disambiguation)

1. The unlawful killing of one human by another, especially with premeditated malice.
2. Slang. Something that is very uncomfortable, difficult, or hazardous: The rush hour traffic is murder.
3. The collective noun for crows.

Thanks for pointing that out Rade, where were you earlier!?

How about, is it "disambiguation" to destroy a robot?:rolleyes:
 
  • #26
This has some interesting implications.

Modern Man has never had a rival. (Personally, I am not sure that a sentient species is capable of living in peace with another sentient species, even in principle.)

If a Neanderthal were found living in the remote reaches of the Ural Mountains, would killing him be murder?
 
  • #27
Why can't we define murder as the destruction of a being that has an understanding of, and active participation in, the system of morals?
 
  • #28
Mk said:
Why can't we define murder as the destruction of a being that has an understanding of, and active participation in, the system of morals?

We'd have to have a judge rule in our favor.

Murder, today, is defined as a human killing a human. Killing a Neanderthal anywhere on Earth is murder because the Neanderthal is part of the human species.

Killing a human who exhibits no moral conscience and has no active participation in the system of morals... is still murder. It is only by the order of a judge that murder suddenly becomes a "justifiable execution", or, of course, during a war declared by a head of state.

So, it's starting to look like discombobulating a robot would turn out to be "destruction of property", regardless of how much we think the robot empathizes, realizes or sympathizes with human sentiment.
 
  • #29
The definition of murder would have to be redefined in a social system where sentient beings from multiple species interact. In a fictional world where one species kills another without consequence, it is unlikely that there would be any peace between the two species. Some sort of lawful mediation is required, or the complete destruction or enslavement of at least one species. The likely result will be somewhere between what is easy and what is profitable. I doubt morals would have much to do with it at this scale.

Are there differences in what level of consciousness/sentience a species has? Is a human more conscious than an ape? Is a robot with a greater intelligence than our own more conscious than ourselves? Or is consciousness measured more by morality than intelligence? Are the moral principles of two species compared by the destructive potential of their weapons?

I think if these robots were made to act human it would go a long way towards us regarding them as human. I think Asimov has it right. It's when they start developing personalities and disobeying commands that we would consider them sentient. When one member of a foreign species has the ability to resist their own destruction and enslavement and has an intelligence and moral system compatible with our own society, then it would be murder to kill any member of that species.

I'm just typing out loud here. The words we have don't seem to satisfy the criteria.
 
  • #30
sneez said:
How do you measure the level of consciousness? And what is the threshold to start caring?

My depiction of the level of consciousness would be the amount of independent thinking that the robot or being can do. Can it feel pain if I insulted it? And also how it relates and reacts to its surroundings.
 
  • #31
WhatIf...? said:
Can it feel pain if I insulted it?

How do you insult a robot? Call it a bucket of bolts? It's only going to agree with you, with a big :smile:
 
  • #32
baywax said:
Killing a Neanderthal anywhere on Earth is murder because the Neanderthal is part of the human species.
No, they are not.
Homo sapiens and Homo neanderthalensis are merely part of the same genus. Legally, they are no more human than chimps.
 
  • #33
The key to answering this question lies in the fact that laws and ethics evolve alongside the advances in the experiences that require them.

There was no such thing as
- musical copyright infringement before the days of recordable music
- credit card fraud before the days of credit cards
- cyber bullying before the days of the internet
 
  • #34
DaveC426913 said:
The key to answering this question lies in the fact that laws and ethics evolve alongside the advances in the experiences that require them.

There was no such thing as
- musical copyright infringement before the days of recordable music
- credit card fraud before the days of credit cards
- cyber bullying before the days of the internet

Agreed. There is no way to know what laws will look like and who they'll apply to in 1500 years.

I would think that the term "murder" and the punishments for murder would apply to a person killing a Neanderthal, or another primate, or even to the destruction of a whale, before they were applied to a man-made object like a robot.
 
  • #35
Back in history, many occasions of homicide were accepted and considered not murder but natural justice. Due to the human need to maintain an increasingly complex civilization, more laws were established and the right to commit homicide was reduced to self-defense only.
My point is not whether it is moral or immoral to kill a conscious robot, but when human necessity requires the robot to live or die for the benefit of human civilization. That is how morality and law develop, and that is why most people still eat the meat of once conscious, sensitive, feeling animals.
 
