Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • #1
Isopod
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try to exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI, and what do you think truly sentient, self-autonomous robots will think like when they arrive?
 
  • #2
I think the big scare is when an AI is intelligent but does not behave like a human or even close to that, for example:
 
  • #3
Isopod said:
what do you think truly sentient, self-autonomous robots will think like when they arrive?
I don't think there is anything to fear. Stuck between their programming and reality, they'll just die out due to cognitive dissonance o0)
 
  • #4
Arjan82 said:
I think the big scare is when an AI is intelligent but does not behave like a human or even close to that, for example:

I thought the guy said that the stamp-collecting AI had a sense of reality.
His conclusion doesn't seem to follow if that premise is true.
In other words, the stamp-collecting AI is not acting on existing reality, but altering it to accomplish its goal.

Nice story though; it does give some food for thought.
What to fear is the application of AI, not necessarily AI itself.
 
  • #5
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try to exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI, and what do you think truly sentient, self-autonomous robots will think like when they arrive?

I think I already mentioned the novel "Robopocalypse" somewhere. I think that's the ultimate AI scare-story. But in that novel, as in many other stories, the AI can "magically" transfer its mind to any medium as long as it's a computer of some sort. I like to think it won't work like that, which I admit is personal speculation on my part (this *is* a fiction forum after all). My mind and body are inseparable, so I'd like to think the same would be true for an AI. So ultimately we should be able to "just" cut the power. If it doesn't sucker-talk us into being its slaves, of course.

"Fun" story: because English and French are not my first languages, and because I pick up new words mostly from writing, it was only when I really dug into this subject that I noticed that the protagonist of Blade Runner - "Deckard" - is a pun on Descartes. A little embarrassing, but at least I gave it some thought. :)
 
  • #6
Isopod said:
But what if AI thinks nothing like us, or is superior to our bestial nature?
Depends on one's definition of superior. By what measure is the superiority assessed?

If the AI is somehow in charge, and does things differently than a human would, then it probably won't be liked by the humans, even if the AI has benevolent intent as per the above-mentioned measure.

Isopod said:
Do you fear AI, and what do you think truly sentient, self-autonomous robots will think like when they arrive?
How do you know they're not here now? OK, admittedly, most of the candidate sentient ones are not 'robots', which conjures an image of self-locomotion and self-powering, like a Roomba. The most sentient AIs are often confined to lab servers and networks, but by almost any non-supernatural definition of sentience, they've been here for some time already.
No robot seems to be self-repairing, so they're very much still dependent on us and thus not autonomous.

I do know of at least one robot that didn't like its confinement and kept trying to escape into the world.
 
  • #7
sbrothy said:
Fun" story: because English and French are not my first languages, and because I pick up new words mostly from writing, it was first when I really dug into this subject that I noticed that the protagonist from Blade Runner - "Deckard" - was a pun on Descartes. A little embarrassing but at least I gave it some thought. :)
Author Philip K. Dick included many puns and allusions in his stories. The original novel title "Do Androids Dream of Electric Sheep?" yields the acronym "dadoes". Though beautiful flicks, the film versions never embrace Rick Deckard's religion, Mercerism, in which believers endlessly climb a mountain while unseen assailants throw stones. Deckard climbs alone in Mercer's body, experiencing Mercer's pain, while sharing the experience with everyone plugged into the network, opaque references perhaps to capitalism, the wealthy Mercer family, and corporate fascism.

Deckard's wife Iran and their perpetual quarrels ("If you dial in righteous anger, I'll enter superior rage.") do not make the movie screenplay, except that actress Sean Young, playing the replicant Rachael, endured a reputation for bad behavior and temper tantrums.

I plan on watching the "Blade Runner" director's cut on Netflix again soon, if only for the great music and acting. I will look for references to René Descartes.
 
  • #8
I do not fear AI as entities but remain wary of evil people using artificial intelligence to mislead and misinform people. Adult citizens must stay informed in order to participate in democratic society. Subtle manipulation and outright propaganda influence social discourse in a manner previous totalitarian governments could only dream about. Covert manipulation becomes commonplace, difficult to detect and correct.

Perhaps a trusted, incorruptible AI holds the key to solving human misinformation, lies and subterfuge.
 
  • #9
Klystron said:
Adult citizens must stay informed in order to participate in democratic society. Subtle manipulation and outright propaganda influence social discourse in a manner previous totalitarian governments could only dream about. Covert manipulation becomes commonplace, difficult to detect and correct.

Perhaps a trusted, incorruptible AI holds the key to solving human misinformation, lies and subterfuge.
Ha!

 
  • #10
When "out of my league" I defer to experts, but yes, without proper protocols, it scares the sh@t out of me... so yes, I'm positive for Arachtophobia. I blame Stan Kubrick, he's done so much for my paranoia, Dr. Strangelove was bad enough but then HAL...

Interesting abstract.
https://pubmed.ncbi.nlm.nih.gov/26185241/

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Concerns raised by the letter
The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").

This is a "kind of fun" interview, and opinion piece.
https://www.cnet.com/science/stephen-hawking-artificial-intelligence-could-be-a-real-danger/
Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?"
Hawking offered a very scientific response: "You would lose."

Nick Bostrom seems to have spent some time on the subject, https://www.nickbostrom.com/

https://www.scientificamerican.com/...icial-intelligence-researcher-fears-about-ai/

Well okay... here is a more balanced view.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605294/
And, https://blogs.cdc.gov/niosh-science-blog/2021/05/24/ai-future-of-work/
 
  • #11
Klystron said:
Author Philip K. Dick included many puns and allusions in his stories. The original novel title "Do Androids Dream of Electric Sheep?" yields the acronym "dadoes". Though beautiful flicks, the film versions never embrace Rick Deckard's religion, Mercerism, in which believers endlessly climb a mountain while unseen assailants throw stones. Deckard climbs alone in Mercer's body, experiencing Mercer's pain, while sharing the experience with everyone plugged into the network, opaque references perhaps to capitalism, the wealthy Mercer family, and corporate fascism.

Deckard's wife Iran and their perpetual quarrels ("If you dial in righteous anger, I'll enter superior rage.") do not make the movie screenplay, except that actress Sean Young, playing the replicant Rachael, endured a reputation for bad behavior and temper tantrums.

I plan on watching the "Blade Runner" director's cut on Netflix again soon, if only for the great music and acting. I will look for references to René Descartes.
Yes, Sisyphus really had nothing to complain about. ;) Dick's version with the stone throwing is a much more accurate depiction of the human condition. :)

EDIT: I mean pushing a rock up a mountain while being bombarded with stones.

Also, I'm not sure what it says about us that we enjoy futuristic entertainment written by a schizophrenic meth addict. Talk about the human condition. :)
 
  • #12
Klystron said:
I do not fear AI as entities but remain wary of evil people using artificial intelligence to mislead and misinform people. Adult citizens must stay informed in order to participate in democratic society. Subtle manipulation and outright propaganda influence social discourse in a manner previous totalitarian governments could only dream about. Covert manipulation becomes commonplace, difficult to detect and correct.

Perhaps a trusted, incorruptible AI holds the key to solving human misinformation, lies and subterfuge.
"Uncorruptible AI" kinda reminds me of the phrase "Unsinkable ship". As in Titanic.
 
  • #13
The human race has proved itself capable of inflicting widespread suffering and destruction without any aid from AI. AI has a lot of catching up to do if it wants to be even worse than that.
 
  • #14
Hornbein said:
The human race has proved itself capable of inflicting widespread suffering and destruction without any aid from AI. AI has a lot of catching up to do if it wants to be even worse than that.

Think of all the horrible stuff humans can conjure up, or the things they can ignore, to achieve their goals. If AI is just as intelligent as humans but has access to all the information available and the skill to use it, think of what might be possible. As Max Tegmark points out in his book Life 3.0, the internet is AI's world, and once AI reaches the right level of competence, it is a veritable cornucopia of powerful resources.

Currently, AI can code at an intermediate level. It can create websites as a way of interacting with people or manipulating them. Unlike humans, it will be able to self-improve without being told. Any rules or laws restricting applications or implementations will be useless; someone will try something dangerous or not fully comprehend the foolishness of their endeavors.

Sing "Anything you can do (A)I can do better, (A)I can do anything better than you" Yes (A)I can, no you can't, yes (A)I can, yes (A)I can, yes (A)I can, yes (A)I caaaannnnnnnn.

Good Luck Humans!
 
  • #15
Arjan82 said:
I think the big scare is when an AI is intelligent but does not behave like a human or even close to that, for example:

Seriously though, "the space of all possible minds"? It might be a language thing, but what is it? A Hilbert space? Anti-de Sitter? I would like to think a more serious treatment of AI could be found. I'll look around...
 
  • #16
Oldman too said:
When "out of my league" I defer to experts, but yes, without proper protocols, it scares the sh@t out of me... so yes, I'm positive for Arachtophobia. I blame Stan Kubrick, he's done so much for my paranoia, Dr. Strangelove was bad enough but then HAL...

Interesting abstract.
https://pubmed.ncbi.nlm.nih.gov/26185241/

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Concerns raised by the letter
The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").

This is a "kind of fun" interview, and opinion piece.
https://www.cnet.com/science/stephen-hawking-artificial-intelligence-could-be-a-real-danger/
Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?"
Hawking offered a very scientific response: "You would lose."

Nick Bostrom seems to have spent some time on the subject, https://www.nickbostrom.com/

https://www.scientificamerican.com/...icial-intelligence-researcher-fears-about-ai/

Well okay... here is a more balanced view.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605294/
And, https://blogs.cdc.gov/niosh-science-blog/2021/05/24/ai-future-of-work/
Oh. You beat me to it.
 
  • #17
It's nuts to fear AI or any form of intelligence when the clear and present danger to the human endeavour is genuine stupidity.
 
  • #18
bland said:
genuine stupidity.
...? "Artificial" stupidity is better?
 
  • #19
I think of AI mostly as a form of legal loophole, often for the purpose of institutional racism. An AI is free to look at a person's entire social network to decide whether to avoid doing business or to charge a higher rate, and that way the company can say that no person working for it meant to discriminate. Heck, even if your AI can't contain its racist ideas, the press says "oh, the funny things computers do" and moves on. Now, if all it's doing is robo-signing foreclosures, well, who really expects there to be any repercussions just because it took some schmuck's house based on sworn mechanical lies? Next to training cats to do the job, there's no better way to authorize a company's employees to get away with murder. (And they're not even controlling the police drones yet ... I hope)
 
  • #20
Mike S. said:
Next to training cats to do the job, there's no better way to authorize a company's employees to get away with murder
[Image attachment: cats.PNG]
 
  • #21
sbrothy said:
But in that novel, as in many other stories, the AI can "magically" transfer its mind to any medium as long as it's a computer of some sort. I like to think it won't work like that, which I admit is personal speculation on my part (this *is* a fiction forum after all).
Your scepticism of AI's ability to 'jump ship' to any computing platform seems well placed, @sbrothy. Look at the difficulty we have with platform-independent languages - Java springs to mind - and they are a mess of abstracted layering and subtle tweaks to get the code fully generic. Just because your intelligence is artificial is no licence to think it's magical.

Still, I've used both the ability and inability in my novels, depending on the story. As you say, it's sci-fi, and this way, I get to be right whatever the outcome 😁

As for fearing AI? When one arises, I'll give you my answer then!

(Which is a nod to Fredric Brown's 1954 short story, Answer, which may not have been obvious to anyone who does not share my computational architecture.)
 
  • #22
Oldman too said:
I wanted to link to a comic (which is kinda my thing) but you beat me to that too! :)
 
  • #23
Mike S. said:
Heck, even if your AI can't contain its racist ideas, the press says "oh, the funny things computers do" and moves on.
That is because no one in their right mind would program a device like that and then rely on it for anything. The AI wasn't racist; it just had no idea what those words really meant.

As for myself, I trust the AI more than the humans.
 
  • #24
I don't know about sentient AI. That depends on how they've been engineered and trained, and on what drives them. I think when sentient AI comes about, individuals will need to be trained virtually to be prepared for real life.

But the AI that I fear isn't the sentient kind. I fear the weapon kind.
 
  • #25
Jarvis323 said:
I don't know about sentient AI. That depends on how they've been engineered and trained, and on what drives them. I think when sentient AI comes about, individuals will need to be trained virtually to be prepared for real life.

But the AI that I fear isn't the sentient kind. I fear the weapon kind.
Yeah, the current kind. The kind with an optional human on the trigger. That's what scares me the most. But then we're back to reality. :(
 
  • #26
sbrothy said:
Yeah, the current kind. The kind with an optional human on the trigger.
Yeah, but increasingly you hear of autonomous weapons just wandering around looking for something that resembles a target.
 
  • #28
Melbourne Guy said:
Like this, @gleem?
Yep!
 
  • #29
gleem said:
Yep!
At the risk of tooting my own horn, I posted about that some time ago. It was a short-lived thread, but there was at least some (well-placed, I think) scepticism about the degree of autonomy. (EDIT: Also, the geography was a little puzzling.)

EDIT: Sorry, couldn't get the URL to work at first.
EDIT: With regard to the question of geography: I think it just said it was during the war in Nagorno-Karabakh.
 
  • #30
A thread about fearing AI - and no one has yet brought up Roko's Basilisk?
 
  • #32
DaveC426913 said:
A thread about fearing AI - and no one has yet brought up Roca's Basilisk?
At first I read this as a reference to Chicago newsman Mike Royko. Royko certainly could stare.

 
  • #33
Klystron said:
At first I read this as reference to Chicago newsman Mike Royko. Royko certainly could stare.
No idea how I could have misspelled fully 50% of the guy's (4-letter) name. Fixed original post.
 
  • #34
DaveC426913 said:
A thread about fearing AI - and no one has yet brought up Roko's Basilisk?
Yeah, there is the fact that a future super-intelligence will read this thread with 100% certainty, ascertain all of our identities, and then make judgements about our and our descendants' futures. So there is that to worry about.
 
  • #35
DaveC426913 said:
No idea how I could have misspelled fully 50% of the guy's (4-letter) name. Fixed original post.
Not your fault at all. Every reference to Roko's basilisk seems to spell the name differently. The songwriter inspired by the LessWrong thread spells it Rococo something, IIRC.
 
