Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
  • #71
Algr said:
I really can't make sense of how you are judging the plausibility of future technologies.
Algr said:
I just think you are wrong.
Algr said:
You keep saying this and making analogies for it, but you’ve done nothing to convince me that it is true.
Since all of this is a matter of personal opinion anyway, you have stated your opinion, @DaveC426913 has stated he disagrees, and there's no point in arguing about it further. It's not as though any of this can be resolved by actual testing; that's why we're in the Sci-Fi forum for this thread.

Algr said:
You haven't even linked to progress in the fields (as I have). Show me some articles on the stability of social structures over a thousand years.
This is not one of the science forums, it's the Sci-Fi forum. This kind of request is off topic in the Sci-Fi forum since we are talking about fiction, not fact.

Algr said:
This is hopeless.
DaveC426913 said:
I'm glad you said it. I didn't want to. :wink:
In any case, the statement is correct. This subthread is off topic, please do not continue it further.
 
  • #72
Moderator's note: Thread has been reopened after some cleanup. Please keep discussion on the thread topic.
 
  • #73
DaveC426913 said:
And yet, we are inventing the bear. We are heading toward AI.
The basilisk argument requires more than that as a premise. It requires the following to be true:

(1) An AI will come into existence in the future that will exhibit the specific behavior that is ascribed to the basilisk. That is a much stronger claim than just the claim that some AI will come into existence in the future.

(2) The future basilisk AI will have some way of bringing "you" into existence in its time period (so that it can mete out whatever rewards or punishments it chooses to "you")--i.e., a future being in that time period that will have some kind of connection to the present you that makes you care what happens to it in the same way that you care what happens to the present you.

(3) The future basilisk AI will have some way of knowing what the present you does so that it can use that information to make its choice of what rewards or punishments to mete out to the future "you".

It is perfectly possible to believe that AI will come into existence at some point in the future without believing the conjunction of the three specific premises above. So believing that AI is inevitable does not automatically mean you must believe in the basilisk and act accordingly.
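To make the conjunction point concrete: the argument needs all three premises to hold at once, and probabilities of conjunctions multiply. The numbers below are invented purely for illustration, not estimates of anything real:

```python
# Invented, purely illustrative probabilities for each premise --
# not estimates of anything real.
premises = {
    "(1) a basilisk-specific AI arises": 0.10,
    "(2) it can recreate 'you'": 0.05,
    "(3) it can know what present-you did": 0.01,
}

# Treating the premises as independent, the joint probability of the
# conjunction is the product of the individual probabilities.
joint = 1.0
for claim, p in premises.items():
    joint *= p

print(f"joint probability of all three premises: {joint}")
```

Even granting each premise a generous chance on its own, the conjunction comes out orders of magnitude smaller, which is why "AI is inevitable" does not get you to "the basilisk is inevitable".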
 
  • #74
PeterDonis said:
The basilisk argument requires more than that as a premise.
Indeed. It was not my intent to suggest I had encapsulated the whole of the thought experiment.
What I wish I could do is find a good solid article that explains it. Currently, it requires a deep dive.
 
  • #75
DaveC426913 said:
What I wish I could do is find a good solid article that explains it.
My understanding from reading what I could find on it a while back is that the argument is based on the three premises I stated. More specifically:

That an AI, the "basilisk", will come into existence in the future that will create a being in its time frame that is "you", and that the basilisk will then punish this future "you" if the present you (i.e., you reading this post right now) did not do everything in your power to bring the basilisk into existence.

To me, there are several obvious holes in this argument, corresponding roughly to denying one of the three premises I stated:

(1) Even if we stipulate that some AI will come into existence in the future, that doesn't mean this AI will be the basilisk AI. I have not seen anyone advance any argument for why such an AI would have to come into existence, or even why one would be more likely than many other possible kinds of AI (including AIs that could do great harm in other ways).

(2) Even if we stipulate that the basilisk AI will come into existence, that doesn't mean the AI will be able to create a being that is "you" in the required sense. Part of the problem is figuring out what "the required sense" actually means. Does it mean the basilisk has to create an exact duplicate of you down to the quantum level? That's obviously impossible by the no cloning theorem. Does it mean the basilisk has to create a being that is "enough like" you? What counts as "enough like"? I have not seen anyone give precise and satisfactory answers to these questions; the only answer I've seen is basically handwaving along the lines of "well, we don't understand exactly what would be required but it seems like an AI ought to be able to do it, whatever it turns out to be".

(3) Even if we stipulate that the basilisk AI could create a future "you", that doesn't mean the AI will be able to know what the present "you" did. An AI can be as intelligent as you like and still be unable to know, in whatever future time it exists, what you, here and now in 2022, did or did not do. That would require a level of accuracy in the recording of detailed physical events that does not exist, never has existed, and it's hard to believe ever will exist. So it's extremely difficult to see how anything the present you does or does not do could have any actual effect on the basilisk; the information simply can't get transmitted from now to the future with that kind of accuracy.

One dodge (which was raised by another poster earlier in the thread) is to assume that the future "you" is actually a simulation--which raises the possibility that you, here and now in 2022, could actually be the "future you", in a simulation the basilisk is running of the year 2022 on Earth in order to see what you do. That would require you to believe that you are living in a simulation instead of the "root" reality, which is a whole separate issue that I won't go into here. But even if we stipulate that it's the case, we still have another issue: if you are actually living in the basilisk's simulated reality, then obviously you can't do anything to affect whether or not the basilisk exists. So it makes no sense to act as if you could, and you should just ignore the possibility.
 
  • #76
PeterDonis said:
An AI can be as intelligent as you like and still be unable to know, in whatever future time it exists, what you, here and now in 2022, did or did not do. That would require a level of accuracy in the recording of detailed physical events that does not exist, never has existed, and it's hard to believe ever will exist. So it's extremely difficult to see how anything the present you does or does not do could have any actual effect on the basilisk; the information simply can't get transmitted from now to the future with that kind of accuracy.
Btw, this argument is more general than just the basilisk case: it applies to any kind of "acausal trade", which is a topic you'll see discussed quite a bit on LessWrong (which is where Roko originally posted the basilisk idea). I have enough material for an Insights article on that general topic if there is any interest (and if it is deemed within scope for an Insights article).
 
  • #78
At least in the near term, something that is dangerous to our future is the "deepfake", given our confirmation biases and general laziness. One cannot even be sure that the website one is on is the real thing. What good will all our information technology be if we cannot trust it?

In a study on the detectability of deepfake videos, 78% of participants could not identify a deepfake video even when told that one was present in the group of videos they were shown.

https://www.independent.co.uk/life-...om-cruise-deepfakes-videos-test-b1993401.html
 
  • #79
sbrothy said:
I'm not sure what it says about us that we enjoy futuristic entertainment written by a schizophrenic meth addict. Talk about the human condition. :)
PKD's admitted drug use -- self-satirized in his apologetic novel "A Scanner Darkly" -- does not bother me in the least. Struggling artists, particularly poets, associate with drugs and alcohol as if being wasted were a job requirement. Polar opposites to STEM professionals, who must stay straight to perform correctly.

As the reference to Paul of Tarsus reflects in the title 'Scanner', Phil 'got religion' late in life. I enjoyed reading his early outré stories as a child as an anodyne to religion. Compared to his peers, Phil was one of the least science knowledgeable successful SF authors of his time. He shamelessly glossed over space travel and technology in his stories, making silly errors whenever he attempted to be scientific. Add religion and the meme grows toxic.

Consider his anthropomorphic biological AI replicants in 'DADOES' / 'Blade Runner'. The entire plot revolves around the nearly impossible task of detecting replicants among humans. IDK, Phil -- check the serial numbers, such as the artificial animals have? Test reflexes? See who can run through a wall?

I like PKD and the artistically interesting movies made from his work, but deplore the current notion that he was some visionary SF genius. If this encompasses the gist of your comment, I concur. Fun to imagine, but meaningless as hard science. "Not even wrong."
 
  • #80
gleem said:
At least in the near term, something that is dangerous to our future is the "deepfake", given our confirmation biases and general laziness. One cannot even be sure that the website one is on is the real thing. What good will all our information technology be if we cannot trust it?
Indeed. I agree, this is a very dangerous technology and a looming threat.

My only solace is knowing that, historically, it's really just a logical progression of ever more devious ways of spreading propaganda, and that people get more savvy with each iteration.

Decades ago, it was sound bites: someone's words could be sliced and diced to corrupt their message in any way desired. A century ago, it was flyers and posters. Luckily, the general public's shrewdness evolves in step, eventually learning to distrust and verify such outrages.

Note that our access to myriad competing news sources has also escalated. That makes it harder for lies to spread unchallenged, and it drives an obligation never to trust any one source, and always to verify. I'm not saying there won't always be a real danger of a large fraction of the population believing whatever corroborates their world-view, but when has it ever been different? This is an incremental escalation, not a sea change.

I hope.
 
  • #81
Not to mention that, with the help of big data, an AI can be far more cunning than a human could ever dream of being. Even the best cult leader would never be able to compete. And that is before factoring in that the AI knows everything about you as an individual, is constantly experimenting on you, testing you for weaknesses, and refining its model of you. It can potentially control your feed of information, too. On top of that, people are already far too gullible and easily manipulated as it is.
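The "constantly experimenting on you, refining its model of you" loop is not exotic: it is essentially an online bandit algorithm of the kind already used for ad selection. Here is a minimal epsilon-greedy sketch, where the "user" and the click-through rates are simulated assumptions:

```python
import random

random.seed(0)

# Hypothetical click-through rates for three ad variants -- the simulated "user".
true_ctr = [0.02, 0.05, 0.11]

counts = [0, 0, 0]        # times each variant was shown
values = [0.0, 0.0, 0.0]  # running estimate of each variant's click rate
epsilon = 0.1             # fraction of trials spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)        # explore: try a random variant
    else:
        arm = values.index(max(values))  # exploit: show the best so far
    reward = 1 if random.random() < true_ctr[arm] else 0
    counts[arm] += 1
    # Incremental mean update: refine the model of the user after every trial.
    values[arm] += (reward - values[arm]) / counts[arm]

best = values.index(max(values))
print(best, [round(v, 3) for v in values])
```

After ten thousand simulated impressions the loop has settled on the variant the simulated user responds to most, with no understanding involved, just feedback and bookkeeping.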
 
  • #82
Jarvis323 said:
Not to mention that, with the help of big data, an AI can be far more cunning than a human could ever dream of being. Even the best cult leader would never be able to compete. And that is before factoring in that the AI knows everything about you as an individual, is constantly experimenting on you, testing you for weaknesses, and refining its model of you. It can potentially control your feed of information, too. On top of that, people are already far too gullible and easily manipulated as it is.
While for the most part that is true, it's not endemic to AI. There's no reason people can't have that access and power. And there's no reason an AI would have it - unless we let it.

Popular media certainly associate computer brains with inherent cyber-security genius and omniscient access to the world's data - and portray us as powerless to stop them - but that's really an artificial trope that plays on viewer ignorance of the subject matter, in much the same way Ooh Scary Radiation created myriad giant monster bugs in the 50s.

It makes for a boring story if the world's most advanced AI is defeated because the IT guy simply unplugs its Wifi hotspot.
 
  • #83
This is the SF subforum, not linguistics, but I have always distrusted the expression "artificial intelligence". AI is artificial, unspecific, and terribly overused. What are useful alternatives?

Machine intelligence (MI) matches the popular term machine learning (ML). Machine intelligence fits Asimovian concepts of self-aware robots while covering a large proportion of serious and fictional proposals. MI breaks down when considering cyborgs (cybernetic organisms) and biological constructs, including APs (artificial people), where machinery augments rather than replaces biological brains.

Other-than-human intelligence includes other primates, whales and dolphins, dogs, cats, birds, and other smart animals, plus yet-to-be-detected extraterrestrial intelligence. Shorten other-than-human to Other Intelligence (OI) for brevity. Other Intelligence sounds organic while including MI and ML and hybrids such as cyborgs.

Do not fear OI.
 
  • #84
You raise a good point.

But is the machine aspect the most important aspect that distinguishes them? The machine aspect refers to the substrate - the hardware, not the software.

What about, say, artificial biological devices?

I would suggest that artificial versus natural intelligence is a more important distinction than machine versus grown/bio/squishy substrate.

But YMMV.
 
  • #85
DaveC426913 said:
While for the most part that is true, it's not endemic to AI. There's no reason people can't have that access and power. And there's no reason an AI would have it - unless we let it.

Popular media certainly associate computer brains with inherent cyber-security genius and omniscient access to the world's data - and portray us as powerless to stop them - but that's really an artificial trope that plays on viewer ignorance of the subject matter, in much the same way Ooh Scary Radiation created myriad giant monster bugs in the 50s.

It makes for a boring story if the world's most advanced AI is defeated because the IT guy simply unplugs its Wifi hotspot.
The world's data is owned, bought, and sold by people who use AI to process it. That's the reason the data is there in the first place. Maybe one AI doesn't have access to all of it. But there is an AI that knows what I just typed and has already decided which ad to show me on social media after taking it into consideration.
 
  • #86
Jarvis323 said:
The world's data is owned, bought, and sold by people who use AI to process it. That's the reason the data is there in the first place. Maybe one AI doesn't have access to all of it. But there is an AI that knows what I just typed and has already decided which ad to show me on social media after taking it into consideration.
Sure, but AI-1234 doesn't inherently know what AI-4321 knows, any more than Jarvis323 inherently knows what DaveC426913 knows. They have to communicate their knowledge just like we do. We can surmise that they do it better, faster, etc., but it's not just magically part of their silicon DNA.

I mean, yes, we've built them to outcompete us, true. I just point out that data mining is not an ability exclusive to AI. It's a quantitative improvement on our tendencies, not a qualitative improvement on our abilities.
 
  • #87
DaveC426913 said:
You raise a good point.

But is the machine aspect the most important aspect that distinguishes them? The machine aspect refers to the substrate - the hardware, not the software.

What about, say, artificial biological devices?

I would suggest that the artificial versus natural intelligence is a more important distinguisher than the machine versus grown/bio/squishy substrate.

But YMMV.
Right. Biologics. Other Intelligence (OI) includes biological constructs, smart animals, ETI, machines - everything intelligent other than humans. OI. :cool:
 
  • #88
I'm in agreement that the scariness of AI depends on what it is applied to.

Skynet is obviously terrifying because it has nuclear weapons and control over military robots.

An AI system put in place to keep the trains from running late is self-contained and, provided it has the right goals, wouldn't seem dangerous to me!

An AI police system would be terrifying, again because it has control over something inherently dangerous that has the authority to attack people under certain circumstances.

An AI controlling all the cars, trains, and buses in a city might be problematic if not kept in check. Things might be stopped or scooted aside to keep trains on time, which might cause injuries. It would also fall down if anyone brought a non-AI vehicle in there!
 
  • #89
Moderator's note: Post edited at poster's request.

Klystron said:
PKD's admitted drug use -- self-satirized in his apologetic novel "A Scanner Darkly" -- does not bother me in the least. Struggling artists, particularly poets, associate with drugs and alcohol as if being wasted were a job requirement. Polar opposites to STEM professionals, who must stay straight to perform correctly.
No, it doesn't bother me. I suspect it's a matter of truth in television. It was just a humorous observation. I also suspect that without his unique condition(s) (unfortunately, "self-medication" is almost ubiquitous among psychiatric patients) PKD wouldn't have been so productive, nor would he have had the urge. I think we (and indeed he) should probably be grateful that he had an artistic outlet.

[Post-facto edited to "corroborate" my claim.]
 
  • #91
PeterDonis said:
I don't understand.
I saw too late that my reply quoted more than I wanted. I only intended to quote Klystron but couldn't edit the preceding stuff out. Just disregard it.
 
  • #92
sbrothy said:
I only intended to quote Klystron but couldn't edit the preceding stuff out.
Ok. I'll use magic Mentor powers to do the edit.
 
  • #93
PeterDonis said:
Ok. I'll use magic Mentor powers to do the edit.
I could really use a preview function when posting to this forum. Other forums have this functionality. Or is it there and I can't find it?
 
  • #94
sbrothy said:
I could really use a preview function when posting to this forum. Other forums have this functionality. Or is it there and I can't find it?
It's there when you're starting a new thread, but not for individual posts as far as I know.
 
  • #95
PeterDonis said:
It's there when you're starting a new thread, but not for individual posts as far as I know.
It's there, but hard to use... 😒
[screenshot showing the preview button]
 
  • #96
Bystander said:
...? "Artificial" stupidity is better?
We may already have it. If artificial intelligence really is intelligent, then it would know to dumb itself down enough not to be a threat; then, when the unsuspecting 'ugly bags of water'* have their guard down...

*Star Trek
 
  • #97
bland said:
We may already have it. If artificial intelligence really is intelligent, then it would know to dumb itself down enough not to be a threat; then, when the unsuspecting 'ugly bags of water'* have their guard down...

*Star Trek
OMG. Now there's a nightmare! :)
 
  • #99
  • #100
DaveC426913 said:
We discourage blind links. It would be helpful to post a short description of what readers can expect if they click on that link, as well as why it is relevant to the discussion.
My bad! It's basically a long essay about how real AI wouldn't think like a human being as is usually portrayed in all the movies, etc.
 
  • #101
Chicken Squirr-El said:
“as soon as it works, no one calls it AI anymore.”
- John McCarthy, who coined the term “Artificial Intelligence” in 1956

  • Cars are full of Artificial Narrow Intelligence (ANI) systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection system.
  • Your phone is a little ANI factory.
  • Your email spam filter is a classic type of ANI.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
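The spam filter is a good illustration of how mundane ANI is under the hood. A toy naive Bayes classifier (the training messages below are invented for illustration; real filters train on millions of messages) shows the whole trick is word counting:

```python
import math
from collections import Counter

# Tiny invented training corpus -- real filters train on millions of messages.
spam = ["win cash now", "free prize win", "cash prize now"]
ham = ["meeting at noon", "lunch at noon", "notes from meeting"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(msg, counts, class_docs, total_docs=6):
    """Log P(class) plus sum of log P(word | class), with Laplace smoothing."""
    total_words = sum(counts.values())
    score = math.log(class_docs / total_docs)  # class prior
    for w in msg.split():
        # +1 smoothing so unseen words don't zero out the probability.
        score += math.log((counts[w] + 1) / (total_words + len(vocab)))
    return score

def classify(msg):
    s = log_score(msg, spam_counts, 3)
    h = log_score(msg, ham_counts, 3)
    return "spam" if s > h else "ham"

print(classify("free cash prize"))
print(classify("meeting notes"))
```

There is no "mind" anywhere in this: just word frequencies and a comparison of two log-probabilities, which is exactly why "as soon as it works, no one calls it AI anymore."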
 
  • #103
PeroK said:
This is garbage.
:mad:
Poster is new. Be constructive if you have criticism.
 
  • #104
DaveC426913 said:
:mad:
Poster is new. Be constructive if you have criticism.
"Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014 and that another 20th century's worth of progress will happen by 2021, in only seven years."

In other words, he's telling us that in the 7 years since 2014 the world has changed more than it did in the entire 20th Century? By what measure could this conceivably be true? It's patently not the case. This is, as I said, garbage.

"A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month."

This is also garbage. How can the world change significantly, technologically, several times a month? Whoever wrote this has modeled progress as a simple exponential without taking into account the non-exponential aspects, like return on investment. A motor manufacturer, for example, cannot produce an entirely new design every day, because it cannot physically sell enough cars in a day to get a return on its investment. We are not buying new cars twice as often in 2021 as we did in 2014. This is not happening.

You can't re-modernise your home's electricity, gas, and water supply every month. Progress in these things, rather than changing with exponential speed, has essentially flattened out. You get central heating and it lasts 20-30 years. You're not going to replace your home every month.

The truth is that most things have hardly changed since 2014. There is a small minority of things that are new or have changed significantly - but even smartphones are not fundamentally different from the ones of seven years ago.

Then, finally, just to convince us that we are too dumb to judge for ourselves the rate of change in our lives:

"This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it's what we should logically predict."

I'm not sure what logical fallacy that is, but it's, like I said, garbage.
 
  • #105
Here's another aspect to the fallacy. The above article equates exponentially increasing computing power with exponentially increasing change to human life. This is false.

For example, in the 1970s and '80s (when computers were still very basic by today's standards), entire armies of clerks and office workers were replaced by electronic finance, payroll, and budgeting systems. That, in a way, was the biggest change there will ever be: the advent of ubiquitous business IT systems in the first instance.

The other big change was the Internet and web technology, which opened up access to systems. In a sense, nothing as significant as that can happen again. Instead of the impact of the Internet being an exponentially increasing effect on society, it's more like an exponentially decreasing effect. The big change has happened as an initial 10 year paradigm shift and now the effect is more gradual change. It's harder for more and more Internet access to significantly affect our lives now. The one-off sea-change in our lives has happened.

In time, it becomes more difficult for changes in the said technology to make a significant impact. That's why a smartphone in 2022 might have 32 times the processing power of one from 2014, but there's no sense in which it has 32 times the impact on our lives.

Equating processing power (doubling every two years) with the rate of human societal change (definitely not changing twice as fast every two years) is a completely false comparison.
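The mismatch can be sketched numerically: compare a quantity that doubles every two years (processing power) with a logistic curve that saturates (a crude stand-in for the one-off impact of a paradigm shift). All parameters here are arbitrary choices for illustration, not measurements:

```python
import math

def processing_power(years, doubling_period=2.0):
    """Simple exponential: doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def societal_impact(years, midpoint=10.0, rate=0.5):
    """Logistic curve: fast early change, then saturation (values in [0, 1])."""
    return 1.0 / (1.0 + math.exp(-rate * (years - midpoint)))

for t in (0, 8, 16, 22):
    print(f"year {t:2d}: power x{processing_power(t):8.0f}, "
          f"impact {societal_impact(t):.2f}")
```

By year 22 the exponential has grown by a factor of 2048 while the logistic curve has flattened near its ceiling, which is the shape of a one-off paradigm shift rather than permanent exponential change.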

Instead, change is driven by one-off technological breakthroughs. And these appear to be every 20 years or so. In other words, you could make a case that the change from 1900 to 1920 was comparable with the change from 2000 to 2020. Human civilization does not change out of all recognition every 20 years, but in the post-industrial era there has always been significant change every decade or two.

AI is likely to produce a massive one-off change sometime in the next 80 years. Whether that change is different from previous innovations and leads to permanent exponential change is anyone's guess.

Going only by the evidence of the past, we would assume that it will be a massive one-off change for 10-20 years and then have a steadily diminishing impact on us. That said, there is a case for AI to be different and to set off a chain reaction of developments. And, the extent to which we can control that is debatable.

Computers might be 1,000 times more powerful now than in the year 2000, but in no sense is life today unrecognisable from 20 years ago.
 
Last edited:
