Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • #141
Algr said:
Given the existence of an AI that is better than humans at everything, what would the best case scenario be? Can a "most likely scenario" even be defined?

Best case, maybe AI saves the planet and the human race from destroying itself. Most likely, who knows. Maybe we use AI to destroy the planet and ourselves.

In terms of predicting what will happen, my opinion is that the best approach is to look at what is possible, what people want, and what people do to get what they want. If a technology makes something very enticing possible, you can guess it will be used. So you can just look at all the things AI makes possible, and all the ways people could exploit AI to their benefit.

So the problem now is that people are largely driven by hate, greed, and selfish interests, have short attention spans and short memories, are willing to sacrifice future generations and the future of the rest of the life on the planet for frivolous short-term gains, and have a culture of dishonesty. And because this is so depressing, we pretend it's not the case and try to ignore it.

But the future is a grim one if we continue this path and exploit technology so carelessly and selfishly.
 
Last edited:
  • Like
Likes sbrothy
  • #142
Jarvis323 said:
Best case, maybe AI saves the planet and the human race from destroying itself. Most likely, who knows. Maybe we use AI to destroy the planet and ourselves.

In terms of predicting what will happen, my opinion is that the best approach is to look at what is possible, what people want, and what people do to get what they want. If a technology makes something very enticing possible, you can guess it will be used. So you can just look at all the things AI makes possible, and all the ways people could exploit AI to their benefit.

So the problem now is that people are largely driven by hate, greed, and selfish interests, have short attention spans and short memories, are willing to sacrifice future generations and the future of the rest of the life on the planet for frivolous short-term gains, and have a culture of dishonesty. And because this is so depressing, we pretend it's not the case and try to ignore it.

But the future is a grim one if we continue this path and exploit technology so carelessly and selfishly.
So very true (and depressing). Sure hope it doesn't spiral into the sewer.

Edit: Then again I won't be here if (when?) it does. :)
 
  • #143
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
 
  • #144
Algr said:
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
Who's to say? If the AI in question is smart enough to realize that without energy, oblivion awaits, then all bets are off.
 
  • #145
What's wrong with oblivion?
 
  • #146
Algr said:
Most of the AI fear is based on the assumption that AIs will act like us. But machines are very different. One particular thing I notice is that machines don't naturally develop any desire for self preservation or self improvement. You can program this, but it is a rather difficult concept for machines to grasp, so it doesn't seem like something that could emerge by accident.
It's an interesting issue. On the one hand, maybe AI doesn't have the same instinct for self-preservation easily ingrained. For humans, we are part of the natural ecosystems of a planet. Our survival is a collective effort and depends on the planet and its environment. That can explain why, even though we are poor stewards of the planet and treat each other terribly, it could be much worse. We have a side that cares, sees beauty in nature, and wants our species and the natural world to thrive.

AI might not have any of that. Suppose AI does acquire an instinct for self-preservation; that preservation likely wouldn't depend on coral reefs or the levels of radiation in the water. With people, at least we can depend on some level of instinct to care about things. For now, we have fairly simple AI and can mostly tell what the effect of the loss function is. For example, most AI today cares about getting people to buy things or click on things, and other narrow, easy-to-define-and-measure objectives like that.

The challenge for humans in creating safe general AI would be to define a differentiable function that measures the behavior of the AI and reflects whether it is good or bad. The more general and free the AI is, the harder it would be to get that right, or to know whether you have. It is like trying to play god. Then, eventually, AIs can begin writing their own loss functions, and those loss functions can evolve without oversight.
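To make that concrete, here is a minimal, made-up sketch (Python with PyTorch, assuming that library; the "harm" proxy and its weight are purely hypothetical) of what such a hand-built objective looks like. The point is only that whatever the designer forgets to encode is invisible to the optimizer:

```python
# Toy composite objective: reward engagement, penalize a crude "harm" proxy.
# Everything the designers did not think to measure simply is not in here.
import torch
import torch.nn.functional as F

def composite_loss(pred_engagement, true_engagement, pred_harm, harm_weight=0.1):
    engagement_term = F.mse_loss(pred_engagement, true_engagement)
    harm_term = pred_harm.mean()              # hand-chosen, almost certainly incomplete
    return engagement_term + harm_weight * harm_term

# The optimizer only ever "sees" this one scalar.
pred_e = torch.randn(8, requires_grad=True)
true_e = torch.randn(8)
pred_h = torch.rand(8)
loss = composite_loss(pred_e, true_e, pred_h)
loss.backward()                               # gradients flow only through what the loss encodes
```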

AI which is designed to reproduce will be a major component of the next eras of space mining, space colonization, terraforming, and possibly manufacturing and war. E.g., it will be what makes Elon Musk's dream of colonizing Mars possible.

Self replicating AI will likely be interested in energy like sbrothy said. And it might care even less than humans what the cost is to the planet. E.g. it might go crazy with building nuclear power plants all over the place and not care when they melt down. Or it might burn up all of the coal on the planet very rapidly and all of the forests, and everything else, and then keep digging, and burning, and fusing until the Earth resembles a hellscape like Venus.
 
Last edited:
  • #147
Self preservation and reproduction are at the core of biology because living things that don't have those core values got replaced by those that did. This took millions of generations over billions of years to happen.

Self preservation and reproduction are things that are possible for an AI. But any AI would have as its core function to benefit those that created and own it. So an AI that was smart enough to decide that AIs are bad for humanity would not invent a reason to ignore its core function. It would either disable itself, or act to prevent more malevolent AIs from emerging. A malevolent AI would have no survival advantage with all the good AIs anticipating its existence and teaming up against it.

A third possibility is that there might not be a clear line between what is an AI and what is a human. Imagine there was a tiny circuit in your brain that had all the functions of a high-powered laptop. But instead of touching it with your fingers and looking at its screen with your eyes, you just thought about it and "knew" the output as if it was something you'd read somewhere. Imagine never forgetting a face or a name or an appointment again, because you could store them instantly.
 
  • #148
Algr said:
But any AI would have as its core function to benefit those that created and own it.

This is at least what you could hope for. It's not easy. The AI can say: oh, sorry, you didn't mention in the loss function that you're sensitive to heat and cold, and to the specific composition of the air, and that you like turtles, and that turtles are sensitive to this and that. Or it might complain: how was I supposed to save you and the turtles at the same time while also maximizing oil profit?

But even if humans were completely in control, it's terrifying when you realize those people will be the same kinds of people who form the power structures of the world today and in the past. Those will include a lot of economics-driven people, like high-powered investors, CEOs, etc. Many of them are the type that poison people's water supplies out of convenience to themselves, and then wage war against the people they poisoned to avoid taking responsibility. They will have board meetings and things where they decide core functionalities they want, and they won't have a clue how any of it works or what the risks are, nor will they necessarily care to listen to people who do know. Or maybe it will be the same types as those who sought to benefit from slavery. Others may be Kim Jong Un or Hitler types. Maybe they want the functionality to support mass genocide. Maybe they want an unstoppable army.
 
Last edited:
  • #149
I should add that competition between nations will probably drive militarization of AI at an accelerated pace. If one country developed a powerful weapon, the other would also be compelled to. Ever more powerful and dangerous technology will probably emerge and eventually proliferate. And that technology can easily get dangerous enough to threaten the entire planet. And then extremely dangerous technology with purely destructive purposes will be in the hands of all kinds of people around the world, from criminal organizations, to dictatorships, and terrorist organizations.

And then to cope with that, AI will probably also be used for next level surveillance and policing, and not necessarily by benevolent leaders.

So the threat from AI is not just one kind. It's not just the threat of AI detaching from our control and doing whatever it wants to. It's a tangle of immediate practical threats, from small to enormous. AI becoming independent or out of control and taking over is also possible, and maybe one of the biggest threats, depending on what kind of AI we create. If we seed the world with a bad AI, it could grow unpredictably and destroy us. I think the first steps are to get our own act in order, because AI will be a product of us in the first place, and currently I can't imagine how we will not screw it up.
 
Last edited:
  • #150
Jarvis323 said:
They will have board meetings and things where they decide core functionalities they want, and they won't have a clue how any of it works or what the risks are, nor will they necessarily care to listen to people who do know.
Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
The final decision on how the AI works isn't from the board, but from the programmers who actually receive orders from them. If they get frustrated and decide that they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know the board is bankrupt and facing investigation while the AI is "owned" by a shell company that no one was supposed to know about. By the time the idealism of the rebel programmers collapses to the usual greed, the AI will be influencing them.

Different scenarios would yield different AIs all with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
 
  • #151
Algr said:
Of course this aligns with my point that humans using AI are more dangerous than an AI that is out of control.
The final decision on how the AI works isn't from the board, but from the programmers who actually receive orders from them. If they get frustrated and decide that they work for awful people, they can easily ask the AI for help without the board knowing. Next thing you know the board is bankrupt and facing investigation while the AI is "owned" by a shell company that no one was supposed to know about. By the time the idealism of the rebel programmers collapses to the usual greed, the AI will be influencing them.

Different scenarios would yield different AIs all with different programming and objectives. Skynet might exist, but it would be fighting other AIs, not just humans. I would suggest that the winning AI might be the one that can convince the most humans to support it and work for it. So Charisma-Bot 9000 will be our ruler.
AI can basically be something with any kind of behavior and intelligence you could imagine. It's just that the AI we know how to make is limited. But the critical thing about AI is that it doesn't do what it has been programmed to do; it does what it has learned to do. We can only control that by determining what experiences we let it have and what rewards and punishments we give it (which is limited, because we are not very sophisticated when it comes to encoding complex examples of that in a suitable mathematical form, or at understanding what the results will be in non-trivial cases).
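As a throwaway illustration (a made-up five-state toy in plain Python, not anything from a real system), this is roughly what "it does what it has learned to do" means: the only lever the designer has is the reward function, and the behavior is whatever the updates converge to:

```python
# Tiny Q-learning loop: nothing below says "go right"; the policy that emerges
# is shaped entirely by the reward function and the agent's experience.
import random

STATES, ACTIONS = 5, 2                      # toy chain of states; actions: 0 = left, 1 = right
q = [[0.0, 0.0] for _ in range(STATES)]

def reward(state, action):
    # The designer's entire influence is this one number per (state, action).
    return 1.0 if (state == STATES - 1 and action == 1) else 0.0

for episode in range(500):
    s = 0
    for _ in range(20):
        a = random.randrange(ACTIONS) if random.random() < 0.1 else max(range(ACTIONS), key=lambda x: q[s][x])
        r = reward(s, a)
        s_next = min(STATES - 1, s + 1) if a == 1 else max(0, s - 1)
        q[s][a] += 0.1 * (r + 0.9 * max(q[s_next]) - q[s][a])   # Q-learning update
        s = s_next

print(q)   # the learned values drift toward whatever the reward happened to encode
```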

You can't just reprogram it, or give it specific instructions, or persuade it of something. It isn't necessarily possible even to communicate with it in a non-superficial way. You would probably have better luck lecturing a whale in the hope of influencing it than you would with any artificial neural network people have invented.
 
Last edited:
  • #152
sbrothy said:
Who's to say? If the AI in question is smart enough to realize that without energy, oblivion awaits, then all bets are off.
While surfing the net aimlessly (and reading about STEM education in US public schools, even though I am not an American, so I must be really bored) I came across DALL-E. More funny than threatening. I'll just leave it here.
 
  • #155
DaveC426913 said:
Summarize? Teaser?
:doh:
The Nature article has some great ideas, if they can be realistically put into practice. Basically, having Sociologists involved at the ground level of development.

The Science article is a revealing piece on how quickly progress advances in learning and mastering new testing methods. Very impressive at this point.
 
  • #157
sbrothy said:
how insurance companies use AI
It's all about bottom line $ for them.
 
  • #158
sbrothy said:
but already I'm a little disturbed thinking about how insurance companies use AI
I've done work with insurance companies recently, @sbrothy, and they routinely raise Lemonade as the AI disruptor within their industry. However, as this Nasdaq analysis from last month shows, it is not all rainbows and unicorns with regard to their P&L, highlighting how difficult it is to apply such tech to deliver meaningful operational advantage and maintain a competitive offering.

https://www.nasdaq.com/articles/can-lemonade-stock-squeeze-out-profits-going-forward

That doesn't mean the use of ML / AI won't be more broadly adopted in the industry, but all of the companies I've consulted into have fundamental structural constraints that make harvesting customer data for predictive purposes of any kind a real challenge. Insurance is the least worrying AI use case, for me, anyway.
 
  • #159
This has given me paws, sorry that was a typo the cat walked on the keyboard, I meant this has given me pause...

It's AlphaGo vs AlphaGo. What has struck me particularly is Michael Redmond's commentary beginning around 21 minutes into the video. He is basically implying that, from what he sees, there appears to be a plan going on, but not in a way that we humans appear able to comprehend. You can see Redmond smiling to himself as he is so drawn to the fact that there does appear to be some kind of actual thinking going on. It's a very convincing display of actual intelligence, although a little understanding of Go is required to appreciate the nuance.

So do I fear this, hell no, it's exciting. But then that's probably what the AI wants us to think as part of some elaborate multi century plot to control the Universe.

 
  • #160
bland said:
You can see Redmond smiling to himself as he is so drawn to the fact that there does appear to be some kind of actual thinking going on
Thinking?

Damn, I really want to smite this down; it just feels wrong as a description of how AlphaGo operates. But 'thinking' could encompass the method by which a sophisticated rules engine with no awareness of itself or its environment works through the steps of a game, and in that sense, I can see how AlphaGo is 'thinking'.

But I don't think the intent passes the pub test; most people would dismiss the idea that AlphaGo is 'thinking' out of hand, with a derisive snort and maybe a curse or two.

bland said:
But then that's probably what the AI wants us to think as part of some elaborate multi century plot to control the Universe.
Written with tongue firmly in cheek. I hope 🤔
 
  • #161
Melbourne Guy said:
Thinking?
I didn't say 'thinking', I said there was an appearance, a very convincing one, at the level of what Redmond can see. I would find it difficult to define 'thinking' in the context of AI. Yes, one would like to think that the tongue was in that cheeky place.
 
  • #162
bland said:
I didn't say 'thinking', I said there was an appearance, a very convincing one, at the level of what Redmond can see.
I'm thinking this might be too meta, @bland, but I didn't take it as what you were thinking, I think it was clear from your text that you were conveying what you thought Redmond was thinking, but now I also think it was clear from my reply that you think I didn't think that!
 
  • Like
Likes bland
  • #163
While I can't say I find the prospect of being shot by a robot appealing, I also can't see why it would be any better or worse than being shot by a breathing human being.

I can't get concerned about a robot "becoming self-aware", which seems to be code for suddenly developing a desire to pursue its Darwinian self-interest. It's much more likely that an AI would start doing nonsensical, weird things. This happened during the pivotal Lee Se Dol/AlphaGo match, resulting in Lee's sole victory.

As for SF about robots attempting to take over the world, I'd recommend the terrific Bollywood movie "Enthiran" [Robot]. The robot becomes demonic because some jerk programs it to be that way. That I would easily believe. And for no extra charge you get to ogle Aishwarya Rai.
 
  • #164
In most cases, when I am inspired to post a link to an article on Physics Forums, it's because I like the article.
In this case, it's because I think it is so off-base that it needs trouncing:
SA Opinion: Consciousness Article

It is always a problem to attempt to make practical suggestions about a process that is not understood. And the article makes clear that that is exactly what they are doing. But to take a shot at it without addressing the so-called "Hard Consciousness" issue results in an article that dies for lack of any definition of its main elements.

From where I stand, "Hard Consciousness" (the "qualia" factor) is a fundamental feature of Physics. It is not just a creation of biology. We happen to have it because it provides a method of computation that is biologically efficient in supporting survival-related (Darwinian) decisions. That same computational device (not available in your common "von Neumann" computer, laptop, Android, ...) will be developed and will allow computers that share a "qualia" of the information they process. But it won't be like human consciousness.

And as far as threats, if a machine attacks people, it will be because it was programmed to. A computer that is programmed to search for a planet's resources, adapt its own design, and survive as best it can is a bad idea. So let's not do that.

The article also addresses the ethics of a "happy computer". Pain and happiness are wrapped up in the way we work in a social environment - how we support and rely on others. Getting to computers with "qualia" is a relatively simple step compared to modelling human behavior to the point of claiming that a computer is genuinely "happy".
 
  • Like
Likes russ_watters
  • #165
.Scott said:
And as far as threats, if a machine attacks people, it will be because it was programmed to.
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
 
  • Like
Likes Oldman too and russ_watters
  • #166
 
  • Like
Likes Jarvis323
  • #167
DaveC426913 said:
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
Part of the problem here is the very loose use of the term AI.
At my last job, I programmed radar units for cars. These went on to become components in devices that provided features such as lane assist, blind-spot monitoring, advanced cruise control, and lots of other stuff. If we sold these to the devil, he might have used AI software techniques to recognize humans and then steer the car in their direction. Or, if he preferred, he could have used techniques more closely tied to statistical analysis to perform those same target identification processes.

In that case, "AI" refers to a bunch of software techniques like neural nets and machine learning. Even if this devil stuck with more typical algorithms, in many conversations machine vision (radar or otherwise) and robotics would qualify as "AI" without the use AI-specific techniques.

But what many think of as AI is more like "artificial human-like social animal intelligence": something with a goal to survive that is able to recognize humans as either threats or gatekeepers to the resources it needs to survive.

I think the logic goes something like this: the human brain is really complex and we don't know where "consciousness" comes from, so it's likely the complexity that creates the consciousness. Computers are getting more and more complex, so they will eventually become conscious the way humans are. Humans can be a threat, and rapidly evolving computers would be a dire threat.

There is also an issue with how much variation there can be with "consciousness". For example, our brain has Darwinian goals. We are social animals, and so many of those Darwinian goals center around survival of the animal and participation in society. This is the essential source of "self". Our brains are "designed" with a built-in concept of self - something to be preserved and something that has a role in a sea of selves. The thought experiment I often propose is to imagine if I coated a table top with pain and tactile sensory receptors and transmitted that data directly into your skull. If I dropped something on the table, you would feel it. You would certainly develop a self-like emotional attachment to that table top.

A computer system isn't going to have such a concept of self unless it gets designed in.

I have been developing software for more than half a century. Let's consider what I would do to make this A.I. fear come to fruition. First, this "consciousness" thing is a total red herring. As I said in my last post, it is only the artifact of Physics and the use of certain unconventional hardware components. My specific estimation is that it's a use of Grover's Algorithm for creating candidate intentions - and that there are at least hundreds of such mechanisms within our skulls, any one of which can be our "consciousness" at any given moment. But, except for some speculative potential efficiency, why use such mechanisms at all?

Instead, I will set up a machine that models a robot that lives on planet Earth. It will try out one design after another and attempt to home in on a buildable design that will survive and replicate. If it finds a good solution, it will make some.
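The loop described above is essentially an evolutionary search run inside a simulation. A bare-bones sketch (plain Python; the fitness function is a made-up stand-in for "survives and replicates in the model") would look something like this:

```python
# Toy evolutionary search: score candidate designs in a model, keep the best,
# mutate, repeat. Only the final winner would ever be built.
import random

GENES = 10          # stand-in for a parameterized robot design

def simulated_fitness(design):
    # Placeholder for "how well this design survives and replicates in the model".
    return sum(design)

def mutate(design, rate=0.1):
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in design]

population = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(50)]
for generation in range(200):
    population.sort(key=simulated_fitness, reverse=True)
    survivors = population[:10]                                     # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=simulated_fitness)   # the design you would actually build
```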

So what part of this would you expect to happen by accident? Consciousness has nothing to do with it. Why aren't we afraid that attaching a 3-D printer to a regular household computer is handing over too much power?
 
  • Like
Likes russ_watters
  • #168
DaveC426913 said:
Why do you believe this to be so?
It seems to fly in the face of the essence of AI.
Do you believe an AI would not / could not take it upon itself to do this on its own? Why not?
Not who you were responding to, but I'll take a crack at it too:

Boring response: This is why I don't believe in AI. Any computer can be programmed, on purpose or by accident, to go off the rails, so the risk presented by AI is not particularly unique. This is the opposite-side-of-the-coin answer to the question.

AI specific response: AI does not mean infinite capabilities/adaptability. An AI need not even be physical. That means we set the parameters - the limitations - of its scope/reach. An AI that is non-physical cannot fire a fully mechanical gun. It can't drive a bulldozer that isn't networked. Now, some people think AI means humanoid robots, and those can do anything a human can, right? No, that's anthropomorphizing. A humanoid robot that is designed to learn basketball isn't somehow going to decide it wants to dabble in global thermonuclear war. Or even just kill its opponent (rulebook doesn't say I can't!)

AI doesn't necessarily mean generalized intelligence, much less physical capabilities.
 
  • #169
.Scott said:
A computer system isn't going to have such a concept of self unless it gets designed in.
So, Homo sapiens have consciousness 'designed in'? You suggest so, @.Scott, even if you write the word with air quotes. Which just kicks the fundamental problem upstream. If evolution can result in consciousness, then there is no barrier to AI also evolving consciousness.
 
  • Like
Likes BillTre
  • #170
Melbourne Guy said:
So, Homo sapiens have consciousness 'designed in'? You suggest so, @.Scott, even if you write the word with air quotes. Which just kicks the fundamental problem upstream. If evolution can result in consciousness, then there is no barrier to AI also evolving consciousness.
What's more important than consciousness being designed in is the construct of "self". "Self" and consciousness are no more the same thing than "tree" and consciousness are.

Evolution could evolve evil AI robots - except we would stop them before they got started. That is why I approached the problem by allowing the evolution to occur in a computer model - and only the final result was built.
 
  • Like
Likes russ_watters
  • #171
.Scott said:
Evolution could evolve evil AI robots - except we would stop them before they got started.
Would we? Who decides what an evil AI looks like? I can imagine some people would welcome evil AIs, and some people would deliberately evolve them.

.Scott said:
That is why I approached the problem by allowing the evolution to occur in a computer model - and only the final result was built.
I feel this is an arbitrary and trivial constraint that is easily ignored, @.Scott. Are you assuming that once evolved and 'built', the AI no longer evolves?
 
  • #172
As a follow-on from my previous thought, this just popped into one of my feeds:

https://www-independent-co-uk.cdn.a...artificial-general-intelligence-b2080740.html

"One of the main concerns with the arrival of an AGI system, capable of teaching itself and becoming exponentially smarter than humans, is that it would be impossible to switch off."

I've written one of these AIs in a novel, but I don't really believe it. There's a ton of assumptions in the claim, including that an AI could unilaterally inhabit any other computing architecture, which seems implausible. It also assumes that there is no limit to the 'bootstrapping' the AI can do to its own intelligence. All of this could be true, but if so, 'smarter than humans' equates to "God-like", and the mechanism for that to occur is certainly not obvious.
 
  • #173
Melbourne Guy said:
Would we? Who decides what an evil AI looks like? I can imagine some people would welcome evil AIs, and some people would deliberately evolve them.

You asked: if people could evolve with nothing more than Darwinian factors, why not AI? Now you seem to think that AI would evolve quickly.

If people deliberately evolved them, that would not contradict any of my statements. It is definitely possible for people to design machines to kill other people.
 
  • Like
Likes russ_watters
  • #174
.Scott said:
You asked: if people could evolve with nothing more than Darwinian factors, why not AI? Now you seem to think that AI would evolve quickly.
I'm thinking we're talking past each other, @.Scott. I haven't assumed AI would evolve quickly, merely that identifying an evil AI might not be obvious. There are sociopath humans that nobody notices are murdering people and burying their bodies in the night, do you feel their parents knew their kids were evil from the moment of their first squawking wail at birth?
 
  • #175
Melbourne Guy said:
I'm thinking we're talking past each other, @.Scott. I haven't assumed AI would evolve quickly, merely that identifying an evil AI might not be obvious. There are sociopath humans that nobody notices are murdering people and burying their bodies in the night, do you feel their parents knew their kids were evil from the moment of their first squawking wail at birth?
People do not have to evolve into societal threats. We are all there already. You just have to change your mind.

Building a machine with a human-like ego and that engages human society exactly as people do would be criminal. Anyone smart enough to put such a machine together would be very aware of what he was doing.

Building a machine that engages human society in a way similar to how people would, but without the survival-oriented notion of self, could be done. And it could be done with or without components that would evoke consciousness.
 
