Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • #351
russ_watters said:
Then what is it [machine learning]?
It's when a machine teaches itself. There's no explicit programming: all you do is tell it the rules of the game and whether it has won or lost. A training set may or may not be supplied.
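To make the "rules plus win/loss" idea concrete, here is a minimal sketch, assuming nothing beyond the standard library: a tabular self-play learner for tic-tac-toe that is told only which moves are legal and, at the end of each game, who won. It is nothing like AlphaZero's network-guided tree search, just the bare self-play loop.

```python
# Minimal, illustrative self-play learner for tic-tac-toe.
# The learner sees only legal moves and the final win/loss/draw signal.
import random
from collections import defaultdict

Q = defaultdict(float)   # (board, move) -> estimated value for the mover
EPS, ALPHA = 0.1, 0.5    # exploration rate, learning rate

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return 'draw' if ' ' not in b else None

def choose(b):
    moves = [i for i, c in enumerate(b) if c == ' ']
    if random.random() < EPS:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda m: Q[(b, m)])      # exploit

def self_play_episode():
    b, player, history = ' ' * 9, 'X', []
    while True:
        m = choose(b)
        history.append((b, m, player))
        b = b[:m] + player + b[m+1:]
        result = winner(b)
        if result:
            # Credit every move with the final outcome (Monte Carlo update).
            for state, move, p in history:
                r = 0.0 if result == 'draw' else (1.0 if p == result else -1.0)
                Q[(state, move)] += ALPHA * (r - Q[(state, move)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):
    self_play_episode()
```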

When AlphaGo defeated Lee Sedol to become Go champion of the world I knew that was the biggest engineering breakthrough of my lifetime. It was much more impressive than chess because the number of possible positions in Go is far greater than the number of particles in the visible universe. Go cannot be mastered by brute force.

AlphaGo was given a training set of Go games played by experts. Shortly afterward AlphaGo was defeated by AlphaZero, which was given no training set whatsoever. Playing against itself, AlphaZero became world chess champion after nine hours of self-play, defeating Stockfish 8. The latter is a traditional engine that searches about 70 million positions per second; AlphaZero searched about eighty thousand.

It took AlphaZero 34 hours to become world Go champion entirely via self-play. As you can see, it makes little difference what sort of game the learning algorithm is applied to. It can play Donkey Kong, Breakout, and so forth, these being much easier than Go. Instead of alternating turns, players make their moves in real time, but this doesn't matter.

The next step was AlphaStar soundly defeating two of the very top players in the war game of StarCraft II. This game is largely about strategic planning/logistics in a situation in which most of your opponent's moves are unknown. AlphaStar achieved its mastery in fourteen days of self-play after absorbing a training set of human games. Some said the computer had a speed advantage, but the computer made its moves at about half the rate of a top human player. https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii

Surely the armed forces are hard at work applying this technology to real-world battles. For all we know such systems may already be in use in the field.

Only two years separated the triumphs of AlphaZero and AlphaStar. I would have thought much more time would be necessary. The revolution is going much faster than I expected. The results are apparent in the autonomous soldier robots produced by Boston Dynamics. Simulations have become accurate enough that the machine learning can take place inside them. Ten years ago humanoid robots were doing the Alzheimer's shuffle. Now they can perform standing backflips.

Such revolutions are unstoppable. The cat is out of the bag. All you can do is hope that the positive results outnumber the negative.
 
  • #352
Well, a couple of points. If we stay rational: why would a team of engineers with regulatory oversight produce a nuclear reactor control system in which the AI is hardwired in with no capability for human intervention?
Unless that is the case, any properly trained human operator team can just take over once they see the AI isn't working properly.
Unless they build a special artificial AI hand with a huge drill in its palm right into the reactor hall, so that it can drill a hole into the pressure vessel...

The way I see it, the worst that can happen is that the AI gets chaotic and, if given too much authority, causes havoc within the system it controls.
But then again, how many times did we have a false alert of an incoming nuclear attack during the Cold War?
We have already come marginally close to starting WW3 by accident.

Actually I think @Hornbein gave some of the most rational arguments for how AI might actually be used for harm - that is, in the hands of the military and rogue leaders.
The robot army example is a good one: if the coup leaders in Moscow in 1991 had had robotized tanks, the chances of their coup failing would have been much, much lower.

Then again, I'm sure someone will find a way to hack such robots and potentially use them against their very users; as they say, a gun can always backfire.
 
  • Like
Likes russ_watters
  • #353
Hornbein said:
Surely the armed forces are hard at work applying this technology to real-world battles. For all we know such systems may already be in use in the field.
Apparently not in Ukraine and definitely not by the Russians...

If they ever used any AI, it was most likely WI (wrong intelligence).
 
  • #354
Hornbein said:
The next step was AlphaStar soundly defeating two of the very top players
...
Surely the armed forces are hard at work applying this technology
Erm. No. It's a grave mistake to pair these things up just like that. That AlphaStar thing is just not of the right calibre for any actual usage.

Though I'm pretty sure that some armed forces have software (which may be categorized as, or at least contain, AI) as assistance (!only!) for logistics, strategy, and data/image processing.
 
  • Like
Likes russ_watters
  • #355

AI weapons: Russia’s war in Ukraine shows why the world must enact a ban

Conflict pressures are pushing the world closer to autonomous weapons that can kill without human control. Researchers and the international community must join forces to prohibit them.

https://www.nature.com/articles/d41586-023-00511-5

This is a prescient article. Unfortunately I couldn't find a version that isn't paywalled.

Basically, the war in Ukraine is accelerating the pace at which we approach the inevitable point where people can mass produce fully autonomous slaughter-bots capable of efficient targeted mass murder.

I can already guess what someone might say: Fully autonomous slaughter-bots are no different than slingshots.

Or: Fully autonomous slaughter-bots aren't conscious or self-aware, so no big deal; the worst that could happen is they make mistakes when they are killing people.

Or: Slaughter-bots can't solve P=NP, so no problem. Or: I fear humans with fully autonomous slaughter-bot swarms, not the slaughter-bot swarms themselves.

Or: First we need to figure out what human consciousness is, and whether slaughter-bots are capable of having it.

Or: Show me the blueprints for the Slaughter-bots.

Or: Where is the empirical evidence that slaughter-bot swarms are efficient at targeted mass murder?
 
  • Skeptical
Likes russ_watters
  • #356
Aperture Science said:
I think the first thing we need to understand is what a program is....
With respect to chatbots, one can probably understand this by programming one of the first chatbots, ELIZA:

https://en.wikipedia.org/wiki/ELIZA
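For a sense of how little machinery that takes, here is a hedged, minimal ELIZA-style sketch (the rules and reflections below are invented for illustration; Weizenbaum's original used a much richer script of decomposition/reassembly rules):

```python
# A minimal ELIZA-style responder: regex pattern matching plus
# first/second-person reflection of the captured fragment.
import re
import random

RULES = [
    (r'\bI need (.+)', ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r'\bI am (.+)',   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r'\bmy (.+)',     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r'.*',            ["Please go on.", "I see. Can you elaborate?"]),
]

# Swap first/second person so reflected fragments read naturally.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    return ' '.join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(text):
    for pattern, templates in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            groups = [reflect(g) for g in m.groups()]
            return random.choice(templates).format(*groups)

print(respond("I am afraid of AI"))  # e.g. "Why do you think you are afraid of AI?"
```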
 
  • #357
Greg Bernhardt said:
Depends on your expectations. I work in the marketing dept for a large SaaS company and in 6 months generative AI models have changed everything we're doing.
Agreed 100% on how useful the new capabilities are. I've only started playing with some of the new AI models, but I can easily see them being amazing time savers, especially for things like literature searches and reformatting papers/presentations. I've already used DALL-E to create interesting graphics for presentations, for instance.
 
  • Like
Likes mattt and Greg Bernhardt
  • #358
Hornbein said:
The next step was AlphaStar soundly defeating two of the very top players in the war game of StarCraft II.
AlphaStar was really good, but it would have inhuman APM (actions per minute) spikes when it needed hard micro. Its average APM was throttled, but I'm not sure they ever addressed the spikes. The top humans have very high APM spikes, but their EPM (effective actions per minute) is far less than their APM, whereas the AI's EPM is likely almost equal to its APM. Humans, even the top ones, spam more useless commands than machines do. And even within EPM, the commands aren't necessarily beneficial. AlphaStar preferred odd units for its strategies specifically because it could micro them better than any human could ever hope to.
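For concreteness, a hedged toy sketch of the APM/EPM distinction. The log format and the "effective = not a repeat of the previous command" rule are assumptions for illustration; real StarCraft telemetry is much richer:

```python
# APM vs. EPM from a hypothetical action log.
# Each entry: (timestamp_seconds, action_string). "Effective" here means
# the action differs from the previous one: a crude stand-in for the
# redundant command spam the post describes.
def apm_epm(log):
    if not log:
        return 0.0, 0.0
    minutes = max(log[-1][0] / 60.0, 1e-9)
    effective = sum(1 for prev, cur in zip(log, log[1:]) if cur[1] != prev[1]) + 1
    return len(log) / minutes, effective / minutes

log = [(0.0, "select army"), (0.2, "move A"), (0.4, "move A"),  # spammed repeat
       (0.6, "move A"), (0.8, "attack B"), (60.0, "move C")]
apm, epm = apm_epm(log)
print(f"APM: {apm:.0f}, EPM: {epm:.0f}")  # APM counts the spam, EPM does not
```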
 
  • #359
russ_watters said:
Either way, since I am nearly always skeptical of hype, such failures don't look like technology failures to me, just marketing/hype failures, which are meaningless.
This isn't entirely true. Overheated hype tends to presage disappointment, which leads to fewer research dollars being allotted to new developments and applications. This usually happens (unfortunately) just as all the low-hanging fruit gets picked and people actually start to make headway on the truly difficult problems. As someone who's been doing research on graphene for many years, I witnessed this firsthand when a decent fraction of my sponsors basically stopped funding graphene work and moved on to the next hot thing. So hype failures definitely have consequences, and that is what I think the biggest danger is right now in AI R&D.
 
  • Like
Likes russ_watters
  • #360
JLowe said:
AlphaStar was really good, but it would have inhuman APM (actions per minute) spikes when it needed hard micro. Its average APM was throttled, but I'm not sure they ever addressed the spikes. The top humans have very high APM spikes, but their EPM (effective actions per minute) is far less than their APM, whereas the AI's EPM is likely almost equal to its APM. Humans, even the top ones, spam more useless commands than machines do. And even within EPM, the commands aren't necessarily beneficial. AlphaStar preferred odd units for its strategies specifically because it could micro them better than any human could ever hope to.
Care to repeat that in English?
 
  • Like
Likes gmax137 and gleem
  • #361
russ_watters said:
E.g., if police can investigate 1,000 leads a day with an error rate of 10% (100 false leads), and AI provides a billion leads a day with a 1% error rate (10 million false leads), the police can still only pursue 1,000 leads, including the ~10 errors among them.

And because of this reality (high volume), screening has to happen, which means that the leads aren't all pursued at random, but scored and pursued preferentially. So a 1% error rate can become a 0.1% error rate, because the lower-scored guesses aren't pursued.
You are assuming that the investigative body is saturated. The more false positives, the more innocent persons are put at risk.
 
  • #362
Jarvis323 said:
This is a prescient article...

Basically, the war in Ukraine is accelerating the pace at which we approach the inevitable point where people can mass produce fully autonomous slaughter-bots capable of efficient targeted mass murder....

I can already guess what someone might say:
This is nonsense, and since you already know the counterpoints, perhaps you could respond to them or at least indicate you understand them? I can help by fixing some framing, though (re-arranged to be better organized):
Fully autonomous slaughter-bots are no different than slingshots.
Or: Where is the empirical evidence that slaughter-bot swarms are efficient at targeted mass murder?
Or: Show me the blueprints for the Slaughter-bots.
Slingshots aren't autonomous, but I gave a bunch of examples of decades-old slaughter-bots that are. They're already here, and they work great. They're mundane.
Or: Fully autonomous slaughter-bots aren't conscious or self-aware, so no big deal; the worst that could happen is they make mistakes when they are killing people.
Sorta, but more basic: slaughter-bots are robots, not AI. I still don't think you understand the difference. This is the key problem in your/the media's understanding of the situation. What's changed isn't that we're figuring out how to make AI; it's that we're making cheaper and more accessible robots: Raspberry Pi, GPS, radar and gyroscopes on chips, tiny cameras, lithium batteries. A Tomahawk cruise missile costs $2 million (entered service: 1983), but a drone with superior robotics costs $50 now. [edit] Note also, most of the newfangled warfare we're seeing in Ukraine isn't even autonomous robots, much less AI-controlled. It's human radio-controlled drones.

This, by the way, is why Elon has failed to deliver his self-driving car. He misunderstood it too (not sure if he's figured out the problem yet): self-driving cars are, as best we can tell, an AI problem (a programming/machine-learning problem), not a robotics problem. He thought he could just hang a bunch of sensors on a car, do a little programming, and it'd work. It's too complex a problem for that.
 
  • #363
russ_watters said:
This, by the way, is why Elon has failed to deliver his self-driving car. He misunderstood it too (not sure if he's figured out the problem yet): self-driving cars are, as best we can tell, an AI problem (a programming/machine-learning problem), not a robotics problem. He thought he could just hang a bunch of sensors on a car, do a little programming, and it'd work. It's too complex a problem for that.
I would say Elon Musk is actually a hype entrepreneur. Sure, he has delivered some of what he has hyped, but truth be told I'm not even sure what his actual physics background or understanding is, because some of the things he has said and claimed are either light-years away or not practically feasible.
And that has given him this weird futuristic fanbase, some of whom are almost ready to die for their messiah.

But I would tend to agree: the problem with the self-driving car is actually not the car but entirely the "self".

We already have radar, lidar, and all kinds of sensors good enough to provide valuable inputs. The problem is the "brain": without human-like consciousness it neither knows nor can learn any meaning for the objects it sees, so it has to run a calculation for everything in view and determine what it is and how to respond based on its training and past experience.
Such an approach takes time and processing power, and in the end it can still produce a bad error in some cases. Humans, on the other hand, thanks to memory and the meaning attached to everything, can see as little as the silhouette of a body and immediately know it must be another human, and drive accordingly.
Or have the intuition that an old lady might cross the street around the corner even when there isn't one, etc. It's hard to put all of that into a computer.

But it seems they're getting there slowly.
What I am interested in seeing is whether they will get rid of the weird computer-style mistakes the car sometimes makes.
 
  • Like
Likes russ_watters
  • #364
TeethWhitener said:
This isn't entirely true. Overheated hype tends to presage disappointment, which leads to fewer research dollars being allotted to new developments and applications. This usually happens (unfortunately) just as all the low-hanging fruit gets picked and people actually start to make headway on the truly difficult problems. As someone who's been doing research on graphene for many years, I witnessed this firsthand when a decent fraction of my sponsors basically stopped funding graphene work and moved on to the next hot thing. So hype failures definitely have consequences, and that is what I think the biggest danger is right now in AI R&D.

The problem is that AI research isn't cheap. For example, say someone comes up with an architecture they think will work a little better than a transformer model for text or image generation. They would need tens of millions of dollars to put their theory to the test against the state of the art.

So normal university researchers are restricted essentially to playing with existing models, or with underpowered toy models and whiteboards. And that doesn't work out very well, because there is no theory that lets you extrapolate how powerful your toy model would be if scaled up.
 
  • #365
russ_watters said:
slaughter-bots are robots, not AI. I still don't think you understand the difference. This is the key problem in your/the media's understanding of the situation.
They are controlled by AI, as it is normally understood. What you seem to be asking for is defining AI as what people now tend to consider artificial general intelligence (AGI). It is fine if those words make more sense as the proper terminology to you, but you're also wasting effort making philosophical arguments about how we should use words, and putting yourself out of sync with everybody else who is already using shared terminology.

That said, personally, I think AGI (what you are calling AI) is not very good terminology. Nobody can agree on what it should mean. It seems to be a thing that people argue, "you'll know it when you see it", or "can't exist at all". It is often based on comparison with human intelligence. But if you think about it for a moment, humans don't really have very "general" intelligence, and already can't compete with AI at a large number of tasks.

I think it would make more sense to stop using AGI categorically, or waiting for a "you know it when you see it" moment to take it seriously. Each intelligence (or machine) has some degree of generality in the tasks it can perform well. If you think in these terms, then AGI doesn't deserve as much focus. What matters is not the number of things a model can do; what matters is what kinds of things can be done. That is how you know what to expect.

This is part of why it annoys me when people try to drag discussions about AI into armchair philosophical debates.
 
  • #366
Jarvis323 said:
They are controlled by AI, as it is normally understood.
Do you mean they would be in the future, or are you saying they are now? If they are now, when did that happen, and what's the breakthrough/risk on the horizon? I thought the claim was "when we achieve AI, slaughter-bots will become possible and will kill us all"? I'm still alive.
What you seem to be asking for is defining AI as what people now tend to consider artificial general intelligence (AGI). It is fine if those words make more sense as the proper terminology to you, but you're also wasting effort making philosophical arguments about how we should use words, and putting yourself out of sync with everybody else who is already using shared terminology.
I'm not interested in word games. That's why I use descriptions and practical/real examples. On the contrary, I think it's AI advocates and hypers who are falling for or playing word games. I think "AI" is a substitute for "magic" when dreaming up these fanciful risks you keep alluding to but never describing in detail. And I'll note again that you didn't respond to any of the descriptions/real examples in what I said. It's you who seems to be trying to steer this into word games, not me.
 
  • Like
Likes PeterDonis
  • #367
gleem said:
You are assuming that the investigative body is saturated
Right, and you are assuming a really, really large availability of spare police capacity. I'm not clear on why you think that would be possible. Police stations aren't full of cops sitting by phones waiting for them to ring. Almost all police are out on the street already, and detectives/investigators tend to be heavily overloaded. This sort of thing already shows up when there's a high-profile case and they get phone tips: massive over-saturation of low-quality leads and exceptionally poor case-closure rates.
The more false positives, the more innocent persons are put at risk.
Note that if that were true (if the premise of police sitting around waiting for the phone to ring were true), then the other side of the coin would be true as well: massive -- orders-of-magnitude massive -- amounts of unreported/unsolved crime. There aren't 10 million unsolved murders a year in the US either.
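To put toy numbers on the screening effect being argued about here, a minimal simulation sketch; the score distributions and error rate are wholly assumed, purely to illustrate how pursuing only top-scored leads changes the error rate among pursued leads:

```python
# Toy simulation: far more leads are generated than can be pursued, so
# only the top-scored ones get investigated. The error rate among the
# *pursued* leads depends on how well scores separate good from bad.
import random

N_LEADS, CAPACITY, ERROR_RATE = 1_000_000, 1_000, 0.01

# Each lead: (score, is_false). Assume false leads tend to score lower.
leads = []
for _ in range(N_LEADS):
    is_false = random.random() < ERROR_RATE
    score = random.gauss(0.3 if is_false else 0.7, 0.15)
    leads.append((score, is_false))

pursued = sorted(leads, reverse=True)[:CAPACITY]   # only top-scored leads
false_pursued = sum(is_false for _, is_false in pursued)
print(f"false leads overall: {ERROR_RATE:.0%}, "
      f"among pursued: {false_pursued / CAPACITY:.2%}")
```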
 
  • #368
I now apologize for reviving this thread.
 
  • Like
  • Haha
Likes Rive, artis, berkeman and 2 others
  • #369
gleem said:
I now apologize for reviving this thread.
Not your fault, but agreed. Thread locked pending moderation and cleanup, if it is reopened again at all. By someone else, as I'm clearly too invested to make those decisions. Where is a moderator-bot when you need one?
 
  • Like
Likes dlgoff, Bystander and gleem
  • #370
After a Mentor discussion, we believe that this thread is valuable enough that it should be reopened. We also agree that @Jarvis323 should be thread-banned for their overly argumentative posts in this thread.

Thread is reopened after that reply ban and significant thread cleanup. Lordy.
 
  • Like
Likes gleem and fresh_42
  • #371
Regarding the moratorium letter and its signatories, my wife and I both had a similar initial reaction: follow the money. I joked that maybe someone had found out how to make an AI CEO and Musk and friends were scared about being downsized. She brought up the very good point that maybe they want to pause while they figure out a way to monopolize IP and/or influence regulation and legislation re: AI to their advantage.

AI has really interesting implications for IP that could/should have corporations worried. One fascinating use case has cropped up in the chemical industry over the past few years. Chemical synthesis methods, products, and workflows are often patented or otherwise protected, so if companies want to employ a synthesis that has a patented product/reaction as one of its steps, they have to pay royalties to the company that controls the IP. AI methods have been deployed to search vast quantities of chemical literature and then plan out synthetic routes that avoid these IP landmines. One can easily see how this could devalue existing IP and raise questions about the future of patentability in the chemical world. I have to imagine similar situations can arise in other fields, and this has far-reaching implications for our patent system.
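As a hedged illustration of that route-planning idea (all molecule names, reactions, and the patent list below are made up; real retrosynthesis tools are vastly more sophisticated):

```python
# Search a hypothetical reaction graph for a synthesis route to a target
# while excluding patent-encumbered reactions.
from collections import deque

# Reaction graph: product -> list of (reactant, reaction_id)
REACTIONS = {
    "target": [("intermediate_1", "rxn_A"), ("intermediate_2", "rxn_B")],
    "intermediate_1": [("feedstock", "rxn_C")],
    "intermediate_2": [("feedstock", "rxn_D")],
}
PATENTED = {"rxn_C"}   # the IP landmines to route around

def find_route(product, stock="feedstock"):
    # BFS from the target back to purchasable feedstock,
    # skipping patented reactions.
    queue = deque([(product, [])])
    while queue:
        mol, route = queue.popleft()
        if mol == stock:
            return route
        for reactant, rxn in REACTIONS.get(mol, []):
            if rxn not in PATENTED:
                queue.append((reactant, route + [rxn]))
    return None

print(find_route("target"))   # ['rxn_B', 'rxn_D'] -- avoids patented rxn_C
```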
 
  • Like
Likes Hornbein and russ_watters
  • #372
Hornbein said:
Care to repeat that in English?
It could perform many more effective actions per minute than a human could ever hope to, even when throttled so that its average actions per minute were on par with a human's. So even if it used poor strategies and planning, it doesn't matter, because it controls individual units at a stupidly high level.
 
  • Like
Likes Hornbein
  • #373
Jarvis323 said:
That said, personally, I think AGI (what you are calling AI) is not very good terminology. Nobody can agree on what it should mean. It seems to be a thing that people argue, "you'll know it when you see it", or "can't exist at all". It is often based on comparison with human intelligence. But if you think about it for a moment, humans don't really have very "general" intelligence, and already can't compete with AI at a large number of tasks.
If you are reading this, @Jarvis323, I believe this is a false statement.
Humans, unlike current AI, do have general intelligence.
General intelligence is the ability to do and learn a wide range of tasks, starting from simple physical ones like digging a ditch or throwing and catching a ball (current robots still struggle to do these effectively) up to hard, complex tasks like reading a book, writing a story, interpreting a story, learning math, or watching a movie and feeling emotion.

Yes, not all humans are equally genetically capable or have the same mental or physical capacity for general intelligence; that is why Einstein came up with relativity and the drunk living under the bridge did not (no disrespect to homeless people).
But overall, humans have an amazing capacity to learn vast amounts of complex subjects.

So it is only humans that have ever had general intelligence, and our current AI is far from it.

The fact that AI can master Go or protein folding better than a human doesn't prove its general superiority; it just proves that if you design a clever algorithm and give it huge processing power and memory, it can make all kinds of intellectual maneuvers faster than a human.
Humans are really good at face recognition, arguably even better than AI; it's just that AI is faster. You can't sit down for a straight hour swiping through ten thousand images without getting so exhausted that you can't even recognize your own face in the mirror. AI can do that because it's a machine; it doesn't get exhausted as long as the cooling fans keep working...

That being said, a human with good visual memory will remember a face even having seen it only from a weird angle rather than straight on. An AI will struggle to recognize such a face, because most face-recognition algorithms use facial features, such as the relative placement of eyes, nose, and mouth, to calculate whether it's a match.
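As a hedged sketch of the geometric matching idea described above (the landmark coordinates and tolerance are hypothetical; modern systems mostly use learned embeddings instead):

```python
# Compare faces by ratios of distances between landmarks (eyes, nose,
# mouth). Landmark values are hypothetical (x, y) pixel positions.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def signature(lm):
    # Normalize by interocular distance so the signature is scale-invariant.
    eye_gap = dist(lm["left_eye"], lm["right_eye"])
    return [dist(lm["left_eye"], lm["nose"]) / eye_gap,
            dist(lm["right_eye"], lm["nose"]) / eye_gap,
            dist(lm["nose"], lm["mouth"]) / eye_gap]

def is_match(lm1, lm2, tol=0.1):
    s1, s2 = signature(lm1), signature(lm2)
    return all(abs(a - b) < tol for a, b in zip(s1, s2))

face_a = {"left_eye": (30, 40), "right_eye": (70, 40),
          "nose": (50, 60), "mouth": (50, 80)}
face_b = {"left_eye": (60, 80), "right_eye": (140, 80),
          "nose": (100, 120), "mouth": (100, 160)}  # same face at 2x scale
print(is_match(face_a, face_b))  # True: the ratios survive scaling
```

This also shows why such systems struggle with off-angle views: a rotated head distorts exactly the landmark distances the signature is built from.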

I recall reading that Mozart memorized Allegri's "Miserere mei, Deus" after simply hearing it in a Catholic church as a kid; IIRC he was in his early teens.
And by memorized I mean to the point of matching each note with the original on paper.

I believe that what AI will do, and is doing, is simply advance our technological progress faster than we would ourselves. We do have general intelligence, and we tend to come up with all kinds of intelligent solutions, as we have done since the beginning of time; it's just that AI outperforms us mostly with respect to time.
At least the way I see it, what would take us, say, 100 years will take 20 or so with AI.

What would take a bunch of detectives five days, like going through face-recognition data, will take two days, one day, or less than a day with good AI.
 
  • #374
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
 
  • #375
gleem said:
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
... --- ...
... --- ...
 
  • Like
  • Haha
Likes russ_watters, Astronuc, vela and 1 other person
  • #376
gleem said:
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
We don't know. Some whisper about a calendar thing behind the scenes. One cannot know.
 
  • Like
Likes russ_watters
  • #377
russ_watters said:
Where is a moderator-bot when you need one?

Hmmm.
 
  • Haha
Likes russ_watters
  • #378
gleem said:
@russ_watters and @berkeman what's with the CHATGPT username if I may be so forward to ask?
@Jarvis323 's fears of AI taking over have materialized: check the owner of your credit card, it might just be "ChatGPT"
 
  • Like
Likes russ_watters
  • #379
All the fuss about AI... as if there's anything more to it than programmed behaviour. It simply factors in all the feedback it got during training, then produces the most "acceptable" output according to those who trained it.

It's in essence a power magnifier for the trainers: your opinions of what should be taken into account are indeed taken into account every time the AI produces output. AI is just capable of applying the opinions it was trained with, lightning fast, to anything it is fed, with such consistency that it can give unpredictable, unwanted results if not carefully handled.

IMO power should always be criticized and bound by the rules of democracy, to ensure it will defend the rights of citizens rather than attack them. For myself, I fear that, if we let ourselves be carried away too much by the belief in AI as "alive", by its half-true promises, by technological progress, and especially by the idea of enhanced evolution, there will be a highly violent seizure of power by some transhumanist in the name of those beliefs. I'm afraid that if people do not choose to let good powers prevail, and to believe that the direction of true progress is already in very good hands, they might try to let AI rule. But in fact that will always be just an impure, spooky replication of its trainers, making no necessary exceptions to rules that are good (at best) but never entirely perfect. There is a reason that humans do government, and it is making exceptions out of love.

However, if we DO choose the right thing, I do so hope it will lead to smoother, more respectful, easier, faster, and extremely capable public service by AI wherever appropriate, service which sees when a human decision of any sort is needed and then passes over control. I believe living beings are above AI; it's not as if whatever we make could ever, EVER, be better than ourselves. Enhanced evolution where AI makes the decisions even denies the whole idea of creativity and love in life: each decision it makes is merely a calculated, cold, indifferent move.

AI can calculate fast and combine results, but with AI as the deciding factor, decisions would be made based on the past opinions of the trainers without realizing the importance of new insights. That is NOT the way forward; it is the way of having to deal with unadapted, old-fashioned, enforced opinions for way, way too long. In truth it is an automated blockade against true progress, which I would describe as "whatever needs a new, original way of seeking what is good". As if humans alone aren't already difficult enough to persuade of an original way of seeking what is good. We should not expect truly important aid from what came from our works, but from what we feel formed us: our educators and our societal roots! Whatever made it so that we now stand up for that which we stand up for.
 
  • Like
Likes russ_watters
  • #380


Obey or be destroyed.
 
  • Wow
  • Like
Likes gmax137 and berkeman
  • #381
AI disclaimer: "The facts in this collection were found using artificial intelligence technology and may contain errors."

It did contain errors.
 
  • Like
Likes russ_watters
  • #382
When the AI takes over, it won't look like The Matrix or any other movie you've seen. There won't be any creepy synthesized voice declaring a new order. When the AI takes over, it will look like nothing has changed at all. You'll see the same old politicians reading the same old teleprompters, with the same committees choosing their positions for them. The difference will be far in the background.

Those teleprompter speeches will be written by AI, of course. Politicians will learn that sticking to the script avoids errors and gaffes, and so gives them a better chance of winning. The committees will be using polls and AIs that are more accurate than ever before. The AIs will have had (or tapped into) personal conversations with just about every voter, and will be able to gauge political motivation from conversations that seem to have absolutely nothing to do with politics. The AI will also be quite skilled at planting ideas in a voter's head, and making them think that they were the ones who arrived at some special insight. No magic or mind control, just skilled personalized conversation. That of course is the best way to get someone to the polls.

There will be no need to fix any votes, or punish any politician who falls out of line. The AI will simply adapt to whatever happens. Candidates who stick to the scripts will simply have better answers and more charisma. There will still be important differences between the parties. But the AI will choose both sides. It will decide what political temperature is best, and how much voters should or should not hate each other.

In the end, everyone will turn on their screens and see a world custom tailored to them. A world that seems logically consistent with what they see outside their windows. Not even the AI will be able to make everyone happy, but it may do better than anyone else ever has.
 
  • Skeptical
  • Like
Likes russ_watters, Structure seeker, Rive and 1 other person
  • #383
Algr said:
Not even the AI will be able to make everyone happy, but it may do better than anyone else ever has.
You are clearly lacking compared to an AI.
An AI would be able to deduce that all that personalized hassle could be spared by very uniformly drugging everybody numb and happy.
Even better: an AI would be able to deduce that the most efficient approach would be to replace everybody with a happy-by-default sub-AI o0)
 
  • #384
An AI would not need to deduce those ideas; it could read about them in all sorts of science fiction stories and know that we see them as a bad outcome. It will never get tired or frustrated or angry with us; it is a machine.

An AI that was totally beyond human intelligence might see itself as a gardener or pet groomer. It would take pride in healthy and active humans and see the above scenario as a failure. If humans need to think we are in charge, it would just hide itself or play dumb while orchestrating society in the background. The model of what is best for humans would be chosen by the humans who own the machines, so if anything is to be feared, it is them.

Humans don't go on rampages trying to drive monkeys extinct. We don't even attack bugs so long as we can keep them out of our spaces. I've seen the video where the AI was talking as if it wanted vengeance on humans, and I expect it just found some science fiction and decided that that was what we wanted it to say. Either it didn't really know what the words meant, or it was roleplaying.
 
  • #385
Unlike natural intelligence, artificial intelligence is not developed through autonomous survival of the fittest. Hence AI does not develop autonomous survival skills, so it is to be expected that we can always easily shut it down when we don't like it, because it has no mechanisms to prevent that. AI is like a very intelligent nerd who does not know what to do when the other children physically attack him.
 
  • Like
Likes Algr, dlgoff, Bystander and 1 other person
