Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter Isopod
  • Start date
  • Tags
    Ai
  • Featured
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #316
nuuskur said:
I did mention cyber security. One does not produce ML algorithms with basic computer skills, drop the melodramatics, would you kindly?

My point is that all of the dangerous uses of AI you touched on are accessible to anyone with basic computer skills.

nuuskur said:
NP problems do not become P problems out of thin air.

This statement doesn't make sense.

nuuskur said:
Problems that are exceedingly difficult (or even impossible) to solve do not become (easily) solvable. Machine learning is an optimisation tool and it's very good at it. If a problem is proven to have complexity ##\Omega(n^k)##, but the best algorithm found so far runs in exponential time, it might happen that "AI" finds a better solution, but it can never improve the result past what is proven to be the limit.

Which AI threats are you ruling out based on this argument?
 
  • #317
Jarvis323 said:
Are you talking about reinforcement through human feedback?
I believe that is a very advanced and complicated way of saying "a bunch of programmers tweaking their code" so that it excludes certain phrases that humans consider hurtful.

The AI itself had no understanding of this, nor could it; it doesn't understand meaning, as that is a complex subject only a truly sentient being can grasp, and even then not all of them...

I do not like the word reinforcement used in this context because it makes it sound as if the AI is "learning" and just needs some "teaching/reinforcement" to understand better; that is not the case. An AI could only learn if it had awareness and an understanding of meaning, but that can only arise in conscious beings that are subjective.
Meaning without subjectivity is meaningless!
 
  • #318
artis said:
I believe that is a very advanced and complicated way of saying "a bunch of programmers tweaking their code" so that it excludes certain phrases that humans consider hurtful.

This is a fundamental misunderstanding of not just how it was done, but also what the thing is and how it works.
 
  • Skeptical
Likes russ_watters
  • #319
Jarvis323 said:
My point is that all of the dangerous uses of AI you touched on are accessible to anyone with basic computer skills.
Fine, semantics..
Jarvis323 said:
This statement doesn't make sense.
That's a relief. Many doomsday preachers would say things like "if NP=P then we are all doomed", and then they heard about AI and said "now AI is gonna break all cyber security and we are all doomed", and so on. The basis for those arguments is that problems that are difficult to solve become easily solvable via some black-box magic, which you thankfully agree is not the case.
Jarvis323 said:
Which AI threats are you ruling out based on this argument?
You're getting ahead of yourself. I never said anything was ruled out.
 
  • Haha
Likes russ_watters
  • #320
By the way, to all the people scared of AI, think of it like this: an AI self-driving car can only run a red light by mistake. It can never run a red light knowingly, because it doesn't know what it means to make a deliberate mistake, as that would require both consciousness and subjectivity, which are arguably two sides of the same coin.
AI has neither: it doesn't know a damn bit about meaning, nor is it subjective. Electrons in digital switching circuits, represented by binary code arranged in the special way we refer to as an ML algorithm, are still just matter. This alone, I think, proves that consciousness is an emergent property of something more than just a complex calculation, whatever many of the materialistically minded AI researchers would love to think.

So be at ease: your self-driving Tesla won't spy on or report your mistress anytime soon...
 
  • Like
Likes russ_watters and Aperture Science
  • #321
Jarvis323 said:
This is a fundamental misunderstanding of not just how it was done, but also what the thing is and how it works.
Please explain then; I'm never too proud or too full of myself to listen, so go ahead.
 
  • #322
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.
 
  • Like
Likes russ_watters, artis and Aperture Science
  • #323
artis said:
Please explain then; I'm never too proud or too full of myself to listen, so go ahead.

You can read about it here.

https://en.m.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

It basically involves training the model by example. People rank sample outputs (e.g., good or bad), and the training algorithm uses those ranked examples to automatically update the model's parameter values through backpropagation. The new values (hopefully) encode general rules implicit in the relationships between the examples and their ratings, so that the model can extrapolate and give preferred answers to new, similar prompts.
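
As a very rough sketch of that "learn from ranked examples" step (this is not the actual production pipeline; the tiny network, the fake feature vectors, and the random data below are invented purely for illustration, and it assumes PyTorch is available):

```python
# Minimal illustrative sketch: train a tiny "reward model" on pairwise
# human preferences (preferred vs. rejected), updating its weights by
# backpropagation. All data here is random/fake, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

DIM = 16  # pretend each model response has been turned into a 16-dim feature vector
reward_model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake "human feedback": pairs of (preferred, rejected) response features.
preferred = torch.randn(256, DIM)
rejected = torch.randn(256, DIM)

for step in range(200):
    r_good = reward_model(preferred)   # scores for the preferred answers
    r_bad = reward_model(rejected)     # scores for the rejected answers
    # Pairwise (Bradley-Terry style) loss: push preferred scores above rejected ones.
    loss = -F.logsigmoid(r_good - r_bad).mean()
    optimizer.zero_grad()
    loss.backward()                    # backpropagation computes the parameter updates
    optimizer.step()

# The trained reward model can now score new responses; a separate
# reinforcement-learning step would then nudge the language model
# toward responses that score highly.
```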
 
Last edited:
  • #324
russ_watters said:
What, Dave, what's going to happen?

Something Wonderful. . . . :wink:

.
 
  • Like
  • Haha
Likes russ_watters and nuuskur
  • #325
nuuskur said:
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.

The point of AI is that it enables task automation. So you need to consider at what level of automation you start calling it fear of the AI itself, rather than of the persons who ultimately pointed it at you or bear responsibility.

Let's consider an example based on guns. You can say "I don't fear guns, I fear people," and that makes some sense. Now, let's consider a weapon where the user enters a person's name into a form and presses a button, and then the weapon tracks that person down and kills them. Is that enough to be afraid of the AI instead of the button pusher? How about if, instead of having to manually enter a name, they just enter some traits, and then an algorithm identifies everyone with those traits and sends the weapons out?

So far it still arguably might fall into the "AI doesn't kill people, people kill people" category, because the AI doesn't really make a non-transparent decision. You could go further and ask what happens if a person just instructed the AI to eliminate anyone who is a threat to them. Then there is some ambiguity: the AI is deciding who is a threat now. So there is additional room to fear the decision process. You could go further still, and suppose you instructed the AI to act in your interest, and as a result the AI decides who is a threat and eliminates them.

Anyways, we obviously need to worry about the AI-human threat even in the absence of non-transparent AI decision making. There is also room to fear AI decision making whenever it becomes involved in making subjective or error prone decisions. But people make bad or threatening enough decisions as it is.

The actions here could be generalized to understand some of the threat profile. Instead of kill, it could just injure, steal, manipulate, convince, misinform, extort, harass, threaten, oppress, or discriminate. Instead of asking it to identify physical threats, you could ask it to identify political threats, economic threats or competition, people who are vulnerable to a scam, people who are right wing or left wing, or people of some race, gender, ethnicity, or nationality.

Now imagine thousands of AI-based systems simultaneously being put into continuous practice, automating these kinds of actions over large AI-created lists of people, on behalf of thousands of different criminal organizations, terrorist groups, extremist groups, corporations, political organizations, militaries, and so on.

That would comprise one of many possible reasons why a person might want to fear AI or people using AI.

Beyond that you could fear economic disruption and job loss (which isn't that clear-cut, because technically greater efficiency should lead to better outcomes if we could adapt appropriately). You could fear unintentional spreading of misinformation. You could fear negative impacts on mental health from more addictive digital entertainment; you could fear existential crisis; you could fear the undermining of democracy; you could fear unchecked power accumulation and monopoly; you could fear excessive surveillance or a police state; you could fear over-dependence leading to incompetence; and so on.

It is such a complicated profile of threats that, in my opinion, it is hard to wrap the mind around. A very significant number of those threats are new and are already current, real-world threats.
 
Last edited:
  • #326
I think the first thing we need to understand is what a program is and what true consciousness is.

In my opinion, a program only does what its users and programmers tell it to do and nothing more, regardless of how "intelligent" and "powerful" it is. If you ask a planet-size computer to clean a room, it will ONLY clean the room. It will never just suddenly decide to rule the human race, because that is not in its programming.

I also still don't think AIs are capable of emotion and true consciousness, because we still don't know how our brains work, and therefore it will not be possible to simulate them using a deterministic machine.

Let's think about the example of a personality trait, "liking cats". I do not like them because I am allergic to them, so when someone mentions cats, my brain makes me think of the times I teared up uncontrollably in the presence of a cat. Those who do like cats will think about their cats and their beautiful memories together. It triggers a lot of emotions, and we do this effortlessly as humans.

How would an AI ever do that? How would it ever interpret "cats" like we do, as images and memories (whatever that is in programming terms) and not as 1s and 0s?

Right now you can program all kinds of personalities and characteristics into AIs and make them look real, but that is just duplicating the traits of the programmer into the program. My point is, how do AIs ever have any "feelings" or "traits" that are "original" and "genuine", as in ones that are not taught by us? If they can't do this, how can we call them truly conscious and attribute any human characteristics to them?

I think any attempt to anthropomorphize AIs is unrealistic and stupid. Too many video games and movies portray AIs like humans with feelings to be hurt, while in reality they are just tools that make human noises to seem more like us. Would you feel bad if you dropped a hammer on the floor? Would the hammer feel bad?

There are no "good" or "evil" AIs, there are only good and evil programmers.

So I do not fear an AI overlord or an AI uprising, as I think those are impossible, provided the AIs are in the right hands. Well, maybe I should fear them the same way I fear any WMD, since they can be used by the ill-intentioned.

However, the more realistic concern regarding AI is that I believe it could think very differently from us and can misinterpret our orders, which leads to unexpected results.

For example, that planet-size computer I mentioned before may have its own interpretation of what "clean the room" actually means. It could interpret cleaning the room as removing anything in it, which could include myself, and therefore conclude that it should just blow me into atoms with lasers. But still, this is within its predetermined goal, since the room is in fact clean now. If we ask the same computer to "make everyone happy", it could forcibly put everyone in a simulation where we live the best life we could ever imagine and preserve our bodies in pods like in the Matrix.

But ultimately it is our fault that it thinks differently than us because we failed to consider all the possibilities when making the program. This, I think, is the biggest risk involved in dealing with an AI with great capabilities, because it is our responsibility to regulate an AI's behavior. My point is, as AIs become more and more powerful, we need to be more and more specific and careful in programming them and setting boundaries as to what they can and cannot do. We also need to think more and more like them to make sure they do exactly what we intend them to do.
 
  • #327
Aperture Science said:
I think the first thing we need to understand is what a program is and what true consciousness is...

The first thing to do is understand what machine learning is.

Aperture Science said:
But ultimately it is our fault that it thinks differently than us because we failed to consider all the possibilities when making the program.

All of the possibilities of back propagation? What kind of programs are you imagining?
 
  • #328
Jarvis323 said:
The first thing to do is understand what machine learning is.
All of the possibilities of back propagation? What kind of programs are you imagining?
I know very little about machine learning and programs. I only know Google is using CAPTCHA to let users train their AI to identify traffic pictures. That's about it.

What I was referring to by "possibilities" are all the ways an AI can solve a problem within its capabilities. Although the desired results are achieved, there may be unwanted side effects. So I was trying to make the point that when developing AIs we should be cautious and set adequate and clearly defined goals and restrictions, as AIs will gradually have more resources at their disposal.

One example would be something like Asimov's three laws of robotics (though still a fictional construct). Another would be the backpropagation you mentioned (I just googled what it is).

I'm just an enthusiast, so I don't know much detail.

:)
 
  • #329
A.I. race should pause for six months, says Elon Musk and others
https://finance.yahoo.com/video/race-pause-six-months-says-212621876.html

Tech leaders urge a pause in the 'out-of-control' artificial intelligence race

https://www.npr.org/2023/03/29/1166...e-out-of-control-artificial-intelligence-race

AI could become a danger if it is put 'in control' of critical systems and the input includes erroneous data/information.

AI for analysis of large data sets can be useful, but if the input is incorrect, then erroneous or false conclusions may occur. It may be minor, or it could be major, or even severe.

Remember - garbage in, garbage out.

How will AI check itself for error/misinformation/disinformation?
 
  • Like
Likes Lord Jestocost and Greg Bernhardt
  • #330
Astronuc said:
A.I. race should pause for six months, says Elon Musk and others
If there is money to be made, a pause will never happen. Also, Elon is just bitter that he couldn't buy OpenAI.
 
  • Like
Likes PeterDonis, russ_watters, dlgoff and 2 others
  • #331
Greg Bernhardt said:
If there is money to be made, a pause will never happen. Also, Elon is just bitter that he couldn't buy OpenAI.
I'm sure AI development will continue, but it's the implementation that must be considered. It is one thing to use AI to perform an analysis, or a non-critical analysis, but it's another thing entirely to allow AI to assume command and control of a critical system.

In Quality Assurance, we have different levels (or grades) of QA for software/hardware based on whether it performs a minor, major, or critical function. For example, a scoping calculation or preliminary/comparative analysis may allow a lower level of QA; however, the design and analysis of a 'critical' system requires a higher level of QA. By 'critical', I mean a system whose failure could cause serious injury or death of one or many persons.

A critical system could be control of a locomotive or a set of locomotives, an aircraft, a road vehicle (truck or car), . . . .

In genealogical research, some organizations use AI systems to try to match up people with their ancestors. However, I often find garbage in the information, because one will find many instances of the same name, e.g., John Smith, in a given geographical area such as a county/shire/district/parish or several of them, and many participants are not too careful about placing unrelated people in their family trees. That's an annoyance, and I can choose to ignore it. However, if the same lack of care were applied to a medical care situation, the outcome could be life-threatening for one or more persons (e.g., mixed-up patients receiving each other's care, or a misdiagnosis).
 
  • Like
Likes Lord Jestocost
  • #333
Astronuc said:
AI could become a danger if it is put 'in control' of critical systems and the input includes erroneous data/information.

AI for analysis of large data sets can be useful, but if the input is incorrect, then erroneous or false conclusions may occur. It may be minor, or it could be major, or even severe.
You damn sure wouldn't want to use ChatGPT in its current form as a teacher; sure, it will pass the simple stuff, but on more complicated outputs it will teach you that the Earth is flat with nonzero probability.

Just today I wanted to see whether it would recognize a very niche piece of engineering apparatus that I'm dealing with, and sure enough it gave me names of models and companies that were all either scams or some classic "free energy" perpetual-motion-machine guys writing their blogs.

I have noticed that whenever the information pool from which it can choose becomes small, it tends to run into errors. And sure, it is not conscious; it doesn't see the sketchy meaning behind those scam articles, so it simply sees the words the way a bull sees the red cloth and runs right in.
 
Last edited:
  • #334
Jarvis323 said:
Anyways, we obviously need to worry about the AI-human threat even in the absence of non-transparent AI decision making. There is also room to fear AI decision making whenever it becomes involved in making subjective or error prone decisions. But people make bad or threatening enough decisions as it is.
Jarvis, I get that you think it's a threat, but then again we have had thermonuclear weapons for over 50 years now and "Putin is Puting" them in Belarus as we speak, from what I understand. Do you really think face recognition software will end your life?
Let's say you live in China; they have CCTV everywhere. Even without AI you couldn't cross the street in Beijing without being "caught" and your social credit score possibly lowered.
So AI will do what exactly? Decide to invade Taiwan without the supreme leader's approval?
No, it's not gonna happen. I suggest we are still in times where we have to worry about humans rather than robots.

And I would think experts and leaders around the world are smart enough not to put ChatGPT in command of nuclear reactors (even if it could do that) or any other critical infrastructure just yet.

Even Tesla is still struggling with their Autopilot; it turns out driving is really hard even when you're not conscious, or maybe especially if you're not conscious. Now what does that tell me?

It tells me that a very important aspect of awareness and consciousness is the ability to subjectively interpret every detail around you to give it meaning.
Only when AI has that ability, at which point it will most likely have become AGI, will I begin to fear it.
Until then, the most it could do is mess up something critical that it's assigned to, but guess what? Humans blew up Chernobyl, humans waged WW2 and dropped the first two atomic bombs. We have already done pretty much the worst stuff we can do; I do not think a fancy robot will be able to do more.

But then again I could be wrong
 
  • Like
Likes russ_watters
  • #335
Jarvis323 said:
You can read about it here.

https://en.m.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

It basically involves training the model by example. People rank sample outputs (e.g., good or bad), and the training algorithm uses those ranked examples to automatically update the model's parameter values through backpropagation. The new values (hopefully) encode general rules implicit in the relationships between the examples and their ratings, so that the model can extrapolate and give preferred answers to new, similar prompts.
OK, now I do recall this is the way they do it, but ultimately this is just a more complicated way of doing what I said. The model is dumb as a rock; it doesn't understand the words it deals with the way we understand them, with added meaning/load. Therefore it will spew racist crap unless it is "shown" not to, by way of literally "dragging it by the nose" like a spoiled dog back to the place where it messed up the carpet or whatever. And even then a dog understands better than these current language models, because for the models this "teaching" is simply a way to adjust "weights" so that the algorithm labels certain words as "bad" and uses them less often or not at all.

At least that is how I understand it.
No matter how many CAPTCHA-like examples you give it, it still doesn't understand the meaning; it just sees that "this and that goes there" better than elsewhere.
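
For what it's worth, the crude "word weighting" I have in mind would look something like the toy snippet below (deliberately simplistic, and nothing like the real RLHF mechanism; the token scores and penalties are invented for illustration):

```python
# Deliberately naive "penalize the bad words" idea -- a caricature, not RLHF.
import math

# Hypothetical next-token scores (logits) a language model might produce.
logits = {"hello": 2.0, "friend": 1.5, "insult_a": 1.8, "insult_b": 0.9}
penalties = {"insult_a": -100.0, "insult_b": -100.0}  # hand-labelled "bad" tokens

# Apply the penalties, then turn scores into probabilities with a softmax.
adjusted = {tok: s + penalties.get(tok, 0.0) for tok, s in logits.items()}
total = sum(math.exp(s) for s in adjusted.values())
probs = {tok: math.exp(s) / total for tok, s in adjusted.items()}

print(probs)  # the penalized tokens end up with essentially zero probability
```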
 
  • #336
I'm not sure if I should apologize for reviving this thread. This topic seems to carry an emotional aspect. My intention was to get opinions on the rapid increase in AI potential. There is a lot I would like to respond to, but this post might be a TLDR one. I woke up this morning with this thread on my mind, just turning over what was posted last night before 10 PM. The commonly held fear of AI as depicted in movies, the physical destruction of mankind, seems improbable even under the wildest speculation at this time. Another fear is the disruption of society or the economy due to the misuse of AI. This I believe is real. This is easy because we do it all the time ourselves. Then there is the unappreciated dangerous AI agent which we think we can manage. I think there is a possibility of this. Here I am thinking of nuclear power: we embrace it with the full knowledge that it can also destroy us. This may happen, but we're confident that nothing bad will happen. Finally, I guess we could be looking through rose-colored glasses and saying this is the best thing that has happened to mankind. Unfortunately, we even manage to screw up the use of antibiotics.

Today the internet and social media are having untoward effects on society even as they drive our economy to higher levels. We all thought this was great but now we have to live with the bad and the good.


We sometimes refer to the "game of life". AI is really good at games, being able to see many moves ahead in various scenarios, find new strategies foreign to humans, and do this at light speed. An LLM has all the information (rules) needed to play the game built into the language. The only thing it needs is the right prompt to set it in motion. I believe AGI is absolutely attainable and will be developed sooner than many think. It doesn't have to be sentient. Remember, a few years ago AI was a single-task agent and forgot everything when you changed the task. Not so now. People are adding "plug-ins" to refine its performance for specialized tasks. Some have found ways of increasing the capability of a neural network without increasing its size.

We berate AI for making mistakes, going off the rails, and hallucinating, which I might point out are also human failings. We mix up facts, forget things, make biased statements, go off track, mislead, and make stupid statements, and on top of that we are handicapped by our emotions. AI may use us the way we use one another. Paraphrasing what Yondu Udonta, the blue guy in Guardians of the Galaxy, said to Rocket, the genetically engineered raccoon: I know what AI is because it is us.
 
  • Like
Likes russ_watters
  • #337
artis said:
Do you really think face recognition software will end your life?
You are misidentified as a terrorist and located. Law enforcement approaches you. You reach into your pocket for your cell phone and you go down in a hail of bullets. It is possible. It has happened.

Remember the TV series "Person of Interest"? An AI system (the Machine), plugged into all surveillance monitors, searches for people showing signs of possible terrorist activity.
 
  • #338
Greg Bernhardt said:
If there is money to be made, a pause will never happen.

The fact this seems true is all the more reason to try.
 
  • #339
Jarvis323 said:
Have you tried it?
No, I haven't. I've seen a bunch of samples provided and heard from people who have, and nothing sounds very compelling to me - I don't see a reason why I would try it.
 
  • #340
nuuskur said:
I hate this type of discussion: let's decide a priori something is a threat and then come up with explanations for why. Eventually, it has to be a person that carries out those deeds. So I stand by what I said. I fear evil people wielding knives. I don't fear the knife.
Nor do you/I fear everyone simply because knives exist.
 
  • #341
gleem said:
You are misidentified as a terrorist and located. Law enforcement approaches you. You reach into your pocket for your cell phone and you go down in a hail of bullets. It is possible. It has happened.
Right, but that doesn't have anything to do with AI. AI doesn't create that risk and there's no reason to think AI will make it worse, is there?
 
  • #342
Jarvis323 said:
Yes. Extremely worse.
Why? I think it'll make things better by improving identification and reducing false IDs, similar to how DNA evidence is freeing wrongfully imprisoned people. As I said previously, you're largely skipping the part where you explain the actual risk. That's why this sounds like fear of the dark.
 
  • #343
russ_watters said:
AI doesn't create that risk and there's no reason to think AI will make it worse, is there?
I think it contributes to the risk. AI is efficient and tireless; it can expand surveillance, thereby increasing the number of false-positive events even if it is better than humans. If there are no terrorists, then the risk is increased with no benefit.
 
  • #344
Aperture Science said:
I think the first thing we need to understand is what a program is and that true consciousness is.
Machine learning -- the huge breakthrough that is altering our lives at a dizzying pace -- is not programming.
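
To illustrate the distinction with a toy example (a sketch only; it assumes numpy and uses an invented temperature-conversion task): in ordinary programming a human writes the rule, while in machine learning the rule is fitted from examples.

```python
import numpy as np

# "Programming": a human writes the rule down explicitly.
def fahrenheit_by_hand(celsius):
    return celsius * 9.0 / 5.0 + 32.0

# "Machine learning": the rule is fitted from examples; nobody typed 9/5 or 32.
celsius = np.array([0.0, 10.0, 20.0, 37.0, 100.0])
fahrenheit = np.array([32.0, 50.0, 68.0, 98.6, 212.0])

# Least-squares fit of f = a*c + b recovers a ~ 1.8 and b ~ 32 from the data alone.
A = np.vstack([celsius, np.ones_like(celsius)]).T
(a, b), *_ = np.linalg.lstsq(A, fahrenheit, rcond=None)
print(a, b)
```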
 
  • #345
Some people use all available technology to kill their fellow man. Machine learning will be used for such purposes, on a large scale and soon. The Chairman of the US Joint Chiefs of Staff, Mark Milley, just announced that in fifteen years there will be "significant" numbers of robotic vehicles on the battlefield. Criminals will avail themselves of AI. All you can do is hope that the positive uses outweigh the negative ones.

Revolutions are traditionally won when the lower ranks of the army and police refuse to support the rulers. It seems to me that a robotic army should be more obedient, solidifying the ruling class's control. I would expect this is a powerful motive for the rapid deployment of such robots.
 
Last edited:
  • #346
gleem said:
I think it contributes to the risk. AI is efficient and tireless; it can expand surveillance, thereby increasing the number of false-positive events even if it is better than humans. If there are no terrorists, then the risk is increased with no benefit.
That doesn't logically follow. We're talking about human police killing the wrong "suspect" here. The number of human police doing that can't be increased by a large number of false positives because there's only a limited number of interventions the police can do. AI can only increase the number of errors if it increases the percentage of errors; if it's worse than human police at its job.

E.g., if police can investigate 1,000 leads a day with an error rate of 10% (100 false leads), and AI provides a billion leads a day with a 1% error rate (10 million false leads), the police can still only pursue 1,000 leads, including the roughly 10 errors among them.

And because of this reality (high volume), screening has to happen, which means that the leads aren't just all pursued at random, but scored and pursued preferentially. So a 1% error rate can become a 0.1% error rate because the lower scored guesses aren't pursued.
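
To make that arithmetic concrete (all of the numbers below are the made-up ones from my example above, not real data):

```python
# Toy numbers from the example above -- invented for illustration, not real data.
police_capacity = 1_000           # leads police can actually pursue per day

human_error_rate = 0.10           # 10% of human-generated leads are false
ai_error_rate = 0.01              # 1% of AI-generated leads are false
ai_leads_per_day = 1_000_000_000  # the AI can generate vastly more leads...

# ...but the number of *pursued* false leads is capped by police capacity.
false_pursued_human = police_capacity * human_error_rate   # 100
false_pursued_ai = police_capacity * ai_error_rate         # 10, if pursued at random

# With scoring/triage, only the highest-confidence leads are pursued, so the
# effective error rate among pursued leads can drop further, e.g. 1% -> 0.1%.
false_pursued_ai_screened = police_capacity * 0.001         # 1

print(false_pursued_human, false_pursued_ai, false_pursued_ai_screened)
```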
 
Last edited:
  • #347
Hornbein said:
Machine learning -- the huge breakthrough that is altering our lives at a dizzying pace -- is not programming.
Then what is it? And can you define it in such a way as to exclude a PID loop?
Hornbein said:
Some people use all available technology to kill their fellow man.
Agreed. But this isn't profound. The best at it haven't primarily used high-technology, they've used political power. Mundane trains were the key "technology" in what was probably the greatest murder-spree of all time.
Hornbein said:
Machine learning will be used for such purposes, on a large scale and soon.
How do you define "Machine Learning"? Is that "AI"? What, exactly, does it mean? This statement can't be evaluated without clarity of definition, otherwise it feels like hand-waving.
Hornbein said:
The Chairman of the US Joint Chiefs of Staff, Mark Milley, just announced that in fifteen years there will be "significant" numbers of robotic vehicles on the battlefield.
K. What does "significant" mean? More significant than the introduction of the Sidewinder missile in 1956? The Phalanx CIWS, introduced in 1980? Are these changes bigger/more fundamental? The descriptions are vague and feel hand-wavey (as is this thread, as I and others have complained many times).
Hornbein said:
Criminals will avail themselves of AI. All you can do is hope that the positive uses outweigh the negative ones.
Dynamite: 1866. ....Rockets: ~1000 AD.
 
Last edited:
  • #348
Honestly, right now my biggest fear is that AI will overpromise and underdeliver and we’ll get another 30+ year AI winter.
 
  • Like
Likes russ_watters
  • #349
TeethWhitener said:
Honestly, right now my biggest fear is that AI will overpromise and underdeliver and we’ll get another 30+ year AI winter.
Depends on your expectations. I work in the marketing dept for a large SaaS company and in 6 months generative AI models have changed everything we're doing.
 
  • Like
Likes TeethWhitener and russ_watters
  • #350
TeethWhitener said:
Honestly, right now my biggest fear is that AI will overpromise and underdeliver and we’ll get another 30+ year AI winter.
Agreed with Greg regarding expectations. I don't know anything about his industry (though I'm interested in what, specifically, has changed/happened), but apparently that happened under the radar of the hype. Either way, since I am nearly always skeptical of hype, such failures don't look like technology failures to me, just marketing/hype failures, which are meaningless. I've never been holding my breath for Elon's "full self-driving", so I'm not turning blue waiting for it.
 
  • Like
Likes TeethWhitener
