Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter Isopod
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #106
"Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind."

"Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equaled to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey."


from: https://www.ibm.com/cloud/learn/what-is-artificial-intelligence#toc-deep-learn-md_Q_Of3
 
  • Informative
Likes Oldman too
  • #107
PeroK said:
This is also garbage. How can the world change technologically significantly several times a month? Whoever wrote this has modeled progress as a simple exponential without taking into account the non-exponential aspects like return on investment. A motor manufacturer, for example, cannot produce an entire new design every day, because they cannot physically sell enough cars in a day to get return on their investment. We are not buying new cars twice as often in 2021 as we did in 2014. This is not happening.

You can't remodernise your home, electricity, gas and water supply every month. Progress in these things, rather than change with exponential speed, has essentially flattened out. You get central heating and it lasts 20-30 years. You're not going to replace your home every month.

You're analyzing the future in the context of its past. That just doesn't work. There might be no such thing as investment, returns, and selling as we see them now.

For example, what limitations would those constraints really impose when you require zero human labor to design, manufacture, distribute, dispose of, clean up, and recycle things, and have essentially unlimited resources, and can scale up as large as you want extremely fast, limited mainly by what you have in your solar system? And then after that, how long to colonize the nearby star systems?

The fact is that near-future technology could easily and suddenly make these things possible. Your house and car could easily be updated weekly, or even continuously each minute, and for free, just as easily as your computer downloads and installs an update.

And AI superintelligence isn't needed for that, just an AI with pretty good intelligence. The superintelligence part may be interesting too, but I'm not sure exactly what more could be done with more intelligence that couldn't be done otherwise. I'd guess things like math breakthroughs, medical breakthroughs, maybe immortality, maybe artificial life, or nano-scale engineering that looks like life.

Some other things to expect are cyborgs, widespread use of human genetic engineering, and ultra-realistic virtual worlds with haptics or direct brain interfaces that people become really addicted to.

I don't know how to measure technological advancement as a scalar value, though. But I think Kurzweil is probably about right in the big picture.
 
Last edited:
  • #108
Lol, this is classic. . . . :wink:



 
  • Like
Likes sbrothy and Oldman too
  • #109
  • Like
  • Haha
Likes Chicken Squirr-El, Bystander and BillTre
  • #110
OCR said:
Lol, this is classic. . . . :wink:




Man. Kids and their computers. I'm flabbergasted. :)
 
  • #111
PeroK said:
This is garbage.
Agreed. It's dated 2015, but it includes a Moore's Law graph with real data ending in 2000 and projections for the next 50 years. The law had already been dead for a decade before the post was written! (Note: that was a cost-based graph, not strictly power or transistor count vs. time.)

The exponential growth/advancement projection is just lazy bad math. It doesn't apply to everything and with Moore's law as an example, it's temporary. By many measures, technology is progressing rather slowly right now. Some of the more exciting things like electric cars are driven primarily by the mundane: cheaper batteries due to manufacturing scale.

AI is not a hardware problem (not enough power); it is a software problem. It isn't that computers think too slowly, it's that they think wrong. That's why self-driving cars are so difficult to implement. And if Elon succeeds, it won't be because he created AI; it will be because he collected enough data and built a sufficiently complex algorithm.
 
  • Like
Likes Oldman too and PeroK
  • #112
Hey, just want to say that I only posted this for "fun read" purposes, as noted by sbrothy, and I definitely don't agree with everything in it. This is the "Science Fiction and Fantasy Media" section, after all, and I did not intend to ruffle so many feathers over it.

I get irritated when fiction always has the AI behave with human psychology and the WBW post touched on that in ways I rarely see.

Slightly related (and I'm pretty sure there are plenty of threads on this already), but I'm a huge fan of this book: https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

Highly recommend!
 
  • Like
Likes russ_watters
  • #113
Chicken Squirr-El said:
I read that a year or two ago. I loooove the vampire concept.

But I'm not really a sci-fi horror fan. If you want sci-fi horror, read Greg Bear's Hull Zero Three. This book literally haunts me. (I get flashbacks every time I see it on the shelf, and I've taken to burying it where I won't see it.)
 
  • Informative
Likes Oldman too
  • #114
russ_watters said:
It doesn't apply to everything and with Moore's law as an example, it's temporary.
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
 
  • #115
Algr said:
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
No, Moore's Law broke down* right around the time (just after) AMD beat Intel to 1 GHz in 2000. Monopoly or not, you need to sell your products to make money, and one big contributor to the decline of PC and software sales is that there's no good reason to upgrade when the next version is barely any better than the last.

*Note: there are different formulations/manifestations, but prior to 2000 for PCs it was all about clock speeds. Afterwards, they started doing partial work-arounds to keep performance increasing (like multiple cores).
 
  • #116
Algr said:
Just a side thought. Could it be that technological progress for microchips slowed down when Intel no longer had competition from the PowerPC architecture? Now that ARM is making waves, things might catch up to Moore's law again?
The point of the criticisms is that, in real world scenarios, nothing progresses geometrically for an unlimited duration. There always tends to be a counteracting factor that rises to the fore to flatten the curve. The article even goes into it a little later, describing such progress curves as an 'S' shape.
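As a rough illustration of that 'S'-shaped flattening (all the numbers below are hypothetical, chosen only to show the shape), here is a minimal sketch comparing a pure exponential projection with a logistic curve that saturates once a counteracting factor dominates:

```python
# Minimal sketch (hypothetical numbers): an exponential projection vs. a
# logistic (S-shaped) curve that flattens as a limiting factor takes over.
import math

carrying_capacity = 1000.0   # assumed ceiling imposed by the counteracting factor
growth_rate = 0.5            # assumed per-step growth rate for both curves

for t in range(0, 21, 5):
    exponential = math.exp(growth_rate * t)
    logistic = carrying_capacity / (1.0 + (carrying_capacity - 1.0) * math.exp(-growth_rate * t))
    print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:8.1f}")
```

The two curves track each other early on, which is why extrapolating from the early data looks exponential, but the logistic one levels off near its ceiling instead of growing without bound.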
 
  • Like
Likes Oldman too, Klystron and russ_watters
  • #117
OCR said:
Lol, this is classic. . . . :wink:




I really didn't do this little film justice in my first comment. The "spacetime folding" travel effects are truly amazing. And what a nightmare.
 
  • Like
Likes OCR
  • #118
OCR said:
Lol, this is classic. . . . :wink:
The crisis in that film is that the machine has final authority on deciding what constitutes "harm", and thus ends up doing pathological things, including denying the human any understanding of what is really going on.
 
  • Like
  • Informative
Likes sbrothy and Oldman too
  • #119
OCR said:
Lol, this is classic. . . . :wink:


Turing's Halting Problem, personified.
 
  • Like
  • Informative
Likes sbrothy and Oldman too
  • #120
russ_watters said:
AI is not a hardware problem (not enough power) it is a software problem. It isn't that computers think too slow it's that they think wrong. That's why self-driving cars are so difficult to implement. And if Elon succeeds it won't be because he created AI it will be because he collected enough data and built a sufficiently complex algorithm.
Right now I think it is largely a combination of a hardware problem and a data problem. The more and better the data the neural networks are trained on, the better AI gets. But training on such vast amounts of data is costly. So it is really a matter of collecting data and training the neural networks with it.

AI's behavior is not driven by an algorithm written by people; it's a neural network that has evolved over time to learn a vastly complex function which tells it what to do. That function is currently too complex for people to break down and understand. So nobody is writing complex algorithms that make AI succeed; they are just feeding data into it and coming up with effective loss functions that penalize the results they don't like.

But there is possibly a limit to how far that can take us. There is also an evolution of architecture, transfer learning, and neuro-symbolic learning, which may spawn breakthroughs or steady improvements beyond pure brute-force data consumption.
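To make the "feed data plus a loss function" idea concrete, here is a minimal sketch in plain NumPy (the data, labels, and network here are made up purely for illustration). The only thing a person writes is the training loop; the behavior lives in the learned weights:

```python
# Minimal sketch of training-by-loss-function: nobody codes the behavior,
# the weights are adjusted to reduce a penalty on bad guesses.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                              # hypothetical training inputs
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)    # hypothetical labels

W = rng.normal(scale=0.1, size=3)   # the network's weights, learned rather than hand-coded
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(X @ W + b)          # forward pass: evaluate the learned function
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))  # penalize bad guesses
    grad = p - y                    # gradient of the cross-entropy loss
    W -= lr * (X.T @ grad) / len(y) # adjust weights to reduce the loss
    b -= lr * grad.mean()

print("final loss:", loss)
```

A real system is vastly larger and uses automatic differentiation, but the structure is the same: data in, loss down, weights out.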
 
Last edited:
  • Informative
Likes Oldman too
  • #121
Moore's Law, I agree, is not a good model going into the future. But that doesn't stop people from trying to forecast improvements in computing power. Technologies like room-temperature superconductors, carbon-based transistors, quantum computing, etc. will probably change the landscape. If we crack fusion energy, then suddenly we have a ton of energy to use as well.

But in my opinion it also doesn't make much sense to focus just on things like how small a transistor can be and how efficiently you can compute in terms of energy, because AI already gives us the ability to just build massive computers in space.

Quantum computing however does have the chance to make intractable problems tractable. There are problems which would take classical computers the age of the universe to solve that quantum computers could theoretically solve within a lifetime. A jump from impossible to possible is quite a bit bigger than Moore's law.
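As a back-of-the-envelope illustration of that "age of the universe vs. a lifetime" jump (the problem size, machine speed, and the 2^n vs. n^3 scalings below are purely hypothetical, not tied to any specific algorithm):

```python
# Rough sketch: a hypothetical problem where a classical search needs 2**n steps
# while a quantum algorithm needs roughly n**3 steps. Numbers are illustrative only.
n = 200                       # assumed problem size
ops_per_second = 1e12         # assumed machine speed: one trillion operations/second
seconds_per_year = 3.15e7

classical_years = (2 ** n) / ops_per_second / seconds_per_year
quantum_years = (n ** 3) / ops_per_second / seconds_per_year

print(f"classical: ~{classical_years:.2e} years")   # vastly exceeds the universe's ~1.4e10 years
print(f"quantum:   ~{quantum_years:.2e} years")     # effectively instantaneous
```

No amount of Moore's-Law-style doubling closes a gap like that; only a change in the scaling itself does.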

So then, when these future technologies can potentially produce massive leaps forward that make Moore's Law look like nothing, what about the progress it took to develop those technologies in the first place? Sure, the unlocked capability is a step function, but in terms of advancement, do we just draw a step function, or do we count the intermediate progress that got us there? There are a ton of scientific breakthroughs getting us closer happening constantly nowadays, even if most people aren't paying much attention.
 
Last edited:
  • #122
Jarvis323 said:
Right now I think it is largely a combination of a hardware problem and a data problem. The more/better data the neural networks are trained on, the better AI gets. But training is costly with the vast amount of data. So it is really a matter of collecting data and training the neural networks with it...

But there is possibly a limit how far that can take us. There is also an evolution of architecture and transfer learning, and neuro-symbolic learning, which may spawn breakthroughs or steady improvements besides just pure brute force data consumption.
I think you may have missed my point because you basically just repeated it with different wording. Yes, I know it is being approached as a hardware and data problem. But humans don't think by accessing vast data archives, taking measurements with precise sensors and doing exact calculations.
 
Last edited:
  • #123
russ_watters said:
I think you may have missed my point because you basically just repeated it with different wording.
"Imitation is the sincerest form of flattery." --Old proverb.
 
  • Like
Likes BillTre and russ_watters
  • #124
Jarvis323 said:
Moore's I agree is not a good model going into the future. But it doesn't stop people from trying to forecast improvements in computing power. Technologies like room temperature superconductors, carbon based transistors, quantum computing, etc. will probably change the landscape.
It does make it much harder to predict when, instead of steady, continuous (predictable) advances, you're waiting for a single vast advancement that you don't know when, if ever, will come. And I'm not sure people even saw many of the biggest advances coming (such as the computer itself).

Jarvis323 said:
If we crack fusion energy, then suddenly we have a ton of energy to use as well.
Very doubtful. Fusion is seen by many as a fanciful solution to our energy needs, but the reality is likely to be expensive, inflexible, cumbersome and maybe even unreliable and dangerous. And even if fusion could generate power at, say, a tenth of today's cost, generation is only around a third of the cost of electricity; the rest is in getting the electricity to the user. Fusion doesn't change that problem at all. And not for nothing, but we already have an effectively limitless source of fusion power available. As we've seen, just being available isn't enough to make it a panacea.

Also, it's not power per se that's a barrier for computing power, it's heat. A higher end PC might cost $2000 and use $500 a year in electricity if run fully loaded, 24/7. Not too onerous at the moment. But part of what slowed advancement was when they reached the limit of what air cooling could dissipate. It gets a lot more problematic if you have to buy a $4,000 cooling system for that $2,000 PC (in addition to the added energy use). Even if the electricity were free, that would be a tough sell.
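For what it's worth, a quick sanity check of that ballpark annual cost (the 500 W full load and the $0.12/kWh rate below are assumptions for illustration, not figures from the post above):

```python
# Rough check of a "roughly $500 a year" electricity figure for a PC run
# fully loaded 24/7, under assumed wattage and rate.
watts = 500
hours_per_year = 24 * 365
rate_per_kwh = 0.12

kwh_per_year = watts * hours_per_year / 1000
annual_cost = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> ${annual_cost:.0f}/year")   # roughly $525/year
```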
 
  • #125
Heh. Managed to get a topical comic in after all. (A few of the previous ones are pretty good too. He must have had a good week.)
 
  • #126
russ_watters said:
But humans don't think by accessing vast data archives, taking measurements with precise sensors and doing exact calculations.
Who is to say humans have less precise sensors or that our calculations are less exact?
 
  • #127
Jarvis323 said:
Who is to say humans have less precise sensors or that our calculations are less exact?
Me? Honestly, I don't see how this is arguable. What's the exact color of the PF logo? How fast was the ball I just threw? Maybe we're talking past each other here, so if you're trying to say something else, could you elaborate?
 
  • Like
Likes Oldman too and BillTre
  • #128
russ_watters said:
Me? Honestly, I don't see how this is arguable. What's the exact color of the PF logo? How fast was the ball I just threw? Maybe we're talking past each other here, so if you're trying to say something else, could you elaborate?
Just because your conscious mind can't give precise answers doesn't mean your sensors and brain's calculations are at fault. You probably can catch a ball if someone tossed it to you and you don't need to consciously calculate trajectories and the mechanics of your hands. But you do do the necessary calculations. AI is the same. If you train a neural network to catch a ball, it will learn how to do it and it probably won't do it like a physics homework problem.

In the same way, when you see the color, maybe you can't recite the RGB component values, and some people can't even see in color, but biological eyes are certainly not inferior sensors to mechanical ones in my opinion, within the scope of their applicability. And I'm not sure what technology can compete with a nose.

Of course we can equip AI with all kinds of sensors we don't have ourselves, but that's pretty much beside the point.

And what does it mean to say our brain doesn't do exact calculations? Does it mean there is noise, interference, or randomness, or that it doesn't obey the laws of physics?

AI is based on complex internal probabilistic models. So they guess. Maybe which guess they give is consistent if they've got a static internal model that has stopped learning, but they still guess. The main difference with humans is that we don't just guess immediately; we second-guess and trigger internal processing when we're not sure.

It might be possible for AI to also improve its guesses at the expense of slower response time, but a general ability to do this is not a solved problem as far as I know.
 
Last edited:
  • Skeptical
Likes BillTre
  • #129
Jarvis323 said:
Just because your conscious mind can't give precise answers doesn't mean your sensors and brain's calculations are at fault.
That isn't what you or I said before - it sounds like exactly the opposite of your prior statement:
Who is to say humans have less precise sensors or that our calculations are less exact?
So I agree with your follow-up statement: our conscious mind can't make precise measurements/calculations. Yes, that matches what I said.
You probably can catch a ball if someone tossed it to you and you don't need to consciously calculate trajectories and the mechanics of your hands. But you do do the necessary calculations.
That sounds like a contradiction. It sounds like you think that our unconscious mind is a device like a computer that makes exact calculations. It's not. It can't be. The best basketball players after thousands of repetitions can hit roughly 89-90% of free throws. If our unconscious minds were capable of computer-like precision, then we could execute simple tasks like that flawlessly/perfectly - just like computers can.
AI is the same. If you train a neural network to catch a ball, it will learn how to do it and it probably won't do it like a physics homework problem.
Again, I agree with that. That's my point. And I'll say it another way: our brains/sensors are less precise and we make up for it by being more intuitive. So while we are much less precise for either simple or complex tasks, we require much less processing to be able to accomplish complex tasks. For computers, speed and precision works great for simpler tasks (far superior to human execution), but has so far been an impediment to accomplishment of more complex tasks.
 
  • #130
russ_watters said:
Again, I agree with that. That's my point. And I'll say it another way: our brains/sensors are less precise and we make up for it by being more intuitive. So while we are much less precise for either simple or complex tasks, we require much less processing to be able to accomplish complex tasks. For computers, speed and precision works great for simpler tasks (far superior to human execution), but has so far been an impediment to accomplishment of more complex tasks.
Maybe we're not talking about the same thing. You seem to be talking about computers and algorithms; I've been talking about neural networks. Trained neural networks do all their processing immediately. Sure, a network may have learned how to shoot a basket better than a person, but humans have a lot more tasks we have to do. If one neural network could do a half-decent job shooting baskets and also do lots of other things well, that would be a huge achievement in the AI world.

Really, it's humans who do a lot of complex processing to complete a task, and giving AI that ability is a primary challenge for improving it, because the AI has to know what extra calculations it can do and how to reason about things it doesn't already know. The ability to do this in some predetermined cases, in response to a threshold on a sensor measurement, exists of course, but that isn't AI.
 
  • #131
Jarvis323 said:
Maybe we're not talking about the same thing. You seem to be talking about computers and algorithms. I've been talking about neural networks.
What we were just talking about is precision/accuracy of the output, regardless of how the work is being done.
Trained neural networks do all their processing immediately.
What does "immediately" mean? In zero time? Surely no such thing exists?
Sure it may have learned how to shoot a basket better than a person. But humans have a lot more tasks we have to do.
Yes. Another way to say it would be sorting and prioritizing tasks and then not doing (or doing less precisely) the lower priority tasks. That vastly reduces the processor workload. It's one of the key problems for AI.
If one neural network could do a half decent job shooting baskets and also do lots of other things well, that would be a huge achievement in the AI world.
Yes.
 
  • #132
russ_watters said:
What does "immediately" mean? In zero time? Surely no such thing exists?

I mean there is just one expression, which is a bunch of terms with weights on them, and for every input it gets, it just evaluates that expression and then makes its guess. It doesn't run any algorithms beyond that. Of course, you could hard-code some algorithm for it to run in response to an input. And one day maybe they could even come up with their own algorithms.
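A minimal sketch of what "one expression with weights on the terms" looks like in code (the weights here are hypothetical stand-ins for values a real network would have learned):

```python
# Inference in a trained network is just evaluating a fixed function of the input.
import numpy as np

W1 = np.array([[0.3, -1.2], [0.8, 0.4], [-0.5, 0.9]])   # learned weights, layer 1 (hypothetical)
b1 = np.array([0.1, -0.2])
W2 = np.array([0.7, -1.1])                               # learned weights, layer 2 (hypothetical)
b2 = 0.05

def predict(x):
    h = np.maximum(0.0, x @ W1 + b1)   # weighted sums plus a simple nonlinearity
    return h @ W2 + b2                 # the network's "guess"; no further algorithm runs

print(predict(np.array([1.0, 0.5, -0.3])))
```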

russ_watters said:
Yes. Another way to say it would be sorting and prioritizing tasks and then not doing (or doing less precisely) the lower priority tasks. That vastly reduces the processor workload. It's one of the key problems for AI.

I wouldn't view it that way exactly, although that could be possible. The problem for a neural network, I think, is that it needs one model that gives good guesses for all of the different inputs. The model emerges by adjusting weights on terms to try to minimize the error according to the loss function. So we also have to come up with a loss function, which ends up dictating how much the neural network cares about its model being good at basketball or not.

The problem is that there is a whole world out there of things to worry about, and there are only so many terms in the model, only so much of the world has been seen, and only so much time to practice and process it all. The network ultimately is a compressed model, which has to generalize. When it shoots a basketball, it's using neurons it also uses to comb its hair and play chess. And when it does a bad job combing its hair, it makes changes that can also affect its basketball-shooting ability.
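A minimal sketch of how a loss function ends up dictating how much one shared model "cares" about each task (the tasks and weights below are hypothetical, chosen only to illustrate the trade-off):

```python
# One scalar objective over shared parameters: the w_* weights are the designer's
# choice of how much each task matters, so improving one term can move the shared
# weights in a way that hurts the others.
def total_loss(basketball_error, hair_combing_error, chess_error,
               w_basketball=1.0, w_hair=0.1, w_chess=0.5):
    return (w_basketball * basketball_error
            + w_hair * hair_combing_error
            + w_chess * chess_error)

print(total_loss(0.8, 0.2, 0.5))
```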
 
  • #133
PeroK said:
This is garbage.
Kurzweil is provocative and triggers reactions (just as he has with you, @PeroK) and those reactions cause people to discuss the ideas he espouses. It might be to scoff and dismiss his ideas (transhumanism is a great example that has attracted a lot of derision), or to argue his timelines are wrong, or even to agree but add qualifications.

Whatever the case, he starts a conversation about the future, and while his work might be viewed as garbage, that is not a bad thing.
 
  • Like
Likes russ_watters and DaveC426913
  • #134
Melbourne Guy said:
Kurzweil is provocative and triggers reactions

I got the impression he is also providing a primer on the subject for newbies even as he argues it.

The explanation of geometric growth early in the essay seems deliberately simplistic as part of that primer, and he goes on to nuance it a few paragraphs later.
 
  • Like
Likes russ_watters
  • #135
I'll just leave this here:



(Gasp)
 
  • #136
"Fear AI". There may a few ways that we really should "fear" or at least be wary. The obvious one is where the "AI" is given access to physical controls of the real environment e.i. driverless vehicles of any kind or control of weapons (as per "The Corbin project (movie)). we also know what happened to the HAL9000 computer in 2001.Space Odyssey.
I'm sure there are many more such examples of AI gone astray. It may also depend on the level of "awareness" and "intelligence" of the particular AI. The android in Isaac Asimovs "The Naked Sun" and " Caves of Steel" give examples of extremely advanced AI as to be almost human. But even so some of his tales also feature AI which turns out to be harmful due usually to some "failure" on its "mental state". Even his famed three laws of robotics didn't always stop harm occurring in his tales.
Also not forgetting M.Chritons(sp.) "Gray Goo" of self-replicating nanobots causing mayhem.
I would suggest that even humans fail and cause great harm so anything we build is also likely to "fail" in some unkown way so I would be very wary of so-called AI even at the highest level unless there was some sort of safeguard to prevent harm from occurring.
Could Ai ever become "self_aware? I very much doubt it. Even many animals do not seem to be self_aware so how could we ever make a machine to do it. I have no problem using "AI" etc as long it does what I want it to do and is ultimately under my control.

Yes, I prefer to drive a manual car.
 
  • #137
DaveC426913 said:
I read that a year or two ago. I loooove the vampire concept.

But I'm not really a sci-fi horror fan. If you want sci-fi horror, read Greg Bear's Hull Zero Three. This book literally haunts me. (I get flashbacks every time I see it on the shelf, and I've taken to burying it where I won't see it.)
I'm about a third of the way through HZT now. Thanks for the recommendation!
 
  • #138
Chicken Squirr-El said:
I'm about a third of the way through HZT now. Thanks for the recommendation!
:mad::mad:
I was trying to warn you off!
Don't come back saying I didn't. :nb)
 
  • #139
sbrothy said:
Heh. Managed to get a topical comic in after all. (A few of the previous ones are pretty good too. He must have had a good week.)
Speaking of comics, I just read the coolest sci-fi comic: "Sentient". It would make one paranoia-inducing film. Notably, the protagonist is a ship AI set twenty minutes into the future, suddenly tasked with protecting children.

Review
 
  • #140
Given the existence of an AI that is better than humans at everything, what would the best case scenario be? Can a "most likely scenario" even be defined?
 
