ChatGPT and the movie "I, Robot"

In summary, I believe that ChatGPT may, in the long run, be a danger to humanity. However, I think that adding emotions to the program could be a very interesting and exciting development, and could be used to great effect in creating realistic and immersive experiences.
  • #36
russ_watters said:
The AI image recognition somehow predicts if the person it is looking at is carrying a weapon (or outside food?) and selects them for secondary screening by a human.
That's pretty much how it's been going on a lot of fronts. Facial recognition can spot somebody far more easily than a human can, mostly because of the volume of images it can process. So it says 'I spotted possible wanted guy X' and the local officers check him against the specific person being sought. Perhaps they go so far as to ID him, but they certainly don't 'shoot without questions'.
I served for a month on a grand jury and learned a bit about how police profile certain people without appearing to do so directly, since it's not legal; it was clearly going on, and with good results. Can an AI spot a car with weapons or drugs lying in plain sight? Can an AI target a profiled vehicle and find a trivial traffic offense to use as an excuse to pull it over?

Anyway, AI has been very successful at distinguishing melanoma from ordinary skin discolorations, considerably better than a doctor with decades of experience. So it doesn't 'treat that guy', but it certainly flags him for more tests, perhaps a biopsy. I had cancer discovered dang early by a routine screening (one that didn't involve AI, AFAIK) and have been cancer free for years without ever having taken a drug for it. The AI serves as a trivial routine screening for skin cancer that doesn't involve an expensive visit to an overworked expert.

On the more routine front, I'll think twice before sneaking a sack of M&M's into the cinema. I think I'm pretty good at not looking guilty about it, but that might change if I knew the AI was giving me extra scrutiny. I think it would be in the cinema's favor to let you know such scans are being done, with a sign saying so.
 
  • Like
Likes russ_watters
  • #37
russ_watters said:
Can you provide any examples of actual decisions being made? You've given subject matter/categories but not examples of decisions.
I think the COMPAS case (which is quite old) is the most famous, although it is not entirely clear whether the problem there was black-box AI or simply that the company did not want to reveal its IP.

See e.g.,
https://oxfordbusinessreview.org/th...ms-and-how-business-leaders-can-survive-them/

ML/black-box AI is also used by banks and similar companies to make decisions about mortgages and personal loans; in the past this would have been done by a conventional algorithm, but some companies are now using black-box AI (ML trained on large datasets) for the same purpose.
There is plenty of information online about this.

Note that I am not in any way claiming to be an expert here. However, the data science department where I work is doing quite a bit of work on explainable AI, so I've seen a number of presentations about these issues and it is something I am quite aware of (we've even done a little work in my group on explainable AI in the context of quantum ML, but that is very, very preliminary).
 
  • #38
f95toli said:
I think the COMPAS case (which is quite old) is the most famous, although it is not entirely clear whether the problem there was black-box AI or simply that the company did not want to reveal its IP.

See e.g.,
https://oxfordbusinessreview.org/th...ms-and-how-business-leaders-can-survive-them/
It's weird to me that you aren't directly answering my question, but I'm surmising that you believe COMPAS makes sentencing decisions. I and the USSC disagree. I think you are misunderstanding the situation. Humans are making the decision(s), not the software.

Also, while I didn't read the whole link, it looks to me like that question was only part of the case. Much of it was about the weird way the US deals with prejudice (in many cases we're supposed to pretend certain trends don't exist/not use them).

Edit: Heck, it doesn't even appear to me that COMPAS is a black box - just a locked box. In that sense I would tend to agree with the criticism of it from a rights perspective.
 
Last edited:
  • Like
Likes jack action
  • #39
To some extent I think people both misunderstand the "black box" idea and then, completely separately, misapply/misinterpret it as "decision making".

"Black box" just means it's so complicated that humans can't explain it...therefore AI!. It really doesn't follow. I liken it to perpetual motion machine claims; typically the "machine" is just complicated enough that the "designer" can't explain how it works. So they conclude "perpetual motion" because they want to believe it. Oddly, the errors they make in calculating the energy balance never result in below unity efficiency. Perpetual motion is the assumed conclusion/goal before the analysis starts. That's how I see the Black Box = AI logic.

But that's irrelevant anyway. The question being asked is whether or not the AI is making decisions, not how it is making decisions (or just judgements). I see nothing in the link that implies COMPAS is making sentencing decisions. It makes judgements - assessments - and then humans make the decisions.

Indeed, the risk as I described in my previous post isn't the AI making [bad] decisions, it's humans failing to make [informed] decisions and as a result making the bad decisions themselves. The risk is humans believing they've made perpetual motion machines when they haven't. If the power goes out, it isn't the PMM's fault for not working; it's the human's fault for relying on it to do something it can't do.

This risk is ever-present but hasn't really changed in concept. It's just that the risk of humans making bad decisions about how computers are used grows as the computers' capabilities grow. Police don't immediately tackle and tase people who set off metal detectors. Why? Because the people who use them understand what they do and what the beep and red light are telling them. The people using them know they have to use their own brains to evaluate the situation. The risk here is that humans stop using their brains, not that "AI" starts having one.
 
  • Like
Likes Halc and jack action
  • #41
russ_watters said:
It's weird to me that you aren't directly answering my question, but I'm surmising that you believe COMPAS makes sentencing decisions. I and the USSC disagree. I think you are misunderstanding the situation. Humans are making the decision(s), not the software.

I guess we are coming at this from different directions. It is (obviously) true that COMPAS and similar systems do not legally make any decisions; but when such systems are used to make recommendations and it turns out that these are effectively always followed, I don't think the distinction matters much (if at all).

There are of course also more direct examples. If you apply for, say, a credit card or personal loan online, it is quite possible today that the decision is made by an ML system without any human intervention. Such systems are also used to determine credit limits, for example. Now, you can of course complain, and you might be successful, but it is certainly not true that there is always a human in the loop.
 
  • #42
f95toli said:
I guess we are coming at this from different directions. It is (obviously) true that COMPAS and similar systems do not legally make any decisions; but when such systems are used to make recommendations and it turns out that these are effectively always followed, I don't think the distinction matters much (if at all).
It matters if humans set the criteria. Whether the criteria are programmed into the software, read off a physical table offline, or just tucked into the back of the judge's head, it's humans making the decisions. This is more obvious - and exactly the same - when there are no computers involved in the process at all.

Broader: "AI" can't be making a decision if you don't even have "AI".
f95toli said:
it is certainly not true that there is always a human in the loop.
That's false: a human set the criteria. It doesn't matter if a human does the math or a computer does the math: humans wrote the equation! The humans designed "the loop".
 
  • Like
Likes jack action
  • #43
f95toli said:
If you apply for, say, a credit card or personal loan online, it is quite possible today that the decision is made by an ML system without any human intervention.
By that definition of a decision made without human intervention, this machine would also qualify:

[Image: a Magic 8 Ball]

A credit card company could very well rely on such a concept to approve loans (online versions of Magic 8 ball do exist).

What do you think is more dangerous: the Magic 8 ball itself or the person who decides to solely rely on it to make decisions?

That is the real danger of neural networks (I don't want to use the term "artificial intelligence" as it is misleading): people not understanding the process and blindly accepting the result. The machine in itself is harmless.
 
  • Like
Likes russ_watters
  • #44
jack action said:
That is the real danger of neural networks (I don't want to use the term "Artificial Intelligence" as it is misleading): People not understanding the process and accepting blindly the result. The machine in itself is harmless.
Totally, and I'll say it another way: the risk here isn't in ceding control to a potentially nefarious machine, it's in thinking you are ceding control to a machine when you are actually ceding control to the humans who made the machine -- and they don't want it (or maybe the nefarious ones do...?). Now nobody is making the decision.

That's why the legalities are what they are. I have a license that says I am responsible for my designs. I'm not entitled to claim the software gave me bad info. This isn't just some convenient legal fiction, it's a description of a true reality.
 
  • Like
Likes jack action
  • #45
russ_watters said:
Now nobody is making the decision.
Not exactly. Somebody (or one might even say everybody?) is still making a decision; they just do not claim responsibility for it.
 
  • Like
Likes russ_watters
  • #46
russ_watters said:
It matters if humans set the criteria. Whether the criteria are programmed into the software, read off a physical table offline, or just tucked into the back of the judge's head, it's humans making the decisions. This is more obvious - and exactly the same - when there are no computers involved in the process at all.
I would argue that there is still, conceptually, a difference. Using conventional methods you might, say, score an applicant in a number of different areas (a number from 1 to 5) using some criteria; you can then calculate a weighted average of those scores and say that applicants with a score higher than 3.0 will get their credit card (I am obviously oversimplifying).
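For concreteness, here is a minimal Python sketch of that kind of transparent scoring (the area names, weights, and the 3.0 threshold are purely illustrative, not a real lending policy):

Python:
# Transparent scorecard: every factor, its weight, and the threshold are
# visible, so a rejected applicant's weak areas can be read off directly.
AREA_WEIGHTS = {                       # illustrative areas and weights only
    "income_stability": 0.4,
    "repayment_history": 0.4,
    "existing_debt": 0.2,
}
THRESHOLD = 3.0                        # "score higher than 3.0 gets the card"

def score_applicant(scores: dict) -> float:
    """Weighted average of per-area scores, each on a 1-to-5 scale."""
    return sum(AREA_WEIGHTS[a] * scores[a] for a in AREA_WEIGHTS)

applicant = {"income_stability": 4, "repayment_history": 2, "existing_debt": 3}
total = score_applicant(applicant)
print(f"score = {total:.2f}, approved = {total > THRESHOLD}")
# If this applicant is denied, repayment_history = 2 is plainly the reason.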

In an ML model you might instead train the model on a large dataset containing "good" and "bad" customers, using all the information you have about them as input parameters. Now, if the model in use decides a new customer is "bad" and should not get their credit card, there is no way to figure out why; even the people who created the model can't tell you the reason. There might, technically, be a criterion involved (the model might give you a number saying "how good" the customer is, and someone has to decide what it means to be a bad customer), but for a specific customer you can't relate that number to any of the input parameters. It is the total lack of information about WHY the model makes a certain prediction/decision that is the problem with black-box AI.
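By contrast, a rough sketch of the black-box setup (using scikit-learn and purely synthetic data; the model and numbers are assumptions for illustration): the trained network will happily score a new applicant, but its weights don't translate into a human-readable reason for that one prediction.

Python:
# Black-box sketch: a neural network trained on synthetic "good"/"bad" labels.
# It predicts fine, but no single prediction comes with a stated reason.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))              # 20 anonymous applicant features
y_train = (X_train @ rng.normal(size=20) > 0).astype(int)   # synthetic labels

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

new_applicant = rng.normal(size=(1, 20))
print("predicted label:", model.predict(new_applicant)[0])
print("P(good):", model.predict_proba(new_applicant)[0, 1])
# model.coefs_ holds thousands of weights, but they don't answer
# "which of this applicant's inputs caused the rejection?"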

One can of course argue that the people who made the decision are ultimately the people who decided that the bank (in this example) should use the ML approach, but to me that stretches the meaning of the word (they are of course ultimately responsible for what happens, but that is not the same thing).
 
  • #47
What if you asked the AI "What are the factors supporting the decision to give this person a credit card?" Shouldn't it tell you?
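For instance, if the scorer were a simple interpretable model, the per-applicant factors fall straight out of it. A hedged sketch (feature names and data entirely made up):

Python:
# Sketch: a logistic-regression scorer can report, per applicant, how much
# each (made-up) feature pushed the decision toward approval or rejection.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "missed_payments"]    # hypothetical
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)        # synthetic "good customer" label

clf = LogisticRegression().fit(X, y)

applicant = np.array([[0.5, 1.2, -0.3]])
contributions = clf.coef_[0] * applicant[0]    # per-feature push on the score
for name, c in zip(feature_names, contributions):
    print(f"{name:>16}: {c:+.3f}")
print("decision:", "approve" if clf.predict(applicant)[0] == 1 else "decline")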
 
  • Like
Likes russ_watters
  • #48
Maarten Havinga said:
For those who do not know the movie/story: this thread is about whether AI such as ChatGPT is, in the long run, a danger to humanity, and why or why not. With its popularity rising so quickly, ChatGPT has influence on our societies, and it may be prudent to ponder that influence. I, Robot is a nice movie discussing how AI, no matter how cleverly programmed, can lead to unwanted results and a suppressive robotic regime. The story (by Isaac Asimov) discusses adding emotions to robots, which may or may not be a good idea. Feel free to post opinions, fears and whatever comes to mind.

THIS IS NOT A THREAD FOR POSTING CHATGPT ANSWERS (unless they are needed as examples for your thoughts)
I think the main problems that will arise in the near term from advances in large language modeling will come not from the technology in itself, such as ChatGPT, but from cyborg integration and the use of sensors, which will enhance its biomimetic potential. We have little control over this development, but I do think a pause to discuss these (somewhat unanticipated) advances, before civilization dives into the deep end and the technology goes beyond overhyped baby toys like ChatGPT, is well worthwhile. It will be complicated and messy for us humans to adapt to this kind of technology in the long run: it opens many doors. That doesn't mean we are prepared to step through them. It's going to happen anyway, unfortunately.
 
  • Like
Likes russ_watters
  • #49
f95toli said:
In an ML model you might instead train the model on a large dataset containing "good" and "bad" customers, using all the information you have about them as input parameters. Now, if the model in use decides a new customer is "bad" and should not get their credit card, there is no way to figure out why; even the people who created the model can't tell you the reason. ... It is the total lack of information about WHY the model makes a certain prediction/decision that is the problem with black-box AI.
As @gleem's response implies, that's a programming choice, not an inherent feature/problem of AI/ML. I could do the same thing myself without a computer: establish a criterion and refuse to tell you what it is. I have my doubts that such a thing would be legal when applied to the legal and financial systems, and it would certainly be irresponsible/a bad idea.
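And even when the model itself stays opaque, standard post-hoc tools can at least rank which inputs it leans on. A rough sketch with scikit-learn's permutation importance on synthetic data (again, nothing here is a real credit model):

Python:
# Sketch: rank the inputs a black-box model relies on by shuffling one
# feature at a time and measuring how much the accuracy drops.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 5))
y = (2 * X[:, 1] - X[:, 3] > 0).astype(int)    # synthetic label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
# A global ranking like this isn't a per-customer explanation, but it shows
# "we can't say anything at all" is a design choice, not a law of nature.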
 
  • #50
russ_watters said:
The risk here is that humans stop using their brains, not that "AI" starts having one.
I'll agree to that to some extent.
That has been a problem for quite some time: whatever the computer terminal spits out is trusted, completely and without question, as being correct.
In days of old, someone could point out the 'incorrectness', and another could agree "you are right".

Systems have become so much more sophisticated (in days of old a small program, say on a VIC-20, could easily be read by a fair number of people) that the lone operator has no way of knowing whether the output is actually in error, and if it is in error (which has to be proven first somehow), whether the algorithm itself has a fault, whether the result comes from bad input data, or from some other glitch. The trend is toward the output being accepted as correct until proven otherwise, with possible catastrophe ensuing before it can be proven wrong and procedures taken to alleviate or correct the situation.

This is not so much an AI problem as a problem of reliance upon technology.
 
  • Like
Likes russ_watters
  • #51
One thing is clear: we have different concepts of intelligence that may never be resolved.

No sooner do we have an improvement in AI (GPT-4) than another arrives on its heels. AutoGPT is a group of ChatGPT instances, orchestrated by another GPT-4 agent, used to accomplish a task. This seems to be exactly what the critics of unregulated AI have warned about.

See https://autogpt.net/auto-gpt-vs-chatgpt-how-do-they-differ-and-everything-you-need-to-know/
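In case it helps to see the shape of the idea, here is a toy sketch of that orchestration pattern (a "manager" prompt decomposing a goal and farming sub-tasks out to worker calls). The ask_llm function is a hypothetical placeholder, not AutoGPT's or OpenAI's actual API:

Python:
# Toy sketch of the AutoGPT-style loop: a manager prompt picks the next
# sub-task, worker prompts carry it out, and results feed back into the plan.
# ask_llm is a hypothetical stand-in for a real chat-completion API call.

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to whatever LLM client you actually use."""
    return "DONE"                      # canned reply so the sketch runs offline

def run_agent(goal: str, max_steps: int = 5) -> list:
    results = []
    for _ in range(max_steps):
        next_task = ask_llm(
            f"Goal: {goal}\nCompleted so far: {results}\n"
            "Reply with the single next sub-task, or DONE if finished."
        )
        if next_task.strip() == "DONE":
            break
        results.append(ask_llm(f"Carry out this sub-task and report back: {next_task}"))
    return results

print(run_agent("research and summarise a topic"))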

gleem said:
NLM, it appears, is the AI to rule all AIs. (Tolkien)

My concern is with what is going on behind the scenes, and with what capabilities already exist but are not being released or published (kept proprietary) because the creators need to maintain a competitive edge. Paranoia, you say? Maybe just business as usual.
 
  • Like
Likes russ_watters
  • #52
256bits said:
That has been a problem for quite some time: whatever the computer terminal spits out is trusted, completely and without question, as being correct.
Right, that is the problem I'm referring to.
In days of old, someone could point out the 'incorrectness', and another could agree "you are right".
Not necessarily. I have a spreadsheet I use that was created by a 3rd party (a regulatory agency). It contains errors, but the cells containing the errors are locked and the formulas hidden. That's the "black box". The point is, it's the programmer's decision whether there's a black box or not. "AI" doesn't change that.
The trend is toward the output being accepted as correct until proven otherwise, with possible catastrophe ensuing before it can be proven wrong and procedures taken to alleviate or correct the situation.
Yes, increasing sophistication makes it more difficult to back-check the computer's result. This truism is independent of the existence (or not) of the black box.
This is not so much an AI problem as a problem of reliance upon technology.
Fully agree/that's the point I'm trying to make.
 
Last edited:
  • #53
Would it be fair to say that AI programs don't (yet) have the plasticity that biological brains possess, that is to say the silicon equivalent of the physical changes occurring in a given neural network, in part resulting from external stimuli? Or is this a redundant question, as well as being unkind to AI developers?

PS. Is the Asimov film worth watching?
 
  • #54
No replies, so I'll just say I liked the movie I, Robot; is that what you mean by "the Asimov film"? I haven't read Asimov's story, so I can't comment on the movie's fidelity, but I suspect it is fairly low. I found it more mindless action than thought-provoking, and in particular it doesn't do a good job of establishing who the antagonist is. Or, rather, the antagonist is first the AI (seemingly), then man, then the AI again. I think that was for drama, but it doesn't make for a very coherent message.

In most AI movies the antagonist is people, whether on purpose or by accident. Very few actually give the AI the agency to be bad, which I guess may be telling about the filmmakers' lack of belief in AI. I, Robot spends so little time on the Final Boss that I don't think it's even defined whether it had agency, but I think not. I think it's the by-now common conflicting-programming trope (2001, Alien/s).
 
  • Like
Likes Dr Wu
  • #55
apostolosdt said:
The computer scientist in the movie (James Cromwell) suggests that "orphan" code pieces might wander in the machine's memory and combine spontaneously, occasionally producing code blocks that then cause the robot to express "emotions."

Is that a plausible event,
No; I think it is by far the most hilariously stupid movie idea I've ever seen. It's a great plot mechanism, but no one with any understanding of computers could take it seriously, since, technically speaking, it is totally moronic.

EDIT: I should add, I actually laughed out loud when this was said in the movie.
 
Last edited:
  • Like
  • Informative
Likes apostolosdt and russ_watters
  • #56
Maarten Havinga said:
For those who do not know the movie/story: this thread is about whether AI such as ChatGPT is, in the long run, a danger to humanity, and why or why not. With its popularity rising so quickly, ChatGPT has influence on our societies, and it may be prudent to ponder that influence. I, Robot is a nice movie discussing how AI, no matter how cleverly programmed, can lead to unwanted results and a suppressive robotic regime. The story (by Isaac Asimov) discusses adding emotions to robots, which may or may not be a good idea. Feel free to post opinions, fears and whatever comes to mind.

THIS IS NOT A THREAD FOR POSTING CHATGPT ANSWERS (unless they are needed as examples for your thoughts)

What will come first is probably the ability for the AI to experience, and perhaps down the road even influence, its environment, much like a human baby exploring its surroundings: haptic/tactile feedback (pain?), video feed, audio, a sense of smell, etc.

I'm extremely sceptical of strong AI, but I'm sure that in the very near future we won't be able to tell the difference; and what, then, is the difference? I've probably mentioned this before on this forum.

What scares me the most, really, is a "zombie" AI without consciousness or proper feelings displaying true agency and proactivity in its behaviour.

A human sociopath would be Mother Teresa compared to a machine devoid of empathy making proactive decisions that influence people's lives.
 
