Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter Isopod
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #281
Jarvis323 said:
There was an open letter released today calling for a pause.
Yes, to establish rules for implementation and safeguards of AI. Additionally, Goldman Sachs has issued a report projecting that 300M jobs worldwide could be replaced, with the US, Israel, Sweden, Hong Kong, and Japan most likely to be affected. If you are in high school or college, just entering the workforce, you have to make some decisions. Are there enough jobs in the near term to replace those that have been eliminated?

But if you're sitting in front of a computer for work, you may have something to worry about. Office and administrative support jobs are at the highest risk at 46%. Legal work follows at 44%, with architecture and engineering at 37%.
https://www.msn.com/en-us/news/othe...n&cvid=9c1a7e825ef341ad8a6ae4dd148361a0&ei=25
 
  • #282
gleem said:
Are there enough jobs in the near term to replace those that have been eliminated?

It's difficult to answer, because if making sure people have jobs is the goal, then what we have to do is find new jobs for displaced workers as quickly as AI displaces workers. But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyways.

So there are two problems. The first is how we can figure out a way of life where it's ok if nobody has a job. And the second problem is, how can we safely get through/transition from this way of life to the other. And it's especially hard because we aren't really (seemingly) in control, and don't know for sure how fast it will happen or exactly what lies ahead along the way.
 
  • #283
Jarvis323 said:
It's difficult to answer, because if making sure people have jobs is the goal, then what we have to do is find new jobs for displaced workers as quickly as AI displaces workers. But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyways.

So there are two problems. The first is how we can figure out a way of life where it's ok if nobody has a job. And the second problem is, how can we safely get through/transition from this way of life to the other. And it's especially hard because we aren't really (seemingly) in control, and don't know for sure how fast it will happen or exactly what lies ahead along the way.
To me, these are good reasons to fear AI.
 
  • #284
Jarvis323 said:
It's difficult to answer, because if making sure people have jobs is the goal, then what we have to do is find new jobs for displaced workers as quickly as AI displaces workers. But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyways.

So there are two problems. The first is how we can figure out a way of life where it's ok if nobody has a job. And the second problem is, how can we safely get through/transition from this way of life to the other. And it's especially hard because we aren't really (seemingly) in control, and don't know for sure how fast it will happen or exactly what lies ahead along the way.
People think they hate their jobs, but sitting around all day doing nothing is even worse. In a world where AI takes over nearly every job, people will have to find meaningful ways to occupy their minds, and surely not everyone will be able to.

A tool created to solve all our problems might create another one that is unsolvable.
 
  • #285
JLowe said:
People think they hate their jobs, but sitting around all day doing nothing is even worse. In a world where AI takes over nearly every job, people will have to find meaningful ways to occupy their minds, and surely not everyone will be able to.

A tool created to solve all our problems might create another one that is unsolvable.

Not all jobs are better than pure freedom. Say, for example, spending 8 hours a day flipping burgers. What is so great about that? Maybe someone likes that job, maybe they enjoy the social aspect of being at work.

But if it wasn't necessary anymore, and you really think it is important that they don't get a free ride, or have too much freedom, you could force them to do something unnecessary for a meal ticket. How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people, otherwise no cake.
 
  • #286
dlgoff said:
To me, these are good reasons to fear AI.
Jarvis323 said:
But the number of jobs that can't be replaced by AI probably goes to 0 eventually anyways.
It's nonsense, at least until we achieve a science-fantasy world that isn't on any predictable time horizon*.

First and foremost, you are confusing physical automation with AI. AI is not the human-replacement robots you see in the movies; even in the most ambitious interpretations, it's human-replacement intelligence in computers. Physical automation has been happening for 200 years, is independent of AI, and does not end in the elimination of all physical work, much less mental work.

And second, you are assuming these sentient robots and computers are accepted by humans as human replacements. Do you honestly think that people will accept robot baseball players, actors, etc. or that governments will accept fully AI-created engineering drawings? [edit] ...er...and the citizens will accept AI government? Again, this is beyond just about every science-fantasy I've seen except maybe The Matrix.

*Nor have I ever even seen this speculated about in science-fantasy media. Wall-E, maybe?
 
Last edited:
  • Like
Likes artis and PeterDonis
  • #287
Companies probably will not replace jobs with AI without first verifying that using AI is worth it. That may not take long, perhaps weeks or months. This will be stressful for many who feel that they may be replaced even if they aren't. I suspect companies will carry out this verification surreptitiously.

The letter referred to above said that efforts should be made to not let AI take jobs that are fulfilling. Can a capitalistic economy be prevented from using AI for any job it sees fit?
JLowe said:
People think they hate their jobs, but sitting around all day doing nothing is even worse. In a world where AI takes over nearly every job, people will have to find meaningful ways to occupy their minds, and surely not everyone will be able to.

A tool created to solve all our problems might create another one that is unsolvable.
Another fear is that AI will further divide the population economically putting more wealth into fewer hands.
And, what happens when you have a lot of young men who are idle?

Jarvis323 said:
How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people otherwise no cake.
What has social media told us about conversations? Be careful what you talk about. The Civilian Conservation Corps (CCC) fits into this solution, as does the Works Progress Administration (WPA), which maintained and replaced our infrastructure during the Great Depression. These programs were useful and could be fulfilling too.
 
  • Like
Likes dlgoff
  • #288
Jarvis323 said:
AI displaces workers
Which AI is going to displace which workers? Can you give some specific examples?
 
  • #289
russ_watters said:
Nonsense, at least until we achieve a science-fantasy world that isn't on any predictable time horizon*.

First and foremost, you are confusing physical automation with AI. AI is not the human-replacement robots you see in the movies; even in the most ambitious interpretations, it's human-replacement intelligence in computers. Physical automation has been happening for 200 years, is independent of AI, and does not end in the elimination of all physical (much less mental) work.

And second, you are assuming these sentient robots and computers are accepted by humans as human replacements. Do you honestly think that people will accept robot baseball players, actors, etc. or that governments will accept fully AI-created engineering drawings? [edit] ...er...and the citizens will accept AI government? Again, this is beyond just about every science-fantasy I've seen except maybe The Matrix.

*Nor have I ever even seen this speculated about in science-fantasy media.

You're right that there will still be non-essential jobs that we may choose to do or to have humans do. But figuring out how that will work out seems essentially the same as figuring out how it would work if nobody had a job.
 
  • Skeptical
Likes russ_watters
  • #290
gleem said:
Additionally, Goldman Sachs has issued a report projecting that 300M jobs worldwide could be replaced, with the US, Israel, Sweden, Hong Kong, and Japan most likely to be affected. If you are in high school or college, just entering the workforce, you have to make some decisions. Are there enough jobs in the near term to replace those that have been eliminated?
Only 300M? Over what timeframe?
[additional quote from the article] "Of U.S. workers expected to be affected, for instance, 25% to 50% of their workload can be replaced,” the report says.
Again, that's it? Under that criterion I would have assumed that over the course of a 40-year career, every white-collar job is rendered at least 25% obsolete, and that this has been true for a hundred years or more. I took a paper-and-pencil mechanical drafting course in high school around 1993. I've never met a current mechanical drafter, and my company doesn't have any employees with the title "Drafter" (now CAD), and hasn't since I joined 15 years ago.
 
  • #291
gleem said:
Companies probably will not replace jobs with AI without first verifying that using AI is worth it. That may not take long, perhaps weeks or months. This will be stressful for many who feel that they may be replaced even if they aren't. I suspect companies will carry out this verification surreptitiously.
Was that a response to my question about timeframe? What's with this cloak-and-dagger stuff? When has that ever been a thing? These things are never a secret nor are they ever announced ahead of time (nor do they need to be). They just happen.

Since I'm using movies for examples, here's a non-fiction example of how it works in real life: "Hidden Figures". In the 1960s a "Computer" was a person with a calculator, and the "Computers" were a room full of them. When NASA installed a digital computer, it was not a secret or a mystery to the "Computers" what was to become of them.
 
  • Like
Likes Borg and dlgoff
  • #292
PeterDonis said:
Which AI is going to displace which workers? Can you give some specific examples?
[sigh] What really annoys me about this thread/these discussions here and in the public is that they aren't even fantasy, they are pre-fantasy. Speculation about fantasy without actually developing the fantasy. What, Dave, what's going to happen?
 
  • Like
Likes OCR and PeterDonis
  • #293
Jarvis323 said:
Not all jobs are better than pure freedom. Say, for example, spending 8 hours a day flipping burgers. What is so great about that? Maybe someone likes that job, maybe they enjoy the social aspect of being at work.

But if it wasn't necessary anymore, and you really think it is important that they don't get a free ride, or have too much freedom, you could force them to do something unnecessary for a meal ticket. How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people, otherwise no cake.
It's not about forcing someone to do something unnecessary. It's about what will happen to people when they are no longer needed for anything and are being spoon fed by the machines.

And there's no reason to assume a horde of useless, bored 20 year olds wouldn't get drunk and tear the whole thing down anyway.
 
  • #294
Jarvis323 said:
You're right that there will still be non-essential jobs that we may choose to do or to have humans do. But figuring out how that will work out seems essentially the same as figuring out how it would work if nobody had a job.
You didn't actually respond to either point I made.
 
  • #295
russ_watters said:
First and foremost you are confusing physical automation with AI.

How so?

russ_watters said:
AI is not the human-replacement robots you see in the movies, even in the most ambitious interpretations, it's human-replacement intelligence computers.

This is just semantics.

russ_watters said:
Physical automation has been happening for 200 years, is independent of AI and does not have an end in no more physical, much less mental work.

I am having trouble getting your point. Are you saying that since we've gone 200 years with incremental advancements in automation without reaching a point that it can replace all of our jobs, if it could happen, it should have happened by now?

russ_watters said:
And second, you are assuming these sentient robots and computers are accepted by humans as human replacements. Do you honestly think that people will accept robot baseball players, actors, etc.

You're right, but again, are these sorts of jobs enough by themselves? Not everyone can be a celebrity for a living.

russ_watters said:
or that governments will accept fully AI-created engineering drawings?

Yes, I think it should be on the easier side for AI to do this.

russ_watters said:
[edit] ...er...and the citizens will accept AI government? Again, this is beyond just about every science-fantasy I've seen except maybe The Matrix.

Replaceable by AI, yes. More efficient than humans, yes. Accepted by humans? I don't know. Will we always have a choice? I don't know.
 
Last edited:
  • #296
Jarvis323 said:
Not all jobs are better than pure freedom. Say, for example, spending 8 hours a day flipping burgers. What is so great about that? Maybe someone likes that job, maybe they enjoy the social aspect of being at work.

But if it wasn't necessary anymore, and you really think it is important that they don't get a free ride, or have too much freedom, you could force them to do something unnecessary for a meal ticket. How about, you must spend at least 5 hours per week outside, draw 3 pictures, write 2 poems, and have at least 5 conversations with other people, otherwise no cake.
Hmm. They could develop elaborate rapidly changing social codes and punish those who fail to keep up. They could sue each other for absurd reasons. They could build machines capable of destroying all life on Earth. They could amass extensive collections of PEZ dispensers.

It's a good thing we have jobs so that people don't do such things.
 
  • #297
Here is the Goldman Sachs report. I have not yet read it.

https://www.key4biz.it/wp-content/u...ligence-on-Economic-Growth-Briggs_Kodnani.pdf

An earlier (2018) report forecast 400M jobs replaced by 2030. This just popped up in a Time editorial by Eliezer Yudkowsky: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky

I do not recall any other AI expert trying to arouse such a sense of urgency about stopping AI development. He is not worried about the economic or social impact of AI but states that we do not know what we are playing with, and that alone should prevent us from proceeding until we do know.

https://www.msn.com/en-us/money/oth...n&cvid=f30c803c396b4c20a073016b01627d6e&ei=63
 
  • Like
Likes russ_watters
  • #298
Thanks for being responsive...
Jarvis323 said:
How so?

This is just semantics.

I am having trouble getting your point.
?? Are you saying you don't see the difference between a physical job and a mental one? Engineering vs. basketball? If we code a piece of software that can replace an engineer, that doesn't mean we'll be able to build a robot to play basketball. Or vice versa: a basketball-playing robot wouldn't necessarily qualify as AI. So the idea that AI could replace all jobs is wrong, first, because AI can't replace most physical jobs at all; they are completely separate things. In other words, your belief that AI can replace all jobs, including physical ones, must mean you are wrongly conflating physical and mental jobs, i.e. robots and AI.
Jarvis323 said:
Are you saying that since we've gone 200 years with incremental advancements in automation without reaching a point that it can replace all of our jobs if it could happen, it should have happened by now?
Not exactly. I'm saying most physical and mental jobs have already been replaced. But new jobs are always created. Thus there is no reason to believe there will be a point where we can't think of a job for humans to do.
Jarvis323 said:
You're right, but again, are these sorts of jobs enough by themselves? Not everyone can be a celebrity for a living.
To be frank, I think you lack imagination on this issue (which is ironic, because I also think you are using that lack of imagination as the inspiration for your fear -- like fear of the dark). Humans are exceptionally good at thinking of things they'd be willing to pay someone else to do. So much so that there has rarely been a time when the jobs available were wildly out of alignment with the job-seekers, even during the various phases of the industrial revolution (except on a local level).

There are a ton of jobs that even if we could automate we will choose not to, because being human matters in those jobs. Performance jobs are just one of a legion of examples that will be difficult if not impossible to replace. Any job where human emotion matters (psychologists/counselors), human judgement (government, charity work), or human interaction (teachers, police) has to be done by humans. This can't change until/unless we can no longer tell androids from humans, which is to say, likely never.
Jarvis323 said:
Yes, I think it should be on the easier side for AI to do this.
There's no chance. I don't know what you are thinking or why, but governments are an authority, and engineers who submit drawings for permit are recognized and tested experts with liability for mistakes. You can't replace either side of that with an AI unless we reach a point far off in the future where sentient android robots are accepted as fully equal to humans (like Data from Star Trek). Can you explain your understanding/thought process? It feels very superficial, like 'an AI is intellectually capable of reviewing a drawing, so it will happen'.
Jarvis323 said:
Replaceable by AI, yes. More efficient than humans, yes. Accepted by humans? I don't know. Will we always have a choice? I don't know.
That really is the stuff of far-off fantasy, with little grounding in the reality of what we have and know today, either in what AI is capable of or in what humans are needed for.
 
  • #299
Point of order, here:
gleem said:
Considering the advances in AI in the last 6 months, what, if anything, has changed with regard to how you feel about its implementation, given that companies cannot seem to embrace it fast enough?

An update of GPT4, GPT4.5, is expected around Oct of this year.
Are you saying ChatGPT qualifies as AI? I know that's what the developers claim. Do you agree? I don't. I don't even think it qualifies under the more common, weaker definitions of AI, much less the stronger ones... unless my thermostat is also AI. I see a massive disconnect between the fantasy-land fears being described in this thread and the actual status of claimed AI.
 
  • Like
Likes PeterDonis
  • #300
russ_watters said:
Are you saying ChatGPT qualifies as AI? I know that's what the developers claim. Do you agree? I don't.
One could argue that ChatGPT does have at least one characteristic of human intelligence (if we allow that word to be used for this characteristic by courtesy), namely, making confident, authoritative-sounding pronouncements that are false.

(One could even argue that this kind of "AI" could indeed displace some human jobs, since there are at least some human jobs where that is basically the job description.)

But I don't think that's the kind of "AI" that is being described as inspiring fear in this discussion.
 
  • Like
Likes OCR and russ_watters
  • #301
PeterDonis said:
One could argue that ChatGPT does have at least one characteristic of human intelligence (if we allow that word to be used for this characteristic by courtesy), namely, making confident, authoritative-sounding pronouncements that are false.

But I don't think that's the kind of "AI" that is being described as inspiring fear in this discussion.
Yes, that's largely my point. I think ChatGPT is being called "AI" in large part because it can construct grammatically correct sentences in English. Otherwise it is simply a multi-stage search engine. This seems like a really low bar to me, and one I don't think proponents of AI really intend, or at least imply.
 
  • Like
Likes PeterDonis
  • #302
Jarvis323 said:
Once AI is able to make its own breakthroughs, and if it has access to the world, then it can become fully independent and potentially increase in intelligence and capability at a pace we can hardly comprehend.

AI is also very, very advanced in understanding human behavior/psychology. Making neural networks able to understand human behavior, and training them how to manipulate us, is basically, by far, the biggest effort going in the AI game. This is one of the biggest threats currently IMO.

gleem said:
So far most AI is what I call "AI in a bottle". We uncork the bottle to see what is inside. The AI "agents" as some are called are asked questions and provide answers based on the relevance of the question to words and phrases of a language. This is only one aspect of many that true intelligence has. AI as we currently experience it has no contact with the outside world other than being turned on to respond to some question.

However, researchers are giving AI more intelligent functionality. Giving it access to, or the ability to interact with, the outside world without any prompts may be the beginning of what we might fear.

Selecting a few stanzas from the song "Genie in a Bottle", it might depict our fascination with and caution about AI, sans the sexual innuendo. Maybe there is some reason to think AI might be a djinn...

I'm a genie in a bottle (I'm a genie in a bottle)
You got to rub me the right way

-Christina Aguilera
OMG.

Ok, so the basic flaw in War Games is the fact that nuclear weapons aren't connected to the internet. Terminator starts with the same thing, but then they move on to killer robots, which we can sorta do now, but nothing as impressive as the T-1000. But they aren't AI, so...

...well, for that matter, neither is the computer in War Games.

I'm reminded of a Far Side cartoon from the '80s with two married amoebas apparently in an argument. One says to the other: "Stimulus-response, stimulus-response -- don't you ever think?" Does AI? Do we?
Jarvis323 said:
As far as I know, AI is currently very good at, and either already exceeds or will probably soon exceed humans (in a technical sense) in, language skills, music, art, and the understanding and synthesis of images. In these areas, it is easy to make AI advance further just by throwing more and better data and massive amounts of compute time into its learning/training.
In my opinion none of that qualifies as AI. Any device that augments human computation ability, dating back to the abacus, does that. With the possible exception of art, but not being a big art person, that is a tough thing for me to understand/judge. Music, definitely not, though. Note: in my youth I was a "technically" good trumpet player who could, to the untrained ear in certain circumstances, be mistaken for a good musician.
I am not aware of an ability for AI to do independent fundamental research in mathematics, or that type of thing. But that is something we shouldn't be surprised to see fairly soon, IMO. I think this because AI advances at a high rate, and now we are seeing leaps in natural language, which I think is a stepping stone to mathematics. And Google has an AI now that can compete at an average level in coding competitions.
The math thing is interesting; if math is pure logic, then perhaps AI should be able to do all of it, like running every possible move on a chess board?
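For what it's worth, "running every possible move" collapses under combinatorial explosion; here's a quick back-of-the-envelope sketch (the branching factor of ~35 legal moves per chess position is a commonly cited estimate, assumed here purely for illustration):

```python
# Rough size of an exhaustive chess game tree. The branching factor
# of ~35 moves per position is an assumed, commonly cited estimate.
BRANCHING = 35

def tree_size(plies: int) -> int:
    """Number of distinct move sequences of the given length
    in a uniform tree with BRANCHING moves per position."""
    return BRANCHING ** plies

for plies in (4, 10, 80):  # 80 plies is roughly a 40-move game
    print(f"{plies:3d} plies: about {tree_size(plies):.1e} sequences")
```

Ten plies already gives on the order of 10^15 sequences, and a full game is beyond any conceivable hardware, which is why chess engines prune and evaluate positions rather than enumerate them.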
 
  • #303
For all my criticism, I do have what I think is a plausible destructive AI scenario to present:

HackBot. It's a hacker bot with capabilities that exceed the best human hacker's and a speed faster than all of them combined. It can be set loose to steal all the gold in Ft Knox... er, the money in Citibank. Thoughts?

Possible pitfalls:
  • Is all the gold in Ft Knox... er, the money in Citibank accessible from the internet?
  • Can "AI" break any encryption or external factor authentication?
Maybe it could start by stealing something smaller/softer, like Bitcoin?
 
  • Like
Likes gleem
  • #304
Melbourne Guy said:
I can't recall if the Boston Dynamics-looking assault rifle toting robot has been mentioned, but it's genuinely scary!
gleem said:
It's not clear how much AI capability one can put in a robot this small. I think it would need at least the capability of Tesla's Autopilot system to be useful. Although, on second thought, if you gave it IR vision, a human target would stand out dramatically at night, making target identification less of a problem.
AIM-9 Sidewinder: introduced, 1956.
 
  • #305
Oldman too said:
Most construction sites that I've worked on have a standing caveat. "No one is irreplaceable" (this applies to more situations than just construction). As an afterthought, I'll bet the white hats will be the first to go if AI takes over.

Oldman too said:
On any typical construction site, "white hats" denote a foreman or boss. Besides the obligatory white hard hat, they can also be identified by carrying a clipboard and having a cell phone constantly attached to one ear or the other. They are essential on a job site, however they are also the highest paid. The pay grade is why I believe they will be the first to be replaced.

Melbourne Guy said:
I agree, @Oldman too. If you are investing in ML / AI to replace labour, picking off the highest-paid, hardest-to-replace roles seems economically advantageous to the creator and the buyer of the system.

gleem said:
If you want to save some serious money, replace the CEO, COO, CFO, CIO, CTO... well, maybe not the CTO, since he might be the one doing the replacing. After all, they run the company through the computer system, reading and writing reports and holding meetings, all of which AI is optimally set up to do.
That's backwards, both from a technical and economic standpoint:

1. Higher-end employees tend to be harder to replace. Their jobs are complex. That's part and parcel of why they are paid more. They'll be the last for AI to replace.

2. CEOs get a lot of superficial flak these days about being overpaid (deserved or not), but they are not as expensive as most people appear to believe. CEOs are of course the highest-paid employees in a company, but they are not the most expensive employees, because there's only one of them. The most expensive group of employees is generally whichever one there are the most of, and most types of employees are, in aggregate, way more prevalent and expensive than CEOs.

Consider, for example, order-takers at McDonald's vs. the CEO. The CEO makes about $20 million a year, and a full-time-equivalent order-taker about $20,000. Let's say there's an average of two order-takers working at a time (6 full-time equivalents) across 38,000 stores, or 228,000 full-time-equivalent employees taking orders. At an average of $10/hr, that's about $5 billion a year. McDonald's is currently replacing these employees with giant iPads, and if it can replace half of them, that would save 100x more money than replacing the CEO would.
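The payroll comparison above is easy to check; here is a quick sketch (every figure is the rough assumption from the discussion, not McDonald's actual financials):

```python
# Back-of-envelope check of CEO pay vs. aggregate order-taker payroll.
# All numbers are the rough assumptions from the post above.
ceo_pay = 20_000_000        # $/year
fte_pay = 20_000            # $/year per full-time-equivalent order-taker
ftes_per_store = 6          # ~2 on shift at any given time
stores = 38_000

total_ftes = ftes_per_store * stores          # 228,000 FTEs
order_taker_payroll = total_ftes * fte_pay    # ~$4.6B/year
savings_if_half_replaced = order_taker_payroll // 2

print(f"order-taker payroll: ${order_taker_payroll / 1e9:.2f}B/year")
print(f"replacing half saves {savings_if_half_replaced / ceo_pay:.0f}x the CEO's pay")
```

Halving a ~$4.56B payroll saves ~$2.28B a year, about 114 times the CEO's $20M, consistent with the "100x" figure.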

Also, these giant iPads aren't AI. Order-taking isn't complex enough to require it.
 
  • #306
I'm sorry if this is too critical of some of the participants, please excuse me, but this discussion is, I believe, the biggest "nothingburger" currently on these forums.

Without going into much detail, let's just say I have had rather extensive chats with a couple of AI researchers I know; one of them, an older guy, is more of a neurologist, now retired, who is basically interested in anything to do with human consciousness.

Within this community there is a great wish and belief that they (well, at least some among them) will eventually crack the secrets of human consciousness and then be able to induce the same total complex behavior in software running on hardware, essentially simulating consciousness artificially. This is the great AGI moment: artificial general intelligence.

Just to be perfectly clear, even among the fanatics, we are actually nowhere near that point, nor do we know whether we ever will be, for a variety of complicated reasons I will not go into right now.

So what is current AI? Current AI is at best a database that can manage itself, code that can do a bit more than its basic input parameters and functions, and in industry terms just a better automated robot among the many automated robots we already have. Again, sorry if this makes someone feel attacked; sadly, these days we have to constantly apologize before uttering any more serious "truth phrase". But currently the only jobs AI can replace are either physical labor jobs or the types of white-collar jobs that have been of little value for quite a while, like database oversight roles whose only function is to check for errors within a database. Sure enough, those people can retrain and go do a more demanding job, and they would have had to anyway, because software progresses and requires fewer people as it does.

On @russ_watters' point, I would say AI has increased industrial automation, and that's it. Automation has been going on for a long time now; as @Jarvis323 already said, it's just that now we can automate faster and on a larger scale. It's somewhat like the expansion of the universe, which was expanding all along but then accelerated.

To put it bluntly: sure, you can't automate as much using relay logic as you can using 10nm-architecture chips running AI software, but then what are the options? To stop software and chip progress and stay at relay logic?

That being said, current AI is nowhere near sentient; it's just a John Searle Chinese-room version of AI. It has the capacity for damage, but only in the hands of skillful human users who intend to use it as their "bionic arm" to help them, for example, make a more damaging virus.

Still, I do believe some professions will be affected disproportionately more than others. If we eventually manage to make AI-assisted driving "a thing", then sure, truck drivers will most likely feel it.
 
  • Like
Likes russ_watters
  • #307
russ_watters said:
Yes, that's largely my point. I think ChatGPT is being called "AI" in large part because it can construct grammatically correct sentences in English. Otherwise it is simply a multi-stage search engine. This seems like a really low bar to me, and one I don't think proponents of AI really intend, or at least imply.
ChatGPT is basically a large database that can decipher your input then compare that input to the large set of info it has in store and give you an output based on the type of language patterns it has learned from the internet.
It's basically just a automated dictionary and a very great and close example for the John Searle Chinese room experiment.

The very proof of this is that before software engineers corrected it manually it gave out racist and other hateful language, it did this exactly because it repeated the language type and pattern that it had learned from the internet.

If it was anywhere close to truly intelligent it would have noticed that countless other parts in the same internet where people talked about racism being bad.
It did not, it couldn't because words as such have no meaning for it, none - zero!
Words for it are much like for any other computer - just inputs that it turns into binary code based on some predetermined interpretation set and then it finds matching outputs to display, the "intelligent" part of ChatGPT is simply the fact that it can give you an output based on models of reasoning that it has learned from humans through the internet, but it doesn't understand those models it simply copies them.

Basically like a child repeating every phrase a grown-up says: the child sounds smart, but in actuality is as clueless as a potato.
Or like a politician reading from a teleprompter: the only real job is to read the words correctly. Speaking of politicians... I think I found the one job AI could truly be good at where no one would notice a difference...

Currently it seems Biden is run by his administration and not the other way around; they could simply switch him over to BidenGPT 4.0 and nothing would change. Just an observation, don't get mad at me...
 
  • Like
  • Skeptical
Likes gleem and russ_watters
  • #308
artis said:
ChatGPT is basically a large database that can decipher your input, compare that input to the large set of information it has in store, and give you an output based on the language patterns it has learned from the internet.
It's basically just an automated dictionary, and a very close example of the John Searle Chinese room experiment.

The very proof of this is that, before software engineers corrected it manually, it gave out racist and other hateful language; it did this exactly because it repeated the type and pattern of language it had learned from the internet.

If it were anywhere close to truly intelligent, it would have noticed the countless other parts of that same internet where people talked about racism being bad.
Agreed. At the risk of political commentary, I'd suggest that racists aren't A...I either. But yes, that's the point: it's not far beyond a parrot that got hold of a database. Also, the programmers apparently didn't even provide the database; they just linked it to the internet.
 
  • #309
russ_watters said:
Agreed. At the risk of political commentary, I'd suggest that racists aren't A...I either. But yes, that's the point: it's not far beyond a parrot that got hold of a database. Also, the programmers apparently didn't even provide the database; they just linked it to the internet.
It seems so. Well, that being said, it's still no small feat to achieve this - or other programs like AlphaFold, which can predict how proteins fold, a very complicated process - so there's a lot to marvel at.

That being said, what I probably dislike most about ChatGPT is that I have to cross-check the information it gives me, because it tends to get things wrong from time to time.
Thank god nobody fed it the flat-earthers' forum database or anything like that, but it has still given me some obviously sketchy answers so far.
 
  • Like
Likes russ_watters
  • #310
Just as a side note, one of my AI-fanatic friends truly believes that, contrary to R. Penrose's claims, "consciousness is just a complex biological/electrochemical computation", and he thinks we will eventually be able to load human consciousness onto special-purpose hardware, where that consciousness will be able to live much longer than the lifespan of our body.

The AI form of transhumanism, essentially.

While we discussed this, besides the other counterpoints I said that in that case one should go live in a country that has a stable electrical grid... otherwise it might be very detrimental to the well-being of his consciousness.
 
  • #311
I think sensationalism has done its part to instill this "fear". I don't fear "AI" itself, it's just a piece of code. In fact, I think machine learning is a useful tool for solving combinatorially difficult problems. The AI-generated voices/faces worry me as far as identity theft is involved. Poses a challenge for e-security.

I believe it is more accurate to say this tool makes me fear people with malicious intent and the necessary skill to exploit said technology. But that's no different than saying I fear evil people holding knives. There's nothing specifically about AI (machine learning) that makes me afraid of it.
 
Last edited:
  • Like
Likes russ_watters and artis
  • #312
russ_watters said:
But yes, that's the point: it's not far beyond a parrot that got hold of a database. Also, the programmers apparently didn't even provide the database; they just linked it to the internet.

Have you tried it?
 
  • #313
artis said:
The very proof of this is that, before software engineers corrected it manually, it gave out racist and other hateful language, ...

Are you talking about reinforcement through human feedback?
 
  • #314
nuuskur said:
I believe it is more accurate to say this tool makes me fear people with malicious intent and the necessary skill to exploit said technology.

So you basically fear everyone with malicious intent who has basic computer skills?

That's a lot of people, including nation states, terrorist groups, and criminal organizations.

You don't think there are more dangerous uses of AI that bad actors could think of than deep fakes?
 
  • #315
I did mention cyber security. One does not produce ML algorithms with basic computer skills; drop the melodramatics, would you kindly?

A lot of people have the perception that "AI" is some kind of magician that turns water into wine. NP-hard problems do not become polynomial-time solvable out of thin air. Problems that are exceedingly difficult (or even impossible) to solve do not become (easily) solvable. Machine learning is an optimisation tool, and a very good one. If a problem is proved to have complexity ##\Omega (n^k)##, but the best algorithm found so far runs in exponential time, it might happen that "AI" finds a better algorithm, but it can never improve past the proven lower bound.
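A toy way to see "optimisation tool, not magician": a heuristic search can quickly find good tours for a tiny travelling-salesman instance, but it can never do better than the true optimum, which here we can still compute by brute force because the instance is so small. The city coordinates and the hill-climbing scheme are made up for the demo:

```python
# Heuristic optimisation vs. the exact optimum on a tiny TSP instance.
import itertools
import math
import random

cities = [(0, 0), (1, 5), (4, 1), (6, 4), (3, 3), (5, 0)]

def tour_length(order):
    # Total length of the closed tour visiting cities in the given order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact optimum by brute force: only feasible because n is tiny (O(n!)).
best_exact = min(itertools.permutations(range(len(cities))), key=tour_length)

random.seed(0)

def hill_climb(restarts=20, steps=200):
    # Random-restart 2-swap hill climbing: repeatedly improve a candidate,
    # in the same spirit as an ML training loop.
    best = None
    for _ in range(restarts):
        order = list(range(len(cities)))
        random.shuffle(order)
        for _ in range(steps):
            i, j = random.sample(range(len(order)), 2)
            cand = order[:]
            cand[i], cand[j] = cand[j], cand[i]
            if tour_length(cand) < tour_length(order):
                order = cand
        if best is None or tour_length(order) < tour_length(best):
            best = order
    return best

best_heur = hill_climb()
print(tour_length(best_exact), tour_length(best_heur))
```

The heuristic often lands on (or very near) the optimum for tiny inputs, but the exact minimum is a hard floor it cannot go below - and nothing about scaling it up repeals that.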

I like sci-fi as much as the next person, but let's keep that separated from the real world.

Now, proceed with the doomsday sermon.
 
Last edited:
  • Like
Likes TeethWhitener, russ_watters and artis
