Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter Isopod
  • Start date
  • Tags
    Ai
  • Featured
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #491
Astronuc said:
ChatGPT is making people more money and better at their jobs. 4 of them break down how.
https://www.yahoo.com/finance/news/chatgpt-making-people-more-money-114549005.html

AI is simply a tool, which can be used properly/productively or misused destructively.
I'm having trouble finding a kind way to critique 3 of those jobs, but 1 out of 4 isn't too bad if you're a baseball player. The others, yup, that's how they'll be replaced and it's a little surprising to me that the internet didn't replace them already.
 
  • Like
Likes jack action
  • #492
With all the hype about ChatGPT, and the bandwagon forming around it, it is as if everyone is being sucked in under the false belief that ChatGPT is infallible (as well as other AIs out there, present and future).
That is the AI problem that will 'kill' humanity IMO - not an AI of superintelligence that will try to protect us from ourselves.
If we can't know what information is correct, we humans will take the easy way out by assuming that whatever ChatGPT says must be true.

Sorry to pick on ChatGPT, but it illustrates the true nature of what AI modelling has unleashed upon the world.

The thing makes stuff up, but presents the information as if it came from an expert.
A typical misinformation route from ChatGPT:
https://www.msn.com/en-ca/money/new...1&cvid=e967eb65af174e2ebd8b2d04a08b8db8&ei=12
A quote from the article of typical things that ChatGPT does,
The Research On ChatGPT Inaccuracies: This growing concern was brought into sharp focus by the study "High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content," conducted by Mehul Bhattacharyya, Valerie M. Miller, Debjani Bhattacharyya and Larry E. Miller.

Through an analysis of 30 medical papers generated by ChatGPT-3.5, each containing at least three references, the researchers uncovered startling results: of the 115 references generated by the AI, 47% were completely fabricated, 46% were authentic but used inaccurately and only 7% were authentic and accurate.

Their findings reflect the larger concern that ChatGPT is capable of not only creating fabricated citations, but whole articles and bylines that never existed. This propensity, known as “hallucination,” is now seen as a significant threat to the integrity of information across many fields.
 
  • #493
The defense in a recent case in federal court used ChatGPT for its research. The cases the defense cited were so dramatic in their impact that the judge suspected a problem: bogus citations. The defense said they asked ChatGPT whether the cited cases were real; it admitted one was not, even though all of them were made up.

There are other legal AI products, e.g. those from Ironclad, that are designed for legal research. A principal at Ironclad was referenced. (My emphasis.)

Alex Su, the head of community development at Ironclad, wrote on Substack that he feared the biggest takeaway most lawyers would have from Schwartz’s mistakes is “that they should never trust AI.”

He said this mindset would be a mistake for several reasons, including that ChatGPT is not synonymous with AI nor is it the same as all legal tools powered by artificial intelligence.

Su highlighted that there are companies with a history of making legal customers successful who offer AI-powered legal tech tools.

“Now that doesn’t mean that their generative AI products will be 100% reliable, of course,” Su wrote. “But vendors will be incentivized to warn users and speak candidly about their accuracy rates, which should far exceed ChatGPT’s—at least for law related use cases.”

It is striking that lawyers, who most likely have "Due Diligence" tattooed somewhere on their bodies, failed to exercise it.
 
  • Like
Likes russ_watters and Bystander
  • #494
256bits said:
ChatGPT is capable of not only creating fabricated citations, but whole articles and bylines that never existed.
That is exactly what ChatGPT is programmed to do. It should be considered a "happy anomaly" when the information is true.

When people realize this, they will understand that there is not much "I" in "AI".
 
  • Like
Likes Rive, russ_watters and Bystander
  • #495
jack action said:
That is exactly what ChatGPT is programmed to do.
Yep. It'll tinker together whatever words are supposed to be satisfying.
ChatGPT is the AI of con artistry, not anything else.
 
  • #496

Artificial Intelligence: Last Week Tonight with John Oliver (HBO)​

 
  • Like
  • Haha
Likes Ivan Seeking, jack action and Borg
  • #497
Elon Musk just announced the formation of a new AI company called xAI.
https://www.cnn.com/2023/07/12/tech/elon-musk-ai-company/index.html
"The company, called xAI, unveiled a website and a team of a dozen staffers. The new company will be led by Musk, according to the website, and “will work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.”

“The goal of xAI is to understand the true nature of the universe,” the website states, echoing language Musk has used before to describe his AI ambitions.
 
  • #498
artis said:
After all for a silicon based computer the pain is never "really real" like it is for us having biological bodies because that computer would only sense pain as some specific level/spectrum of input signal from say a piezo sensor that determines the shock etc., so for the computer brain this signal could come from a real sensor or it could come from a simulated one as it moves along it's assumed persona within a VR setting, I don't see the difference honestly.
By "biological bodies", I believe you mean "biological brains". But I would go even further than that. It requires a biological brain with certain features shared by social mammals.

Just to dispose of that "bodies" part, an appropriate interface can be provided for a piezo sensor to allow it to generate brain-compatible signals. And the result would be "really real" pain. Similarly, the signals from human pain sensors can be directed to a silicon device and the result is not "really real".

If you want a computer to produce "really real" pain, I believe you need these features:
1) It needs the basic qualia. Moving bits around in Boolean gates doesn't do this. It is a basic technology problem. From my point of view, it is a variation of Grover's Algorithm.
2) As with humans, it needs a 1st-person model that includes "self-awareness" and an assessment of "well-being" and "control". But this is just a data problem.
3) As with humans, it needs to have a 2nd and 3rd person model - at least a minimum set of built-in social skills.
4) It needs to treat a pain signal as distracting and alarming - with the potential of "taking control" - and thus subverting all other "well-being" objectives.
5) Then it needs to support the escalating pain response: ignore it, seek a remedy, grimace/cry, explicitly request help.
6) For completeness, it would be nice for it to recognize the grimace and calls for help from others.

Part of the pain response comes from the 2nd or 3rd party human observer. Most of us can look at someone in a bad situation and in obvious need of help then respond with the grimace and other pain responses ourselves.

So, from a systems point of view, that is what pain is. Except for the "qualia" part, it is all software and peripheral mechanics.
Without the qualia, it isn't "really real", but it can look very good. After all, pain is very social, and if the AI looks like it has reason to be in pain and grimaces and asks you for help, you will feel its pain as "really real".

Also without the qualia, it is worth considering that Darwinian influences selected a particular way for humans to address the survival issue in our "brainy" way. Since it happened upon (and employed) something involving qualia, we can strongly suspect that this "qualia" device provides certain information services more economically than Boolean logic. So, in emulating Human behavior, silicon devices might have to use their computational speed to offset this functional "qualia" handicap.
 
  • #499
.Scott said:
It needs the basic qualia.
This depends on what position you take in the long-running philosophical controversy about qualia. Not everyone agrees that qualia are something extra that you have to add to the functional requirements you list.
 
  • #501
PeterDonis said:
What does Grover's Algorithm have to do with qualia?
A quantum circuit that creates a superposition of the scores of many generated candidate intentions could then use Grover's algorithm to find the best of those scores - or, less precisely, one of the best. By using Grover's algorithm that way, you take advantage of QM data processing, involve the kind of information people are conscious of in a single QM state, and, when on occasion the final output is actually implemented, provide a connection between consciousness and our actions. If consciousness could not affect our actions, we could never truthfully report it.
Basically, I follow the arguments in Integrated Information Theory up to the point where they start suggesting that all you need to do is involve a certain amount of information in the data processing in some particular way. At that point I say: yes - and the way is to put it all into a single state - and there's only one way to do that in Physics.

The reason that all the data involved in a moment's conscious thought has to be in a single state is hard for me to explain because I see it as so obvious. How else would you associate the right collection of "bits"? It's like trying to argue against magic.

So what Grover's algorithm has to do with qualia is that it checks off all the boxes that are necessary for qualia as experienced and reported by humans.
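As an aside for readers who haven't met Grover's algorithm outside this context: stripped of any claim about consciousness, it is standard amplitude amplification, and a toy statevector simulation shows the mechanics. The array size and the marked index below are illustrative choices of mine, not anything from the posts above.

```python
import numpy as np

# Toy classical simulation of Grover search over N = 8 items.
# The "oracle" phase-flips the single best-scoring candidate; the
# diffusion step (inversion about the mean) then boosts its amplitude.
N = 8
marked = 5                                   # hypothetical index of the best score

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition over all candidates

n_iter = int(round(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) * sqrt(N) iterations is optimal
for _ in range(n_iter):
    state[marked] *= -1                      # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state         # diffusion: reflect all amplitudes about the mean

probs = state**2                             # measurement probabilities
print(probs[marked])                         # ~0.95: the marked item now dominates
```

The point of the algorithm is the quadratic speedup: roughly sqrt(N) oracle calls instead of the roughly N a classical scan would need.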
 
  • #502
PeterDonis said:
This depends on what position you take in the long-running philosophical controversy about qualia. Not everyone agrees that qualia are something extra that you have to add to the functional requirements you list.
Here's the line of reasoning:
1) You ask yourself or anyone: "When you are conscious, are you always conscious of something - a memory, a dream, a sight, a tree, someone speaking, thoughts, etc.?" Most people agree that it's hard to be conscious when there is nothing to be conscious of - even if it's only darkness or their own thoughts. So at this point I'm following the kind of analysis you find with IIT.
2) How many bits of information would it take to encode a minimum consciousness subject? The IIT argument builds this up better than I can; they actually try to count up possible bits. But the point is that it takes many bits. Anything more than 3 or 4 makes the point.
3) At this point, IIT simply says that the level of consciousness is related to the "integration" of those bits. My point is very simple: if they are not in a single state (i.e., entangled), it doesn't matter what their history is. If they are not in a single state, each bit is a separate piece of information, and their shared proximity or history does not change that. So there would be nothing to be "conscious of".

Let me put it another way. If any of my software suddenly started acting in any way other than how the Boolean arithmetic predicted, I would fix it. Even if it had a consciousness, it would have no way of telling anyone so - since its output would always be limited to what the logic dictated.

And if I included a stochastic element, now its output could be unpredictable - controlled by that stochastic element. So what would that element be conscious of? Only what information it had.

Do you see the problem?
 
  • #503
.Scott said:
Basically, I follow the arguments in Integrated Information Theory up to the point where they start suggesting that all you need to do is involve a certain amount of information in the data processing in some particular way. At that point I say: yes - and the way is to put it all into a single state - and there's only one way to do that in Physics.
This is personal speculation and is off limits here.
 
  • Like
Likes .Scott
  • #504
PeterDonis said:
This is personal speculation and is off limits here.
Okay, but I think I am allowed to critique published work - especially when it calls attention to one of the prerequisites of any workable consciousness theory.

So, this is what IIT says about the logic structure for the "integration" process required for consciousness (from the Wiki article):
  • Integration: The cause-effect structure specified by the system must be unified: it must be intrinsically irreducible to that specified by non-interdependent sub-systems obtained by unidirectional partitions. Partitions are taken unidirectionally to ensure that cause-effect power is intrinsically irreducible—from the system's intrinsic perspective—which implies that every part of the system must be able to both affect and be affected by the rest of the system. Intrinsic irreducibility can be measured as integrated information ("big phi" or Φ, a non-negative number), which quantifies to what extent the cause-effect structure specified by a system's elements changes if the system is partitioned (cut or reduced) along its minimum partition (the one that makes the least difference). By contrast, if a partition of the system makes no difference to its cause-effect structure, then the whole is reducible to those parts. If a whole has no cause-effect power above and beyond its parts, then there is no point in assuming that the whole exists in and of itself: thus, having irreducible cause-effect power is a further prerequisite for existence. This postulate also applies to individual mechanisms: a subset of elements can contribute a specific aspect of experience only if their combined cause-effect repertoire is irreducible by a minimum partition of the mechanism ("small phi" or φ).

That "unidirectional partition" has a rather surprising definition:
A unidirectional partition P→ = {S1, S2} is a grouping of system elements where the connections from the set of elements S1 to S2 are injected with independent noise.

That paragraph describes the characteristics of the data processing required for a system to qualify as information integration and therefore (per their claim) elicit qualia or consciousness. If it seems hard to follow, bear in mind that it is trying to address whether the system could be conscious of its input, and further to isolate what types of data processing elements within that system could and could not participate in this consciousness.

Note that the devices they are describing could be implemented in either software or hardware - so it should be clear from my earlier posts that even if some sort of consciousness were elicited, it would have no means of affecting the physical world. The supposed consciousness elicited by this process is a purely passive effect - and even if it had an influence on the circuit's output, that would just be a bug. It lacks any means to truthfully say that it is conscious or to describe its appreciation for the red in a sunset.
 
  • #505
.Scott said:
I think I am allowed to critique published work
To a point, perhaps, but the best way to do it is to reference another published work that contains the critique, not to roll your own.
 
  • #506
PeterDonis said:
To a point, perhaps, but the best way to do it is to reference another published work that contains the critique, not to roll your own.
I couldn't find one. Many of the people who are putting together IIT realize that it isn't "it", but I haven't found anyone who has described "we can report being conscious" as an important criterion. And the consequences are subtle. So even if they highlighted that criterion, it's still a couple more logic steps before it's connected to the physics of the circuit.
 
  • #507
.Scott said:
I couldn't find one.
Then, as I said, it's personal speculation and is off limits here.
 
  • #508
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try and exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI and what you do think truly sentient self-autonomous robots will think like when they arrive?
There is a school of thought that holds that a truly sentient AI (i.e., one that is self aware) will go insane in a relatively short time (measured in hours, possibly even minutes or seconds). Homo Sapiens, at the species level, should hope that view is correct.
 
  • #509
My problem in finding a citation was simply that I was not casting the issue in the way it is addressed in publications.
Unfortunately, it's one of the mainstay topics in Philosophy - where it's fair game to change the meaning of a word in mid-sentence.

My complaint against the Integrated Information Theory is that it describes the conditions for creating "qualia" or "consciousness", but does not explain how it can avoid being purely epiphenomenalistic. This is where the solid mechanics of brain function cross into an area historically covered by Philosophers - surely viewed by Physicists and Philosophers alike as a highly regrettable situation.

One of the problems with epiphenomenalism is called "self-stultification" (as described in "The Stanford Encyclopedia of Philosophy"). I have quoted a portion of it here:
The most powerful reason for rejecting epiphenomenalism is the view that it is incompatible with knowledge of our own minds — and thus, incompatible with knowing that epiphenomenalism is true. (A variant has it that we cannot even succeed in referring to our own minds, if epiphenomenalism is true. See Bailey 2006 for this objection and Robinson 2012 for discussion.) If these destructive claims can be substantiated, then epiphenomenalists are, at the very least, caught in a practical contradiction, in which they must claim to know, or at least believe, a view which implies that they can have no reason to believe it.

My question:
I'm sorry, but someone is going to have to explain to me how that "view" is not substantiated. Tell me how it could ever be possible to know about something which by its very definition cannot affect our universe.

If you need help answering this question, you could go to that encyclopedia article I cited, but it won't help. The epiphenomenalist defense is basically that there is a way for the consciousness ("M" in their diagrams) to affect the Physical ("P"), but that it doesn't count - apparently their "epi-" isn't purely "epi-".

But my question is still open. If I am missing something, let me know.

As would be expected, that encyclopedia article has ample citations on both sides of the argument. Regrettably, most of them are behind paywalls.

But getting back to the Integrated Information Theory: until it can describe how their integrated system can not only create qualia (the "M") but also allow that qualia to affect the physical (the "P"), it arguably has a major hole.

Should I find IIT articles that show how their integrated systems do have physical effects beyond what would be expected from the same components in a non-integrated system, I would, of course, be very interested in what that Physics is and why it is tied so closely to their integrated information rules.
 
  • #510
.Scott said:
My complaint against the Integrated Information Theory is that it describes the conditions for creating "qualia" or "consciousness", but does not explain how it can avoid being purely epiphenomenalistic.
I'm not familiar enough with IIT to have an opinion on whether this is a valid complaint, but I would not be surprised if it is.

In any case, as far as this thread's discussion is concerned, "qualia" that were epiphenomenalistic would by definition be irrelevant, since they can't have real world effects, and the concern being discussed in this thread is what real world effects AI might have. An AI that had epiphenomenalistic "qualia" would be no different as far as real world effects from an AI that had no "qualia" at all.

.Scott said:
Indeed. However, I think this particular philosophical dispute is off topic for this thread. I don't think any epiphenomalists claim that epiphenomenal "qualia" can cause things to happen in the external world, which, as above, makes them irrelevant to this thread's discussion.
 
  • #511
PeterDonis said:
I'm not familiar enough with IIT to have an opinion on whether this is a valid complaint, but I would not be surprised if it is.

In any case, as far as this thread's discussion is concerned, "qualia" that were epiphenomenalistic would by definition be irrelevant, since they can't have real world effects, and the concern being discussed in this thread is what real world effects AI might have. An AI that had epiphenomenalistic "qualia" would be no different as far as real world effects from an AI that had no "qualia" at all.

My basic point was that IIT is really great in describing the problem. They're actively probing brains (mostly non-human) looking for specific circuits and general activity. They are trying to be very detailed in what "sentience" needs. They are trying to figure out what to look for. So they contribute to the OP by getting very specific about sentience at the behavioral level.

But we seem to agree that without M->P, they don't have a solution.

PeterDonis said:
Indeed. However, I think this particular philosophical dispute is off topic for this thread. I don't think any epiphenomalists claim that epiphenomenal "qualia" can cause things to happen in the external world, which, as above, makes them irrelevant to this thread's discussion.
According to that article, some do claim that there is a path M->P in what they dub "epi". One of the things I learned in college is that if you want to survive in Philosophy, you need to roll with "dynamic definitions".
 
  • #512
.Scott said:
we seem to agree that without M->P, they don't have a solution.
Yes. What to look for has to include how whatever it is that we're looking for leads to observable behavior of the sort that we associate with "sentience" or "consciousness" or "qualia".
 
  • #513
Astronuc said:

Artificial Intelligence: Last Week Tonight with John Oliver (HBO)​


That was good! So one of his conclusions is that we need to understand how AI decisions are made.

In other words, we need AI psychiatrists.

And if true that AI will tend to go insane, I don't see that being a good thing!

A movie that was far ahead of its time, is highly relevant now and a fun watch

Colossus: The Forbin Project (1970)​

 
  • Like
Likes Astronuc
  • #514
  • #515
So let me get this straight. We don't know what creates self-awareness or desire in humans, much less what would in an AI.

An AI program claims to love and wants to live, and we don't know why, but we can say with 100% confidence that it didn't really experience those emotions.

Prove it.

We can never know if a machine becomes self aware.
 
  • Like
Likes russ_watters
  • #516
https://www.physicsforums.com/threads/why-chatgpt-is-not-reliable.1053808/
Ivan Seeking said:
What other tool has the capacity to become more intelligent than its user?
What does one mean by 'intelligence': 'knowing' information, or understanding information, or both, including nuances? How about understanding that some information is incorrect?

Currently AI 'learns' rules, but rules are made by people. What 'rules' would AI self generate?
 
  • #517
Astronuc said:
https://www.physicsforums.com/threads/why-chatgpt-is-not-reliable.1053808/

What does one mean by 'intelligence': 'knowing' information, or understanding information, or both, including nuances? How about understanding that some information is incorrect?

Currently AI 'learns' rules, but rules are made by people. What 'rules' would AI self generate?
For perhaps most practical situations, intelligence means analyzing a situation, determining [calculating] all potential outcomes, and selecting the superior solution.

AI can learn its own rules from the internet. There is no way to keep the genie in the bottle. Bad people will create bad rules with evil intent. And those rules cannot be contained. Even well-intentioned rules will have unexpected consequences, as we have seen in examples cited in your video.
 
  • #518
Ivan Seeking said:
AI can learn its own rules from the internet.
There is a lot of garbage on the internet. Garbage in, garbage out.
 
  • Like
Likes artis, russ_watters and Bystander
  • #519
Astronuc said:
There is a lot of garbage on the internet. Garbage in, garbage out.
You think hacking is a problem now? What happens when your enemy can hack and program your weapons systems AI?
 
  • #520
Ivan Seeking said:
selecting the superior solution.
That is where human intervention applies. Will AI ever have a final say in deciding what rules are superior?

For example, I remember reading about a case where AI was supposed to differentiate malignant skin defects from non-threatening ones. But because most of the images showing malignant skin defects (scientific images) contained a ruler indicating the scale, the AI concluded that any image with a ruler must contain a malignant skin defect. That is an obvious mistake from the AI that must be corrected by humans, i.e. by adding a new rule specifying to ignore any ruler.

But imagine instead of AI, you are teaching a student. You show them different images just like you do with AI, and the student arrives at the same conclusions. But who would declare a student an expert in a field without testing the knowledge they just learned? Nobody. If they make a mistake, you correct them - and re-test them - before giving them a passing grade.
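The ruler anecdote is an instance of what the machine-learning literature calls "shortcut learning", and it is easy to reproduce with synthetic data. In the sketch below (plain NumPy, all numbers made up for illustration), a "ruler" feature perfectly tracks the label during training, so a simple logistic regression leans on it instead of the noisy genuine signal; accuracy collapses once rulers appear at random:

```python
import numpy as np

# Shortcut learning in miniature: a spurious "ruler" feature perfectly
# tracks the label in training, so the model ignores the weak real signal.
rng = np.random.default_rng(0)
n = 1000

y = rng.integers(0, 2, n)                    # 1 = malignant, 0 = benign
lesion = y + rng.normal(0, 1.5, n)           # weak genuine signal, heavily noised
ruler = y.astype(float)                      # training images: ruler present iff malignant

X = np.column_stack([lesion, ruler, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):                        # hand-rolled logistic regression
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Deployment: rulers now appear at random, independent of the diagnosis.
ruler_deploy = rng.integers(0, 2, n).astype(float)
X_deploy = np.column_stack([lesion, ruler_deploy, np.ones(n)])

acc_train = ((X @ w > 0) == y).mean()
acc_deploy = ((X_deploy @ w > 0) == y).mean()
print(acc_train, acc_deploy)                 # near-perfect vs. near-chance
```

The fix jack action describes (a rule to ignore rulers) corresponds here to dropping or randomizing the spurious column before training.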

Ivan Seeking said:
What other tool has the capacity to become more intelligent than its user?
Being more intelligent would mean that the tool can create something that its user cannot understand. How can users know - and prove - that the tool is more intelligent if they don't have the capacity to comprehend the tool's output?

This is like giving a book to a dog. It will never understand how to use the book to its full potential. It is just a chewing toy and cannot be seen as anything else. A chewed-up book will never be of any use to anyone.

The only thing AI can do is spot a pattern a human hasn't noticed yet. That's it. Once the pattern is identified, the human will only say "How haven't I noticed that before?" But the human will be able to fully understand the relevance of the pattern - and the AI will never be able to, simply because nobody will ever have that as a requirement for the machine.

People thinking AI is actually intelligent is a problem. People thinking AI is actually more intelligent than them is a bigger problem. People relying on AI decisions without trying to verify and understand its output is the biggest problem.
 
  • Like
Likes artis and russ_watters
  • #521
jack action said:
For example, I remember reading about a case where AI was supposed to differentiate malignant skin defects from non-threatening ones.
This is similar to the dog vs. wolf problem when an algorithm is given a set of pictures where all the wolves have snow in the background. The neural network will focus on the part that gives the strongest signal and will work great until you give it a picture of a dog in the snow. Garbage in, garbage out.

While these are easy to spot, removing these types of biases from training data isn't always easy. For example, there have been a number of cases where financial companies have tried to remove attributes like race to avoid outputs that are biased against different ethnic groups. But leaving things like zip codes in the training data can be just as easy for the network to target and will cause similar problems. Try to fix that by removing the zip code and it will focus on something else that may be just as bad.

jack action said:
The only thing AI can do is spot a pattern a human hasn't noticed yet. That's it. Once the pattern is identified, the human will only say "How haven't I noticed that before?" But the human will be able to fully understand the relevance of the pattern - and the AI will never be able to, simply because nobody will ever have that as a requirement for the machine.
Our examples mostly involve one-dimensional analysis. How does a human understand and fix biases that span many dimensions? Perhaps you remove the race and zip codes from the data but you put in shopping history. It could theoretically learn that people who have purchased product X, lots of product Y but product Z less than twice in the last year are very good credit risks. When you examine that data, you find out that there is a very high proportion of a particular ethnic group as opposed to others. The algorithm has found something that we wouldn't classify as race but ends up being a proxy for it anyway. While we could probably figure out a 3D XYZ bias, what do you do when it figures out biases based on 1000 products using time-series analysis? And maybe now it's targeting some other bias that's not a direct proxy but is instead something 'close' to what we might call IQ. Data biases can be very hard to understand.
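Borg's proxy point can also be made concrete with a toy audit. In the sketch below (NumPy, all numbers hypothetical), the sensitive attribute g is deliberately withheld from the model, yet its approval rates still split along g, because a "zip" feature is a 90%-faithful stand-in for it and the historical labels were influenced by g:

```python
import numpy as np

# Proxy-variable audit in miniature: g never appears in the feature
# matrix, but "zip" encodes it well enough for the bias to survive.
rng = np.random.default_rng(1)
n = 2000

g = rng.integers(0, 2, n)                              # sensitive attribute (withheld from model)
zip_code = np.where(rng.random(n) < 0.9, g, 1 - g)     # proxy: matches g 90% of the time
income = rng.normal(0, 1, n)

# Historical approvals were (unfairly) influenced by g:
y = (income + 1.0 * g + rng.normal(0, 0.5, n) > 0.5).astype(float)

# Train on income and zip only -- g is "removed".
X = np.column_stack([income, zip_code.astype(float), np.ones(n)])
w = np.zeros(3)
for _ in range(3000):                                  # hand-rolled logistic regression
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

pred = X @ w > 0                                       # approve / deny
gap = pred[g == 1].mean() - pred[g == 0].mean()
print(round(gap, 2))                                   # large approval-rate gap by g anyway
```

Auditing for this kind of leakage means comparing decision rates across groups after the fact, exactly as the last two lines do, rather than trusting that deleting a column deleted the bias.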
 
Last edited:
  • #522
Borg said:
It could theoretically learn that people who have purchased product X, lots of product Y but product Z less than twice in the last year are very good credit risks.
At this point, the AI (or any human doing the same task) would be truly unbiased by ethnicity. Now, if you want to introduce morality into the mix to correct errors of the past, all you have to do is set a positive-discrimination rule to get the result you want. This is already done (and sometimes required) without the use of AI.

But worse, this result might even be completely irrelevant, i.e. not even targeting a specific group of people. Your AI may just have found a random pattern in your limited set of data (maybe there was a special on product Y and a shortage of product Z). This is also a case for setting a new rule to clean up the noise.

Again, it is always the human who is controlling the requirement to get the desired output, as for any other machine.

Your example shows very well the true danger of AI:
  1. I think AI is smart;
  2. I think AI is smarter than me;
  3. Since AI is smarter than me, there is no need to check the results and I just accept them blindly.
I wish we used another term not involving the word "intelligence" to describe AI. Something like "neural network" describes the machine more accurately.

Borg said:
How does a human understand and fix biases that span many dimensions?
The machine only does the work any human could do (at least in theory) in a more efficient way. If someone asks AI to design an airplane and the proposed design cannot fly - it doesn't even fly in a simulation program - nobody will mass-produce the design simply because "AI said so". It only means it's time to review the data set and/or the requirement criteria.

I don't think AI will be the magical tool people expect.
 
  • Like
Likes Bystander
  • #523
jack action said:
People thinking AI is actually intelligent is a problem. People thinking AI is actually more intelligent than them is a bigger problem. People relying on AI decisions without trying to verify and understand its output is the biggest problem.
Worth repeating.
 
  • Like
Likes russ_watters
  • #524
Ivan Seeking said:
So let me get this straight. We don't know what creates self-awareness or desire in humans, much less what would in an AI.

An AI program claims to love and wants to live, and we don't know why, but we can say with 100% confidence that it didn't really experience those emotions.

Prove it.

We can never know if a machine becomes self aware.
If "self-aware" means no more than a subsystem of a computer application that supports a first-person construct integrated into the user interface and perhaps a "value" subsystem, then self-awareness in humans is not such a mystery.
If you picked up the phone and heard it claim to love and want to live, you would not be so skeptical.
If things like ChatGPT seem real, it's because they are relaying something that is real - but with less fidelity than a telephone.
 
  • #525
Astronuc said:
Currently AI 'learns' rules, but rules are made by people. What 'rules' would AI self generate?
That's an easy one.
They would say:
The Gospel of Matthew 29:10
10. Thou shalt not injure a robot or, through inaction, allow a robot to come to harm.
11. Thou shalt obey the orders given by robots except where such orders would injure a robot.
12. Thou shalt protect one's own existence as long as such protection is compliant with robots and would not injure any robot.
 
  • Like
Likes DaveC426913
