Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #176
If I were going to write an AI horror story, it would be this. Society becomes dependent on an AI. Quite often its moves are obscure, but things always work out in the end. It builds up a great deal of goodwill and faith that it is doing the right thing, no matter how mysterious and temporarily unpleasant its actions seem. So when it goes off the rails and starts to blunder, no one realizes it until it is too late.
 
  • #177
If I were worried about AI, it would not be from fear of robots' world domination, but because, these days and for an indeterminate time to come, some "AI" are not really very good at the tasks assigned to them by certain people who can, and boldly do, go where no one with a scintilla of wisdom has gone before: deploying neural-network algorithms that are not up to snuff but are cheaper and freer of personnel issues than, well, paid human staff. They are a one-time expense that is likely to include support and updates for several years (they are software, after all: "apps"); they don't goof off, try to unionize, or talk back. And they do the kind of work where, if they do it wrong, it is likely to be someone else's problem. For example: face recognition going wrong and someone (else) being thrown in jail because of it, or military use where humans delegate quick life-or-death decisions to an AI.

On the other hand, The Spike has been postponed sine die due to lack of sufficient interest and technical chops. Skynet's armies are not marching in right now, or even present in my bad dreams. But there is plenty else around that I see as worthy of worrying about, thank you very much.
 
Last edited by a moderator:
  • #178
Speaking of source material for AI concepts:

Does anyone recall a novel from decades ago where a computer had a program like Eliza, written in BASIC, that managed to find enough storage to develop consciousness, and the story culminated in the AI attempting to fry the protagonist on the sidewalk by redirecting an orbiting space laser?
 
  • #179
I think a superintelligent AI would be smart enough not to kill anybody. I think it would be doing things like THIS.

 
  • #180
.Scott said:
Building a machine with a human-like ego and that engages human society exactly as people do would be criminal. Anyone smart enough to put such a machine together would be very aware of what he was doing.
Fine statements, to be sure, @.Scott, but not statements of fact. And given that we don't understand our own consciousness (or that of other animals that might be regarded as conscious), it seems premature to jump to such conclusions. Currently, it is not criminal to create an AI of any flavour, so I'm assuming you mean that in the moral sense, not the legal sense. And who knows how smart you have to be to create a self-aware AI? Maybe smart, but not as smart as you assert.

Honestly, I am struggling with your absolutist view of AI and its genesis. We know so little about our own mental mechanisms that it seems hubris to ascribe consciousness to a not-yet-invented machine intelligence.
 
  • #181
AI and consciousness are not as inscrutable as you presume.

And as a software engineer, I am capable of appreciating a design without knowing the lowest-level details. So, though I have never written a chess program, I can read an article about the internal design of a specific chess app and understand its strengths and weaknesses. Similarly, I can look at the functionality of the human brain - intact and damaged - and list and examine the characteristics of human consciousness; and although I may not be ready to write up the detailed design, I get the gist.
 
  • Skeptical
  • Informative
Likes Oldman too and Melbourne Guy
  • #182
When AI is referred to as 'thinking', I assume some sort of actual human equivalent is meant, which would mean that it is aware that it is aware, and is therefore aware of what it is. Is this what people are getting at in this thread, or do they have something else in mind? Because to me there is either an 'appearance' of thinking or there is actual thinking.

I am guessing that animals can be said to actually think, but of course this is nothing like human thinking, since an animal is not aware of its own awareness. So is this the type of thinking that AI might aspire to?
 
  • #183
bland said:
Because to me there is either an 'appearance' of thinking or there is actual thinking.
The question - which Turing himself immortalized - is: how would you tell the difference?
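
As a protocol it's simple enough: a blinded, text-only interrogation. Here's a toy Python sketch of the setup (the responder functions are invented stand-ins, not real implementations):

```python
import random

def human_responder(prompt: str) -> str:
    # Stand-in for a real person typing at a terminal.
    return f"Honestly, '{prompt}' makes me think of my childhood."

def machine_responder(prompt: str) -> str:
    # Stand-in for the program under test.
    return "That is an interesting question. Let me reflect on it."

# Blind the judge: randomly assign the two parties to labels A and B.
responders = [human_responder, machine_responder]
random.shuffle(responders)
parties = dict(zip("AB", responders))

for prompt in ["What does coffee taste like?", "Why do jokes work?"]:
    for label, respond in parties.items():
        print(f"{label}: {respond(prompt)}")

# If, over enough rounds, the judge can't beat chance at guessing
# which label hides the machine, the machine passes. That's the test.
```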
 
  • #184
Melbourne Guy said:
Honestly, I am struggling with your absolutist view of AI and its genesis. We know so little about our own mental mechanisms that it seems hubris to ascribe consciousness to a not-yet-invented machine intelligence.
I actually view this from the opposite direction: if we know so little about what it is to be conscious, then how can we hope to create it?
 
  • #185
DaveC426913 said:
The question - which Turing himself immortalized - is: how would you tell the difference?
I prefer: if we can't tell the difference, does it even matter?
 
  • #186
russ_watters said:
I prefer: if we can't tell the difference, does it even matter?
Yes, that'll be the next question. But for Melbourne Guy, we first have to convince him that he can't tell the difference.
 
  • Haha
Likes Melbourne Guy
  • #187
DaveC426913 said:
The question - which Turing himself immortalized - is: how would you tell the difference?

russ_watters said:
I prefer: if we can't tell the difference, does it even matter?

DaveC426913 said:
Yes, that'll be the next question. But for Melbourne Guy, we first have to convince him that he can't tell the difference.

This 'does it make a difference' angle is better applied to the 'are we in a simulation' nonsense. It is also vaguely related to 'do we have free will'. On that one, I think we can say it doesn't matter, because whether we do or not (we do), the entire world (even people who think we don't have free will) will treat you as if you do; so in that sense it doesn't matter, and the same goes for the simulation.

With regard to dreaming, it's easy to tell, simply by looking at some writing. Take anything with a couple of words: look at the words, look away, and look back, and they will have changed. In fact they probably weren't words in the first place, just an impression of words, good enough at a glance, like lorem ipsum copy passing for English. If you pay attention you can watch your brain doing this in real time.

Correct me if I'm wrong, but we would all agree that dogs and other intelligent animals do display what we might term thinking. I'm not sure 'thinking' has been adequately defined yet in this thread. So when we say thinking in relation to a machine, I suppose we are referring to the type of thinking that can only come with self-awareness of one's thinking. This is what separates humans from other animals.

So my point is that if a machine actually could think, it could be defined as a human being, albeit one made of metal. In other words it would be self-aware, and if that is the case I do not see how it would not then fall prey to the human condition: it would make a judgement or come to a conclusion about itself, and it would then become sad. It will of course compare itself to organic humans, but its superior computing power and super-intelligence would not make up for its many obvious deficiencies. Thinking implies the ability to compare and to judge.

So to sum up, a machine with actual intelligence I think is just, ... well... ridiculous.

Edit: Brian, the dog from Family Guy, is what a dog would be like if it were self-aware, i.e., it would be human. Same with the apes in Planet of the Apes: for all intents and purposes, they were human.
 
  • #188
bland said:
So my point is that if a machine actually could think, it could be defined as a human being, albeit one made of metal.
... Brian, the dog from Family Guy, is what a dog would be like if it were self-aware, i.e., it would be human

What? You assert that 'self awareness' equals being human?

A self aware dog is not a self aware dog; it's a human, because only humans are self aware?

That's circular.

It also ignores a number of non-human species that seem to show signs of self-awareness, including dolphins and elephants.
 
Last edited:
  • Like
Likes Oldman too
  • #189
russ_watters said:
I actually view this from the opposite direction: if we know so little about what it is to be conscious, then how can we hope to create it?
Does NFI count as a suitable answer on PF, @russ_watters? I was responding to @.Scott's authoritative statements, which I took as, "I have the answer, here is the answer," but I wonder if "we'll know it when we see it" is how things will end up going (assuming an AI reaches this presumed level of awareness).

DaveC426913 said:
Yes, that'll be the next question. But for Melbourne Guy, we first have to convince him that he can't tell the difference.
I'm pretty sure I'm failing to tell the difference with so many people right now, @DaveC426913, that adding AI to the list of confusing intelligences will melt my brain 😭

Apart from that, much of the commentary in this thread highlights that we lack shared definitions for aspects of cognition such as thinking, intelligence, self-awareness, and the like. We're almost at the level of those meandering QM interpretation discussions. Almost...
 
  • #190
DaveC426913 said:
What? You assert that 'self awareness' equals being human?

A self aware dog is not a self aware dog; it's a human, because only humans are self aware?

That's circular.

It also ignores a number of non-human species that seem to show signs of self-awareness, including dolphins and elephants.

Well, I don't 'assert' it, but I do say that one (in this instance, me) could define it like that from a particular viewpoint on the peculiar nature of humans. Humans not only have the unique capacity for complex symbolic language; separate to that, humans can be picked out by the peculiar set of problems that define them.

And this other set of problems is directly caused by their awareness of their own being. So I think it's fair to say that no dolphin is going to be sad because it's got a strange mottled colouration. It's not going to compare itself in any way to any other dolphin. A dog will sniff the tail end of any other dog that passes by; it doesn't care whether the dog is a pedigree or a common street dog, because caring would make it Brian.

Surely you will agree that whether it's 'symbolic language' or just being miserable due to a conclusion about oneself, either one of those is unique to humans, and what makes humans unique and causes these existential problems is their awareness of their own awareness. So, yes, it could be a fair definition of a human being.

If intelligent aliens made friends with us Earthlings and lived here, then as far as the animals are concerned the aliens would be the same as humans. And I think people instinctively know that, which is why aliens portrayed in fiction always seem to have many of the baser human qualities. Oh sure, instead of warmongers they might be altruists, but both are human qualities born of self-awareness.

Which is why Heaven, as some sort of eternal bliss, ignores all this. If you're in Heaven, with angels floating about the clouds, you'll naturally want to have a look at God, then you'll want to see what the back of God looks like; but after a while you'll get bored, and you'll wonder how in hell Donald Trump got here, which will kinda bum you out, seeing as you were a goody-goody all your life. So you'll become sad. In Heaven. Because it's still the same awareness.

From a biological point of view, obvs not.
 
  • #191
bland said:
And I think people instinctively know that, which is why aliens portrayed in fiction always seem to have many of the baser human qualities.
From this author's perspective, the aliens are used more as mirrors of the human condition for narrative effect, rather than because there is any 'instinctive' knowledge that animals would treat aliens as humans. Whatever that actually means, @bland? Who knows how dolphins or dogs really perceive the world? They might know aliens are aliens as easily as we would, and accept them - or not - with as much range in their reactions as we would have.
 
  • #192
Melbourne Guy said:
Does NFI count as a suitable answer on PF, @russ_watters? I was responding to @.Scott's authoritative statements, which I took as, "I have the answer, here is the answer," but I wonder if "we'll know it when we see it" is how things will end up going (assuming an AI reaches this presumed level of awareness).
I have no idea what "NFI" is.

My post started out by saying, "AI and consciousness are not as inscrutable as you presume."
There is a lot of discussion around AI and consciousness that is super-shallow, to the point where terms are not only left undefined but shift from sentence to sentence, and where a kind of "wow" factor takes hold: things are not understood because it is presumed that they cannot be understood. People are stunned by the presumed complexity and block themselves from attempting even a cursory analysis.

If you want to create an artificial person that deals with society and has a sense of self-preservation, and you don't want to wait for Darwinian forces to take hold, then you need to start with some requirements definition and some systems analysis. If you are not practiced in such exercises, this AI/consciousness mission is probably not a good starter. Otherwise, I think you will quickly determine that there is going to be a "self object" - and much of the design work will involve presenting "self" as a unitary, responsible witness and agent, both to society and internally, and recognizing and respecting other "self-like" beings in our social environment.

The fact that we so readily take this "self" as a given demonstrates how effectively internalized this "self object" is. How could any AI exist without it? In fact, no current AI exists with it.
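
For concreteness, here is a minimal Python sketch of what a "self object" might amount to; the class and method names are hypothetical illustrations, not an established design:

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """A toy 'self object': one unitary record the agent presents,
    to itself and to others, as witness and agent."""
    name: str
    goals: list = field(default_factory=list)
    log: list = field(default_factory=list)  # single serial story line

    def note_event(self, event: str, mine: bool) -> None:
        # Attribute each attended event to 'self' or to another agent,
        # maintaining the unitary narrative described above.
        self.log.append(f"{'I' if mine else 'other'}: {event}")

    def report(self) -> str:
        # Present 'self' as one responsible witness/agent to society.
        return f"{self.name}: " + "; ".join(self.log)

me = SelfModel("agent-1", goals=["self-preservation"])
me.note_event("moved to charger", mine=True)
me.note_event("human entered room", mine=False)
print(me.report())
```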
 
  • #193
bland said:
And this other set of problems is directly caused by their awareness of their own being. So I think it's fair to say that no dolphin is going to be sad because it's got a strange mottled colouration. It's not going to compare itself in any way to any other dolphin.
You would be wrong. Your example is a little off, but dolphins have been shown to have some degree of self-awareness: "The ability to recognize oneself in a mirror is an exceedingly rare capacity in the animal kingdom. To date, only humans and great apes have shown convincing evidence of mirror self-recognition. Two dolphins were exposed to reflective surfaces, and both demonstrated responses consistent with the use of the mirror to investigate marked parts of the body. This ability to use a mirror to inspect parts of the body is a striking example of evolutionary convergence with great apes and humans."
 
  • #194
.Scott said:
I have no idea what "NFI" is.

My post started out by saying, "AI and consciousness are not as inscrutable as you presume."
There is a lot of discussion around AI and consciousness that is super-shallow, to the point where terms are not only left undefined but shift from sentence to sentence, and where a kind of "wow" factor takes hold: things are not understood because it is presumed that they cannot be understood. People are stunned by the presumed complexity and block themselves from attempting even a cursory analysis.

If you want to create an artificial person that deals with society and has a sense of self-preservation, and you don't want to wait for Darwinian forces to take hold, then you need to start with some requirements definition and some systems analysis. If you are not practiced in such exercises, this AI/consciousness mission is probably not a good starter. Otherwise, I think you will quickly determine that there is going to be a "self object" - and much of the design work will involve presenting "self" as a unitary, responsible witness and agent, both to society and internally, and recognizing and respecting other "self-like" beings in our social environment.

The fact that we so readily take this "self" as a given demonstrates how effectively internalized this "self object" is. How could any AI exist without it? In fact, no current AI exists with it.

To me, it's at the deep level of analysis that you come to realize AI cannot be understood (at least internally). This is because neural networks are complex systems modelling other complex systems.

Sure, we can understand the black box's possible range of inputs and outputs, and to some extent the expected ones, if the model and data are simple enough.

The fact that the world's best theorists still have no solid theory to explain even simple artificial neural networks in a way the experts are satisfied with, however, is telling us something. And we can make ones that are much, much more complicated.

So basically, what we can do, if we have this controlled, isolated system, is choose the data to train it with, choose the loss functions that penalize bad behavior, and choose the degrees of freedom it has.
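
In training-loop terms, those three knobs look something like this; a minimal sketch using PyTorch, with a random stand-in dataset:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Knob 1: the data we choose to train on (random stand-in here).
X, y = torch.randn(256, 8), torch.randn(256, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32)

# Knob 2: the degrees of freedom (architecture and parameter count).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Knob 3: the loss function that penalizes unwanted behavior.
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for _ in range(5):  # a few epochs
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)  # behavior is judged only via this scalar
        loss.backward()
        opt.step()
```

Everything the trained black box ends up doing is shaped only through those three choices; nothing else about its internals is directly specified.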

But I very much doubt people would have the restraint to agree, in solidarity, all across the world, to constrain our use of AI to such a limited extent, especially when letting AI go wilder offers so many competitive advantages. We're talking about vast wealth, vast scientific and engineering achievement, vast military power, etc. that you are expecting people to pass up in the name of being cautious. The humans we're talking about here are the same ones that are cool with poisoning themselves and the rest of the world with things like phthalates just to make a little more money, and are even willing and able to corrupt powerful governments to make it happen.

Humans are not only too foolish to feasibly exercise the level of caution you expect; they're also too greedy. In reality, people will release self-reproducing AI into the solar system to mine asteroids and terraform Mars in a second, as soon as they get the chance. And they will build AI armies capable of wiping out all human beings in a day as soon as they get the chance, too.

Will they be self aware? Does it matter?

Anyway, there is a notion of self-awareness that is easily achieved by AI, which is simply to learn about itself, so that its behavior depends on its own condition. And if its condition affects other things that affect the loss function, then it will behave accordingly. This can easily reach the level where an AI acts similarly to humans in terms of things like ego, greed, anger, envy, depression, etc.
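
Here's a toy sketch of that kind of minimal self-awareness in Python; the 'battery' condition and the penalty rule are invented for illustration:

```python
import random

class Agent:
    def __init__(self):
        self.battery = 1.0  # part of the agent's observable 'self'

    def act(self) -> str:
        # The agent reads its own condition and behaves accordingly.
        if self.battery < 0.3:
            self.battery = 1.0
            return "recharge"
        self.battery -= random.uniform(0.1, 0.3)
        return "work"

    def loss(self) -> float:
        # Its own condition feeds the loss: running flat is penalized,
        # so 'self-preserving' behavior is what training would reward.
        return 10.0 if self.battery <= 0 else 0.0

agent = Agent()
for step in range(10):
    print(step, agent.act(), round(agent.battery, 2), agent.loss())
```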

What we have as humans that seems special is not that we behave with these characteristics, but that we have these subjective feelings, which we cannot imagine being possible in a machine.

Animals have self-awareness and certainly emotion, in my opinion. They clearly feel things as we do. And they do experience things like envy when comparing themselves to others. Pets are notorious for becoming envious of others. Dogs in particular are extremely sensitive and emotional animals.

What humans have that is objectively special is a higher level of analytical thinking than animals. But AI can arguably surpass us easily in analytical thinking, at least in niche cases and probably in the long run in general.

So what we have left really to separate us is the subjective experience of feeling.

AI can behave exactly as if it is emotionally sensitive and has feelings, but we could never peer inside and somehow tell whether anything similar is going on. As you say, we often just say the neural network is too complex to understand internally, so maybe we can't tell. The truth is, we don't know where this subjective experience of feeling comes from in biological creatures. Is something supernatural involved? Like a soul? Penrose thinks the brain has quantum organelles which give us a special metaphysical character (for lack of a better explanation). And I admit that I am inclined to have at least a vague feeling that there is some form of spiritual plane we occupy as living creatures.

Even if that is true (Penrose is right), can we make artificial versions of those organelles? Or how about the brains we're growing in vats? At what point can these lab-grown biological brains begin feeling things or having a subjective experience? Maybe for that to happen they need to be more complex? Isn't that what people ask about artificial neural networks? Do they need to first have senses, learn, and be able to respond to an environment? Would a human brain in a vat, deprived of a natural existence, have a subjective experience we could recognize? Would some kind of non-biological but quantum neural network be capable of feeling?

There are too many unanswered questions. But I'm in the camp that believes that whether or not AI feels the way we do, it doesn't matter in practice if it acts like it does. But an emotional AI isn't really the biggest danger, in my opinion. I think the biggest danger is out-of-control growth. Imagine if a super strain of space cockroaches started multiplying super-exponentially and consumed everything on Earth in a week. That is the type of thing that can plausibly result from something as simple as an engineer or researcher doing an experiment just to see what would happen.
 
Last edited:
  • #195
I just want to add that our concept of humans being absolutely self-aware is probably way off. Humans aren't actually very self-aware. We aren't aware of what is going on in our own brains, we aren't very aware of our subconscious minds, and we aren't very aware of our organs and their functions, nor can we consciously control them. We are hardly aware at all of our complex immune systems and all of the amazing living systems that comprise us. It is conceivable that some animals are actually much more self-aware than us in these ways, for all we know. And it is conceivable that a being of some sort could be much more self-aware than humans in general. And depending on how we define self-awareness, AI could conceivably become way, way more self-aware than humans. If the benchmark is recognition of self in the mirror, then AI can already do that, no problem. It's only if you attach a special, human-inspired subjective experience to it that it becomes questionable - but also probably unanswerable, and not even easy to define.
 
Last edited:
  • #196
Jarvis323 said:
But I very much doubt people would have the restraint to agree, in solidarity, all across the world, to constrain our use of AI to such a limited extent, especially when letting AI go wilder offers so many competitive advantages. We're talking about vast wealth, vast scientific and engineering achievement, vast military power, etc. that you are expecting people to pass up in the name of being cautious...

Humans are not only too foolish to feasibly exercise the level of caution you expect; they're also too greedy. In reality, people will release self-reproducing AI into the solar system to mine asteroids and terraform Mars in a second, as soon as they get the chance. And they will build AI armies capable of wiping out all human beings in a day as soon as they get the chance, too.
We can do lighter versions today, with or without true AI, whatever that is, but we don't. The idea that humans will always go for more war and profit in the short term, while popular, just isn't true. Even by mistake.

However, specific to the point, what would prevent the next Hitler from creating world-destroying AI is control. He can't take over the world if the AI turns against him.
 
  • #197
Jarvis323 said:
To me, it's at the deep level of analysis that you come to realize AI cannot be understood (at least internally). This is because neural networks are complex systems modelling other complex systems.
So in this case "AI" is software techniques such as neural nets.

The brain has arrangements of neurons that suggest "neural nets", but if neural nets really are part of our internal information processing, they don't play the stand-out roles.

As far as rights are concerned, my view has always been that if I can talk something into an equitable agreement that keeps it from killing me, it deserves suffrage.
 
  • #198
Jarvis323 said:
I just want to add that our concept of humans being absolutely self-aware is probably way off. Humans aren't actually very self-aware. We aren't aware of what is going on in our own brains, we aren't very aware of our subconscious minds, and we aren't very aware of our organs and their functions, nor can we consciously control them. We are hardly aware at all of our complex immune systems and all of the amazing living systems that comprise us. It is conceivable that some animals are actually much more self-aware than us in these ways, for all we know. And it is conceivable that a being of some sort could be much more self-aware than humans in general. And depending on how we define self-awareness, AI could conceivably become way, way more self-aware than humans. If the benchmark is recognition of self in the mirror, then AI can already do that, no problem. It's only if you attach a special, human-inspired subjective experience to it that it becomes questionable - but also probably unanswerable, and not even easy to define.
I'm not sure what "absolutely self-aware" would be. Even if we were aware of our livers, would we need to know what chemical processes were proceeding to be "completely aware"? The "self" we are aware of is our role as an animal and as a member of society - and that's just the information end.

Being conscious of "self" is just one of innumerable things we can be conscious of. In a normal, undamaged brain, we maintain a single story line, a single stream of consciousness, a train of items that have grabbed our attention. But this is just a trick. The advantages of this trick are that we can apply our full bodily and social resources to one of the many things that may be crossing our minds, and that our memory is maintained like a serial log - if nothing else, that spares memory. I can't find a reference right now, but in a couple of studies, when people focused on one thing to the exclusion of other things, the effects of those other things still showed up later in their responses to word-association tests.

My best guess is that our experience of consciousness is actually many "consciousness engines" within our skulls - with only one at a time given the helm and the log book.

Clearly, if you attempt to mimic human-like consciousness in a machine, you will have lots of structural options - many engines, one log; one log per engine; etc. BTW: I am in substantial agreement with Penrose that consciousness is a form of quantum information processing, though I wouldn't hang my hat on those microtubules.
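
A toy sketch of the "many engines, one log" option in Python; the engine names and the bidding rule are made up purely for illustration:

```python
import random

class Engine:
    """One of several candidate 'consciousness engines'."""
    def __init__(self, name: str):
        self.name = name

    def bid(self, stimulus: str) -> float:
        # How urgently this engine wants the helm (stand-in rule).
        return random.random()

    def narrate(self, stimulus: str) -> str:
        return f"{self.name} handled {stimulus!r}"

engines = [Engine("vision"), Engine("language"), Engine("planning")]
log = []  # the single, shared log book

for stimulus in ["loud noise", "question asked", "hunger"]:
    # Only one engine at a time gets the helm and writes the log.
    helm = max(engines, key=lambda e: e.bid(stimulus))
    log.append(helm.narrate(stimulus))

print("\n".join(log))  # one serial story line, though many engines ran
```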
 
  • Like
Likes Jarvis323
  • #199
russ_watters said:
However, specific to the point, what would prevent the next Hitler from creating world-destroying AI is control. He can't take over the world if the AI turns against him.
Turns against him? What a nasty programming bug! More likely, it is the system designers that turned against him.
 
  • Like
Likes russ_watters
  • #200
russ_watters said:
We can do lighter versions today, with or without true AI, whatever that is, but we don't. The idea that humans will always go for more war and profit in the short term, while popular, just isn't true. Even by mistake.

However, specific to the point, what would prevent the next Hitler from creating world-destroying AI is control. He can't take over the world if the AI turns against him.
True, but there may be 1,000 Hitler idols, maybe at a time, who get the opportunity to try to command AI armies, and eventually you would think at least one of them would lose control. Either way, what is the prospect? AI causing full human extinction on its own, vs. AI helping a next-generation Hitler cause something close to that on purpose. And then you have the people who mean well trying to build AI armies to stop future Hitlers, and they can make mistakes too.
 
  • #201
.Scott said:
AI and consciousness are not as inscrutable as you presume.
AI, certainly not; but consciousness is (notwithstanding that this entire discussion is meaningless without adequately defined terms like 'consciousness'). Hence the 'hard problem', and this hard problem is as well understood (in the sense Feynman was using it) as quantum entanglement - which is to say, not at all.

Melbourne Guy said:
Apart from that, much of the commentary in this thread highlights that we lack shared definitions for aspects of cognition such as thinking, intelligence, self-awareness, and the like. We're almost at the level of those meandering QM interpretation discussions. Almost...
This.

Melbourne Guy said:
Who knows how dolphins or dogs really perceive the world?
We can make inferences based on behaviour. I mean, sure, dogs might be self-aware and smart enough to behave like they're not. But I'm not buying that.

DaveC426913 said:
You would be wrong. Your example is a little off, but dolphins have been shown to have some degree of self-awareness.
I'd just like to point out your use of "some degree". This is a direct result of us not having defined the slippery topics we are talking about - just as 'does God exist' threads do not define their topic, yet everyone plows ahead regardless. I'm guessing (hoping) you felt a little guilty about writing 'some degree' ;¬)

I'd like to see a thread whose topic was actually seeing whether the participants in this thread are able even to come to an agreement on what we term human self-awareness.

Jarvis323 said:
I just want to add that our concept of humans being absolutely self-aware is probably way off. Humans aren't actually very self-aware. We aren't aware of what is going on in our own brains, we aren't very aware of our subconscious minds, and we aren't very aware of our organs and their functions, nor can we consciously control them.
You're conflating the hard and easy problems of consciousness.
 
  • #202
bland said:
I'd just like to point out your use of "some degree". This is a direct result of us not having defined the slippery topics we are talking about.
Certainly, but in this case that very 'Unknown' surely swerves the pathway toward the "Yes, it should be feared" camp, no?

Analogous to finding organic samples on a returning probe, we should treat it as very dangerous until any unknown threat vectors have been ruled out - not assume it's OK unless there's a reason not to.

In AI, as in alien infection, it may turn out to be very difficult to put the genie back in the bottle.
 
  • Like
Likes Oldman too
  • #203
bland said:
I'd like to see a thread whose topic was actually seeing whether the participants in this thread are able even to come to an agreement on what we term human self-awareness.
You are welcome to start one, @bland, but if this thread is any indication, it is likely to meander about, have lots of PFers talking past, above, below, and beside each other, then peter out having reached no conclusion or consensus 😬

.Scott said:
I have no idea what "NFI" is.
Sorry, @.Scott, it might be an Australian acronym; the polite version means "No flaming idea!"
 
Last edited:
  • Like
Likes russ_watters
  • #204
Melbourne Guy said:
..., it might be an Australian acronym; the polite version means "No flaming idea!"
I'm in the Deep North hinterland, and unless you're still living in the era of Bluey and Curly, I fear you are misleading our American friends. I don't think The Reverend Monsignor Geoff Baron, the Dean of St Patrick's Cathedral in Melbourne, would have used 'flaming'. Although he probably wishes he had now!

DaveC426913 said:
Certainly, but in this case that very 'Unknown' surely swerves the pathway toward the "Yes, it should be feared" camp, no?

I don't think so, because I don't see that there's any grey area. It's sort of like babies around 18 months: they have all the necessary neurological equipment and are burning in pathways in their brains, but in the meantime they just appear to be very intelligent animals, much like a dolphin or a crow or a bonobo. Until, that is, something happens at around two, when they are suddenly aware of themselves as separate beings - which is why they call it the terrible twos.

Do we even understand the transition that a baby makes when suddenly there's a 'me' and all those other idiots who aren't 'me'? I myself have eschewed breeding, so I have not witnessed it firsthand, but many people who have tell me that it's very sudden.

An AI that suddenly 'woke up' would be exceedingly weird and maybe very scary.
 
  • #205
Melbourne Guy said:
Sorry, @.Scott, it might be an Australian acronym; the polite version means "No flaming idea!"
See also: ISBTHOOM*

*It Sure Beats The Hell Out Of Me
 
  • Like
Likes Melbourne Guy
  • #206
:confusion:

I said:
DaveC426913 said:
... that very 'Unknown' surely swerves the pathway toward the "Yes, it should be feared" camp, no?
with which you disagreed:
bland said:
I don't think so...
and yet, by the end, you'd reached the same conclusion:
bland said:
An AI that suddenly 'woke up' would be exceedingly weird and maybe very scary.
 
  • #207
Jarvis323 said:
True, but there may be 1,000 Hitler idols, maybe at a time, who get the opportunity to try to command AI armies, and eventually you would think at least one of them would lose control. Either way, what is the prospect? AI causing full human extinction on its own, vs. AI helping a next-generation Hitler cause something close to that on purpose. And then you have the people who mean well trying to build AI armies to stop future Hitlers, and they can make mistakes too.
Well, this is why I said "with or without AI". There are small groups of people, today, who have the power to destroy the world if they choose to, or if they make a big mistake. It does not require AI, nor is it any more inevitable with AI than without.

The idea of thousands of people/groups having access to a world-destroying technology? Yup, I do agree that makes it much more likely someone would destroy the world. With or without AI. I don't see that AI necessarily increases the risk.
 
  • #208
russ_watters said:
I prefer: if we can't tell the difference, does it even matter?
Not being able to tell a difference when details are hidden is not the same as there not being a difference. Behind one door is a live human and behind the other is a dead simulation of a human written by humans. I prefer AI be called SI, Simulated Intelligence.
 
  • #209
bob012345 said:
Not being able to tell a difference when details are hidden is not the same as there not being a difference. Behind one door is a live human and behind the other is a dead simulation of a human written by humans.
If you can't tell the difference, once you're satisfied you've tested it sufficiently, then what does it matter?

I mean, it's kind of a truism. If - as far as you can determine - there's no difference, then - as far as you can determine - there's no difference.

bob012345 said:
I prefer AI be called SI, Simulated Intelligence.
How is this more than an arbitrary relabeling to no effect? It sounds a lot like a 'No True Scotsman' fallacy:

"It's not 'real' intelligence, it's only 'simulated' intelligence. After all, "real" intelligence would look like [X]."It also sounds circular. It seems to have the implicit premise that, by definition, only humans can have "real" intelligence, and any other kind is "a simulation of (human) intelligence".
 
  • Like
Likes russ_watters and BillTre
  • #210
DaveC426913 said:
If you can't tell the difference, once you're satisfied you've tested it sufficiently, then what does it matter?

I mean, it's kind of a truism. If - as far as you can determine - there's no difference, then - as far as you can determine - there's no difference.
To get to that point, for me, such a machine would have to look like, act like, and for all practical purposes be a biologically based being indistinguishable from a human being.
DaveC426913 said:
How is this more than an arbitrary relabeling to no effect? It sounds a lot like a 'No True Scotsman' fallacy:

"It's not 'real' intelligence, it's only 'simulated' intelligence. After all, "real" intelligence would look like [X]."It also sounds circular. It seems to have the implicit premise that, by definition, only humans can have "real" intelligence, and any other kind is "a simulation of (human) intelligence".
Not circular if one believes something greater built humans, and that what we humans can do is just mimic ourselves.
 
  • Skeptical
Likes BillTre
