Why is 'AI' often so negatively portrayed?

  • Thread starter: n01
  • Tags: AI
In summary, people portray AI negatively because it sells well and people are afraid of what it might entail.
  • #36
jack action said:
This brings us back to the original question from the OP: Why do you think AI has to destroy humans, and why would AI necessarily be better for the world?

Do you think humans would be content being the second most intelligent beings on Earth? How do humans treat dolphins?
War sounds inevitable. Superior beings would see human presence as a threat. But this talk about beings is contrary to my point in #27.
 
  • #37
anorlunda said:
Do you think humans would be content being the second most intelligent beings on Earth?
What makes you think AI would be smarter than human beings? (How do we define «smarter»?)
anorlunda said:
How do humans treat dolphins?
AFAIK, there is no war between humans and dolphins.
anorlunda said:
War sounds inevitable. Superior beings would see human presence as a threat.
So do human beings see ants and dolphins as a threat?

Which brings up another question: Are lions smarter than gazelles because they seek to kill them? I know, it's a straw man argument, but linking intelligence with destruction is an argument that I don't understand. If that were the case, then I must be a terrible human being, as I don't seek to destroy every life form that I meet (or maybe I'm not smart enough?).
 
  • Like
Likes Drakkith
  • #38
russ_watters said:
In War Games, the NORAD computer was accessed via a phone line. That's obsolete today, but I would think that the computers controlling nuclear weapons are not on the internet.

Nuclear weapons are not controlled by computers in such a way as to allow a completely remote launch. All ICBM launches have to be initiated locally by two officers sitting in a launch facility somewhat near the silos. You'd have to physically re-wire the entire system to allow for a completely remote launch.

Sub-launched missiles are even more disconnected from a remote launch. Subs are manned craft which can't even be communicated with unless they purposely trail a huge antenna behind them or float an antenna on a buoy.

Air-dropped/launched weapons are similar to the subs except that the aircraft is easier to communicate with. Still, like a sub, the aircraft cannot be controlled remotely at all (for now at least), so there is no way to launch a nuclear weapon remotely. Heck, you need hundreds of people just to get the aircraft ready for takeoff and to load the weapons in the first place.

anorlunda said:
Do you think humans would be content being the second most intelligent beings on Earth? How do humans treat dolphins?
War sounds inevitable. Superior beings would see human presence as a threat.

The wants and needs of an AI truly superior to humans are impossible to predict right now. Perhaps it would be content to get lost in its own thoughts as it takes in data from the internet. Perhaps it would choose to completely ignore us and go on its merry way. Perhaps it would see us as children and decide it is morally wrong to do us any harm. Who knows? I think it's important to keep in mind that human beings think the way that we do because evolution drove us to be this way. It was beneficial given the conditions we evolved under. The same is not true for an AI. The conditions will be very different and there is little reason that I can see to think that conflict is a likely outcome.
 
  • #39
n01 said:
Are people just afraid of what AI might entail?
Perhaps, but still ... I consider
 
  • Like
Likes symbolipoint and n01
  • #40
Drakkith said:
The wants and needs of an AI truly superior to humans are impossible to predict right now. Perhaps it would be content to get lost in its own thoughts as it takes in data from the internet. Perhaps it would choose to completely ignore us and go on its merry way. Perhaps it would see us as children and decide it is morally wrong to do us any harm.

"Wants and needs of an AI truly superior to humans" - plot driver (theme?) of William Gibson's Neuromancer.

(And yet another fictional representation of a non-evil AI, contrary to the OP's initial supposition.)
 
  • #41
At the risk of being possibly off topic, here's a link to a paper that might serve as some background reading.

A video about the paper:


And a link to the actual paper:
https://arxiv.org/pdf/1606.06565.pdf
 
  • #42
Some more possible background videos from Computerphile:
(These might not address the OP's question directly, but are at least indirectly relevant.)







At the very least they are fun topics to ponder.
 
  • #43
It appears we may be Outsourcing Science to AI before long, another "Brave New World" of Technology.
http://www.sciencemag.org/news/2017/07/new-breed-scientist-brains-silicon
"I want to be very clear," says Zymergen CEO Joshua Hoffman, heading off a persistent misunderstanding. "There is a human scientist in the loop, looking at the results and reality checking them." But for interpreting data, generating hypotheses, and planning experiments, he says, the ultimate goal is "to get rid of human intuition."
 
  • #45
I think it's simply because we're on the precipice of a mind-boggling social disruption, but we haven't quite gone over it yet. It's simply new and untested. It has the potential for destruction on an unimaginable scale, or it could ferry us into a new golden age. Humans had the same reservations about unlocking the power of the atom. The main horror is that we don't know where the major breakthrough will come from, and we don't like being out of control. It's understood that a truly intelligent machine could outsmart every banker and investor in the world and take total control of the stock market before we even notice.

War is an even scarier proposition. If two advanced states end up warring, the AI race will heat up. It's a paradigm-shifting technology, and the side that gets there first will overwhelm everyone else. If Hitler had figured out the bomb before us, I'm not sure the Allies would still have won the war. It's a terrifying thought that we didn't get there first by very much, but doing so completely changed the world order.

I've thought a lot about the long-term effects of AI on a planet. I've come to the conclusion that AI will be the masters of the universe. If we continue to build benign AI, we will become more and more dependent on it. Over generations, it'll play a more and more important role in society. It'll control the economy, entertain us, serve us, and shape our society. As society gets more complex, the need for humans to work will become less and less. Humans and AI will at first work together, but eventually the work will get too complex for humans and the AI will take over. There is a history of this. There are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up never working much at all: humans and dogs. I think we'll eventually become more like pets to god-like machines. I think we'll be perfectly okay with that. Most humans currently believe that we are subservient to one or more gods.
 
  • Like
Likes anorlunda
  • #46
gleem said:
Elon Musk warns governors to regulate AI development before it is too late.

https://www.inverse.com/article/342...ernors-ai-is-fundamental-risk-to-civilization
When I hear the statements of Elon Musk and the like, I'm always wondering: Is he overestimating AI, or underestimating humankind?
newjerseyrunner said:
There are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up never working much at all: humans and dogs. I think we'll eventually become more like pets to god-like machines.
There are a lot of people that think dogs have «won» that «battle»: Humans treat dogs like gods and fulfill their every need.
 
  • #47
jack action said:
There are a lot of people that think dogs have «won» that «battle»: Humans treat dogs like gods and fulfill their every need.
Actually, it's the opposite. The gods are the ones that provide for the faithful. In your analogy, we are the gods and they are our worshipers.
 
  • #48
Gods-Dogs, an anagram?
 
  • Like
Likes jack action
  • #49
newjerseyrunner said:
Actually, it's the opposite. The gods are the ones that provide for the faithful. In your analogy, we are the gods and they are our worshipers.
I'm not sure I agree. If we're the gods, why aren't they following us around, picking up our feces?
 
  • Like
Likes 1oldman2
  • #50
russ_watters said:
I'm not sure I agree. If we're the gods, why aren't they following us around, picking up our feces?
I'm pretty sure the sewage technology up until very recently was "dump it in the ocean and let Poseidon/God deal with it."
 
  • #51
newjerseyrunner said:
I think it's simply because we're on the precipice of a mind-boggling social disruption, but we haven't quite gone over it yet. It's simply new and untested. It has the potential for destruction on an unimaginable scale, or it could ferry us into a new golden age. Humans had the same reservations about unlocking the power of the atom. The main horror is that we don't know where the major breakthrough will come from, and we don't like being out of control. It's understood that a truly intelligent machine could outsmart every banker and investor in the world and take total control of the stock market before we even notice.

War is an even scarier proposition. If two advanced states end up warring, the AI race will heat up. It's a paradigm-shifting technology, and the side that gets there first will overwhelm everyone else. If Hitler had figured out the bomb before us, I'm not sure the Allies would still have won the war. It's a terrifying thought that we didn't get there first by very much, but doing so completely changed the world order.

I've thought a lot about the long-term effects of AI on a planet. I've come to the conclusion that AI will be the masters of the universe. If we continue to build benign AI, we will become more and more dependent on it. Over generations, it'll play a more and more important role in society. It'll control the economy, entertain us, serve us, and shape our society. As society gets more complex, the need for humans to work will become less and less. Humans and AI will at first work together, but eventually the work will get too complex for humans and the AI will take over. There is a history of this. There are two species on this planet that worked together in order to survive, but as complexity grew, the smarter one came to dominate and the lesser one ended up never working much at all: humans and dogs. I think we'll eventually become more like pets to god-like machines. I think we'll be perfectly okay with that. Most humans currently believe that we are subservient to one or more gods.

@newjerseyrunner, for the scenario you present to come to fruition, are you not assuming that progress in AI development will proceed more or less smoothly? Because, from my vantage point, that is far from obvious. Yes, we have made considerable advances in areas of machine learning, such as neural networks/deep learning, but it is far from clear to me that "strong" AI will necessarily emerge out of deep learning (from what I've read, the most impressive results from deep learning came about more through advances in computing power rather than anything particularly groundbreaking in the specific algorithms or theoretical underpinnings, much of which has been already laid out since the early 1990s, if not before then).
 
  • #52
newjerseyrunner said:
I'm pretty sure the sewage technology up until very recently was "dump it in the ocean and let Poseidon/God deal with it."
A stray dog doesn't need a human to live. It will just «work» to find its food and change territory as it gets soiled by its feces (which a «god» will clean up within a certain time). Finding new territories will most likely require fighting with others (offense and defense).

But if the dog «acts» cute enough, it might convince a human being to find the food, clean the territory and do the fighting with others instead of doing it itself. That's one way of answering the question «Who's leading who?» in the human/dog relationship.

Humans might just not be the gods of dogs; they might just have been led (outsmarted?) into believing they were.
 
  • #53
StatGuy2000 said:
@newjerseyrunner, for the scenario you present to come to fruition, are you not assuming that progress in AI development will proceed more or less smoothly? Because, from my vantage point, that is far from obvious. Yes, we have made considerable advances in areas of machine learning, such as neural networks/deep learning, but it is far from clear to me that "strong" AI will necessarily emerge out of deep learning (from what I've read, the most impressive results from deep learning came about more through advances in computing power rather than anything particularly groundbreaking in the specific algorithms or theoretical underpinnings, much of which has been already laid out since the early 1990s, if not before then).
The major breakthrough recently has been in how neural networks are connected to each other, but yes, hardware has mostly pushed it along. The thing is that neural networks are well suited to quantum computers. If quantum computers work out, the potential of neural networks explodes.
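(As a toy picture of what "how the networks are connected" means in code: layers are just matrix multiplies chained together, and the weight matrices define the connectivity. Every number below is invented purely for illustration; real architectures are vastly larger.)

```python
import numpy as np

def relu(z):
    """Rectified linear unit: the common nonlinearity between layers."""
    return np.maximum(0.0, z)

# Two tiny layers. One layer's outputs become the next layer's
# inputs; the weight matrices encode which unit connects to which.
W1 = np.array([[0.5, -1.0],
               [1.0,  0.5]])   # input (2 units) -> hidden (2 units)
W2 = np.array([[1.0],
               [-0.5]])        # hidden (2 units) -> output (1 unit)

x = np.array([1.0, 2.0])       # an input vector
h = relu(x @ W1)               # hidden activations: [2.5, 0.0]
out = h @ W2                   # output: [2.5]
print(h, out)
```

Changing an entry of `W1` or `W2` rewires the network, which is why connectivity (architecture) rather than raw unit count is often where the interesting design work happens.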

And no, I'm not saying it'll be steady. In fact, I leave open the possibility that it may not even be our current global civilization that does it. We could go all the way back to a dark age and have to start over. I propose that eventually we will get there. It's even possible that an AI god would end up destroying everything, set us back to more dark ages, and start the whole thing over again. But some should be stable enough to last many human generations. And if one is really self-stabilizing, it could live for millions of years as our benign overlord. Those would likely be the dominant beings of the universe, not little green men in ships. All of this I find to be a very likely progression for any creature capable of developing technology as advanced as itself, over cosmological timescales.
 
  • #54
Borek said:
n01 said:
Are people just afraid of what AI might entail?
That's my bet.
Mine too.
And AI will be faster, for sure.
And, just like us.



But it's a long way away, IMHO.
If not, I'll find it fascinating to watch.

They won't have the weakness of panic.
 
  • #55
OmCheeto said:
But it's a long way away, IMHO.

Don't be so sure. There are people like me who are working hard to make it happen before my dissertation defense, or actually for my dissertation defense... hopefully.

The reason AI sometimes gets a bad rap is twofold. One, it never delivered what it promised. The idea of AI has been around since the war (and not the Vietnam war), and, to make a gross understatement, it hasn't lived up to the hype. This is the AI insiders' frustration, however, and I think the OP was referring more to the general bad rap AI can get in popular culture.

The reason for the popular bad rap is simply that people don't understand it. And because they don't understand it, i.e., what's "under the hood" of AI architectures, they fear it. Basic fear of the unknown. As far as the average Joe or Joanne is concerned, a human-derived AI creation might as well be an alien from another galaxy. We know nothing about these aliens, and they could be good or bad. But fear is stronger than tolerance, and it's much easier to just say AI is dangerous than to take the time to explore the research going on in this field. It's fascinating. I run dozens of computer simulations a day on spiking neuron network populations in order to simulate mammalian cortical processes, in an attempt to develop a tractable architecture we can exploit for advanced information processing capabilities. These networks I deal with are like little children: they're naughty sometimes, they don't cooperate, and they don't make any sense. But they're learning... They're coming along and growing more cooperative with some love and attention. That's the way I look at it. They'll become what we make them become. From there, sure, they may take on a mind of their own, but so what? That's what evolution is all about.
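For readers curious what a "spiking neuron" simulation looks like at its most basic: the standard textbook unit is the leaky integrate-and-fire neuron. The sketch below is nothing like the cortical populations described above; every parameter value is illustrative, chosen only to make the spike-rate behavior visible.

```python
def simulate_lif(i_input, t_max=0.5, dt=1e-4, tau=0.02,
                 v_rest=-0.070, v_reset=-0.070,
                 v_thresh=-0.050, r_m=1e7):
    """Leaky integrate-and-fire neuron driven by a constant current.

    Membrane voltage follows dV/dt = (-(V - v_rest) + R*I) / tau;
    on crossing threshold the neuron "spikes" and resets.
    Returns the spike times in seconds.
    """
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        v += (-(v - v_rest) + r_m * i_input) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A stronger input current produces a higher firing rate.
weak = simulate_lif(2.5e-9)    # 2.5 nA of input current
strong = simulate_lif(5.0e-9)  # 5.0 nA of input current
print(len(weak), len(strong))
```

Networks of such units, coupled so that one neuron's spikes become another's input current, are where the "naughty children" behavior starts to emerge.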

You can't stop it. You can't stop what's coming...

Hahhaha
 
  • Like
Likes OmCheeto
  • #56
DiracPool said:
The reason for the popular bad rap is simply that people don't understand it. And because they don't understand it, i.e., understand what's "under the hood" of AI architectures, they fear it. Basic fear of the unknown. As far as the average Joe or Joanne out there, a human-derived AI creation might as well be an alien from another galaxy. We know nothing about these aliens and they can be good or bad. But fear is stronger than tolerance, and it's much easier to just say AI is dangerous than to take the time to explore the research that is going on in this field.
This point of view speaks more to me. It makes sense that someone who is in the field speaks like that.

What I don't understand is why people like Musk and Hawking speak of it with such fear. Aren't they in the field too (anyway, closer to it than I am)? I'm curious to hear your thoughts on their statements.
 
  • #57
jack action said:
What I don't understand is why people like Musk and Hawking speak of it with such fear.

I would say this: people with more means than the average layperson to see both the positive and negative global outcomes that the current level of AI research and use can lead to will find it natural to point out that we currently cannot discern the technological preconditions separating desirable scenarios from undesirable ones, and thus that we cannot ensure we won't all end up in one of the really bad scenarios. I am not surprised that knowledgeable people like Musk and Hawking (and many others) consider it prudent to point this out.

Some people seem to evaluate risk in the context of AI (or even in general) by trying to guess or estimate the most likely scenarios, and if these are all desirable scenarios, then they apparently find it unnecessary to analyse or even acknowledge the possibility of less likely ones. And in the context of AI they do this even when the actual probabilities are very hard to estimate correctly. To me, this is not prudent risk management.

Also, the use of the words "such fear" sounds to me like an attempt to portray Musk and Hawking's statements as the result of phobia (irrational fear). However, if fear in this context is taken to mean a rational perception of danger, then I consider it an appropriate label.
 
  • #58
It's because it is a huge game changer. For good or for bad.

For instance, say AI becomes so advanced that it can do most human tasks. This could lead to:
1) Massive unemployment/ social strife

Or

2) Utopia in which the machines/programs do and build everything for us and with the cost of items reducing down to cost of raw materials, no one would need to work anymore.

Or 1) followed by 2).

Or neither of any of the above.

These are just some examples of what could happen. I'm sure there are more.

Another possibility is that the rules themselves would change. Different societies in history have had different economic systems. Hunter-gatherers had a different system of trade and followed different economics. Our current economic theories are based on the post-Industrial Revolution period. We can't assume that the post-AI-revolution period would follow similar economic laws, considering just how drastic an impact runaway AI development would have on human society.
 
  • #59
Here's a case where a little more AI wouldn't have hurt.
http://www.bbc.com/news/technology-40642968
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.

"Steps are our best defence against the Robopocalypse," commented Peter Singer - author of Wired for War, a book about military robotics.
 
  • #60
1oldman2 said:
Here's a case where a little more AI wouldn't have hurt.
http://www.bbc.com/news/technology-40642968
"We were promised flying cars, instead we got suicidal robots," wrote one worker from the building on Twitter.

"Steps are our best defence against the Robopocalypse," commented Peter Singer - author of Wired for War, a book about military robotics.

We were promised flying cars, but we got better than that. We got the internet.
 
  • Like
Likes 1oldman2
  • #61
Decades ago, AI was grossly oversold, with everyone and his brother jumping on the bandwagon. Many people had too much "artificial" and not enough "intelligence". Progress was slow and idealistic theories ran into computer limitations.

That being said, I think that the current state of AI is very impressive (even if it is still being oversold). Significant capabilities like automated cars, license plate scanners, facial recognition, etc., etc., etc., are becoming practical.
 
  • Like
Likes russ_watters
  • #62
FallenApple said:
For instance, say AI becomes so advanced that it can do most human tasks. This could lead to:
1) Massive unemployment/ social strife

Or

2) Utopia in which the machines/programs do and build everything for us and with the cost of items reducing down to cost of raw materials, no one would need to work anymore.

Or 1) followed by 2).
This argument is along the line of one of Musk's arguments found in the article of post #44:
Musk said job disruption will be massive when A.I.-powered machines reach their potential, joking “I’m not exactly sure what to do about this,” before adding, “This is really like the scariest problem to me.” He noted the transportation job sector — which accounted for 4.6 million jobs in 2014 — “will be one of the first things to go fully autonomous.”

“The robots will do everything, bar nothing,” he said dryly.
I'm no expert on AI, so I cannot bring arguments about the potential of AI. But I know enough to recognize this kind of argument as a logical fallacy. It is an argument along the line of «buggy whips will become obsolete when people start to travel in cars rather than in horse-drawn buggies, therefore one should be careful before encouraging the car industry.»

I repeat here what Musk said:
“This is really like the scariest problem to me.”
Really? Why doesn't Musk see that in a positive way instead? Those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives. My grandfather quit school at age 12, and his job was cutting down trees with axes and saws. Would it have been a good idea not to invent the chainsaw and the even more productive logging equipment we have today, just so that I could have had a job in that industry at age 14? I don't know if it is better, but I instead went to university and earned a baccalaureate in mechanical engineering. One thing is for sure: I don't think my life is worse than my grandfather's.

Thinking that robots doing the jobs we do today means humans will cease to work is just plain wrong, because the past is an indication of what the future holds. People never stop being curious and embarking on new adventures.

When someone serves me this kind of argument, it doesn't help me trust the validity of his other arguments, which only promote fear, often along the line of the end of the world (there are lots of past examples related to this as well that suggest those fears are unreasonable).
 
  • #63
UsableThought said:
jack action said:
Like those 4.6 million people will be able to spend their time doing something more valuable, enriching their lives.
What alternate universe is this going to happen in?
Sorry, I don't think I get your point. I just explained how my grandfather couldn't study past age 12 because he had to work. Years later, it wasn't my case. There are billions of people who can still follow that path and even go further.
UsableThought said:
And what world is it that has no possibility whatsoever of coming to an end because it never has ended before?
It is not about there being no possibility; it is about probabilities. And either you see the things you're doing in a positive way or, if you don't, you stop doing them and do something else.

I find it weird that someone like Musk invests in AI and then spreads fear about how it will destroy us. If you don't believe in it, stop doing it.

He wants laws. What laws? What laws could have been declared in 1900 about the car society we live in today? How can you foresee the future of something you don't even master yet? Do we really think some evil genius will try to use the technology to destroy the world? Or is it more probable that people will adjust as they go along, for the greater good, like they always have in the past? Unstoppable technology that sneaks up on us without our ever noticing it? I highly doubt that.

Another thing I don't understand is the fear that a being smarter than us (whatever that means) will try to get rid of us because we're less smart. Not only do humans not try to get rid of other life forms judged less smart, but as our knowledge grows, we recognize and value more and more the diversity of life in any shape or form. We spend an insane amount of energy trying to rescue and maintain other life forms, a very unusual trait among animals or plants. Why would a smarter being leave that tangent to go back to a caveman mentality?
 
  • #64
jack action said:
I find it weird that someone like Musk invests in AI and then spreads fear about how it will destroy us. If you don't believe in it, stop doing it.
[rantmodeinitiated]
I just find it annoying that a billionaire has such a poor grasp of economics that he falls for a bumper-sticker style economic myth!

Here's how it works (and clearly, he and other adherents put no thought toward what happens after Step 1):
Step 1: New machine leaves millions unemployed.
Step 2: Large pool of unemployed workers reduces the cost of labor.
Step 3: Other industries hire more people because now it is cheaper to employ them. Or:
Step 3a: Unemployed people acquire new skills and get different new jobs.

Now, I'm not saying this process is pain-free - it can be extremely painful, especially for the individuals - but over the very long-term, the market adapts and adjusts and unemployment rates remain remarkably consistent.
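Those steps can be caricatured in a few lines of code. This is a toy loop with made-up numbers and crude linear rules, purely to illustrate the feedback being described, not a real economic model:

```python
# Toy labor-market sketch of steps 1-3 above. All numbers and
# update rules are invented for illustration only.

def adjust(unemployed, wage, periods=20):
    """Each period: a larger unemployed pool pushes wages down,
    and cheaper labor lets other industries absorb more workers.
    Returns the unemployment level after each period."""
    history = []
    for _ in range(periods):
        # Step 2: wage falls when unemployment is high (crude rule,
        # floored so it never goes to zero).
        wage = max(0.5, wage - 0.01 * unemployed)
        # Step 3: hiring rises as labor gets cheaper.
        hired = 0.2 * unemployed / wage
        unemployed = max(0.0, unemployed - hired)
        history.append(unemployed)
    return history

# Step 1: automation suddenly idles 10 (million) workers.
path = adjust(unemployed=10.0, wage=1.0)
print(path[0], path[-1])  # unemployment shrinks period by period
```

The point of the toy, as in the rant above, is only that the pool shrinks over time once wages and hiring respond; it says nothing about how painful each individual period is.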

[/endrant]
 
  • #65
jack action said:
What I don't understand is why people like Musk and Hawking speak of it with such fear. Aren't they in the field too (Anyway closer than I can be)? I'm curious to hear your thoughts about the speeches of these people.

That is a good point. But I'll refer to my earlier post when I say that people who don't know what's "under the hood" of how AI architectures are constructed are really just talking through their hats. I like Elon Musk; he's an inspiration to me, and I intend to do everything in my power to help him realize his dream of colonizing Mars. Stephen Hawking is a legend. But, iconic scientists as they are, I'm sure neither of them has much experience working with neural network architectures, so how can we look to them for guidance or sanity?

The bottom line is that no scientific quest is going to overshadow the Manhattan Project. You want to talk about a project that was going to manifest itself no matter what? That was it. And it isn't ancient history; it really is the biggest threat to bring down the temple of progress humans have built over the past 5000 years.

The threat of nuclear war and climate change are by far the most immediate threats to our existence. In that sense, I ally with Noam Chomsky who has been pushing this for years.

As far as the robots go, sure, they can go awry, but again, as I said in the earlier post, it's up to how we design them.

jack action said:
Another thing I don't understand is that fear that a being smarter than us (whatever that means) will try to get rid of us because we're less smart. Not only humans don't try to get rid of other life forms judged less smart, but as our knowledge grows, we recognize and value more and more the diversity of life in any shape or form. We spend an insane amount of energy trying to rescue and maintain other life forms, a very unusual trait among any other animals or plants. Why would a smarter being part ways from that tangent to go back to a cavemen mentality?

This is a good sentiment, but the fact of the matter is that high intelligence does not equate with high altruism or morality. Yes, I love the apes and even the monkeys and want to preserve them, but that's because of a sentimental and altruistic sense that was burned into my brain for natural-selection purposes a long time ago. And that's all good with me. But with the robots, we cannot assume anything; we need to make things explicit.

How? Well, if they were biological creatures, we could just do something with the genetics and make them dependent on dietary lysine, like in the Jurassic Park movie.

Thank god for what we AI researchers are doing, though: LIFE WILL NOT FIND A WAY. Why? Because we are not dealing with life; we are dealing with in silico preparations, not in vivo preparations. Therefore, we can control the parameters to make sure nothing goes wrong...
 
  • #67
DiracPool said:
Thank god for what we AI researchers are doing, though: LIFE WILL NOT FIND A WAY. Why? Because we are not dealing with life; we are dealing with in silico preparations, not in vivo preparations. Therefore, we can control the parameters to make sure nothing goes wrong...

I am not sure why you think that non-living things will always be controllable, and I think you also miss that we humans are part of this, driving the technology forward while having to agree on how to use it.

To be in control of a socio-technical system that uses a technology such as AI means that we at all times effectively can and will control the design and operation of the underlying systems so that we steer toward desirable outcomes and stay away from bad ones. This implies that several conditions must all be established:
  1. We must be sufficiently able to predict the set of possible outcomes and their desirability ahead of time.
  2. We must have sufficient consensus on what is considered desirable and what is considered undesirable.
  3. There must exist a set of parameters that will allow us to steer our systems toward what we desire and away from what is not desired.
  4. We must have the collective will to actually apply this control.
  5. These conditions must all be established at all times.
Each of these conditions has obvious failure modes that could mean loss of control at the wrong time. Note that conditions like number 4 depend heavily on "human nature" in a competitive environment and less on the technology itself. Of course, occasional loss of control does not imply we will get an undesirable outcome, but currently we are heading toward a situation where pretty much none of the conditions are established at any time; hence we have no idea what level of control we actually have.

The above can be said to have been true for pretty much any technology we have developed so far, and yet we seem to be overall content with the outcome, so why would this be a problem for (general) AI technology? Currently there is heavy research into making systems self-learning, autonomous, adaptive, and distributed to a higher degree (not necessarily all at once), and each of these capabilities will (everything else being equal) weaken the conditions mentioned above far more than any of the previous "passive" technologies we have developed. For instance, it seems to me we already have serious issues with the predictability of interconnected systems due to their complexity alone, and instead of simplifying and thus gaining a higher degree of predictability, we just add more "AI complexity" to the systems instead. If we look at the "traditional" AI singularity problem, then this too is a problem of loss of predictability, both from increased complexity and from a decrease in prediction time.
 
  • Like
Likes UsableThought
  • #68
DiracPool said:
So this is a good sentiment, but the fact of the matter is that high intelligence does not equate with high altruism or morality.
I did not mean it as altruism or morality. For my part anyway, I understand that I cannot live without other forms of life. We need fish, weeds, reptiles, insects and bacteria. Most reptiles and insects don't inspire much altruism in me! More often, I have to fight a sentiment of disgust, restraining myself from getting rid of them all! I think that this is the result of my intelligence connecting the dots between the survival of other species and my own survival.
Rubidium_71 said:
I really like the main point of that general, though:
I don't think it's reasonable for us to put robots in charge of whether or not we take a human life...[America should continue to] take our values to war.
It's not really a question of fearing a robot uprising, but questioning our value system from an ethical point of view.

It is along the lines of «Who is responsible in a car accident with driverless cars?» The car passenger, the car owner or the car manufacturer? That is not an easy question to answer, and a more serious problem to solve in the short term than AI taking over the world.
Filip Larsen said:
Currently there is heavy research into making systems self-learning, autonomous, adaptive and distributed to a higher degree (not necessarily all at once), and each of these capabilities will (everything else being equal) weaken the conditions mentioned above far more than any of the previous "passive" technologies we have developed.
But the question is always: how do you prepare for the unknown? So we need control. Control over what? How?

In 1900, when they began dreaming about a car society, how could they have foreseen the problems the internal combustion engine would cause? How could they prepare for that? Talking about pollution? The pollution of the time was horse manure. That inoffensive gas coming out of the tailpipe was a god-given gift! Even if you could go back in time and tell those people, «You know what? You should focus more on the electric car», that is not even a guarantee that everything would be better today, because we don't know what the outcome of a world filled with electric cars would be. The reality is that it is our problem now, not the problem of people who died decades ago. And there will always be problems to solve.

The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

One should always advance with care into the unknown, that is good common sense.
 
Last edited by a moderator:
  • #69
The greatest concern is the development of a "general artificial intelligence". Do we need to be gods and create something in our own image? Wouldn't "smart" stuff be good enough? That is, make tools that we control, not competitors that we might not be able to.

Musk is as much an AI supporter as anybody. His "predictions" of our demise are a possibility if we are not careful in its implementation. He is a signatory of the "Asilomar AI Principles" along with about 3500 others, including 1200 AI/robotics researchers. These principles were drafted by the AI community to help ensure that AI will be a benefit to mankind.

His particular message at the governors' conference was to warn of the uncontrolled development and implementation of AI: market-driven forces that might try to exploit this technology for economic advantage with little concern for possible unintended consequences.
jack action said:
In 1900, when they began dreaming about a car society, how could they have foreseen the problems the internal combustion engine would cause? How could they prepare for that? Talking about pollution? The pollution of the time was horse manure. That inoffensive gas coming out of the tailpipe was a god-given gift! Even if you could go back in time and tell those people, «You know what? You should focus more on the electric car», that is not even a guarantee that everything would be better today, because we don't know what the outcome of a world filled with electric cars would be. The reality is that it is our problem now, not the problem of people who died decades ago. And there will always be problems to solve.

The kind of predictability and control you are talking about is just an illusion. You'll never achieve it.

If we build it correctly, everything will be fine. Right. Man is not perfect and neither is his technology. He tends to mind his pocketbook more than his future.

Today we are all connected through the internet and are becoming more dependent on it economically. In the past month we have seen a worldwide virus attack. If one can take down the internet for a substantial time, what will be the result?

Man is his own worst enemy. Everything we do has a downside. Today the medical community (do no harm, right?) is responsible for a growing opioid epidemic and for a growing menagerie of superbugs. Technology kills or maims millions each year. Has man developed anything that did not have some unforeseen consequences? Is he learning anything from his past experiences? Will he ever? Shouldn't we be more cautious with the more complex technologies coming down the pike? You would think so.
 
  • Like
Likes Filip Larsen and UsableThought
  • #70
@Filip Larsen and @gleem have posted what I consider informed comments - that is, comments that are not merely opinion (although they include opinion) but in addition either list or point to actual knowledge related to responsibly overseeing the development of AI - and not just at the basic level of coding, either. For example, going to the website gleem references for the Future of Life Institute and the 2017 Asilomar conference, we find a list of principles agreed on at that conference. An interesting excerpt from that list:

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?
Look at number 2 above more closely - the last phrase in the sentence before we get to the bullet list of concerns. Read that phrase again:

thorny questions in computer science, economics, law, ethics, and social studies

And now look at the list of speakers from the conference page: https://futureoflife.org/bai-2017/

Most of the names don't mean anything to me. Why? Because I know nothing about AI. I could make a whole bunch of claims about AI and its perils, or lack thereof, based on my personal ideological spin to do with politics, economics, and social issues - but that wouldn't change the fact that I'm not a scholar when it comes to politics, economics, social issues, or AI. I'm not even a well-read amateur. I know nothing. Whereas the people listed in the conference, who developed this list of principles? Some of them look to be Hollywood types, brought on board for persuasion purposes perhaps; but others look to have solid scientific AI credentials.

In other words, they are experts. They might have a clue. Possibly you might recognize a name here & there and be able to dismiss that person for some reason or other - but can you dismiss all of these names?

To close, I really wish the General Discussion Forum held comments to the same standards as Quantum Physics, Classical Physics, etc. Because it doesn't, we end up with some very bright people making sweeping claims about issues they really know very little about. You can go further than that and see that, because PF is really set up only for the hard sciences, it doesn't know how to properly handle the "soft" sciences of economics, law, ethics, and social studies - all of which the folks at the Asilomar conference seem to think are important. Basically, PF unintentionally disses the soft sciences by not requiring the same level of sourcing required for the hard sciences. It's a shame.
 