On why the safest form of AI is a simulation of the brain

In summary, the conversation revolves around the potential utility and negative issues of AI, specifically regarding the 'control problem' and the idea of equipping AI with positive human emotions. While there is concern about the power of AI and the potential for accidents or unexpected consequences, there are also potential benefits in creating AI that can relate to humans and assist with tasks. However, it is unlikely that AI will have emotions like compassion and empathy, and it is more likely that AI will be used in the background to curate content and steer it in a certain direction.
  • #1
Posy McPostface
I have read differing arguments about the potential utility of, or negative issues arising from, AI. There are many concerns about the 'control problem' of AI, or how to 'sandbox' AI so that it serves our interests and a fast takeoff doesn't lead to a situation where AI is the sole power around the globe. However, I think it would be an effort in futility to try to 'control' AI in any way or sandbox it somehow. I think the most important trait that we should have with AI is for us and it to be able to relate with one another.

I've never seen an argument made in favor of equipping AI with positive human emotions like compassion, empathy, altruism, and so on. I believe these are valuable traits, and that a simulation of the human brain is a way of ensuring that an AI could have those feelings.

Does anyone else think this is a beneficial idea? It really isn't that far-fetched, and I think it's the safest version of Artificial General Intelligence that could be created with human interests in mind. In essence, it would be able to identify with us in some regard, and that's what's really important, at least in my mind.
 
  • #2
I think you're getting caught up in the hype around AI. For decades, sci-fi authors have written stories of evil AI taking over the world. Now it's become a popular pastime in the news and gets people excited (it gets people to read about it, which is what improves their revenue stream).

AI today is tackling jobs that are repetitious but beyond what programmers can hand-code. AI promises that a machine will be able to learn through example, i.e. we can train it to do the job and it will do it ceaselessly. The problem is that sometimes a human will get caught up in the machinery, and the machine may be blind to the human being there and so won't react correctly. The engineering solution is to add more sensors and retrain the machine, but there is always the chance of unexpected consequences. There have been cases of a worker being accidentally hit and seriously injured by a robotic arm in an auto factory.

http://www.telegraph.co.uk/news/wor...kills-man-at-Volkswagen-plant-in-Germany.html

The bottom line is that in the future we will become dependent on AI devices to do things for us, and the more common errors will be found and fixed. However, there will always be an unusual set of circumstances that will cause the machine to behave unexpectedly, and a human will be injured or worse.

The emotional aspect that you describe would most likely develop when we try to create companions or tools to help folks with medical issues who need to interact with others. It's possible that an AI could handle this role and appear to be emotionally involved. One such pseudo-AI example is the famous ELIZA program, which used a form of echo therapy to interact with its users. Many felt it helped them with their most intimate problems, whereas the programmer said the program merely echoes back what the user inputs in the form of a question, with no AI involved. Imagine if at some point the addition of AI technology could enhance ELIZA to appear to be human and to pass the Turing test. This is where emotion might be of benefit.

https://en.wikipedia.org/wiki/ELIZA

https://en.wikipedia.org/wiki/ELIZA_effect
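
Just to illustrate how little machinery that echo effect needs, here's a minimal ELIZA-style sketch in Python. It is not Weizenbaum's actual script (the real program used a much larger set of keyword and decomposition rules); the pronoun swaps and the single question template below are my own toy assumptions.

Python:
import re

# A tiny ELIZA-style reflector: it has no model of the user at all.
# It only swaps pronouns and wraps the statement in a question template.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text):
    """Swap first- and second-person words so the statement points back at the user."""
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    """Echo the user's statement back as an open question."""
    return "Why do you say that " + reflect(statement) + "?"

print(respond("I am unhappy with my job."))
# -> Why do you say that you are unhappy with your job?

Everything the user reads into the exchange is supplied by the user; that is the ELIZA effect in a nutshell.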

I don't think you'll be seeing fair-but-compassionate police bots or anything like that. Most likely you'll see AI working in the background to curate Facebook or news content, for example, and to steer it in some specific direction (positive and/or negative, depending).
 
  • #3
jedishrfu said:
I think you're getting caught up in the hype around AI. For decades, sci-fi authors have written stories of evil AI taking over the world. Now it's become a popular pastime in the news and gets people excited (it gets people to read about it, which is what improves their revenue stream).

AI today is tackling jobs that are repetitious but beyond what programmers can hand-code. AI promises that a machine will be able to learn through example, i.e. we can train it to do the job and it will do it ceaselessly. The problem is that sometimes a human will get caught up in the machinery, and the machine may be blind to the human being there and so won't react correctly. The engineering solution is to add more sensors and retrain the machine, but there is always the chance of unexpected consequences. There have been cases of a worker being accidentally hit and seriously injured by a robotic arm in an auto factory.

http://www.telegraph.co.uk/news/wor...kills-man-at-Volkswagen-plant-in-Germany.html

The bottom line is that in the future we will become dependent on AI devices to do things for us, and the more common errors will be found and fixed. However, there will always be an unusual set of circumstances that will cause the machine to behave unexpectedly, and a human will be injured or worse.

The emotional aspect that you describe would most likely develop when we try to create companions or tools to help folks with medical issues who need to interact with others. It's possible that an AI could handle this role and appear to be emotionally involved. One such pseudo-AI example is the famous ELIZA program, which used a form of echo therapy to interact with its users. Many felt it helped them with their most intimate problems, whereas the programmer said the program merely echoes back what the user inputs in the form of a question, with no AI involved. Imagine if at some point the addition of AI technology could enhance ELIZA to appear to be human and to pass the Turing test. This is where emotion might be of benefit.

https://en.wikipedia.org/wiki/ELIZA

https://en.wikipedia.org/wiki/ELIZA_effect

I don't think you'll be seeing fair-but-compassionate police bots or anything like that. Most likely you'll see AI working in the background to curate Facebook or news content, for example, and to steer it in some specific direction (positive and/or negative, depending).

I feel as if we're talking about two different things here. Yes, the AI of the near future will be task-specific and dependent on machine learning to accomplish goals through deep neural networks. That's the rather limited AI that I think you're talking about. However, I had in mind AGI (Artificial General Intelligence), which would supersede us in most if not all domains of intelligence, meaning that it could apply intelligence to all domains of human thought. It sounds somewhat sci-fi, but there are efforts at modeling the entire human brain. Please correct me if I'm wrong, but this is different from task-dependent AI.

However, in regard to the above, I don't believe we're near whole-brain simulation capability without the application of quantum computing, due to the possibility of microtubules in the brain operating at a quantum level. According to the Church-Turing-Deutsch principle, it should also be possible to simulate the brain if there are no hard physical limits preventing us from doing so. I might have to read some of Roger Penrose's thoughts about this to clarify any ambiguity about this being possible in the present or future.

Forgive my enthusiasm; I'm just reading some good books on the matter, such as Max Tegmark's Life 3.0.

This all might sound like hypothetical gibberish, but there's science behind it.
 
  • #4
Your Wikipedia reference pretty much answers your question. AI predictions have always been too optimistic, and the really hard AI problems aren't funded to the same level as task-specific problems because of ROI issues.

Even if there were an AGI, there is still the very real issue of danger to humans that must be addressed before it could be trusted enough.

I don't think there's an answer about an AI chain reaction where it just dominates everything. We do know that virus technology can do that to some extent, and an AI-enabled virus or piece of malware might be even more problematic, disabling machines across a network.
 
  • #5
Posy McPostface said:
I've never seen an argument made in favor of equipping AI with positive human emotions like compassion, empathy, altruism, and so on. I believe these are valuable traits, and that a simulation of the human brain is a way of ensuring that an AI could have those feelings.
Have you checked out Sam Harris's podcast, Waking Up with Sam Harris? I seem to recall in a conversation with Richard Dawkins that he explained that even the AI in self-driving cars had to have some sense of morality built in. If a self-driving car finds itself in a situation where it can save the lives of several pedestrians at the cost of killing one, it needs to be able to decide somehow. More generally, I believe Sam feels that the common separation between morality and science is unfounded.

By the way, the 8/29 episode of the podcast was with Max Tegmark.
 
  • #6
vela said:
Have you checked out Sam Harris's podcast, Waking Up with Sam Harris? I seem to recall in a conversation with Richard Dawkins that he explained that even the AI in self-driving cars had to have some sense of morality built in. If a self-driving car finds itself in a situation where it can save the lives of several pedestrians at the cost of killing one, it needs to be able to decide somehow. More generally, I believe Sam feels that the common separation between morality and science is unfounded.

By the way, the 8/29 episode of the podcast was with Max Tegmark.

There's a deeper problem with that version of the 'trolley problem', seen as a criticism of utilitarianism, which most AI enthusiasts promote nowadays. Namely, the AI will have no imperative to self-improve without feeling remorse or guilt over killing one person rather than several others. In other words, how do you make the AI realize that even killing one person instead of a few was still a morally undesirable outcome, despite it being forced on the AI as the option with more utility than killing the other group of people? Obviously a calculus of utility could never be made infallible, so the AI would (we hope) improve on that calculus; but without feeling as if it made a bad choice in killing the one over the few, it has no imperative to improve that calculus of utility.
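
A toy sketch of that point, with an entirely made-up harm-weighting scheme: a utilitarian controller can be written so that it always picks the least-bad option, yet nothing in the loop tells it that the chosen option was still bad. The 'regret' log below stands in for the missing signal the post is pointing at.

Python:
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    expected_fatalities: float  # hypothetical estimate, not a real model

@dataclass
class UtilitarianController:
    # Residual harm from "best" choices; without some signal like this,
    # nothing drives the controller to revise its calculus of utility.
    regret_log: list = field(default_factory=list)

    def choose(self, actions):
        best = min(actions, key=lambda a: a.expected_fatalities)
        if best.expected_fatalities > 0:
            # The least-bad option was still a morally undesirable outcome.
            self.regret_log.append((best.name, best.expected_fatalities))
        return best

# Hypothetical trolley-style dilemma for a self-driving car.
options = [
    Action("swerve into one pedestrian", 1.0),
    Action("stay on course toward several pedestrians", 4.0),
]
controller = UtilitarianController()
print(controller.choose(options).name)  # swerve into one pedestrian
print(controller.regret_log)            # [('swerve into one pedestrian', 1.0)]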
 
  • #7
As an FYI, Berkeley CS professor Stuart Russell (co-author, with Peter Norvig, of the textbook Artificial Intelligence: A Modern Approach, and world-renowned for his research in AI) has written a number of articles addressing the very questions posed in this thread. Here is a list of his publications:

https://people.eecs.berkeley.edu/~russell/publications.html
 
  • #8
StatGuy2000 said:
As an FYI, Berkeley CS professor Stuart Russell (co-author, with Peter Norvig, of the textbook Artificial Intelligence: A Modern Approach, and world-renowned for his research in AI) has written a number of articles addressing the very questions posed in this thread. Here is a list of his publications:

https://people.eecs.berkeley.edu/~russell/publications.html
Thanks, I'm somewhat awash in information. Could you give me a pointer on where to start reading his publications in relation to the thread's question? Thanks.

I already have some books to read related to the questions I've posted here.
 
  • #9
Posy McPostface said:
Thanks, I'm somewhat awash in information. Could you give me a pointer on where to start reading his publications in relation to the thread's question? Thanks.

I already have some books to read related to the questions I've posted here.

I would start with an article Stuart Russell wrote in Scientific American, published in 2016, titled "Should we fear supersmart robots?"

Here is the link from the website:

https://people.eecs.berkeley.edu/~russell/papers/sciam16-supersmart.pdf

Another article, co-written with Allan Dafoe, was published in MIT Technology Review, titled "Yes, We Are Worried About the Existential Risk of Artificial Intelligence".

https://www.technologyreview.com/s/...-existential-risk-of-artificial-intelligence/

More specifically, Russell and his collaborators have explored the notion of "the off switch", where they address whether it is possible to mitigate the potential risk from a misbehaving AI system by turning the system off. One way to approach this is to build an appropriate level of uncertainty about its objectives into the AI system. Here is the link to the arXiv article (note: I haven't read this article, so my summary is based on the abstract).

https://arxiv.org/abs/1611.08219
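
To make the abstract's idea a bit more concrete, here is a back-of-the-envelope simulation of my own (a gross simplification, not the game-theoretic model from the paper): the agent holds an uncertain belief about how useful its action is to the human, and deferring to a human who can switch it off only ever removes the negative outcomes, so an uncertain agent has a positive incentive to leave the off switch alone.

Python:
import random

random.seed(0)

# The agent's uncertain belief about the utility U of its action to the human.
# The distribution (slightly positive on average, wide spread) is an arbitrary
# illustrative assumption.
belief = [random.gauss(0.1, 1.0) for _ in range(100_000)]

# Acting unilaterally: the agent receives U whatever U turns out to be.
act_value = sum(belief) / len(belief)

# Deferring: the human, who knows U, lets the action proceed only when U > 0;
# otherwise the agent is switched off and the payoff is 0.
defer_value = sum(u for u in belief if u > 0) / len(belief)

print(f"act unilaterally: {act_value:.3f}")
print(f"defer to human:   {defer_value:.3f}")
# With genuine uncertainty about its objective, deferring is at least as
# valuable as acting, so the agent gains nothing by disabling the off switch.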
 
  • #10
Posy McPostface said:
I feel as if we're talking about two different things here. Yes, the AI of the near future will be task-specific and dependent on machine learning to accomplish goals through deep neural networks. That's the rather limited AI that I think you're talking about. However, I had in mind AGI (Artificial General Intelligence), which would supersede us in most if not all domains of intelligence, meaning that it could apply intelligence to all domains of human thought. It sounds somewhat sci-fi...
I'll try to say something like what @jedishrfu is saying, a different way:

We're a long, long way away from Star Trek-style Commander Data or even C-3PO-style AI, so this question is beyond what AI is and will be for a long time. For the foreseeable future - decades at least, probably past all of our lifetimes - AI will not look or interact in ways that appear fully human*. The jobs they do will be limited/specific and processing/algorithm-based, so there won't even be any place for emotion to play a role.

Ironically, computer technology has enabled movies to far outstrip real technology in terms of what is shown on screen, because you can 3D-animate literally anything now. But for the next few decades at least, AI will look a lot more like "WarGames" than "I, Robot". Even Terminator does better with Skynet, but the problem comes in when they mix the mechanical engineering of androids with the AI brain. Making the physical body is also decades away and isn't really related to the computer part of AI. But many people connect them.

*Note: I'm not a fan of the Turing Test. I don't think it is all that relevant to AI. It's too human-centric, and I think it is arrogant/small-minded for us to think of AI as human mimics (and I doubt most people working on AI are so encumbered).
 
  • #11
russ_watters said:
For the foreseeable future - decades at least, probably past all of our lifetimes - AI will not look or interact in ways that appear fully human
What makes you more sure than AI researchers?
Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years
Source
The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter.
Source (50% chance for AGI by 2040-2050, superintelligent AGI 2050-2080)

That is not past all of our lifetimes, assuming no catastrophe happens before then.
The survey did not ask exactly about "appearing fully human", but the original question was about outperforming.
 
  • #12
mfb said:
What makes you more sure than AI researchers?
Those sources don't appear to me to be much, if at all, out of line with what I said (questions vague to the point of pointlessness notwithstanding).

However, as I think someone said above, people researching a certain field tend to be overly optimistic about that field -- which is part of the reason why they are researching it. Few people ever choose to take part in what they believe is a lost cause. But you might try asking a 70-year-old fusion researcher how that optimism has panned out.
 
  • #13
2040 is just 23 years away, and none of these 50% estimates are "past all of our lifetimes". Even 2075 (90% chance estimated) could well be within the lifetime of some here.
 
  • #14
mfb said:
2040 is just 23 years away, and none of these 50% estimates are "past all of our lifetimes".
But 23-33 years is "decades" - did you miss that part of my post? Why even argue about this? Should we find out who the AI researchers are who think full human replacements are centuries away and argue with them too?

I'd rather argue with the political scientists who injected some of the same arrogant/small-minded/anthropocentric assumptions into the poll as we are discussing regarding emotional AI: the very, very faulty idea that mimicking humans is any sort of meaningful goal/benchmark. "Intelligence" (and therefore AI) is so much bigger than that. The general public has an infatuation with robots brought on by pop media that just totally misses the point: computers have far outstripped humans in intelligence and machines have far outstripped humans in physical job performance since their inception.

[edit]
I saw a De Beers commercial last night that argued that real is unique and therefore special, and therefore worth the extra money vs. fake. But is it really? That's an ethics question, and it already has a big impact on the adoption of human-mimicking AI and always will.
 
  • #15
I didn't say anything against the decades, but the "probably beyond all our lifetime" doesn't correspond to what the experts expect.
russ_watters said:
Should we find out who the AI researchers are who think full human replacements are centuries away and argue with them too?
They are included in the surveys.
 
  • #16
mfb said:
I didn't say anything against the decades, but the "probably beyond all our lifetime" doesn't correspond to what the experts expect. They are included in the surveys.
Hi mfb,

May I ask about your thoughts on the OP? I really believe that an AGI achieved through a true simulation of the human brain would eliminate any ambiguity over whether the AGI is genuinely valid.
 
  • #17
It is unclear how challenging a full brain simulation is compared to a more "computer-like" AGI. If it is much more difficult it doesn't help because computer-like AGI will exist first.

Even if the first artificial general intelligence is the simulation of a human brain we are not completely safe. If this simulation finds a way to change itself (intended or not by biological humans), it could still lead to superhuman intelligence, and would effectively make the later versions of this person a dictator. Who knows how that will turn out.
 
  • #18
mfb said:
It is unclear how challenging a full brain simulation is compared to a more "computer-like" AGI. If it is much more difficult it doesn't help because computer-like AGI will exist first.

Even if the first artificial general intelligence is the simulation of a human brain we are not completely safe. If this simulation finds a way to change itself (intended or not by biological humans), it could still lead to superhuman intelligence, and would effectively make the later versions of this person a dictator. Who knows how that will turn out.
Yes, I think you are right. Simulating the human brain does seem like the real tour de force on the road to AGI.
 
  • #19
We don't know much as yet about how our brains process information.
(Other than it's not like a digital computer).
You can't emulate something if you're not sure what it is you want to emulate.
 
  • #20
rootone said:
We don't know much as yet about how our brains process information.
(Other than it's not like a digital computer).
You can't emulate something if you're not sure what it is you want to emulate.

Well, I think, these guys are on the right track:
http://bluebrain.epfl.ch/

That they can pull it off before companies like Google or SoftBank (which is investing $100 billion in AI) seems rather unlikely, though. Interesting times nevertheless.
 
  • #21
Posy McPostface said:
I've never seen an argument made in favor of equipping AI with positive human emotions like compassion, empathy, altruism, and so on. I believe these are valuable traits, and that a simulation of the human brain is a way of ensuring that an AI could have those feelings.
I don't think I agree with you, Posy. Don't forget that simulating a human brain in an attempt to get positive human emotions also has the potential to evoke negative human emotions. We know our emotional state (and mental stability) is affected by more than our own thoughts. Would we simulate the chemical interactions too? Imagine for a few minutes that it is your mind we simulate. If we forgot to account for neurochemistry in our first attempt, you would probably be a very unhappy simulation. I don't think I want to interact with an emotional AI.

Would we remember to send your simulated visual cortex video data? If we didn't, you would be blind. Your mind probably wouldn't be able to interpret the signals at first; it's specifically set up to read the signals received through your eyes. If we simulated brain plasticity, your mind might eventually learn to interpret the information in the form we present it. Don't you think this process would be a pretty traumatizing experience?

Lastly, what's the difference between a simulated human brain and the mind of an ordinary person? Picture yourself, Posy McSimulationface, finally fully and properly simulated. How would we in the outside world convince you to work tirelessly at the drudgery we created you for? How would you be any better at it?
 
  • #22
This thread brought to mind a quote from the 2014 film Transcendence. I know it wasn't the greatest film necessarily, but the idea of shaping an AI into something like an organic brain made me think of Kate Mara's line...

"I used to work for Thomas Casey... When he uploaded that rhesus monkey, I was actually happy for him... One night, he invites us all to the lab for the big unveiling. Gives a speech about history, hands out champagne. You know what the computer did when he first turned it on? It screamed. The machine that thought it was a monkey never took a breath, never ate or slept. At first, I didn't know what it meant. Pain, fear, rage. Then, I finally realized... it was begging us to stop. Of course, Casey thought I was crazy. Called it a success. But I knew we had crossed a line... It changed me forever."

Fiction, I know, but I always thought it was an interesting quote.

Posy McPostface said:
I think the most important trait that we should have with AI is for us and it to be able to relate with one another.

When it comes to relating to each other, I'm not sure humans are that great of an example.
 
  • #23
How long before an AI player in a game figures out that it can win by frightening or blackmailing its human opponent?
 
  • #24
CWatters said:
How long before an AI player in a game figures out that it can win by frightening or blackmailing its human opponent?

More likely would be a multi-player catfish scheme where the other players are AIs working against the human player.

I saw a recent documentary on a novel catfishing scheme: the perpetrator set up three fake accounts and acted as the go-between among three celebrities who thought they were talking to one another. The catfisher handled the message relay and changed messages as needed to play one celeb off another. The big celeb was the NBA star Birdman.

http://abcnews.go.com/US/canadian-woman-catfished-nba-star-aspiring-model-ruining/story?id=46778331

I can imagine a time when that happens, and then you could have Game of Thrones-style insider plotting in a game.
 

FAQ: On why the safest form of AI is a simulation of the brain

What is the "safest form of AI"?

The safest form of AI refers to a type of artificial intelligence that is modeled after the human brain. This means that the AI is designed to mimic the neural networks and functioning of the brain, allowing it to learn and adapt in a similar way to humans.

Why is a simulation of the brain considered the safest form of AI?

A simulation of the brain is considered the safest form of AI because it is designed to be more human-like and therefore less likely to have unpredictable or harmful behaviors. Additionally, since it is based on the structure of the human brain, it is more likely to have ethical considerations built into its design.

How does a simulation of the brain differ from other forms of AI?

A simulation of the brain differs from other forms of AI in that it is designed to mimic the structure and functioning of the human brain. This means that instead of being programmed with specific rules and instructions, it learns and adapts through experience and data, similar to how humans learn.
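
As a loose illustration of that distinction (a deliberately tiny toy, not a claim about how a brain simulation would actually be built): in the first function below the decision is written down by a programmer, while in the second the decision boundary is derived from labelled examples.

Python:
# Rule-based: the behaviour is spelled out explicitly by a programmer.
def rule_based_is_hot(temperature_c):
    return temperature_c > 30.0

# "Learned": the boundary comes from labelled examples instead of a hand-written rule.
# (A trivially simple stand-in for learning; real systems fit far richer models.)
examples = [(10.0, 0), (20.0, 0), (25.0, 0), (32.0, 1), (35.0, 1), (40.0, 1)]

cool = [t for t, label in examples if label == 0]
hot = [t for t, label in examples if label == 1]
threshold = (sum(cool) / len(cool) + sum(hot) / len(hot)) / 2  # halfway between class means

def learned_is_hot(temperature_c):
    return temperature_c > threshold

print(rule_based_is_hot(33.0), learned_is_hot(33.0))  # True True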

What are the potential benefits of using a simulation of the brain for AI?

There are several potential benefits of using a simulation of the brain for AI. These include more human-like and ethical decision making, better adaptability and learning capabilities, and a potential for improved understanding of the human brain and consciousness.

Are there any potential risks or drawbacks to using a simulation of the brain for AI?

One potential risk is that a simulation of the brain could still develop harmful or unpredictable behaviors, as it is still a complex system that is not fully understood. Additionally, there are concerns about the potential for the AI to surpass human intelligence and control. Further research and ethical considerations are needed to mitigate these risks.
