No Human-Level AI: Reasons & Consequences

In summary, the thread argues that the two ways to create human-level AI are both infeasible. The first option, programming the AI directly, is impractical because of the complexity of human behavior and thought. The second option, using an evolving algorithm, is held to be infeasible because robots will not face the same evolutionary pressures that humans did, and simulating those pressures is impractical.
  • #1
Mathy21
It is my strongest belief that human-level AI will never exist. Here is my reasoning: There are two ways in which to develop this AI...

1) Program the AI directly
2) Use some adaptive, evolving algorithm

Now, because of the complexity of human behavior and thought, the first option is practically infeasible.

The second is theoretically infeasible, and here is why. If we ask why human intelligence came to be, the general answer is that evolutionary pressures selected for this intelligence because it gave us some survival advantage. Robots will not face the same evolutionary pressures that humans did, and simulating those pressures is practically infeasible. So the evolving algorithm will never evolve to the complexity of the human mind.
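To make option 2 concrete, here is a minimal toy sketch of such an evolving algorithm (a standard genetic algorithm; every name and the bit-matching fitness function are purely illustrative). My claim is precisely that no writable fitness function stands in for the pressures that produced human minds:

```python
import random

# Toy sketch of an "evolving algorithm" (option 2). Everything here is
# illustrative: genomes are bit strings and the fitness function is a
# trivial bit-matching score standing in for evolutionary pressure.

GENOME_LEN = 20
TARGET = [1] * GENOME_LEN  # a stand-in goal; real "survival" has no such spec

def fitness(genome):
    # Counts bits that match the target. The whole debate is whether any
    # writable function like this could select for human-level intelligence.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(GENOME_LEN)     # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in a[:cut] + b[cut:]]   # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # approaches GENOME_LEN as the population adapts
```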

Thoughts?
 
  • #2
You can model "survival advantage" in the same way that you can model intelligence. Your second point is flawed.

- Warren
 
  • #3
chroot said:
You can model "survival advantage" in the same way that you can model intelligence. Your second point is flawed.

The fitness function that directed human evolution is no doubt extremely complicated. Implementing this fitness function would probably be at least as difficult as directly programming human behaviors and thought. So, from a practical standpoint, I do not see how my second point is flawed.

If you disagree could you please elaborate?
 
  • #4
I think this is what the programming forum is for! I might have missed this.

One point of view:
The essence of human thought consists of only a few basic principles. Evolution endowed us with a lot of instincts and optimizations that helped us think better and survive. But the core system--what allows us to think at all--is relatively simple.

Furthermore, a huge amount of brain space is "wasted" doing things like breathing, seeing, and controlling motor functions, and coordinating all of the complex concepts that are specialized to those physical functions (like the concept of an "image of a dog"). A thinking AI would not necessarily need these things, and might only need a very simple form of input, such as reading text files, making it much leaner and more efficient than the brain. Additionally, there is a demonstrated strong link between short-term memory capacity and reasoning ability in humans. It may be over-optimistic (depending on the storage requirements of thoughts in an AI), but a thinking computer could probably remember far more concepts at a time than a human can, removing that bottleneck to cogitation. And of course the AI would be able to use computer tools (like math packages) as fast as it can think.

Finally, evolution has by no means finished optimizing humans for thought. Our incredibly limited short-term memory is a case in point. I think humans are not too far above animals in terms of thought capacity.

For these reasons, I believe that a proper AI algorithm capturing the essence of human thought is out there, and once it is found there will be huge consequences.
 
  • #5
I agree with 0rthodontist on the principle that just because our minds are complicated doesn't mean the essential part of thinking is. Perhaps it would be a lot harder to develop an AI that could be "downloaded" into an "empty" brain (perhaps that of a politician).

I'm not sure what the minimum fitness function is to develop AI, so I'm not yet willing to discount it as a practical alternative for some future time when people come up with better fitness functions.

Another alternative is to use a learning neural network composed of nodes roughly analogous to the brain's neurons in I/O function and initial connection. This fits your adaptive algorithm definition but is not refuted by your fitness function / adaptive pressure argument (though I cannot say that you don't have another refutation in mind).
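To make "learning" concrete, here is a minimal toy sketch of a single such node (a classic perceptron; the AND task and all the numbers are invented for illustration). Its weights are set by local error feedback from labeled examples rather than by a global evolutionary fitness function:

```python
# Toy sketch: one "neuron" learning the AND function from labeled examples.
# The weights are adjusted by a local error signal, not by a fitness
# function over whole organisms. All numbers here are illustrative.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias, rate = 0.0, 0.0, 0.0, 0.1

for _ in range(20):                          # repeated passes over the data
    for (x1, x2), target in examples:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output              # local error signal
        w1 += rate * error * x1              # classic perceptron update
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    print((x1, x2), 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)
```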
 
  • #6
Your.Master said:
I agree with 0rthodontist on the principle that just because our minds are complicated doesn't mean the essential part of thinking is. Perhaps it would be a lot harder to develop an AI that could be "downloaded" into an "empty" brain (perhaps that of a politician).

I'm not sure what the minimum fitness function is to develop AI, so I'm not yet willing to discount it as a practical alternative for some future time when people come up with better fitness functions.

Another alternative is to use a learning neural network composed of nodes roughly analogous to the brain's neurons in I/O function and initial connection. This fits your adaptive algorithm definition but is not refuted by your fitness function / adaptive pressure argument (though I cannot say that you don't have another refutation in mind).

How would this neural net learn? No doubt it would need some sort of fitness function to guide the setting of the weights between the neurons, and I think that fitness-function approach is practically infeasible.

The idea that our brains being complex doesn't mean thought is complex raises some interesting questions. I like to think about this with the analogy of flight. In nature, all flying things fly by flapping their wings, yet engineers designing flying devices of that kind have had very limited success. The huge breakthrough came with an increased understanding of fluid flow, and with it the realization that wings don't have to flap in order to fly.

Imitating nature probably won't produce any higher-level AI, but maybe if we increase our understanding of the "fluid flow" of intelligence we won't need "flapping wings" to design human-level AI. But just what is this "fluid flow" of intelligence?
 
  • #7
Mathy21 said:
The fitness function that directed human evolution is no doubt extremely complicated. Implementing this fitness function would probably be at least as difficult as directly programming human behaviors and thought. So, from a practical standpoint, I do not see how my second point is flawed.

You're using the word "probably" to support a point that is really nothing more than an opinion. It's no better an argument than when the creationists say they feel the eye is too complex to have evolved by natural selection. You may feel that the fitness function is too complex to implement, but you provide no rational reason why anyone else should feel that way.

Because this point is nothing more than an opinion -- one which I am free to disagree with in an equally hand-waving way -- it does not really constitute a valid argument.

- Warren
 
  • #8
It's possible that the development of the human brain relies on two simple rules:
1. A rule defining when a neuron should connect to another
2. A rule defining when a connection should go away

In fact, maybe rule 1 isn't necessary; maybe each neuron simply tries to make as many connections as possible. The second rule would then ensure the survival of the fittest connections.
If this is the case, there's no reason why AI can't reach the level of human intelligence. In fact, maybe we can argue that AI has the potential to reach much farther than human intelligence, since it can be trained continuously, faster, and more efficiently than nature has been able to afford humans.
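Here is a minimal toy sketch of those two rules (the thresholds and the random co-activation stand-in are invented): start from every possible connection and let rule 2 prune the ones that go unused:

```python
import random

# Toy sketch of the two rules. Rule 1: every neuron connects to every
# other. Rule 2: connections that aren't used enough fade and are removed.

N = 10
connections = {(i, j): 0.5 for i in range(N) for j in range(N) if i != j}

def coactivation(i, j):
    # Stand-in for how often neurons i and j fire together; in a real
    # brain this would be driven by experience, not by a random draw.
    return random.random()

for _ in range(200):                              # rounds of "experience"
    for pair in list(connections):
        use = coactivation(*pair)
        connections[pair] += 0.1 * (use - 0.5)    # use strengthens, disuse weakens
        if connections[pair] < 0.1:
            del connections[pair]                 # rule 2: prune

print(f"{len(connections)} of {N * (N - 1)} connections survive")
```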
 
  • #9
You bring up a good point, -Job-, that of natural economy.

Humans evolved intelligence not as a tremendous luxury, but because, for one reason or another, it was a requirement for survival.

Brains are very expensive organs, consuming roughly 20% of the body's energy at rest. There is significant selection pressure to keep the brain (and the resulting intellect) to the minimum absolutely required for survival.

But we've managed to change our environment from tropical jungles to mini-malls, and, as a result, we no longer need to use all of our intellect for raw survival. In our new, safe environment, we can divert some of that intellect to figuring out such trifles as the ratio of the circumference of a circle to its diameter, or how galaxies looked in the early universe.

An artificial intelligence, on the other hand, might face no such pressure of economy. There may be absolutely no reason for an AI to limit its own size or resource consumption. Sure, there are the physical limitations of the finite number of transistors and the finite supply of electrical energy, but the resources are immense. There are now more computer chips in the world than there are grains of rice, and the power density of processors is rapidly approaching that of the surface of the sun. An AI that could harness even a fraction of these resources -- supplied with an unending supply of electronic "food" by its human keepers -- could grow essentially without bound.

- Warren
 
  • #10
I agree, and we can approach this issue from a more mathematical point of view. For example, suppose we want to get an AI to the level where it can read as well as any human can (i.e. identify characters from image data, regardless of font, size, position, noise, etc.). For this problem there is a domain, composed of the various image data, and a range, composed of computer characters. We add in a character for "unidentified". Now for each element in the domain we have a single corresponding element in the range: image data of a character is either a specific character or the "unidentified" character. We encode both the image data and the characters into binary strings (we already have standards for that). Now we interpret these binary strings as numbers, abstracting away all the unnecessary detail. What we are left with is a domain of numbers and a range of numbers.
Each number in the domain has a single corresponding element in the range. This means that the transformation from the domain into the range is a function. Therefore there is a function f(x) which, for image data x, generates the corresponding character value.
Any function over such a finite domain can be modeled by a deterministic algorithm, hence an AI that can read as well as humans can is achievable.
What is happening in a neural network, or a human brain, is the modeling of the function f(x) and the discarding of (x, y) pairs that are not very relevant to what it wants to accomplish (which are just taking up space).
I think for many other human faculties the idea is the same. There is a domain and a range, and we are interested in the function that translates one into the other. Whether that function is implemented in a human brain, a neural network, or a computer algorithm doesn't make much of a difference.
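A toy sketch of this framing (the 3x3 glyphs, the binary encoding, and the distance threshold are all invented for illustration): images become numbers, characters become outputs, and f is an exact-or-nearest lookup with an "unidentified" fallback:

```python
# Toy sketch of the domain -> range framing. Image data is encoded as a
# number; f maps each image-number to exactly one character value.

GLYPHS = {
    "X": "101" "010" "101",   # 3x3 bitmap, row by row
    "O": "111" "101" "111",
}

def encode(bits):
    return int(bits, 2)       # image data interpreted as a number

known = {encode(bits): char for char, bits in GLYPHS.items()}

def f(x):
    # The function of the argument: each element of the domain has a
    # single corresponding element of the range, with "?" playing the
    # role of the "unidentified" character.
    if x in known:
        return known[x]
    nearest = min(known, key=lambda k: bin(k ^ x).count("1"))  # Hamming distance
    return known[nearest] if bin(nearest ^ x).count("1") <= 2 else "?"

print(f(encode("101" "010" "101")))  # exact 'X'
print(f(encode("001" "010" "101")))  # noisy 'X', one pixel flipped
print(f(encode("000" "000" "000")))  # nothing close -> '?'
```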
 
  • #11
In 1997, world chess champion Kasparov was upset by a computer named Deep Blue - an incredible demonstration of AI that shocked many people. Last week the reigning world chess champion, Kramnik, suffered the same fate at the hands of Deep Fritz - a commercially available chess program that sells for about 50 bucks [albeit the computer that beat Kramnik was kinda pricey].
http://www.research.ibm.com/deepblue/home/html/b.html
http://sport.guardian.co.uk/chess/story/0,,1967372,00.html

AI is pretty powerful stuff these days. It's not hard to see how AI may someday rival, or even surpass, human abilities to accomplish complex tasks - like doing science and engineering. There is already AI out there that does things like finite element analysis and numerical simulations faster and more precisely than any human could ever hope to achieve. AI 'consciousness' does not yet appear to be on the horizon, but AI may figure it out before we do. Perhaps we should start working on those emotion chips while there is still time. Perhaps our AI descendants will find we make cute pets. The future possibilities are somewhat . . . unsettling. Like 'Please don't spay me, master . . .'
 
  • #12
-Job- said:
I agree, and we can approach this issue from a more mathematical point of view. For example, suppose we want to get an AI to the level where it can read as well as any human can (i.e. identify characters from image data, regardless of font, size, position, noise, etc.). For this problem there is a domain, composed of the various image data, and a range, composed of computer characters. We add in a character for "unidentified". Now for each element in the domain we have a single corresponding element in the range: image data of a character is either a specific character or the "unidentified" character. We encode both the image data and the characters into binary strings (we already have standards for that). Now we interpret these binary strings as numbers, abstracting away all the unnecessary detail. What we are left with is a domain of numbers and a range of numbers.
Each number in the domain has a single corresponding element in the range. This means that the transformation from the domain into the range is a function. Therefore there is a function f(x) which, for image data x, generates the corresponding character value.
Any function over such a finite domain can be modeled by a deterministic algorithm, hence an AI that can read as well as humans can is achievable.
What is happening in a neural network, or a human brain, is the modeling of the function f(x) and the discarding of (x, y) pairs that are not very relevant to what it wants to accomplish (which are just taking up space).
I think for many other human faculties the idea is the same. There is a domain and a range, and we are interested in the function that translates one into the other. Whether that function is implemented in a human brain, a neural network, or a computer algorithm doesn't make much of a difference.


A clever approach that, in fact, has been around for a long time -- look up semantic networks, for example. Unfortunately, reality has not cooperated. For example, once feedforward networks became practical, speech recognition was thought to be a piece of cake. But, with the function analogy, every speaker has to have at least one "recognition function", and quite often that function will have to change. In spite of many years of practical work, our ability to recognize speech with machines is still very limited relative to human capacity. One function does not easily fit all, and said function must constantly change to take into account the perpetual stream of perceptual and internal signals.

Look at the circuit diagram of a hard-disk, or a block of memory, or a radio receiver. Does a single function help in understanding such things?

The function approach, while elegant indeed, is not very useful at a practical level -- are we talking continuous, piecewise continuous, single-valued, analytic...? The plain fact is that the brain does a lot of things, and we don't have much of a clue about how most of them are done. But there is extraordinary research going on that, slowly to be sure, is increasing our knowledge of how the brain works.

My sense is that in the field of brain-science the computer analogy is going out of fashion--it no longer provides a useful guide into the mysteries of the mind. The research has gone way beyond the computer and AI analogies. They are products of a different age.

Indeed, quantum mechanics governs the very basic operations of the brain -- vision depends on the photodissociation of rhodopsin, a quantum effect. But the normal, neural-level operations of the brain are strictly governed by classical physics.

Regards,
Reilly Atkinson
 
  • #13
Chronos said:
In 1997, world chess champion Kasparov was upset by a computer named Deep Blue - an incredible demonstration of AI that shocked many people. Last week the reigning world chess champion, Kramnik, suffered the same fate at the hands of Deep Fritz - a commercially available chess program that sells for about 50 bucks [albeit the computer that beat Kramnik was kinda pricey].
http://www.research.ibm.com/deepblue/home/html/b.html
http://sport.guardian.co.uk/chess/story/0,,1967372,00.html
True, but I would not call it artificial intelligence.
Fritz is simply a set of algorithms constructed with the help of chess masters.
Now if a computer used a neural net to recognize a good chess position from a bad one, then it would begin to look like artificial intelligence.

Chronos said:
There is already AI out there that does things like finite element analysis and numerical simulations faster and more precisely than any human could ever hope to achieve.
Again, why would we want to call that artificial intelligence?
 
  • #14
MeJennifer said:
True, but I would not call it artificial intelligence.
Fritz is simply a set of algorithms constructed with the help of chess masters.
Now if a computer used a neural net to recognize a good chess position from a bad one, then it would begin to look like artificial intelligence.
What's the difference?

Incidentally, you do realize that a neural net is simply a set of algorithms too, right?
 
  • #15
Hurkyl said:
What's the difference?
In the case of rule-based algorithms (obviously rules pertaining to the subject matter), the intelligence is not emergent, as it is in neural networks, but codified with the help of experts in the subject field.

Hurkyl said:
Incidentally, you do realize that a neural net is simply a set of algorithms too, right?
Yes, but they do not relate to the subject matter as they do in the case of rule-based algorithms.
The algorithms in a neural net pertain to the topology of the net, its stratification, how connections are established and broken, connection strengths, and the rules for changing those strengths between connections.
 
  • #16
MeJennifer said:
In the case of rule-based algorithms (obviously rules pertaining to the subject matter), the intelligence is not emergent, as it is in neural networks, but codified with the help of experts in the subject field.
So, you're saying that Deep Fritz is not an artificial intelligence because it is a real intelligence?
 
  • #17
MeJennifer said:
Yes, but they do not relate to the subject matter as they do in the case of rule-based algorithms.
The algorithms in a neural net pertain to the topology of the net, its stratification, how connections are established and broken, connection strengths, and the rules for changing those strengths between connections.
So? The algorithms in Deep Fritz (not a rule-based system, BTW) pertain to manipulating zeroes and ones.

It just so happens that Deep Fritz manipulates zeroes and ones in a way that relates to playing chess, just as our hypothetical neural net manipulates signals in a way that relates to the subject matter.


In principle, one could program Deep Fritz to run on a neural net. Would that suddenly make you consider Deep Fritz an artificial intelligence? What if one used evolutionary techniques to produce a rule-based system? Would that not be an artificial intelligence?


(Incidentally, the term "artificial" in "artificial intelligence" is usually meant to apply to the entity displaying intelligence, not the method by which the entity was created. And that's certainly what I mean by the term when I use it.)
 
  • #18
MeJennifer: Chess programs rely mainly on search algorithms like minimax. These algorithms build a big search tree that looks at every possible position over the next n moves (usually about 8); in practice the tree is "pruned" - with the alpha-beta algorithm, for example - so not all possible positions are searched. Each final position is given a score, which is used to score the positions at depth n-1, and so on, until every candidate move has a score; then the computer makes the move with the highest score. (This is a very rough explanation of the minimax algorithm - Google it if you're interested.)

If the computer could search all the possible moves to the end of the game, it could beat a human every time and no rules would have to be coded in by experts - the whole program would be about 100 lines long and it would never lose. But because the number of possible positions is astronomical (on the order of 10^43 legal positions), the computer can only search about 8 moves ahead instead of 40. So instead of searching further, the board after the last searched move is evaluated using parameters such as material and position. The experts are only needed to help with that evaluation.
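Here is a minimal sketch of minimax with alpha-beta pruning on an abstract game (the children and evaluate functions are placeholders; in a chess engine, the evaluation is where the expert knowledge about material and position goes):

```python
# Minimal sketch of minimax with alpha-beta pruning on an abstract game.
# `children` and `evaluate` are placeholders, not a real chess engine.

def alphabeta(pos, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(pos)
    if depth == 0 or not kids:
        return evaluate(pos)              # leaf: score the board
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # prune: opponent will avoid this line
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy game: positions are integers, each with two "moves", and an
# arbitrary evaluation -- just enough to exercise the search.
score = alphabeta(1, 8, float("-inf"), float("inf"), True,
                  children=lambda p: [2 * p, 2 * p + 1],
                  evaluate=lambda p: p % 17)
print(score)
```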
 
  • #19
Well, there is some room for expert help with the pruning, as well as deciding when a position warrants extra searching.
 
  • #20
reilly said:
My sense is that in the field of brain-science the computer analogy is going out of fashion--it no longer provides a useful guide into the mysteries of the mind. The research has gone way beyond the computer and AI analogies. They are products of a different age.

I think the problem with the brain-computer analogy is that it is largely misunderstood. Computers are models of the brain, up to the point of current knowledge. Von Neumann machines (current PCs) are the product of a primitive understanding of how our minds work. The idea was developed and implemented by attempting to emulate how the mind worked, but that understanding of the mind was subjective, so it was incomplete. Now we have a better understanding of how neurons function, and thus we gave birth to neural networks. However, because our knowledge of the brain is still so limited, modern neural nets can only approximate the behavior of the biological system they were designed to emulate. People need to stop describing the brain as though it were a supercomputer, and realize that computers are simply primitive brains. (Very primitive, despite their superior "working" memory.)

Per the chess debate: artificial intelligence is everywhere. It is predicting the stock market, analyzing marketing data, and making minute adjustments to the rotors of helicopters. What seems to be the debate, currently, is whether or not these systems will ever understand their tasks (and finally be worthy of the title A.I.). While they aren't at that level yet, I feel they will be soon enough. And how about we lower our qualifications? Does it have to be at the level of a human before it's recognized as intelligence? What about the lower animals? Are rabbits intelligent? Are snails? Are kangaroos? Where do we draw the line?

What we need to do is stop mimicking neural networks with von Neumann-style machines and start changing the hardware itself. You should check out New Scientist... there is some very exciting work being done already. One man has produced a collection of chips that use transistors in a revolutionary way: the chips work much like the primary visual cortex of the brain. Good stuff!
 

FAQ: No Human-Level AI: Reasons & Consequences

What is human-level artificial intelligence?

Human-level artificial intelligence refers to an AI system that can perform tasks and solve problems at the same level as a human being. This includes understanding complex concepts, reasoning, and learning from experience.

What are the reasons why human-level AI has not been achieved yet?

There are several reasons why human-level AI has not been achieved yet. One major reason is the complexity of the human brain and the difficulty in replicating its functions. Another reason is the lack of understanding of how human intelligence works and how to program it into a machine.

What are the potential consequences of achieving human-level AI?

The potential consequences of achieving human-level AI are still largely unknown and debated. Some experts believe it could bring about significant advancements in technology, medicine, and other fields. Others are concerned about the potential risks and ethical implications, such as job losses and the loss of human control over AI systems.

Is it possible to achieve human-level AI in the future?

It is currently unclear if human-level AI will ever be achieved in the future. While significant progress has been made in AI research, there are still many challenges and limitations to overcome. It is also important to consider the potential consequences and ethical implications before pursuing this goal.

What are some potential alternatives to achieving human-level AI?

There are various alternatives to achieving human-level AI, such as developing specialized AI systems for specific tasks rather than trying to replicate human intelligence as a whole. Another approach is to focus on improving human-AI collaboration and using AI to enhance human abilities rather than replacing them.
