anorlunda said: It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.

I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacts to changes in its environment.
To me, intelligence is the capacity to identify patterns. I believe we can make machines that do this, although I'm not sure how far we have gone into that domain yet. The intelligence behind the flyball governor doesn't come from the machine itself; it came from someone who saw a pattern between a rotating object and the forces it created.
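To make the point concrete, here is a minimal sketch of governor-style feedback in code. The function name, gain, and engine dynamics are all invented for illustration; the point is only that the "decision" to adjust the throttle is just arithmetic on an error signal, with no pattern-finding involved:

```python
def simulate_governor(target=100.0, gain=0.001, steps=400):
    """Toy proportional feedback loop, like a flyball governor.

    Each step, the throttle is nudged in proportion to the speed
    error; the "intelligence" is entirely in the designer's choice
    of this rule, not in the mechanism executing it.
    """
    speed, throttle = 0.0, 0.0
    for _ in range(steps):
        error = target - speed              # deviation from the set point
        throttle += gain * error            # the governor merely reacts
        speed += 10.0 * throttle - 0.1 * speed  # crude engine + drag model
    return speed
```

Run long enough, the loop settles near the target speed, yet nothing in it identified a pattern; it only reacted to its environment, like the ball rolling downhill.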
What people fear about AI comes from the fact that they link "intelligence" to "being alive". But there is no direct link between the two. Being alive is mostly about being able to replicate yourself. That relates to the concept of autonomy, the capacity to function independently. An entity doesn't need intelligence to replicate itself. And an intelligent machine that was designed to find patterns in, say, scientific papers, will not "evolve" to replicate itself.
Even if we assume that an intelligent machine will evolve to replicate itself - and we are really far from that - some people worry that the machines will go on to destroy humans. But that is just a very pessimistic assumption. There are plenty of life forms on this planet, and none of them has the goal of destroying other life forms. And from what we understand, diversity is important for survival, and there are only disadvantages when it comes to destroying other life forms. Why would a new intelligent life form (designed by us) come to a different conclusion?