Ontoplankton
What are your thoughts on the future possibility of a technological singularity -- the creation of superhuman intelligence through technological means (such as artificial intelligence or the augmentation of human brains)? This is discussed, for example, in http://www.kurzweilai.net/articles/art0585.html?m=1 .
How likely do you think it is that such an event will occur, and if it does, on what sort of time scale? In the interview, Kaku argues (rightly) that Moore's Law will run into trouble in 15-20 years or so because of quantum effects. He argues that this should give us some breathing space before we need to worry about machine intelligence surpassing that of humans. Leaving aside that it's always a good idea to worry about future existential threats far in advance, I have a few other problems with this view -- for example, might 15-20 years of accelerating progress not already be enough for the creation of artificial general intelligence, or for the augmentation of existing human intelligence? And isn't there a good chance that at least one of the other technologies mentioned -- quantum computing, DNA computing, molecular nanotechnology, and so on -- will take over from Moore's Law, or even improve on it?
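Just to get a feel for the numbers (this is not from the interview itself): if something like Moore's Law did keep going for another 15-20 years, the implied growth in raw computing power is enormous. Here's a rough back-of-the-envelope sketch, assuming the usual rule-of-thumb doubling periods of 18 to 24 months:

```python
# Illustrative arithmetic only: how much more raw computing power would
# another 15-20 years of Moore's Law-style doubling imply? The 18- and
# 24-month doubling periods are common rule-of-thumb figures, not numbers
# taken from the interview.
for years in (15, 20):
    for doubling_months in (18, 24):
        factor = 2 ** (years * 12 / doubling_months)
        print(f"{years} years at one doubling per {doubling_months} months "
              f"-> roughly {factor:,.0f}x more computing power")
```

Even the slower assumption works out to a factor in the hundreds, and the faster one to a factor in the thousands.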
What consequences would a technological singularity have for science, technology and life in general? I think these would be profound -- it's sometimes said that an intelligence that creates an even better intelligence is the last invention we would ever need to make.
What do you think can be done to ensure that if it happens, it will be beneficial rather than disastrous? Kaku mentions building in chips to shut robots off "when they start having murderous thoughts". I don't think this will do, at least not once they become truly intelligent and start designing even more intelligent versions of themselves. An AI need not have murderous thoughts to be dangerous -- it will very probably not even have traits such as aggression and egoism unless we build these in. Once an AI becomes sufficiently intelligent and complex, though, anything it decides to do could have negative consequences for humans, who may simply be perceived as obstacles. A chip of the kind Kaku mentions would have to be able to recognize any thoughts that implicitly involve harming humans, even when the AI is trying to hide those thoughts, and even after it becomes superhumanly intelligent. To ensure that the AI doesn't behave in any way humans consider malevolent or amoral, such a chip would practically have to be a benevolent AI in itself.
Which leads one to the question: why not design an AI with benevolence toward sentient life in mind in the first place, rather than assume it will be hostile and work against it? Unlike with humans, there is no reason to suppose an AI will develop its own agenda. An approach based on designing an AI to hold the moral views we do, for the same reasons we do -- or ultimately, views that we would like even better if we knew the reasons -- has the advantage that such an intelligence would not only not be hostile to us, but would actually want to help us. There is, I think, much that a transhumanly intelligent being could do to help solve human problems. Moreover, there would be no danger of the safety device (a chip, or pulling the plug) failing if the AI were designed not to have (or want to have) any hostile intentions anyway. Therefore, I think this is both the most useful and the safest approach.
Such an approach is advocated by the Singularity Institute for Artificial Intelligence, which aims to create what it calls "Friendly AI" (http://www.singinst.org), and is also defended by Nick Bostrom in a recent paper on AI ethics (http://www.nickbostrom.com/ethics/ai.html). I think it offers the best chance for humanity to make it through intact, if scientific and technological advances do indeed make the future as turbulent as some predict.
Any opinions on this are appreciated.