- #1
Posy McPostface
I have read differing arguments about the potential benefits and dangers of AI. Much of the discussion concerns the 'control problem': how to 'sandbox' AI so that it serves our interests, and how to prevent a fast takeoff from leaving AI as the sole power on the globe. However, I think any attempt to 'control' or sandbox AI would be an exercise in futility. The most important thing, in my view, is for us and AI to be able to relate to one another.
I've never seen an argument made in favor of equipping AI with positive human emotions like compassion, empathy, and altruism. I believe these traits would be valuable, and that an AI could be given such feelings via a simulation of the human brain.
Does anyone else think this is a good idea? It really isn't that far-fetched, and I think it is the safest version of Artificial General Intelligence that could be created with human interests in mind. In essence, such an AI would be able to identify with us in some regard, and that's what really matters, at least in my mind.