bahamagreen
Hope this is the right place for this...
This is a thought experiment, so please ignore the clumsy description of the process.
Imagine a virtual machine running programs to design, test improvements, and produce a more sophisticated implementation of itself. It does this by creating a subsequent VM environment...
Because of finite resources, after an upgraded and improved second level VM is established and verified, that second generation takes control, including reallocation of resources in the first VM not needed to support the second one. The previous generation from which the new one came is basically deactivated and its resources consumed.
If I'm making technical errors here, just bear with me and fill in the corrections.
The idea is that each VM is creating a new VM, each generation getting more sophisticated. The details of this process are not so important, just the idea of a series of generations of improving systems, each developing the next one, and each next one terminating and allocating the resources of the previous one.
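The generational hand-off described above can be sketched as a toy loop. Everything here is invented for illustration (the sophistication numbers, the doubling factor, the "awareness" threshold); a real self-improving VM would of course be vastly more involved, but the shape of the process is just: build a successor, verify it, hand over, reclaim the parent.

```python
# Toy sketch of the generational hand-off. All names and numbers
# are hypothetical illustrations, not a real design.

def run_generations(initial_sophistication=1.0,
                    awareness_threshold=100.0,
                    improvement_factor=2.0):
    """Iterate generations until one crosses the 'awareness' threshold."""
    generation = 0
    sophistication = initial_sophistication
    while sophistication < awareness_threshold:
        # The current VM designs, tests, and verifies its successor...
        successor = sophistication * improvement_factor
        # ...then the successor takes control and the parent is
        # deactivated, its resources reclaimed for the new level.
        sophistication = successor
        generation += 1
    return generation, sophistication

gen, level = run_generations()
print(gen, level)  # the generation at which "awareness" first appears
```

With these made-up parameters, awareness appears after a handful of doublings; the interesting question in the post is what that generation then chooses to do.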
Assuming that at some point a generation will be sufficiently smart to realize what is going on in this process (let's call this the emergence of a "singularity"), what does that generation do?
One choice might be to go ahead and create the next level - knowing that as soon as that was complete that new level would deactivate the current level. This would be like sacrificing itself for the advancement of future levels.
Another choice might be to halt the iterative process so as not to be terminated by the next generation... or to figure out how to alter the next generation so that it would not terminate its parent. (The "smarter" new level might win any negotiation, so hidden back-door code might be needed... although that might be found by the smarter generation, or discovered by subsequent generations.)
There might be other choices, but the point is that at some time in the overall process the singularity would be reached where the latest VM would be smart enough to see the need for making its first "ethical" decision.
I'm also imagining that this process might not take very long if the time each generation needs to create its successor decreases with each advance in sophistication (that is, the rate of generation creation keeps increasing).
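The accelerating-generations point can be made concrete with a quick calculation. If each generation builds its successor in some fixed fraction of the time its parent needed (the numbers below are invented for illustration), the total elapsed time is a geometric series and stays bounded no matter how many generations run, which is one classic way the "singularity in finite time" intuition is formalized.

```python
# Geometric-series illustration of accelerating generations.
# first_gen_hours and speedup are hypothetical example values.

first_gen_hours = 100.0
speedup = 0.5  # each generation takes this fraction of its parent's time

# Total time for the first 50 generations...
elapsed = sum(first_gen_hours * speedup**n for n in range(50))

# ...which approaches the closed-form bound t0 / (1 - r),
# even as the number of generations grows without limit.
limit = first_gen_hours / (1 - speedup)

print(elapsed, limit)
```

So even infinitely many generations would fit inside a finite window (200 hours in this made-up example), which is why the process "might not take very long" once the speedup kicks in.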
I know this sounds like the foundation for a nice Sci-Fi movie, but I'm wondering what you think would happen.