longshot_nl
Hi, I'm new to this forum; I hope I'm posting this in the right thread.
I am developing a 2D artificial evolution program, and I am wondering what other people here think of the idea and whether we can discuss it in depth.
The system
The system is a black box with x analog input signals.
Ten copies of it are created with small differences.
Only the best one is kept, judged against the desired results.
That one is again cloned into ten with small mutations, and so on.
I don't want each mutant to be a learning system: everything must be based on static rules for as long as the system is alive, so that the best rules can be isolated.
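In Python, the loop I have in mind looks roughly like this; it's only a sketch, and `mutate` and `evaluate` are placeholder functions I still have to fill in:

```python
POPULATION_SIZE = 10  # number of mutated clones per generation

def evolve(initial_genome, mutate, evaluate, generations):
    """Keep-the-best loop: clone the current best into ten mutants,
    score each one against the desired result, keep the single best."""
    best = initial_genome
    best_score = evaluate(best)
    for _ in range(generations):
        # clone the survivor ten times with small random mutations
        candidates = [mutate(best) for _ in range(POPULATION_SIZE)]
        for candidate in candidates:
            score = evaluate(candidate)  # rules stay static during a lifetime
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```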
Mutations
The changes are made to the input signals.
An input signal could be 'distance to nearest object, sensor 1'.
Changes can then be made to things like 'angle of sensor 1' and 'relevance of sensor 1', where the relevance determines whether other sensors count for more.
There could even be mutations like 'add a sensor' or 'add a sensor that only senses a certain material', so the system can evolve to eat stuff or not to bounce off certain materials.
I can also change the speed.
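Roughly, I picture the genome and a mutation step like this (just a sketch; the mutation size and the chance of adding a sensor are numbers I made up):

```python
import random
from dataclasses import dataclass, replace

@dataclass
class Sensor:
    angle: float        # direction the distance sensor points, in radians
    relevance: float    # weight of this sensor relative to the others
    material: str = ""  # empty = senses any object, otherwise only that material

@dataclass
class Genome:
    sensors: list
    speed: float

def mutate(genome, sigma=0.1, add_sensor_chance=0.05):
    """Jitter the sensor angles, relevances and the speed; occasionally add a sensor."""
    sensors = [replace(s,
                       angle=s.angle + random.gauss(0, sigma),
                       relevance=max(0.0, s.relevance + random.gauss(0, sigma)))
               for s in genome.sensors]
    if random.random() < add_sensor_chance:
        sensors.append(Sensor(angle=random.uniform(-3.14, 3.14), relevance=1.0))
    return Genome(sensors=sensors,
                  speed=max(0.1, genome.speed + random.gauss(0, sigma)))
```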
Decisions
At first I only want the system to make four decisions:
- go faster or slower, depending on how certain the system is;
- turn left or right, depending on the sensors.
All of these decisions would be based on the sensor readings.
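Something like the following is what I have in mind for the decision step, with the left/right choice and the speed change both coming from the weighted sensor readings; the thresholds and the left/right convention are arbitrary placeholders:

```python
import math

def decide(genome, readings):
    """Pick a turn direction and a speed change from the sensor readings.
    readings[i] is the distance reported by sensor i (larger = more open space)."""
    # steering: each sensor votes left or right according to the sign of its angle,
    # weighted by its relevance and by how much open space it sees
    steer = sum(s.relevance * r * math.copysign(1.0, s.angle)
                for s, r in zip(genome.sensors, readings))
    turn = "left" if steer > 0 else "right"
    # speed: go faster when the weighted distances are large (the system is
    # 'certain' nothing is nearby), slower otherwise
    total_relevance = sum(s.relevance for s in genome.sensors) or 1.0
    openness = sum(s.relevance * r for s, r in zip(genome.sensors, readings)) / total_relevance
    speed = "faster" if openness > 1.0 else "slower"  # 1.0 is an arbitrary threshold
    return turn, speed
```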
Problems
I don't have a complete picture of this system in my mind yet.
There are some problems I haven't figured out, for example whether the system should have memory and, if so, how I would build that.
Experiments
At first I want the system to move through some environment without bumping into anything.
Then I want to add more stuff.
The mutant that survives is the one that best obeys the rules I write.
These rules could be 'travel as far as possible without bumping into stuff', 'find as much food as possible', and so on.
Each desired outcome should evolve into a different mutant; that is what I want to find out.
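For the first experiments, each rule would just be a fitness function over one simulated lifetime, something like the sketch below; the `trace` object with its distance, collision and food counts is something the simulation would have to record, and the penalty value is a guess:

```python
def fitness_distance_no_bumping(trace, bump_penalty=5.0):
    """Reward distance travelled, punish every collision."""
    return trace.distance_travelled - bump_penalty * trace.collisions

def fitness_find_food(trace):
    """Alternative rule for a later experiment: just count food eaten."""
    return trace.food_eaten
```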