SkyMarshal
Sorry if this is the wrong place to ask a question like this, but I am at my wit's end trying to make this system function properly while remaining computationally efficient.
A summary of what I have so far:
- An iterative simulator that computes the gravitational attraction of objects on a Cartesian coordinate plane; it computes the delta-V for each body and then linearly adjusts its position. (This is accurate enough for my needs on the timeframes being used, while being faster than more precise methods.)
- Simulated objects, each containing a position, a velocity, any acceleration it is applying, and a bearing. These are approximated as spheres, with radius derived from mass and density, moving on the plane of the simulation.
- A means to add velocities in a manner respecting relativistic equations: 0.5c + 0.6c results in ~0.85c, not the 1.1c you would get from direct addition (see the sketch after this list).
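For the collinear case that last bullet refers to, the relativistic formula is u ⊕ v = (u + v) / (1 + uv/c²). A minimal Python sketch (my illustration, not the simulator's actual code):

```python
C = 299_792_458.0  # speed of light, m/s

def add_velocities(u, v):
    """Relativistic addition of two collinear velocities (m/s)."""
    return (u + v) / (1.0 + u * v / (C * C))

print(add_velocities(0.5 * C, 0.6 * C) / C)  # ≈ 0.846, never exceeds c
```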
The problem I am having is that as the number of simulated masses increases, the number of calculations for both gravitational attraction and collision detection grows quadratically: O(n²) calculations for n bodies, for both collisions and gravitation. Various types of space-partitioning trees (for this 2D space, quadtrees are the appropriate choice) can significantly reduce the number of calculations needed as the number of objects grows. For collisions, such a structure lets you check only against objects in the same node or in nearby nodes, depending on the implementation. For gravitation, the Barnes-Hut technique uses the tree to approximate gravitational attraction far more efficiently.
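For reference, here is a minimal sketch of the Barnes-Hut idea in Python. This is my illustration, not code from the project: `Body`, `QuadNode`, and the `THETA` opening angle are names made up for the example, and it skips real-world concerns such as coincident bodies and rebuilding the tree each step.

```python
import math

G = 6.674e-11   # gravitational constant
THETA = 0.5     # opening angle: smaller = more accurate, slower

class Body:
    def __init__(self, x, y, mass):
        self.x, self.y, self.mass = x, y, mass

class QuadNode:
    """One square region; subdivides lazily, so empty space costs nothing."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # centre and half-width
        self.mass = 0.0                  # total mass inside this region
        self.mx = self.my = 0.0          # centre of mass of the region
        self.body = None                 # payload when this node is a leaf
        self.children = None             # four sub-quadrants, made on demand

    def insert(self, b):
        if self.mass == 0.0:             # empty leaf: just hold the body
            self.body = b
        elif self.children is None:      # occupied leaf: split and push down
            old, self.body = self.body, None
            self._subdivide()
            self._child_for(old).insert(old)
            self._child_for(b).insert(b)
        else:                            # internal node: route to a quadrant
            self._child_for(b).insert(b)
        total = self.mass + b.mass       # update aggregates on the way down
        self.mx = (self.mx * self.mass + b.x * b.mass) / total
        self.my = (self.my * self.mass + b.y * b.mass) / total
        self.mass = total

    def _subdivide(self):
        h = self.half / 2
        self.children = [QuadNode(self.cx + dx * h, self.cy + dy * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]

    def _child_for(self, b):
        return self.children[(2 if b.x >= self.cx else 0)
                             + (1 if b.y >= self.cy else 0)]

    def accel_on(self, b, eps=1e3):
        """Acceleration on body b from everything in this subtree."""
        if self.mass == 0.0 or self.body is b:
            return 0.0, 0.0
        dx, dy = self.mx - b.x, self.my - b.y
        dist = math.hypot(dx, dy) + eps  # softening avoids the singularity
        # Far-away (or leaf) regions are treated as a single point mass.
        if self.children is None or (2 * self.half) / dist < THETA:
            a = G * self.mass / (dist * dist)
            return a * dx / dist, a * dy / dist
        ax = ay = 0.0
        for c in self.children:
            cax, cay = c.accel_on(b, eps)
            ax, ay = ax + cax, ay + cay
        return ax, ay

# Roughly Sun + Earth; note the root is sized to the occupied region.
bodies = [Body(0.0, 0.0, 1.989e30), Body(1.496e11, 0.0, 5.972e24)]
root = QuadNode(cx=0.0, cy=0.0, half=2e11)
for b in bodies:
    root.insert(b)
print(root.accel_on(bodies[1]))  # ≈ (-5.9e-3, 0.0) m/s², Earth pulled sunward
```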
Now, you may think I just revealed the solution to my own problem, but that is where you are incorrect. A quadtree only segments the contents of a finite space into smaller spaces, but the simulation I am working with has a working space of (theoretically) 10^47 meters per side, while the occupied region is on the order of 10^13 meters. Subdividing that massive space into cells small enough to gracefully handle the needed calculations at the scales we are working with would likely be both memory- and computationally inefficient.
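To put rough numbers on that concern, here is a back-of-envelope calculation using only the two scales quoted above (the comparison against an adaptive tree's depth is my framing, not a claim from the project):

```python
import math

side = 1e47  # theoretical working space, metres per side
cell = 1e13  # scale of the occupied region, metres

cells_per_axis = side / cell            # 1e34 cells along each axis
uniform_cells = cells_per_axis ** 2     # ~1e68 cells for a uniform 2D grid
tree_depth = math.log2(cells_per_axis)  # ~113 levels of quadtree subdivision

print(f"uniform grid: {uniform_cells:.0e} cells; tree depth: {tree_depth:.0f}")
```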
Thus, I pose to you: how should I go about partitioning this massive space into manageable chunks, without going about it in a completely boneheaded way?