FunkyDwarf
Hey guys,
I know I really shouldn't be doing n-body simulations in Mathematica, and I'm trying to cobble together some C++ code, but in the meantime...
I have a system of, say, 2000 non-interacting particles in a central potential, and I'm solving for their orbits via the Euler-Lagrange equations, i.e. generating a table of Euler-Lagrange equations for each particle (they have different initial positions) and then solving them with NDSolve. It's all fine and dandy when the mass of the central potential is within about 1000 of the mass of the incoming particle, but as soon as I make that difference larger (which I need to), the RAM usage blows out and it keeps giving kernel errors. I've taken into account (or at least tried to) the fact that in a deeper potential the particles will move faster, so the algorithm needs more steps, and I've also reduced the time interval, but it still blows up in my face, which is quite irritating. Any ideas?
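To give an idea of the per-particle problem, here's a minimal sketch of what each orbit integration boils down to. This is Python with SciPy's solve_ivp as a stand-in for NDSolve (my actual notebook is in Mathematica); GM, the initial conditions, and the time span are illustrative values only, not my real parameters:

```python
# One particle in a central 1/r potential -- a stand-in for the
# per-particle Euler-Lagrange system that gets fed to NDSolve.
# Since the particles are non-interacting, each orbit is an
# independent 4-dimensional ODE system like this one.
import numpy as np
from scipy.integrate import solve_ivp

GM = 1000.0  # product G * (central mass); this is the ratio I keep increasing

def eom(t, state):
    # state = [x, y, vx, vy]; acceleration of a test mass toward the origin
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

# Illustrative initial condition: a roughly circular orbit at r = 10
y0 = [10.0, 0.0, 0.0, np.sqrt(GM / 10.0)]
sol = solve_ivp(eom, (0.0, 5.0), y0, rtol=1e-8, atol=1e-10)
print(sol.success, sol.y.shape)
```

Multiply this by 2000 particles and the stored trajectories (NDSolve keeps full InterpolatingFunction objects by default), and I suspect that's where the memory goes once the larger mass forces many more steps.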
Cheers
-G