Wrong solution order using Runge Kutta 4

  • #1
Zebx
TL;DR Summary
Comparing the error behaviour of a 3-body solution computed with Runge-Kutta 4 as I vary the stepsize, I get an error ratio of 2^p = 2 instead of the expected 2^4 = 16.
Hi, I'm trying to simulate a 3-body problem with a star at the centre of the reference system and 2 bodies orbiting around it, using Runge-Kutta 4. The 2 bodies perturb each other's orbits gravitationally, so my ODE system is effectively a coupled harmonic oscillator, and I evaluate the solution for both bodies simultaneously. The code I use for Runge-Kutta is the following:
C++:
void RK(double h, fun f, vector<CB> obj, vector<Sol> s, int ab_order=0) {
            // t, u and N are members of the solution object this method is called on
            int dim = 6;
            vector<double> p = s[0].u;
            vector<double> K1(2*dim);
            vector<double> K2 = K1;
            vector<double> K3 = K1;
            vector<double> K4 = K1;
            for (int n = 0; n < N-1; n++) {
                t[n+1] = t[0] + (n+1)*h;

                K1 = f(t[n]      , {u[n]            , u[N+n]            , u[2*N+n]            , u[3*N+n]            , u[4*N+n]            , u[5*N+n]}            , obj, {p[n], p[N+n], p[2*N+n], p[3*N+n], p[4*N+n], p[5*N+n]}, {0}, {0});
                K2 = f(t[n] + h/2, {u[n] + h*K1[0]/2, u[N+n] + h*K1[1]/2, u[2*N+n] + h*K1[2]/2, u[3*N+n] + h*K1[3]/2, u[4*N+n] + h*K1[4]/2, u[5*N+n] + h*K1[5]/2}, obj, {p[n], p[N+n], p[2*N+n], p[3*N+n], p[4*N+n], p[5*N+n]}, {0}, {0});
                K3 = f(t[n] + h/2, {u[n] + h*K2[0]/2, u[N+n] + h*K2[1]/2, u[2*N+n] + h*K2[2]/2, u[3*N+n] + h*K2[3]/2, u[4*N+n] + h*K2[4]/2, u[5*N+n] + h*K2[5]/2}, obj, {p[n], p[N+n], p[2*N+n], p[3*N+n], p[4*N+n], p[5*N+n]}, {0}, {0});
                K4 = f(t[n] + h  , {u[n] + h*K3[0]  , u[N+n] + h*K3[1]  , u[2*N+n] + h*K3[2]  , u[3*N+n] + h*K3[3]  , u[4*N+n] + h*K3[4]  , u[5*N+n] + h*K3[5]}  , obj, {p[n], p[N+n], p[2*N+n], p[3*N+n], p[4*N+n], p[5*N+n]}, {0}, {0});

                for (int k = 0; k < dim; k++) {
                    u[k*N+n+1] = u[k*N+n] + h*(K1[k] + 2*K2[k] + 2*K3[k] + K4[k])/6;
                    p[k*N+n+1] = p[k*N+n] + h*(K1[dim+k] + 2*K2[dim+k] + 2*K3[dim+k] + K4[dim+k])/6;
                }
            }
            vector<double>().swap(K1);
            vector<double>().swap(K2);
            vector<double>().swap(K3);
            vector<double>().swap(K4);
            vector<double>().swap(p);

        }

The RK function returns void because I call it through one of the body solutions, which is declared before I call Runge-Kutta. I didn't post the entire code because it's very long, but I will post other parts of it if needed.
The vectors u and p hold the positions and velocities of the first and the second body respectively. The variables are in the order (x, Vx, y, Vy, z, Vz): the x1 values run from u[0] to u[N-1], the V1x values from u[N] to u[2*N - 1] and so on, so all the variables of one body sit in a single vector. The same goes for p. The function f returns 12 values (x1, V1x, y1, V1y, z1, V1z, x2, V2x, y2, V2y, z2, V2z), taking as arguments the positions and velocities of both bodies at step n of the iteration. The first argument (time) is there just for completeness. The last two arguments are {0}, but they would be filled with the variables of other bodies in a 4- or 5-body simulation.

The function works well with the other method I'm using (velocity Verlet), and I get the correct results for the order of the solution in that case, and that's the problem: when I check the behaviour of the error while varying the stepsize using Runge-Kutta 4, I always get 2 instead of 16. It's as if I should use a different stepsize for one of the two bodies, but since they're coupled that would turn everything into "a dog chasing its own tail". How could I fix this? Thank you.
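To make the "2 instead of 16" concrete: for a method of global order p, halving h divides the error at a fixed final time by 2^p, so for RK4 the ratio e(h)/e(h/2) should approach 2^4 = 16. A minimal, self-contained sketch of that check on a scalar test problem (all names here are mine, not the thread's code):

```cpp
#include <cmath>
#include <functional>

// Minimal scalar RK4, just for the convergence check; not the thread's RK().
double rk4_integrate(const std::function<double(double, double)>& f,
                     double u0, double t0, double t1, int n_steps) {
    double u = u0, t = t0;
    const double h = (t1 - t0) / n_steps;
    for (int i = 0; i < n_steps; ++i) {
        const double k1 = f(t,         u);
        const double k2 = f(t + h / 2, u + h * k1 / 2);
        const double k3 = f(t + h / 2, u + h * k2 / 2);
        const double k4 = f(t + h,     u + h * k3);
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6;
        t += h;
    }
    return u;
}

// Ratio e(h) / e(h/2) on u' = u over [0, 1]; the exact value at t = 1 is e.
// For a correct 4th-order method this should be close to 2^4 = 16.
double observed_ratio(int n_steps) {
    auto f = [](double, double u) { return u; };
    const double exact = std::exp(1.0);
    const double e1 = std::fabs(rk4_integrate(f, 1.0, 0.0, 1.0, n_steps) - exact);
    const double e2 = std::fabs(rk4_integrate(f, 1.0, 0.0, 1.0, 2 * n_steps) - exact);
    return e1 / e2;
}
```

If the same measurement on the 3-body code gives a ratio near 2, the implementation is effectively behaving as a first-order method.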
 
  • #2
Zebx said:
How could I fix this?
Edit: I re-read your code
I'd start by making it easier to work with by
  1. breaking out the calculation of the function ## f'(t, \vec x) ## from the integration algorithm
  2. putting all your variables (except t) into a single structure
  3. using a contiguous numeric array for that structure (the C++ analogue of a numpy.array, e.g. std::valarray)
Having done that, you can test that your RK4 really is ## O(h^4) ## globally with some test functions.
 
  • #3
OK, I see you have already separated the function. Test your algorithm with something simpler, say a 2D circular orbit, and see how it behaves. Test your function with a simpler integrator, e.g. Euler, and check that its global error is first order (linear in h).
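A concrete version of that Euler check: forward Euler is globally first order, so halving the step should roughly halve the error. A sketch on the circular-orbit oscillator x'' = -x (my own toy setup, not the thread's code):

```cpp
#include <array>
#include <cmath>

// One forward-Euler step for the system x' = v, v' = -x.
std::array<double, 2> euler_step(const std::array<double, 2>& s, double h) {
    return {s[0] + h * s[1], s[1] - h * s[0]};
}

// Position/velocity error after one period t = 2*pi;
// the exact solution returns to {1, 0}.
double euler_error(int steps_per_orbit) {
    const double pi = std::acos(-1.0);
    const double h = 2 * pi / steps_per_orbit;
    std::array<double, 2> s = {1.0, 0.0};
    for (int i = 0; i < steps_per_orbit; ++i) s = euler_step(s, h);
    return std::hypot(s[0] - 1.0, s[1]);
}
```

Here e(h)/e(h/2) comes out close to 2, confirming the first-order behaviour.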
 
  • #4
Oh also ensure that you are not suffering round-off error or instability due to e.g. close approaches of the orbiting bodies.
 
  • #5
Hmm, I re-re-read your code. Your solver is too tightly coupled to your problem to see what is going on. Put all the variables (except t) into a single vector and pass 3 arguments to your function: t, x, and a reference for the result; then you can test the algorithm and the function independently.
 
  • #6
I would also recommend you test your integrator with something simple that RK4 should have no trouble integrating, like a set of independent linear harmonic oscillators (should yield a circular trajectory in phase space).
 
  • #7
Thank you both for the answers. I think I should have mentioned that I had actually already partially tested the function as you suggested, and the method worked properly. For instance, I tested it with a single orbit, so just one body orbiting the central star, and I got the expected order. I did other tests as well, but I hadn't tried more than one body de-coupled from each other (which is similar to two independent harmonic oscillators), as you suggested, i.e. a 3-body problem without perturbations, and I had some trouble in that case. Looking back at the Runge-Kutta code I understood that the problem was in the arguments I was giving to the function when calculating the K values. If you look at them, the u vector elements are advanced by terms like K*h/2. I had to do the same with the p vector elements; I assumed I didn't, and that was the problem. Now everything works, thank you very much!
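For the record, the fix described here looks like this in stripped-down form: in every intermediate stage, the velocity arguments must get the same h*K/2 (or h*K) increments as the positions. A hedged sketch for a single harmonic oscillator x'' = -x (names and layout are mine, not the thread's RK()):

```cpp
#include <array>
#include <cmath>

using State = std::array<double, 2>;                   // {x, v}

State deriv(const State& s) { return {s[1], -s[0]}; }  // x' = v, v' = -x

State rk4_step(const State& s, double h) {
    const State k1 = deriv(s);
    // The point of the fix: BOTH components, position AND velocity,
    // are advanced by their K-slopes in the intermediate stages.
    const State k2 = deriv({s[0] + h * k1[0] / 2, s[1] + h * k1[1] / 2});
    const State k3 = deriv({s[0] + h * k2[0] / 2, s[1] + h * k2[1] / 2});
    const State k4 = deriv({s[0] + h * k3[0],     s[1] + h * k3[1]});
    return {s[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            s[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6};
}

// Error after one full period; the exact solution returns to {1, 0}.
double error_after_period(int steps) {
    const double h = 2 * std::acos(-1.0) / steps;
    State s = {1.0, 0.0};
    for (int i = 0; i < steps; ++i) s = rk4_step(s, h);
    return std::hypot(s[0] - 1.0, s[1]);
}
```

With the velocity increments in place, halving the step divides the error by about 16; holding the velocities fixed in the intermediate stages (as in the original code) destroys the 4th-order behaviour.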
 
  • #8
You could avoid similar problems in future by using just one state vector. This makes it trivial to use alternative explicit methods, e.g. introducing adaptive step size or multistep methods (though not implicit or symplectic methods such as Verlet, so I appreciate why the code is the way it is), and also trivial to add more bodies (you only have to handle them once, in the ODE function, not in each call inside the integrator).
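A hedged sketch of that single-vector layout (all names here are mine): the integrator only touches a flat std::vector<double>, and everything body-specific lives in the derivative callback, so adding bodies means changing only the derivative function.

```cpp
#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
// Derivative callback: writes dx/dt into dxdt. The integrator never
// needs to know what the components of x mean.
using Rhs = std::function<void(double, const Vec&, Vec&)>;

void rk4_step(const Rhs& f, double t, double h, Vec& x) {
    const std::size_t d = x.size();
    Vec k1(d), k2(d), k3(d), k4(d), tmp(d);
    f(t, x, k1);
    for (std::size_t i = 0; i < d; ++i) tmp[i] = x[i] + h * k1[i] / 2;
    f(t + h / 2, tmp, k2);
    for (std::size_t i = 0; i < d; ++i) tmp[i] = x[i] + h * k2[i] / 2;
    f(t + h / 2, tmp, k3);
    for (std::size_t i = 0; i < d; ++i) tmp[i] = x[i] + h * k3[i];
    f(t + h, tmp, k4);
    for (std::size_t i = 0; i < d; ++i)
        x[i] += h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6;
}

// Demo: the oscillator x'' = -x packed as {x, v}; error after one period.
double demo_error(int steps) {
    Rhs f = [](double, const Vec& x, Vec& dxdt) { dxdt = {x[1], -x[0]}; };
    const double h = 2 * std::acos(-1.0) / steps;
    Vec x = {1.0, 0.0};
    for (int i = 0; i < steps; ++i) rk4_step(f, i * h, h, x);
    return std::hypot(x[0] - 1.0, x[1]);
}
```

The design choice is that rk4_step is completely generic, so it can be swapped for any other explicit stepper without touching the physics.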
 
  • #9
I've read that RK4 can have issues with orbital simulations, where errors in the process cause the orbit to change over many iterations. Solutions to this involve some type of feedback to keep the total (Keplerian) energy (potential plus kinetic) constant, such as using a modified Euler integrator instead of RK4.

https://people.ucsc.edu/~mmrosent/talks_notes/num_int_pres.pdf

Link to an article with a more involved process:

https://www.colorado.edu/ccar/sites/default/files/attached-files/numerical_algorithms_for_preci.pdf
 
  • #10
rcgldr said:
I've read that RK4 can have issues with orbital simulations, where errors in the process result in the orbit being changed over iterations. Solutions to this involve some type of feedback to keep the total (Kepler) energy (potential and kinetic) constant, such as using a modified Euler integrator (also called symplectic integrator) instead of RK4.
The OP is comparing the performance of RK4 with the symplectic Verlet method. Note that symplectic methods do not ensure the total energy is constant; they simply tend to reduce the cumulative error by alternating positive and negative errors between the velocity and position steps, compared with single-step methods, which tend to increase or decrease the energy monotonically.
 
  • #11
pbuk said:
Note that symplectic methods do not ensure total energy is constant, they simply tend to reduce the cumulative error by alternating positive and negative errors [...]

After some thought, this statement now sounds too weak to my ears. I am fairly hazy on the detailed theory of geometric integrators, but as I understand them, when applied to conservative systems they produce solutions that, except for rounding errors, show no energy drift. That is, this property is not just a coincidental tendency.

Allow me a short quote from the back cover of "Simulating Hamiltonian Dynamics" by Leimkuhler and Reich (emphasis by me):
Geometric integrators are time-stepping methods, designed so that they exactly satisfy conservation laws, symmetries, or symplectic properties of a system of differential equations.
 
  • #12
You should not judge a book by its [back] cover :smile:

You may be able to find a copy of Geometric Numerical Integration by Hairer, Lubich and Wanner online, which includes both thorough analysis and graphic illustration of the performance of many methods on different problems, for instance the following graph of total energy vs. time for a 7-body molecular dynamics problem:
[Graph: total energy vs. time for a 7-body molecular dynamics problem]

As you can see there are errors, but they tend to cancel leaving no cumulative drift over many steps. Whether you view this as 'coincidental' or not is subjective: it is not a word that I would choose to use.
 
  • #13
Yes, from the Hairer book I've read that Verlet (the other method I'm considering because, yes, Runge-Kutta doesn't preserve the energy, but for short integration times they should both give good results) doesn't preserve the energy exactly, but at least errors of "positive energy" are compensated by errors of "negative energy", which means an oscillating orbit instead of a collapsing orbit as happens with Runge-Kutta. I've actually just verified this (great :smile:).
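The oscillating-vs-collapsing behaviour is easy to reproduce on the oscillator x'' = -x, whose exact energy is E = (x^2 + v^2)/2 = 0.5: over many periods the RK4 energy decays monotonically, while the velocity-Verlet energy error stays bounded and oscillates. A sketch (my own toy code, assuming nothing about the thread's implementation):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using State = std::array<double, 2>;                    // {x, v}

double energy(const State& s) { return (s[0] * s[0] + s[1] * s[1]) / 2; }

State rk4_step(const State& s, double h) {
    auto f = [](const State& q) { return State{q[1], -q[0]}; };
    const State k1 = f(s);
    const State k2 = f({s[0] + h * k1[0] / 2, s[1] + h * k1[1] / 2});
    const State k3 = f({s[0] + h * k2[0] / 2, s[1] + h * k2[1] / 2});
    const State k4 = f({s[0] + h * k3[0],     s[1] + h * k3[1]});
    return {s[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            s[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6};
}

State verlet_step(const State& s, double h) {
    const double v_half = s[1] - h / 2 * s[0];          // half kick: a(x) = -x
    const double x_new  = s[0] + h * v_half;            // drift
    return {x_new, v_half - h / 2 * x_new};             // half kick
}

// Energy after many steps with RK4: decays below the exact value 0.5.
double rk4_energy(int steps, double h) {
    State s = {1.0, 0.0};
    for (int i = 0; i < steps; ++i) s = rk4_step(s, h);
    return energy(s);
}

// Largest energy deviation along the way with Verlet: stays bounded.
double verlet_max_dev(int steps, double h) {
    State s = {1.0, 0.0};
    double dev = 0.0;
    for (int i = 0; i < steps; ++i) {
        s = verlet_step(s, h);
        dev = std::max(dev, std::fabs(energy(s) - 0.5));
    }
    return dev;
}
```

Running both for tens of thousands of steps at h = 0.1 shows the RK4 energy slowly draining away while the Verlet energy error merely oscillates within a small band.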
 
  • #14
For numerical methods in C++ I generally prefer the Boost library. For this class of problems it offers the symplectic_rkn_sb3a_mclachlan integrator, which appears to implement what it refers to as a 4th-order (i.e. global error ## O(h^4) ##) Runge-Kutta-Nyström symplectic integrator with Fehlberg automatic step size control.

https://www.boost.org/doc/libs/1_66...ost_numeric_odeint/tutorial/solar_system.html
 
  • #15
That looks very good! Unfortunately I am required to use the methods I mentioned.
By the way, I noticed that I forgot to mention one thing when we were talking about testing the code on simpler problems. As I said, I tested the methods on different simple problems, and one of them was the linear pendulum. I got good results, meaning that the error scaled as ##2^p## as expected for every method, and the same happened for other problems, but I did not get this result for the damped pendulum. I didn't worry too much about that, since my main problem doesn't depend on the velocity and I had already successfully tested the methods in other ways, but I wondered why in the damped pendulum case I didn't get the result I expected. I mean, dividing the error evaluated with a certain timestep by the error evaluated with half that timestep, for different timesteps, I never got ##2^p##. :rolleyes:
 
  • #16
This is normal for integrators. For a method of order 4, the truncation error should be 2^4 = 16 times smaller than for the double step 2h.

But this is not true in general, because the error is proportional to a problem-dependent coefficient, which is not constant in general. For example, in the case of circular motion the angular speed w = df/dt is constant, hence the error does indeed scale as h^4, but only for many steps per orbit.

It is not true for big steps, h = T or h = T/2, where T = 2pi/w is the orbital period. There is a natural lower limit on the number of steps per orbit, probably around 12 for a circular orbit, and it applies to methods of any order: 8, 12 and more!

A 4th-order method typically needs 500 or more steps per orbital cycle, because the real error depends on something like (w h)^n, so for too-big steps, h ~ T, the error is gigantic and grows without any limit!

In the case of elliptical orbits the angular speed w is variable, hence for large eccentricity, for example e > 0.9, the error is 100 or more times bigger!
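The steps-per-orbit effect is easy to demonstrate numerically: integrating the circular-orbit oscillator x'' = -x over one period with RK4, a handful of steps per orbit gives an error of order one, while a few hundred steps per orbit gives a tiny error in the h^4 regime. A self-contained sketch (my own toy code, not the thread's):

```cpp
#include <array>
#include <cmath>

using State = std::array<double, 2>;                 // {x, v} for x'' = -x

State rk4_step(const State& s, double h) {
    auto f = [](const State& q) { return State{q[1], -q[0]}; };
    const State k1 = f(s);
    const State k2 = f({s[0] + h * k1[0] / 2, s[1] + h * k1[1] / 2});
    const State k3 = f({s[0] + h * k2[0] / 2, s[1] + h * k2[1] / 2});
    const State k4 = f({s[0] + h * k3[0],     s[1] + h * k3[1]});
    return {s[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            s[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6};
}

// Error after one orbit (period T = 2*pi) as a function of steps per orbit;
// the exact solution returns to {1, 0}.
double orbit_error(int steps_per_orbit) {
    const double h = 2 * std::acos(-1.0) / steps_per_orbit;
    State s = {1.0, 0.0};
    for (int i = 0; i < steps_per_orbit; ++i) s = rk4_step(s, h);
    return std::hypot(s[0] - 1.0, s[1]);
}
```

With 6 steps per orbit the error is already at the percent level per orbit; with 500 steps per orbit it drops by many orders of magnitude, consistent with the h^4 scaling only holding for small w*h.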
 

FAQ: Wrong solution order using Runge Kutta 4

What is the Runge Kutta 4 method and how does it work?

The Runge Kutta 4 method is a numerical method used to solve ordinary differential equations. It is an iterative process that uses four intermediate steps to approximate the solution of the differential equation at a given point. These intermediate steps are then combined to calculate the final approximation of the solution.

Why is the solution order sometimes wrong when using Runge Kutta 4?

The observed solution order can come out wrong when using Runge Kutta 4 if the step size used in the calculation is too large: the expected fourth-order behaviour only appears in the asymptotic regime of sufficiently small steps, so outside that regime the measured error ratio will not be the expected 16.

How can I determine the appropriate step size to use with Runge Kutta 4?

The appropriate step size for Runge Kutta 4 depends on the specific differential equation being solved. It is recommended to use a smaller step size for more complex or rapidly changing equations, and a larger step size for simpler and slower changing equations. Additionally, the step size can be adjusted and the solution recalculated to see if it improves the accuracy of the result.

Can other numerical methods be used to solve differential equations?

Yes, there are many other numerical methods that can be used to solve differential equations, such as Euler's method, Heun's method, and higher- or lower-order Runge-Kutta variants. The choice of method depends on the specific equation and the level of accuracy required for the solution.

Are there any limitations to using Runge Kutta 4 to solve differential equations?

Yes, there are some limitations to using Runge Kutta 4. It may not be the most efficient method for certain types of equations, and the accuracy of the solution may decrease for equations with rapidly changing or oscillating solutions. Additionally, it may not be suitable for solving equations with discontinuities or singularities.
