Finding Iterative Solutions to Non-Linear Equations

  • MHB
  • Thread starter topsquark
In summary, Dan is looking for an iterative solution to the non-linear differential equations \(\displaystyle \ddot{\theta} = A\cos(\theta)\) and \(\displaystyle \dot{\theta}^2 = B(\sin(\theta) - \sin(\theta_0))\), where the dot represents a time derivative and \(\theta_0\) is the angle at time t = 0. He is more interested in \(t(\theta)\) for now. Most of the physics is involved with the first equation, which you are more likely to find written as \(\ddot{\theta} = A\sin(\theta)\).
  • #1
topsquark
I am currently trying to find an iterative solution to the non-linear differential equations represented by
\(\displaystyle \ddot{ \theta} = A~\cos( \theta )\)

and
\(\displaystyle \dot{ \theta } ^2 = B( \sin( \theta ) - \sin( \theta _0 ) )\)

where the dot represents a time derivative and \(\displaystyle \theta _0\) is the angle at time t = 0. (This is the harmonic oscillator where the angle is not taken to be small. A and B are related constants and I can give the derivations if you feel you need them.)

I'm looking for an iterated solution for \(\displaystyle \theta (t)\), but I'm actually more interested in \(\displaystyle t( \theta )\) for now.

Most of the Physics is involved with the first equation, which you are more likely to find as \(\displaystyle \ddot{ \theta } = A~\sin( \theta )\) if you look it up. The second equation can be taken simply to mean that \(\displaystyle \sin( \theta _0 ) \leq \sin( \theta )\) at all times.

Thanks for any help!

-Dan
 
  • #2
Just to be clear, I am looking for a set of points \(\displaystyle ( t_n, \theta _n )\) where \(\displaystyle \theta _{n+1} = f( \theta _i, t_i )\) (\(\displaystyle 0 \leq i \leq n\)) such that the points on the graph are "close" to the solution of the differential equation. So I'm not looking for the elliptic-function representation of the solution.

-Dan
 
  • #3
Hi Dan,

I'm still not sure what you are asking for exactly.

Anyway, the way to solve an ODE like $\ddot\theta=A \cos\theta$ iteratively is to first write it as a first-order system $\dot y = f(t,y)$.
In our case:
$$\begin{cases}\dot\theta = \omega \\ \dot\omega = A\cos\theta \end{cases} \Rightarrow \frac{d}{dt} (\theta,\omega) = f(t,(\theta,\omega)) = (\omega,A\cos\theta)$$
The quick-and-dirty method to solve it is with Euler, which is $y_{n+1}=y_n+h f(t_n, y_n)$:
$$\begin{cases}t_{n+1}=t_n+ h \\ (\theta_{n+1},\omega_{n+1}) = (\theta_{n},\omega_{n}) + h f(t_n,(\theta_n,\omega_n))\end{cases}\Rightarrow\begin{cases}
t_{n+1}=t_n+ h \\
\theta_{n+1} = \theta_{n} + h \omega_n \\
\omega_{n+1} = \omega_{n} + h A\cos\theta_n\end{cases}$$
However, Euler's method is known to be unstable: its errors keep growing over time.
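As a concrete sketch, the Euler update above can be coded directly. The values of A, the initial angle, the step size, and the step count below are illustrative choices, not from the thread:

```python
import math

# Euler integration of theta'' = A*cos(theta), written as the first-order
# system (theta' = omega, omega' = A*cos(theta)).
# A, theta0, h, and n_steps are illustrative values.
A = -9.81          # example constant
theta0 = 0.5       # initial angle in radians
h = 0.001          # step size
n_steps = 5000

t, theta, omega = 0.0, theta0, 0.0
points = [(t, theta)]
for _ in range(n_steps):
    # simultaneous update: both components use the values at step n
    theta, omega = theta + h * omega, omega + h * A * math.cos(theta)
    t += h
    points.append((t, theta))
```

This produces exactly the set of points \((t_n, \theta_n)\) asked for in post #2, at the cost of the drift mentioned above.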

[box=yellow]Instead, the commonly used method to solve $\dot y = f(t,y)$ is Runge-Kutta:
\begin{cases}
t_{n+1} &= t_n + h \\
y_{n+1} &= y_n + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4 \right)\\
\end{cases}
where:
\begin{cases}
k_1 &= h\ f(t_n, y_n) \\
k_2 &= h\ f\left(t_n + \frac{h}{2}, y_n + \frac{k_1}{2}\right) \\
k_3 &= h\ f\left(t_n + \frac{h}{2}, y_n + \frac{k_2}{2}\right) \\
k_4 &= h\ f\left(t_n + h, y_n + k_3\right)
\end{cases}[/box]

Alternatively, we can use the same methods to solve $\dot θ^2 = B(\sin(θ)−\sin(θ_0))$.
(We can derive this equation from the previous equation if we assume that $\dot θ_0 = 0$.)
$$\dot θ = f(t,θ) = \pm\sqrt{B(\sin(θ)−\sin(θ_0))}$$
We can apply Euler or Runge-Kutta again as desired.
To instead find $t(\theta)$, we can apply the inverse function theorem, and do:
$$t'(θ) = \frac1{\dot θ} = g(θ, t) = \frac{\pm 1}{\sqrt{B(\sin(θ)−\sin(θ_0))}}\Rightarrow
\begin{cases}
\theta_{n+1}=\theta_n+ h \\
t_{n+1} = t_{n} + h g(\theta_n,t_n)\end{cases}$$
And now apply Euler or Runge-Kutta.
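A minimal Euler sketch for $t(\theta)$, taking the $+$ branch of the square root. Note that the integrand is singular at $\theta = \theta_0$, where $\dot\theta = 0$, so we start a small offset above it. B, $\theta_0$, the step, and the offset are illustrative values:

```python
import math

# Euler steps in theta to build t(theta) from t'(theta) = 1/theta_dot,
# using the + branch. Starts slightly above theta0, where theta_dot = 0
# and g(theta) would blow up.
B = 19.62                 # illustrative constant, B > 0
theta0 = -0.5             # initial angle (theta_dot(theta0) = 0)
h = 1e-4                  # step in theta
eps = 1e-3                # offset from the singularity at theta0

def g(theta):
    return 1.0 / math.sqrt(B * (math.sin(theta) - math.sin(theta0)))

theta, t = theta0 + eps, 0.0
points = [(theta, t)]
while theta < 0.5:
    t += h * g(theta)
    theta += h
    points.append((theta, t))
```

Because the solver steps in $\theta$ rather than in $t$, this gives $t(\theta)$ directly, which is what post #1 says is wanted for now.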
 
  • #4
Thank you. I worked with Runge-Kutta once and was able to use it all right, but I don't really understand where it comes from. In fact my otherwise fairly exhaustive personal library doesn't cover this topic at all, and I want to understand what I'm doing. For that reason (and one other) I've decided to try a Taylor-expansion recursion formula; that's what I meant when I used the word "iterative." By doing this I should be able to match my series with the series for the associated elliptic integral (which I also have few sources on).

Thanks for the information and advice. I'm at my parents at the moment and I'm not working consistently on it. I'll probably get back to you folks here in a week or so.

-Dan
 
  • #5
I gave up on my project.

I was eventually able to derive the Taylor series. In fact I was a bit embarrassed by how simple the solution was and by how long it took me. Ah well.

Anyway, I thought I'd let you all know what I was up to in case anyone wants to play with it. I was in the initial stages of finding an iterative solution to the double pendulum problem (I was trying out methods on the simple pendulum), and I wanted to see if I could get any information about modes via Fourier transforms.

I just derived the "full" equations for the double pendulum, as opposed to the small-angle approximation, and the two equations of motion go all the way across the page. Not something I'd want to set up by hand, Runge-Kutta or not. I could program the solution, but then I'd have a page full of data to Fourier transform, which would also have to be done by computer. I figure I wouldn't get any inspiration from a page full of numbers, so I've quit.

It was fun while it lasted!

-Dan
 

FAQ: Finding Iterative Solutions to Non-Linear Equations

What are non-linear equations?

Non-linear equations are equations in which the unknowns appear non-linearly: raised to powers other than 1, multiplied together, or inside transcendental functions such as sine or cosine. Their graphs are curves rather than straight lines.

Why is it important to find iterative solutions to non-linear equations?

Iterative solutions to non-linear equations are important because they allow us to approximate solutions to complex equations that cannot be solved algebraically. This is especially useful in fields like physics, engineering, and economics, where non-linear equations are common.

How do you find iterative solutions to non-linear equations?

To find iterative solutions to non-linear equations, we use methods like the Newton-Raphson method or the secant method. These methods repeatedly refine a starting guess through a series of iterations until it approaches the actual solution.
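For illustration, here is a minimal Newton-Raphson sketch, applied to the sample equation $\cos x = x$ (so $f(x) = \cos x - x$); the function, tolerance, and starting guess are arbitrary choices for the example:

```python
import math

# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:        # stop once the update is tiny
            return x
    raise RuntimeError("did not converge")

# Solve cos(x) = x, i.e. find the root of f(x) = cos(x) - x.
root = newton(lambda x: math.cos(x) - x,
              lambda x: -math.sin(x) - 1.0,
              x0=1.0)
```

Convergence is quadratic near the root, but, as noted below, depends on the starting guess being close enough.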

What are the limitations of using iterative solutions for non-linear equations?

One limitation of using iterative solutions for non-linear equations is that they may not always converge to the actual solution. This can happen if the starting guess is too far from the actual solution or if the function has multiple solutions. Additionally, iterative solutions can be computationally intensive and may take a long time to converge.

Are there any real-life applications for finding iterative solutions to non-linear equations?

Yes, there are many real-life applications for finding iterative solutions to non-linear equations. Some examples include optimizing complex systems, such as in engineering or economics, predicting the behavior of non-linear systems in physics, and modeling natural phenomena like population growth or chemical reactions.
