Differential equations stability

In summary, the equilibrium point $q$ is stable if for all $\varepsilon > 0$ there exists a $\delta > 0$ such that $\|x(t)-q\| \leq \varepsilon$ for all $t > t_0$ whenever $\|x(t_0)-q\| \leq \delta$.
  • #1
Harambe1
A one-dimensional dynamical system is given by
$x' = f(x), \quad t \in [0,+\infty)$,
where $f : \mathbb{R} \to \mathbb{R}$ is the smooth function defined as follows:

$$f(x) = \begin{cases}
x^4 \sin\left(\frac{1}{x}\right) & x \neq 0,\\
0 & x = 0.
\end{cases}$$

Find all the equilibrium points and determine the stability properties of each equilibrium point.

I've managed to find all the equilibrium points and their stability properties (although with the derivative test I'm not sure whether 'less than zero' means 'stable' or 'asymptotically stable'), with the exception of zero. I'm struggling to formally prove its stability using the definition:

The equilibrium point $p \in \mathbb{R}$ is stable if for all $\varepsilon > 0$ there exists a $\delta > 0$ such that $\|x(t)-p\| \leq \delta$ whenever $\|x(t)-p\| \leq \varepsilon$ for all $t \geq t_0 \geq 0$.
I'm not really sure where to start with choosing epsilon or delta or how to use these. Thanks for any tips.
 
  • #2
I think you have not quoted the definition of stability correctly for your application. It's looking like you're trying to apply Lyapunov stability:

Given the DE $\dot{x}=f(x)$, and solution $x=\varphi(t,p)$, a point $q$ is stable if given $\varepsilon > 0$, there is a $\delta > 0$ such that $\|\varphi(t,p)-q\|<\varepsilon$ for all $t>t_0$ and for all $p$ such that $\|p-q\|<\delta$. The idea in my mind is this: if you start the system close enough to a stable equilibrium point, then the system stays in an envelope about that equilibrium point for all time after the initial time.

Now your equilibria are:
$$x=0, \qquad x=\frac{1}{n\pi}, \quad n\in\mathbb{Z}\setminus\{0\}.$$
I would do the $\dot{x}$ versus $x$ plot as done in http://www.math.psu.edu/tseng/class/Math251/Notes-1st%20order%20ODE%20pt2.pdf. You just draw arrows in the various regions, corresponding to whether you're above or below the $x$ axis. When you're above, you draw arrows to the right, and when you're below, draw them to the left. If an equilibrium has arrows on both sides going into it, you're stable. If going out of it, you're unstable. If the arrows are going through it, then you're neither unstable nor stable, but semistable (stable from one direction only). Does that make sense?

Another way of thinking about it is this: if $f'(q)<0$ at equilibrium point $q$, then it's stable. If $f'(q)>0$ at equilibrium point $q$, then it's unstable. If $f'(q)=0$, the test is inconclusive and $q$ has to be examined directly.
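For concreteness, here is a quick sketch of what that test gives for the $f$ in this problem (worth double-checking the algebra yourself):

$$f'(x) = 4x^3 \sin\left(\frac{1}{x}\right) - x^2 \cos\left(\frac{1}{x}\right) \quad (x \neq 0), \qquad f'(0) = \lim_{h \to 0} \frac{h^4 \sin(1/h)}{h} = 0.$$

At $x = \frac{1}{n\pi}$ the sine term vanishes, so $f'\left(\frac{1}{n\pi}\right) = -\frac{(-1)^n}{(n\pi)^2}$: negative (stable) for even $n$, positive (unstable) for odd $n$. At $x = 0$ the derivative is zero, so this test says nothing about that point.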
 
  • #3
Ackbach said:
I think you have not quoted the definition of stability correctly for your application. It's looking like you're trying to apply Lyapunov stability:

Given the DE $\dot{x}=f(x)$, and solution $x=\varphi(t,p)$, a point $q$ is stable if given $\varepsilon > 0$, there is a $\delta > 0$ such that $\|\varphi(t,p)-q\|<\varepsilon$ for all $t>t_0$ and for all $p$ such that $\|p-q\|<\delta$. The idea in my mind is this: if you start the system close enough to a stable equilibrium point, then the system stays in an envelope about that equilibrium point for all time after the initial time.

Now your equilibria are:
$$x=0, \qquad x=\frac{1}{n\pi}, \quad n\in\mathbb{Z}\setminus\{0\}.$$
I would do the $\dot{x}$ versus $x$ plot as done in http://www.math.psu.edu/tseng/class/Math251/Notes-1st%20order%20ODE%20pt2.pdf. You just draw arrows in the various regions, corresponding to whether you're above or below the $x$ axis. When you're above, you draw arrows to the right, and when you're below, draw them to the left. If an equilibrium has arrows on both sides going into it, you're stable. If going out of it, you're unstable. If the arrows are going through it, then you're neither unstable nor stable, but semistable (stable from one direction only). Does that make sense?

Another way of thinking about it is this: if $f'(q)<0$ at equilibrium point $q$, then it's stable. If $f'(q)>0$ at equilibrium point $q$, then it's unstable. If $f'(q)=0$, the test is inconclusive and $q$ has to be examined directly.

Thanks for the reply.

I'm not really sure how I would go about drawing a direction field for this particular function, so I have opted to use the test "If $f'(q)>0$ at equilibrium point $q$, then it's unstable. If $f'(q)=0$, the test is inconclusive." However, having obtained $f'(q)=0$ at the equilibrium $q=0$, the test tells me nothing there. Since I'm also unable to sketch the direction field, is it possible to use the definition to prove stability at $q=0$ (by choosing an arbitrary epsilon)?

Thanks again.
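(A minimal numerical version of the arrow test from post #2, for anyone who finds the direction field hard to draw by hand; the cutoff n_max and the offset h are arbitrary illustrative choices, not from the thread.)

```python
import math

def f(x):
    """Right-hand side of x' = f(x): x^4 * sin(1/x), with f(0) = 0."""
    return 0.0 if x == 0.0 else x**4 * math.sin(1.0 / x)

# Nonzero equilibria near the origin: x = 1/(n*pi) for n = +-1, ..., +-n_max.
n_max = 6
equilibria = sorted(1.0 / (n * math.pi) for n in range(-n_max, n_max + 1) if n != 0)

for q in equilibria:
    h = abs(q) * 1e-3                  # small offset, well inside the gap to the next equilibrium
    left, right = f(q - h), f(q + h)   # sign of f just to the left/right of the equilibrium
    if left > 0 and right < 0:
        kind = "stable (arrows point in from both sides)"
    elif left < 0 and right > 0:
        kind = "unstable (arrows point out on both sides)"
    else:
        kind = "semistable (arrows pass through)"
    print(f"x = {q:+.5f}: {kind}")
```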
 

FAQ: Differential equations stability

What is the concept of stability in differential equations?

Stability in differential equations refers to the behavior of the solutions over time. A system is considered stable if small changes in the initial conditions or inputs do not significantly affect the long-term behavior of the solutions.

What are the different types of stability in differential equations?

There are three common classifications: stable, unstable, and marginally stable. A system is stable if solutions starting near an equilibrium remain near it (and, for asymptotic stability, approach it) as time goes to infinity. It is unstable if at least one nearby solution moves away from the equilibrium. Marginally stable systems have solutions that neither decay toward the equilibrium nor grow away from it over time.

How is stability determined in differential equations?

For a linear system $\dot{x} = Ax$ (or the linearization of a nonlinear system at an equilibrium), stability can be determined from the eigenvalues of the matrix $A$. If all eigenvalues have negative real parts, the equilibrium is asymptotically stable. If at least one eigenvalue has a positive real part, it is unstable. Marginally stable cases have at least one eigenvalue with zero real part and none with positive real part.
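As an illustration, a minimal sketch of this eigenvalue check in Python (the matrix $A$ below is an arbitrary example, not tied to any particular system):

```python
import numpy as np

# Linearized system x' = A x; A is an arbitrary 2x2 example.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

real_parts = np.linalg.eigvals(A).real
if np.all(real_parts < 0):
    verdict = "stable: all eigenvalues have negative real part"
elif np.any(real_parts > 0):
    verdict = "unstable: some eigenvalue has positive real part"
else:
    verdict = "marginally stable: no positive real parts, at least one zero"
print(verdict)
```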

What is the difference between local and global stability in differential equations?

Local stability refers to the behavior of the solutions near a specific point or equilibrium, while global stability considers the behavior of the solutions over the entire domain of the system. A system can be locally stable but globally unstable if there are multiple equilibria with different stability properties.

How do we apply stability analysis in real-world problems?

Stability analysis is a fundamental tool in many scientific fields, including physics, engineering, and biology. It is used to analyze the behavior of systems such as electrical circuits, chemical reactions, and population dynamics. By understanding the stability properties of a system, we can make predictions and design control strategies to ensure its desired behavior.
