Proving stability of linear system equations

In summary, proving the stability of linear system equations involves analyzing the system's response to inputs and determining whether it converges to a steady state over time. Key methods include examining the eigenvalues of the system's matrix, where stability is indicated by all eigenvalues having negative real parts. Additional techniques, such as Lyapunov's direct method, can also be employed to establish stability by constructing a suitable Lyapunov function. Overall, these approaches help ensure that the system maintains predictable and bounded behavior under various conditions.
  • #1
member 731016
Homework Statement
Please see below
Relevant Equations
Please see below
For this problem,
[Problem statement attached as image: 1716877322765.png]

My solution is to find the characteristic equation of the system by writing it in matrix form. This gives ##\lambda^2 + 2f \lambda + f^2 + 1 = 0##
Then each eigenvalue is ##\lambda_1 = -f - i## and ##\lambda_2 = -f + i##
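As a quick numerical sanity check of those roots (a sketch that treats ##f## as a fixed number, here the arbitrary sample value f = 0.5), the quadratic formula applied to ##\lambda^2 + 2f\lambda + f^2 + 1 = 0## gives:

```python
import cmath

# Treat f as a fixed parameter for this check (f = 0.5 is an arbitrary sample).
f = 0.5

# Roots of lambda^2 + 2*f*lambda + (f^2 + 1) = 0 via the quadratic formula.
a, b, c = 1, 2 * f, f**2 + 1
disc = cmath.sqrt(b**2 - 4 * a * c)  # = 2i, independent of f
lam1 = (-b - disc) / (2 * a)
lam2 = (-b + disc) / (2 * a)

print(lam1, lam2)  # (-0.5-1j) and (-0.5+1j), i.e. -f -/+ i
```

The discriminant is always ##-4##, so the roots are ##-f \mp i## for any value of ##f##, matching the eigenvalues above.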

I then want to find the Jacobian; however, that requires the partial derivatives (with respect to x and y) of ##F(x,y) = y - xf(x,y)## and ##G(x,y) = - x - yf(x,y)##, and I'm not sure how to take them with the ##f(x,y)## in there.

Does anybody please know what I should do?

Thanks!
 
  • #2
You don't need to look at the Jacobian; the fact that the question tells you nothing about the partial derivatives of [itex]f[/itex] is a strong suggestion that this is the wrong way to proceed.

When I see scalar multiples of [itex](x,y)[/itex] and [itex](-y,x)[/itex] on the right hand side, I immediately think of polar coordinates [itex](x,y) = (r \cos \theta, r \sin \theta)[/itex]. This is because [tex]\begin{split}
r\frac{dr}{dt} &= x\frac{dx}{dt} + y\frac{dy}{dt} \\
r^2\frac{d\theta}{dt} &= x\frac{dy}{dt} - y\frac{dx}{dt} \end{split}[/tex] so that the coefficient of [itex](x,y)[/itex] tells you how [itex]r[/itex] behaves and the coefficient of [itex](-y,x)[/itex] tells you how [itex]\theta[/itex] behaves. If [itex]\dot r < 0[/itex] the origin is asymptotically stable; if [itex]\dot r > 0[/itex] the origin is unstable.
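Substituting the thread's right-hand sides ##\dot x = y - xf(x,y)## and ##\dot y = -x - yf(x,y)## into these identities gives ##r\dot r = -r^2 f(x,y)## and ##r^2\dot\theta = -r^2##. A numeric spot-check (a sketch; the particular ##f## below is an arbitrary smooth sample, not from the problem):

```python
# Spot-check of the polar-coordinate identities for the system
#   dx/dt = y - x*f(x, y),   dy/dt = -x - y*f(x, y),
# which should give r*dr/dt = -r^2 * f and r^2 * dtheta/dt = -r^2.

def f(x, y):
    return x**2 + y**2  # illustrative choice; any smooth f works the same way

x, y = 0.7, -1.3
xdot = y - x * f(x, y)
ydot = -x - y * f(x, y)
r2 = x**2 + y**2

r_rdot = x * xdot + y * ydot        # = r * dr/dt
r2_thetadot = x * ydot - y * xdot   # = r^2 * dtheta/dt

assert abs(r_rdot - (-r2 * f(x, y))) < 1e-12
assert abs(r2_thetadot - (-r2)) < 1e-12
```

So ##\dot r = -r f(x,y)## and ##\dot\theta = -1##: the sign of ##f## near the origin decides whether ##r## shrinks or grows.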
 
  • #3
pasmith said:
You don't need to look at the Jacobian; the fact that the question tells you nothing about the partial derivatives of [itex]f[/itex] is a strong suggestion that this is the wrong way to proceed.

When I see scalar multiples of [itex](x,y)[/itex] and [itex](-y,x)[/itex] on the right hand side, I immediately think of polar coordinates [itex](x,y) = (r \cos \theta, r \sin \theta)[/itex]. This is because [tex]\begin{split}
r\frac{dr}{dt} &= x\frac{dx}{dt} + y\frac{dy}{dt} \\
r^2\frac{d\theta}{dt} &= x\frac{dy}{dt} - y\frac{dx}{dt} \end{split}[/tex] so that the coefficient of [itex](x,y)[/itex] tells you how [itex]r[/itex] behaves and the coefficient of [itex](-y,x)[/itex] tells you how [itex]\theta[/itex] behaves. If [itex]\dot r < 0[/itex] the origin is asymptotically stable; if [itex]\dot r > 0[/itex] the origin is unstable.
Thank you for your reply @pasmith!

That is an interesting idea that I have not seen before. So far I have been taught two methods for finding the stability of a non-linear system of equations: either use the Jacobian matrix for linearization, to find a linear DE system equivalent to the non-linear DE system near the equilibrium, or use the direct method.

Thinking about the latter method, could we also try to solve this problem using a Lyapunov function of the form ##V(x,y) = dx^2 + dy^2 = d(x^2 + y^2)##? If that does not work, then I could generalize to different coefficients, ##V(x,y) = dx^2 + gy^2##?
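For the simplest case ##d = g = 1##, i.e. ##V = x^2 + y^2##, the derivative along trajectories works out to ##\dot V = 2x\dot x + 2y\dot y = -2(x^2+y^2)f(x,y)##. A small sketch checking this (the positive sample ##f## below is an assumption for illustration only):

```python
# For V(x, y) = x^2 + y^2, the derivative along trajectories of
#   dx/dt = y - x*f(x, y),  dy/dt = -x - y*f(x, y)
# is Vdot = 2x*xdot + 2y*ydot = -2*(x^2 + y^2)*f(x, y),
# so V strictly decreases wherever f > 0.

def f(x, y):
    return 1 + x**2  # illustrative positive sample, not from the problem

x, y = 0.4, 0.9
xdot = y - x * f(x, y)
ydot = -x - y * f(x, y)

Vdot = 2 * x * xdot + 2 * y * ydot
assert abs(Vdot - (-2 * (x**2 + y**2) * f(x, y))) < 1e-12
assert Vdot < 0  # V decreases here since f > 0
```

So with this ##V##, the sign of ##f## alone determines whether ##V## decreases, matching the polar-coordinate conclusion.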

Thanks!
 

FAQ: Proving stability of linear system equations

What is stability in the context of linear systems?

Stability in linear systems refers to the behavior of the system's output over time in response to initial conditions or external inputs. A system is considered stable if, after a disturbance, its output returns to a steady state or equilibrium. In mathematical terms, for a linear time-invariant (LTI) system, stability is often assessed by examining the eigenvalues of the system's matrix.

How can I determine the stability of a linear system?

The stability of a linear system can be determined by analyzing the eigenvalues of its state matrix. If all eigenvalues have negative real parts, the system is asymptotically stable. If any eigenvalue has a positive real part, the system is unstable. If the rightmost eigenvalues lie on the imaginary axis (zero real part) and are non-defective, the system is marginally stable.
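For a 2×2 state matrix this test can be sketched directly from the trace and determinant (a minimal illustration; it ignores the defective repeated-eigenvalue case on the imaginary axis, where marginal stability can fail):

```python
import cmath

def classify_2x2(a, b, c, d, tol=1e-12):
    """Classify stability of x' = A x for A = [[a, b], [c, d]] from the
    eigenvalues, i.e. the roots of lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr**2 - 4 * det)
    eigs = [(tr - disc) / 2, (tr + disc) / 2]
    max_real = max(lam.real for lam in eigs)
    if max_real < -tol:
        return "asymptotically stable"
    if max_real > tol:
        return "unstable"
    return "marginally stable (eigenvalues on the imaginary axis)"

print(classify_2x2(-1, 0, 0, -2))  # eigenvalues -1, -2: stable
print(classify_2x2(0, 1, -1, 0))   # eigenvalues +/- i: marginal
print(classify_2x2(1, 0, 0, -3))   # one positive eigenvalue: unstable
```

For larger systems the same criterion applies to the eigenvalues of the full state matrix.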

What role do eigenvalues play in system stability?

Eigenvalues are crucial in determining the stability of linear systems. They indicate how the system responds to perturbations. The sign of the real parts of the eigenvalues dictates whether the system's response will decay to zero (stable), grow without bound (unstable), or oscillate indefinitely (marginally stable).

What is the difference between asymptotic stability and marginal stability?

Asymptotic stability means that the system's state will converge to an equilibrium point over time after a disturbance. In contrast, marginal stability indicates that the system will neither converge nor diverge but will oscillate indefinitely around the equilibrium point without settling down.
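The contrast is visible in the magnitude of the modes ##e^{\lambda t}##; a short sketch with two arbitrary sample eigenvalues:

```python
import cmath

# Solutions of a linear system are built from modes e^{lambda * t}.
# A negative real part makes the magnitude decay (asymptotic stability);
# a purely imaginary eigenvalue keeps the magnitude constant (marginal).
lam_stable = complex(-0.5, 1.0)   # decaying oscillation
lam_marginal = complex(0.0, 1.0)  # pure oscillation

mags_stable = [abs(cmath.exp(lam_stable * t)) for t in (0, 5, 10)]
mags_marginal = [abs(cmath.exp(lam_marginal * t)) for t in (0, 5, 10)]

print(mags_stable)    # strictly decreasing toward 0
print(mags_marginal)  # stays at 1
```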

Can a linear system be stable if it has repeated eigenvalues?

A linear system can be stable even if it has repeated eigenvalues, provided that the real parts of those eigenvalues are negative. However, repeated eigenvalues can lead to complications in the system's response, such as the potential for oscillatory behavior or the need for generalized eigenvectors to fully describe the system's dynamics.
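A standard illustration (my own example, not from the thread) is the defective matrix ##A = \begin{pmatrix}-1 & 1\\ 0 & -1\end{pmatrix}## with repeated eigenvalue ##-1##: the solution contains a ##t e^{-t}## term that grows transiently but still decays to zero:

```python
import math

# For A = [[-1, 1], [0, -1]] (repeated eigenvalue -1, defective),
# the solution of x' = A x from x(0) = (0, 1) is x(t) = (t*e^{-t}, e^{-t}).
# The t*e^{-t} component grows at first, then decays to 0 overall.

def x1(t):
    return t * math.exp(-t)  # transient growth, eventual decay

print(x1(0.0), x1(1.0), x1(10.0))  # 0.0, then a peak region, then near 0
```

This is the "complication" referred to above: negative real parts still guarantee decay, but the transient can overshoot before settling.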
