Neglecting higher powers of small quantities in calculations

In summary: if you neglect terms which are not constant or linear in a given function and its derivatives, the long-term behaviour of the simplified system is one of two types: either it decays to an equilibrium (in which case the approximation was justified) or it goes off to infinity (in which case it was not).
  • #1
dyn
Hi
If x(t) is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of x(t) can be considered small, with its powers greater than 2 also neglected?
Thanks
 
  • #2
dyn said:
Hi
If x(t) is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of x(t) can be considered small, with its powers greater than 2 also neglected?
Thanks
In terms of mathematics, no. Take the function ##f(x) = x\sin(\frac 1 x)##. The function is bounded for small ##x##, but ##f'(x) = \sin(\frac 1 x) - \frac 1 x \cos (\frac 1 x)## which is unbounded for small ##x##.
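A quick numerical sketch of this counterexample (standard library only; not from the original post). At the points ##x = 1/(n\pi)## we have ##\sin(1/x) = 0## and ##\cos(1/x) = \pm 1##, so ##f(x) = 0## while ##|f'(x)| = n\pi## grows without bound:

```python
import math

def f(x):
    # f(x) = x*sin(1/x): bounded by |x| near 0
    return x * math.sin(1.0 / x)

def fprime(x):
    # f'(x) = sin(1/x) - (1/x)*cos(1/x): unbounded near 0
    return math.sin(1.0 / x) - math.cos(1.0 / x) / x

# Sample points approaching 0 where 1/x = n*pi, so cos(1/x) = +/-1
for n in (10, 100, 1000):
    x = 1.0 / (n * math.pi)
    print(f"x = {x:.2e}  f(x) = {f(x):+.2e}  f'(x) = {fprime(x):+.2e}")
```

As ##x \to 0## through these points, ##f(x) \to 0## but ##|f'(x)| \to \infty##.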

In terms of physics, the physical constraints of the system may exclude this type of badly behaved function.
 
  • #3
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?
 
  • #4
Another example I have seen is a Lagrangian that includes an ##\dot{\alpha}^2## term, a ##\theta^2## term, and an ##\dot{\alpha}\dot{\theta}\sin(\alpha - \theta)## term.
When the simplification that the angles ##\alpha## and ##\theta## are both small is made, the ##\dot{\alpha}\dot{\theta}\sin(\alpha - \theta)## term is neglected. Why is this term neglected?
 
  • #5
PeroK said:
In terms of mathematics, no.
This.

If position is small does that mean velocity is also small?
 
  • #6
Vanadium 50 said:
This.

If position is small does that mean velocity is also small?
No, velocity could be large even for a small position.

But why are those terms neglected in #3 and #4?
 
  • #7
If the object is starting at rest, then the velocity is zero when the position is zero. Maybe that's being used/assumed? I think we would need to see more of the description of these examples to draw a definitive conclusion.
 
  • #8
dyn said:
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?

If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in [itex]r[/itex] and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case [itex]\dot r \to 0[/itex], so [itex]\dot r[/itex] is eventually small.
(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).
 
  • #9
dyn said:
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?
In small oscillations problems, after determining the exact equations of motion using the Euler-Lagrange equation, one can obtain the so-called linearised equations by neglecting all terms which are not linear in the generalised coordinates and their time derivatives.

In your example, that turns out to be equivalent to simply dropping the middle term ##-a^2 \dot{r}^2 r## from the Lagrangian.

(This approach will give you the same results as if you were to instead use the "approximate" quadratic forms ##T = \dfrac{1}{2} a_{ij}(\boldsymbol{q}_0) \dot{q}_i \dot{q}_j## and ##U = \dfrac{1}{2} \partial_i \partial_j U \bigg{|}_{\boldsymbol{q}_0} q_i q_j## to derive the equations of motion, with ##\boldsymbol{q}_0## the equilibrium point).
 
  • #10
pasmith said:
If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in [itex]r[/itex] and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case [itex]\dot r \to 0[/itex], so [itex]\dot r[/itex] is eventually small.
(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).
If I have a simple pendulum with zero friction then it would oscillate forever. Is that an equilibrium state? How does that imply that ##\dot{r}## is small?
I have come across the following in some lecture notes: "for small amplitude oscillations about ##\theta = 0##, ##\dot{\theta}## is also small". But why? Can a harmonic oscillator not oscillate at a fast speed?
 
  • #11
Well if it's a pendulum, then I would think not. The velocity at the bottom is a function of the potential energy, so if the potential energy is small because it doesn't go up very high, then the velocity of the pendulum is small.
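That energy argument is easy to check numerically. A minimal sketch with hypothetical values of ##g## and ##L## (not from the original post):

```python
import math

# Energy conservation for a frictionless pendulum of length L released
# from rest at angle theta0: (1/2) v^2 = g*L*(1 - cos(theta0)), so
# v_bottom = sqrt(2*g*L*(1 - cos(theta0))) ~ theta0*sqrt(g*L) for small theta0.
g, L = 9.81, 1.0   # hypothetical values, SI units

for theta0 in (0.5, 0.1, 0.01):
    v_bottom = math.sqrt(2.0 * g * L * (1.0 - math.cos(theta0)))
    print(f"theta0 = {theta0:<5} v_bottom = {v_bottom:.4f} m/s")
```

The speed at the bottom scales linearly with the release angle, so a small amplitude forces a small velocity.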
 
  • #12
That is a good point. What about a mass oscillating on a spring; if the spring has a large ##k## value, could that not oscillate with a large speed?
 
  • #13
Sure, but if you fix ##k##, then consider the maximum velocity as a function of the maximum displacement, you still get that the velocity goes to zero as the displacement goes to zero.
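For a fixed spring constant this follows from ##x(t) = A\cos(\omega t)##, whose maximum speed is ##A\omega##. A minimal sketch (the values of ##k## and ##m## are hypothetical):

```python
import math

# Mass on a spring with fixed k and m: x(t) = A*cos(w*t), w = sqrt(k/m).
# The maximum speed is v_max = A*w, which -> 0 as the amplitude A -> 0,
# even if k (and hence w) is large.
k, m = 100.0, 1.0          # hypothetical spring constant and mass
w = math.sqrt(k / m)       # angular frequency, here 10 rad/s

for A in (1.0, 0.1, 0.01):
    print(f"A = {A:<5} v_max = A*w = {A * w:.2f}")
```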
 
  • #14
It is already explained in Arnold's book on mechanics. If you have a differential equation ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x})## with an equilibrium position ##\mathbf{x}_0##, then to first order
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
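As a numerical illustration of this (a sketch using the pendulum ##\ddot{\theta} = -\sin\theta## and a simple RK4 integrator; the step sizes and times are my own choices, not taken from Arnold):

```python
import math

# Compare the exact pendulum (theta'' = -sin(theta)) with its linearisation
# (theta'' = -theta) for a small initial offset from equilibrium.

def step(f, y, h):
    # One RK4 step for y' = f(y), with y = (theta, omega)
    k1 = f(y)
    k2 = f([y[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([y[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6 for i in range(2)]

exact = lambda y: [y[1], -math.sin(y[0])]
linear = lambda y: [y[1], -y[0]]

delta = 0.01                      # small initial offset from equilibrium
ye, yl = [delta, 0.0], [delta, 0.0]
h, t_end = 0.01, 10.0
for _ in range(int(t_end / h)):
    ye = step(exact, ye, h)
    yl = step(linear, yl, h)

err = abs(ye[0] - yl[0])
print(f"after t = {t_end}: |theta_exact - theta_linear| = {err:.2e}")
# The discrepancy is of order delta**3 * t, far smaller than delta itself.
```

Halving ##\delta## shrinks the discrepancy much faster than it shrinks the motion, which is the content of the theorem being quoted.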
 
  • #15
Office_Shredder said:
Sure, but if you fix ##k##, then consider the maximum velocity as a function of the maximum displacement, you still get that the velocity goes to zero as the displacement goes to zero.
I thought maximum velocity occurs at zero displacement?
 
  • #16
The velocity at zero displacement is a function of the maximum displacement that occurs during the harmonic oscillation.
 
  • #17
ergospherical said:
It is already explained in Arnold's book on mechanics. If you have a differential equation ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x})## with an equilibrium position ##\mathbf{x}_0##, then to first order
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
That still assumes that ##F(x)## is "well-behaved". That can't possibly be true without some constraints on ##F## - having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.
 
  • #18
PeroK said:
That still assumes that ##F(x)## is "well-behaved". That can't possibly be true without some constraints on ##F## - having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.
You are correct, and the statement from post #14 that you quoted is indeed incorrect, and not only because of missing smoothness conditions on ##F##.

(It is not that I think you need my confirmation, I just want to stress the point.)
 
  • #19
What is the mistake? Assuming that ##\mathbf{x}_L## and ##\mathbf{x}_E## have the same initial conditions and that ##\boldsymbol{F}## is well-behaved?
 
  • #20
ergospherical said:
What is the mistake? Assuming that ##\mathbf{x}_L## and ##\mathbf{x}_E## have the same initial conditions and that ##\boldsymbol{F}## is well-behaved?
Does Arnold say anything about the properties of ##F## he's assuming? There must be conditions on ##F## for his analysis to hold. E.g. putting a maximum bound on all derivatives of ##F## clearly does the trick.
 
  • #21
Not that I can see, but then again this is an applied text and the bit I referenced was discussing the linearisation of systems like ##\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\boldsymbol{q}}} = \dfrac{\partial L}{\partial\boldsymbol{q}}## near equilibrium positions. I have no idea what the conditions on F are, but then again I don't particularly need to worry about it as a physics student :)
 
  • #22
ergospherical said:
Not that I can see, but then again this is an applied text and the bit I referenced was discussing the linearisation of systems like ##\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\boldsymbol{q}}} = \dfrac{\partial L}{\partial\boldsymbol{q}}## near equilibrium positions. I have no idea what the conditions on F are, but then again I don't particularly need to worry about it as a physics student :)
I found a PDF of Arnold's Mechanics. I don't believe that theorem on page 100. It doesn't look right at all. What it says is:

For any duration ##T## (no matter how large) and for any precision ##\epsilon## (no matter how small), simply by choosing a small enough initial offset from equilibrium (##\delta##), the exact solution and the linearised solution remain within ##\delta \epsilon## of each other. That can't be right.

The problem with the theorem as stated is that the smaller you make your initial offset ##\delta##, the smaller you also make the allowable error ##\delta \epsilon##.

The loophole is that the sum of terms in the Taylor series such as ##\frac{f''(x_0)x^2}{2} \dots## needn't converge as ##x^2## for small ##x##. That's why you need the derivatives to be bounded, not simply the Taylor series to converge.

PS And, of course, when you write:
ergospherical said:
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}
Then bounded derivatives are exactly what you are assuming.
 

FAQ: Neglecting higher powers of small quantities in calculations

What does it mean to neglect higher powers of small quantities in calculations?

Neglecting higher powers of small quantities in calculations refers to the practice of omitting terms in a mathematical expression that are much smaller than the dominant terms. This is done to simplify calculations while still obtaining a good approximation of the result.

Why is it important to neglect higher powers of small quantities in calculations?

Neglecting higher powers of small quantities is important because it allows for simpler and more efficient calculations. Keeping these small terms rarely changes the result appreciably, but it makes the calculation unnecessarily complex.

How do you determine which terms to neglect in a calculation?

The decision to neglect higher powers of small quantities in a calculation is based on the relative size of the terms. Generally, if a term is significantly smaller than the other terms in the expression, it can be neglected without significantly affecting the accuracy of the result.

Are there any situations where neglecting higher powers of small quantities is not appropriate?

Yes, there are some situations where neglecting higher powers of small quantities is not appropriate. For example, in certain scientific or engineering calculations where even small errors can have significant consequences, it may be necessary to include all terms in the calculation.

Can neglecting higher powers of small quantities lead to inaccurate results?

Yes, neglecting higher powers of small quantities can potentially lead to inaccurate results. It is important to carefully consider the relative size of the terms being neglected and the level of accuracy required for the calculation. In some cases, neglecting these terms may result in a sufficiently accurate approximation, while in others it may lead to significant errors.
