Finite difference scheme for y'(t)=a*y(t)

In summary: for a > 0 the exact solution grows exponentially, so the absolute error of any finite difference scheme eventually grows without bound as t → ∞; on any fixed time interval, however, the error can be made arbitrarily small by reducing the step size Δt.
  • #1
feynman1
It seems that no finite difference scheme is stable for a > 0 and dt > 0. Is that correct?
 
  • #2
For any discretization I think you end up with a linear recurrence [tex]
A_{n+1}y_{n+1} = A_ny_n + \dots + A_{n-k}y_{n-k}[/tex] where each [itex]A_i \in \mathbb{C}[a\Delta t][/itex]. The solution is then [tex]
y_n = \sum_{j=1}^{k+1} \alpha_jn^{m_j}\Lambda_j^n[/tex] where the [itex]\Lambda_j[/itex] are the roots of [tex]
A_{n+1}\Lambda^{k+1} = A_n\Lambda^k + \dots + A_{n-k}[/tex] and [itex]m_j = 0[/itex] unless there are repeated roots. The coefficients [itex]\alpha_j[/itex] are determined by [itex]y_0, y_1, \dots, y_k[/itex]. You can see from this that the absolute error [itex]|e^{na\Delta t} - y_n|[/itex] will increase without bound as [itex]n \to \infty[/itex] with [itex]a\Delta t > 0[/itex].
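A minimal sketch illustrating this, taking forward Euler as the simplest instance of such a recurrence (with values a = 1, Δt = 0.01, y(0) = 1 chosen purely for illustration):

[code=python]
import math

# Forward Euler for y' = a*y, y(0) = 1: the one-step recurrence
# y_{n+1} = (1 + a*dt) * y_n.
a, dt, y = 1.0, 0.01, 1.0

for n in range(1, 2001):
    y *= 1.0 + a * dt                 # one Euler step
    if n % 500 == 0:
        exact = math.exp(n * a * dt)  # analytical solution at t = n*dt
        print(f"n={n:5d}  t={n*dt:5.1f}  |exact - y_n| = {abs(exact - y):.3e}")

# The absolute error keeps growing with n, even though the relative
# error stays modest at this step size.
[/code]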
 
  • #3
pasmith said:
For any discretization I think you end up with a linear recurrence [tex]
A_{n+1}y_{n+1} = A_ny_n + \dots + A_{n-k}y_{n-k}[/tex] where each [itex]A_i \in \mathbb{C}[a\Delta t][/itex]. The solution is then [tex]
y_n = \sum_{j=1}^{k+1} \alpha_jn^{m_j}\Lambda_j^n[/tex] where the [itex]\Lambda_j[/itex] are the roots of [tex]
A_{n+1}\Lambda^{k+1} = A_n\Lambda^k + \dots + A_{n-k}[/tex] and [itex]m_j = 0[/itex] unless there are repeated roots. The coefficients [itex]\alpha_j[/itex] are determined by [itex]y_0, y_1, \dots, y_k[/itex]. You can see from this that the absolute error [itex]|e^{na\Delta t} - y_n|[/itex] will increase without bound as [itex]n \to \infty[/itex] with [itex]a\Delta t > 0[/itex].
Thanks a lot. Then which finite difference schemes can solve this equation?
 
  • #4
feynman1 said:
Thanks a lot. Then which finite difference schemes can solve this equation?

I think the main point here is that the solution grows exponentially, and any discretization constructs polynomial approximations. The exponential will eventually grow faster than the polynomial, and then you'll never be able to catch up.

That said, it's a bit unusual to want a finite difference method to actually compute for *all* t>0. If you only care about a fixed time range (even if it's enormous), you can get arbitrarily good approximations in that region.
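A rough sketch of that last point (illustrative values a = 1, T = 10, forward Euler):

[code=python]
import math

# Fix the final time T and refine the step: the error at t = T shrinks.
a, T = 1.0, 10.0
exact = math.exp(a * T)

for N in (10, 100, 1000, 10000, 100000):
    dt = T / N
    y = 1.0
    for _ in range(N):
        y *= 1.0 + a * dt
    print(f"N={N:6d}  dt={dt:.5f}  error at T = {abs(exact - y):.3e}")

# The error at the fixed time T falls roughly in proportion to dt
# (first-order convergence), even though a > 0.
[/code]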
 
  • #5
feynman1 said:
Thanks a lot. Then which finite difference schemes can solve this equation?

Any of them.

For example, for the Euler method we have [tex]
y_{n+1} = (1 + a\Delta t)y_n[/tex] with solution [tex]
y_n = y_0(1 + a\Delta t)^n.[/tex]If we let [itex]\Delta t \to 0[/itex] with [itex]N\Delta t = T[/itex] fixed we get [tex]
y(t) = \lim_{N \to \infty} y_0\left(1 + \frac{aT}{N}\right)^N = y_0e^{aT}[/tex] which is the analytical solution. Thus the method works, in that you get a more accurate result by taking a smaller timestep.

It is also the case that for [itex]a > 0[/itex] both [itex]e^{na\Delta t}[/itex] and [itex](1 + a\Delta t)^n[/itex] exhibit the same qualitative behaviour, namely exponential increase with [itex]n[/itex]. The absolute error grows because they do not increase at the same rate: The approximation can be written as [itex]e^{n\beta\Delta t}[/itex] where [tex]
\beta = \frac{\log(1 + a\Delta t)}{\Delta t} < a[/tex] and therefore increases more slowly than the analytical solution.

A tedious calculation shows that for the fourth-order Runge-Kutta method we have [tex]
y_{n+1} = \left(1 + (a\Delta t) + \tfrac12(a\Delta t)^2 + \tfrac16(a\Delta t)^3 + \tfrac{1}{24}(a\Delta t)^4\right)y_n[/tex] so that [tex]
\beta = \frac{\log\left(1 + (a\Delta t) + \tfrac12(a\Delta t)^2 + \tfrac16(a\Delta t)^3 + \tfrac{1}{24}(a\Delta t)^4\right)}{\Delta t}[/tex] which doesn't increase as fast as the analytical solution, but does increase faster than the Euler solution. And again we have [itex]\beta \to a[/itex] as [itex]\Delta t \to 0[/itex].
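A quick numerical check of these β values (with a = 1 chosen only for illustration):

[code=python]
import math

# Effective growth rates beta = log(R(a*dt)) / dt for the amplification
# factors above: R_Euler = 1 + z and R_RK4 = 1 + z + z^2/2 + z^3/6 + z^4/24.
a = 1.0

for dt in (1.0, 0.5, 0.1, 0.01):
    z = a * dt
    beta_euler = math.log(1.0 + z) / dt
    beta_rk4 = math.log(1.0 + z + z**2 / 2 + z**3 / 6 + z**4 / 24) / dt
    print(f"dt={dt:5.2f}  beta_Euler={beta_euler:.6f}  beta_RK4={beta_rk4:.6f}")

# Both betas are below a = 1, the RK4 value is much closer, and both
# approach a as dt -> 0.
[/code]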
 
  • #6
It seems to me that the backward Euler scheme is stable even if a > 0. Stability doesn't mean that the solution does not grow without bound. It means that the difference between the numerical solution and the exact solution does not grow without bound.
 
  • #7
Chestermiller said:
It seems to me that the backward Euler scheme is stable even if a > 0. Stability doesn't mean that the solution does not grow without bound. It means that the difference between the numerical solution and the exact solution does not grow without bound.
But the stability region for backward Euler is |1 - a*dt| >= 1, so it isn't even stable as dt -> 0+ when a > 0.
 
  • #8
pasmith said:
Any of them.

For example, for the Euler method we have [tex]
y_{n+1} = (1 + a\Delta t)y_n[/tex] with solution [tex]
y_n = y_0(1 + a\Delta t)^n.[/tex]If we let [itex]\Delta t \to 0[/itex] with [itex]N\Delta t = T[/itex] fixed we get [tex]
y(t) = \lim_{N \to \infty} y_0\left(1 + \frac{aT}{N}\right)^N = y_0e^{aT}[/tex] which is the analytical solution. Thus the method works, in that you get a more accurate result by taking a smaller timestep.
Thanks a lot, but isn't that an analysis of consistency rather than stability? It doesn't work when dt is big, does it?
 
  • #9
Office_Shredder said:
I think the main point here is that the solution grows exponentially, and any discretization constructs polynomial approximations. The exponential will eventually grow faster than the polynomial, and then you'll never be able to catch up.

That said, it's a bit unusual to want a finite difference method to actually compute for *all* t>0. If you only care about a fixed time range (even if it's enormous), you can get arbitrarily good approximations in that region.
Thanks. But in that case any finite difference scheme would work, so why discuss stability at all? Stability is about finite t as well.
 
  • #10
I think I was mistaken. For backward Euler, the difference scheme is $$y^{n+1}=\frac{y^n}{1-a\Delta t}$$ which is accurate only if ##a\Delta t \ll 1##.
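A small sketch of how that accuracy depends on ##a\Delta t## (illustrative values a = 1, T = 1):

[code=python]
import math

# Backward Euler for y' = a*y: y_{n+1} = y_n / (1 - a*dt), valid for a*dt != 1.
# Compare with the exact solution at a fixed time T.
a, T = 1.0, 1.0
exact = math.exp(a * T)

for dt in (0.5, 0.1, 0.01, 0.001):
    N = round(T / dt)
    y = 1.0
    for _ in range(N):
        y /= 1.0 - a * dt
    print(f"a*dt={a*dt:5.3f}  y_N={y:.5f}  exact={exact:.5f}  rel error={(y - exact)/exact:+.3%}")

# Usable only when a*dt << 1; note that for a > 0 backward Euler
# overshoots (1/(1 - a*dt) > e^(a*dt)), while forward Euler undershoots.
[/code]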
 
  • #11
Chestermiller said:
I think I was mistaken. For backward Euler, the difference scheme is $$y^{n+1}=\frac{y^n}{1-a\Delta t}$$ which is accurate only if ##a\Delta t \ll 1##.
But that doesn't fall into the stability region |1 - a*dt| >= 1.
 
  • #12
?
 

FAQ: Finite difference scheme for y'(t)=a*y(t)

What is a finite difference scheme?

A finite difference scheme is a numerical method used to approximate the solution of a differential equation. It involves dividing the domain of the equation into a discrete grid and using finite difference approximations to calculate the values of the function at each point on the grid.

What does y'(t) = a*y(t) represent?

This equation represents a first-order linear ordinary differential equation, where y'(t) is the derivative of the function y(t) with respect to time t, and a is a constant growth (or decay) rate: the rate of change of y(t) is proportional to y(t) itself.

How does a finite difference scheme work for solving this equation?

The finite difference scheme for this equation approximates the derivative y'(t) by a difference quotient between neighbouring grid points, substitutes that approximation into the differential equation, and then steps forward from the initial value to compute y(t) at each grid point in turn.
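For example, using the forward difference ##y'(t_n) \approx (y_{n+1} - y_n)/\Delta t## turns the equation into ##y_{n+1} = (1 + a\Delta t)\,y_n##. With values chosen purely for illustration, ##a = 1##, ##\Delta t = 0.1##, ##y_0 = 1##, this gives ##y_1 = 1.1##, ##y_2 = 1.21##, ##y_3 = 1.331##, compared with the exact values ##e^{0.1} \approx 1.105##, ##e^{0.2} \approx 1.221##, ##e^{0.3} \approx 1.350##.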

What are the advantages of using a finite difference scheme for this equation?

One advantage is that it allows us to solve the differential equation numerically, which can be useful when an analytical solution is not possible. It also allows us to approximate the solution at any point on the grid, not just at specific values of t.

Are there any limitations to using a finite difference scheme for this equation?

Yes. One limitation is that the accuracy of the solution depends on the grid spacing (step size): a smaller step gives a more accurate solution but requires more computational work. In addition, a simple scheme like the one above may need to be adapted for stiff, more complex, or higher-order differential equations.
