On error estimates of approximate solutions

psie
TL;DR Summary
I'm trying to estimate the error between an approximate and an exact solution, but I get a very poor estimate.
I'm reading Ordinary Differential Equations by Andersson and Böiers. They give an estimate for how the difference between an exact and an approximate solution propagates with time. Then they give an example where they encourage the reader to check that this estimate holds. When I do that, I get a very bad estimate and I wonder if I'm doing something wrong. I will first state a definition of what it means to be an approximate solution and then state the theorem that gives the estimate.

Definition 1. Let ##I## be an interval on the real axis, and ##\Omega## an open set in ##\mathbf R\times\mathbf{R}^n##. Assume that the function ##\pmb{f}:\Omega\to\mathbf{R}^n## is continuous. A continuous function ##\pmb{x}(t),\ t\in I##, is called an ##\varepsilon##-approximate solution of the system ##\pmb{x}'=\pmb{f}(t,\pmb{x})## if ##(t,\pmb{x})\in\Omega## when ##t\in I## and $$\left|\pmb{x}(t'')-\pmb{x}(t')-\int_{t'}^{t''} \pmb{f}(s,\pmb{x}(s))ds\right|\leq \varepsilon|t''-t'|\quad \text{when } t',t''\in I.\tag1$$

If ##\pmb{x}## is differentiable, then choosing ##t'=t## and ##t''=t+h## and taking limits as ##h\to0##, ##(1)## reads $$|\pmb{x}'(t)-\pmb{f}(t,\pmb{x}(t))|\leq\varepsilon\quad\text{when }t\in I.\tag2$$

The following theorem gives an estimate of how the difference between exact and approximate solutions propagates with ##t##. I state it without proof:

Theorem 1. Assume that ##\pmb{f}(t,\pmb{x})## is continuous in ##\Omega\subseteq \mathbf{R}\times\mathbf{R}^n## and satisfies the Lipschitz condition $$|\pmb{f}(t,\pmb{x})-\pmb{f}(t,\pmb{y})|\leq L|\pmb{x}-\pmb{y}|, \quad (t,\pmb{x}),(t,\pmb{y})\in\Omega.\tag3$$ Let ##\pmb{\tilde{x}}(t)## be an ##\varepsilon##-approximate solution and ##\pmb{x}(t)## an exact solution of ##\pmb{x}'=\pmb{f}(t,\pmb{x})## in ##\Omega## when ##t\in I##. For an arbitrary point ##t_0## in ##I## we then have $$|\pmb{\tilde{x}}(t)-\pmb{x}(t)|\leq |\pmb{\tilde{x}}(t_0)-\pmb{x}(t_0)|e^{L|t-t_0|}+\frac{\varepsilon}{L}(e^{L|t-t_0|}-1),\quad t\in I.\tag4$$

Note that the first term on the right-hand side of ##(4)## vanishes if ##\pmb{\tilde{x}}## and ##\pmb x## agree at ##t_0##. Now consider the following example:

Example 1. (##n=1##) Consider the differential equation ##x'=3x^{2/3}##. The function ##\tilde{x}(t)\equiv 10^{-6}## is an ##\varepsilon##-approximate solution for ##\varepsilon=3\cdot10^{-4}## by ##(2)##, since $$|\tilde{x}'(t)-3\tilde{x}(t)^{2/3}|=3\left(10^{-6}\right)^{2/3}=3\cdot10^{-4}.\tag5$$ The exact solution of the initial value problem ##x'=3x^{2/3},\ x(0)=10^{-6}## is ##x(t)=\left(t+\frac{1}{100}\right)^3##. For ##t=1## we have $$x(1)\approx1.03,\quad \tilde{x}(1)=10^{-6}.\tag6$$
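As a quick sanity check of Example 1 (my own sketch, not from the book), one can verify numerically that ##x(t)=(t+\frac{1}{100})^3## solves the initial value problem and evaluate both solutions at ##t=1##:

```python
# Check Example 1: x(t) = (t + 1/100)^3 solves x' = 3 x^(2/3), x(0) = 1e-6.

def x_exact(t):
    return (t + 0.01) ** 3

def f(x):
    # right-hand side of the ODE
    return 3 * x ** (2 / 3)

# x'(t) = 3 (t + 1/100)^2 should equal f(x(t)) = 3 ((t + 1/100)^3)^(2/3)
t = 0.5
dx = 3 * (t + 0.01) ** 2
assert abs(dx - f(x_exact(t))) < 1e-12

print(x_exact(0))  # 1e-6, the initial condition
print(x_exact(1))  # 1.01^3 = 1.030301
```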

Remark. Check that ##(4)## is in agreement with example ##1##. Note that ##L## there is a large number, of magnitude ##200##.

In the example, we have ##\tilde{x}(0)=x(0)=10^{-6}##, ##\varepsilon=3\cdot10^{-4}## and ##L=200##, so ##(4)## reads, for ##t=1##, $$|\tilde{x}(1)-x(1)|\leq\frac{3\cdot10^{-4}}{200}e^{200|1-0|}\approx 10^{81},\tag7$$ which is a huge number compared to the actual difference ##|x(1)-\tilde{x}(1)|\approx1.03##. Can this be correct?
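Evaluating the right-hand side of ##(4)## directly confirms the order of magnitude (a sketch of my own; since ##\tilde{x}(0)=x(0)##, only the second term survives):

```python
import math

# Right-hand side of (4) at t = 1 with t0 = 0; the first term vanishes
# because the approximate and exact solutions agree at t0.
eps = 3e-4
L = 200.0
bound = (eps / L) * (math.exp(L * 1.0) - 1)
print(f"bound = {bound:.3e}")  # about 1.08e+81

actual = abs((1 + 0.01) ** 3 - 1e-6)
print(f"actual = {actual:.4f}")  # about 1.0303

# The estimate does hold -- it is just astronomically loose here.
assert actual <= bound
```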
 
The problem is that the Lipschitz constant ##L## must satisfy $$L \geq \sup\left\{\left|\frac{f(x) - f(y)}{x - y}\right| : (x,y) \in \Omega^2,\ x \neq y\right\}.$$ (If ##f## is differentiable, then we have $$\lim_{x\to y} \left| \frac{f(x) - f(y)}{x - y}\right| = |f'(y)|,$$ and we would also require ##L \geq \sup |f'(x)|##.) For ##f(x) = 3x^{2/3}## with ##f'(x) = 2x^{-1/3}## and ##\Omega = [10^{-6},1]## this leads to a large Lipschitz constant on the order of ##200## and a not particularly helpful bound on ##|x(t) - \tilde x(t)|## near ##t = 1##; I'm sure this was the point of the example.
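The supremum is easy to check numerically (a sketch assuming the interval ##[10^{-6},1]## from the reply): ##|f'(x)| = 2x^{-1/3}## is decreasing, so it is largest at the left endpoint ##x = 10^{-6}##.

```python
# Lipschitz constant of f(x) = 3 x^(2/3) on [1e-6, 1]:
# |f'(x)| = 2 x^(-1/3) is decreasing, so the sup sits at x = 1e-6.

def fprime(x):
    return 2 * x ** (-1 / 3)

L = fprime(1e-6)
print(L)  # approximately 200 = 2 * (1e-6)^(-1/3)

# Compare with a point further right, where f' is much smaller:
print(fprime(1.0))  # 2.0
```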
 