How Does Precision Affect Error in Euler's Method Calculations?

In summary: The total error is the sum of the algorithmic error and the error due to limited machine precision, which is governed by FLT_EPSILON. When $h$ comes close to FLT_EPSILON, the machine-precision error becomes significant; in single precision FLT_EPSILON has the value 1.19209290E-07F, so as $h$ becomes smaller this error becomes larger. That is why the single-precision errors increase for $N>10^5$, while the double-precision errors are approximately halved each time $N$ doubles, since double precision has a much smaller machine epsilon (the thread takes the machine zero to be $10^{-19}$). For a given $N$, the error can be computed from $|\epsilon^N| = \left|e - \left(1+\frac{1}{N}\right)^N\right|$.
  • #1
evinda
Hello! (Wave)

We consider the initial value problem

$$\left\{\begin{matrix}
y'=y &, 0 \leq t \leq 1 \\
y(0)=1 &
\end{matrix}\right.$$

We apply Euler's method with $h=\frac{1}{N}$ and a large number of steps $N$ in order to calculate the approximation $y^N$ of the value of the solution $y$ at $t^N=1$, i.e. $y(t^N)=y(1)=e$.
The following table lists, for several values of $N$, the errors $|\epsilon^N|=|e-y^N|$ when the calculations are done in single and in double precision.$$\begin{matrix}
N & |\epsilon^N|\text{ Single-precision } & |\epsilon^N| \text{ Double-precision } \\
- & - & - \\
100 & 0.13468 \cdot 10^{-1} & 0.13468 \cdot 10^{-1} \\
200 & 0.67661 \cdot 10^{-2} & 0.67647 \cdot 10^{-2}\\
400 & 0.33917 \cdot 10^{-2} & 0.33901 \cdot 10^{-2}\\
800 & 0.16971 \cdot 10^{-2} & 0.16970 \cdot 10^{-2}\\
1600 & 0.85568 \cdot 10^{-3} & 0.84898 \cdot 10^{-3} \\
\cdots & & \\
102400 & 0.65088 \cdot 10^{-4} & 0.13273 \cdot 10^{-4} \\
204800 & 0.21720 \cdot 10^{-3} & 0.66363 \cdot 10^{-5} \\
409600 & 0.78464 \cdot 10^{-3} & 0.33181 \cdot 10^{-5} \\
819200 & 0.20955 \cdot 10^{-2} & 0.16590 \cdot 10^{-5} \\
\dots
\end{matrix}$$We notice that the errors of the double-precision calculations are approximately halved each time $N$ doubles. However, in the case of single precision, for $N>10^5$ the errors increase!
Indeed, for large enough $N$, the errors in our case tend to $1.71828 \dots$.

Could you explain to me why the errors increase for $N>10^5$ when the calculations are done in single precision, and why they are approximately halved when the calculations are done in double precision? :confused:
Also, how can we calculate the error for a given $N$?
For example, if we have $N=10^5$ then $\epsilon^N=|e-y^{10^5}|=\left |e- \left( 1+ \frac{1}{10^5} \right)^{10^5} \right |$.
How can we calculate the latter, knowing that the zero of the machine is $10^{-6}$ when we have single precision but $10^{-19}$ when we have double precision? (Thinking)
 
  • #2
evinda said:
We notice that the errors of the double-precision calculations are approximately halved each time $N$ doubles. However, in the case of single precision, for $N>10^5$ the errors increase!
Indeed, for large enough $N$, the errors in our case tend to $1.71828 \dots$.

Could you explain to me why the errors increase for $N>10^5$ when the calculations are done in single precision, and why they are approximately halved when the calculations are done in double precision? :confused:

Hey! (Mmm)

The error is the sum of the error due to the algorithm and the error due to limited machine precision.

In C we have [m]FLT_EPSILON[/m] (for single precision), which is the minimum positive number such that 1.0 + FLT_EPSILON != 1.0.
Its value is [m]FLT_EPSILON = 1.19209290E-07F[/m]. (Nerd)

To execute Euler's algorithm, we have to calculate:
$$y^{k+1} = y^{k} + h \cdot f(y^{k}) = y^{k} + h \cdot y^{k}$$
Since $y \in [1, e]$, we can tell that when $h$ comes close to FLT_EPSILON, we will get significant machine-precision errors, and this becomes worse the smaller $h$ becomes.
When $h <$ FLT_EPSILON, we can tell for sure that $y^{k+1}=y^{k}$, meaning we end up with an error of $e-1$.

To estimate the error in double precision, we need to know the algorithmic error.
What is the algorithmic error of Euler's method? (Wondering)
evinda said:
Also, how can we calculate the error for a given $N$?

Do you have a formula for it? Or can you find one? (Wondering)
evinda said:
For example, if we have $N=10^5$ then $\epsilon^N=|e-y^{10^5}|=\left |e- \left( 1+ \frac{1}{10^5} \right)^{10^5} \right |$.
How can we calculate the latter, knowing that the zero of the machine is $10^{-6}$ when we have single precision but $10^{-19}$ when we have double precision? (Thinking)

Make a Taylor expansion? (Thinking)
 
  • #3
I like Serena said:
Hey! (Mmm)

The error is the sum of the error due to the algorithm and the error due to limited machine precision.

In C we have [m]FLT_EPSILON[/m] (for single precision), which is the minimum positive number such that 1.0 + FLT_EPSILON != 1.0.
Its value is [m]FLT_EPSILON = 1.19209290E-07F[/m]. (Nerd)

To execute Euler's algorithm, we have to calculate:
$$y^{k+1} = y^{k} + h \cdot f(y^{k}) = y^{k} + h \cdot y^{k}$$

I see.. (Nod) In my notes, the formula is $y^{k+1}=y^k+hf(t^k,y^k)$...

I like Serena said:
Since $y \in [1, e]$,

With $y$ do you mean the real solution of the initial value problem or the approximation? (Thinking)

I like Serena said:
we can tell that when $h$ comes close to FLT_EPSILON, we will get significant machine-precision errors. This becomes worse the smaller $h$ becomes.

Could you explain it further to me? :confused:

I like Serena said:
To estimate the error in double-precision, we need to know the algorithmic error.
What is the algorithmic error of Euler's method? (Wondering)

Do you mean the local discretization error or the global error? (Thinking)

I like Serena said:
Do you have a formula for it? Or can you find one? (Wondering)

Isn't it the difference $|y(t^N)-y^N|$?

I like Serena said:
Make a Taylor expansion? (Thinking)

So is it as follows? Or am I wrong? (Blush)

$$ \epsilon^N=|e-y^{10^5}|=\left| e- \left( 1+ \frac{1}{10^5}\right)^{10^5}\right|=\left| e- e^{\ln \left( 1+ \frac{1}{10^5}\right)^{10^5} }\right|=\left| e- e^{10^5\ln \left( 1+ \frac{1}{10^5}\right) }\right|=\left| e- e^{10^5 \left( \frac{1}{10^5}- \frac{1}{2\xi^2}\left(\frac{1}{10^5}\right)^2 \right)}\right|, \quad \xi \in \left(1, 1+\frac{1}{10^5}\right)$$
 
  • #4
evinda said:
With $y$ do you mean the real solution of the initial value problem or the approximation? (Thinking)

Any one of them.
The initial value for $y$ is $y(0) = y^0 = 1$.
The actual solution is $y=e^t$.
Furthermore, $t$ moves from 0 to 1 with steps of $\frac 1 N$, ending in $y=e$.
Since Euler's method is underestimating, every actual value of $y$ and any approximation of $y$ is between $1$ and $e$. (Thinking)
evinda said:
Could you explain it further to me? :confused:

Let's start with $y^1$:
$$y^1 = y^0 + h \cdot y^0 = 1 + h \cdot 1 = 1 + h$$
FLT_EPSILON is defined as the smallest single-precision value such that $1 = 1 +$ FLT_EPSILON.
So if $h =$ FLT_EPSILON, then it follows that $y^1 = y^0$. (Mmm)
evinda said:
Do you mean the local discretization error or the global error? (Thinking)

I meant the global error.

According to Taylor, our function $y$ can be written as:
$$y(t+h) = y(t) + h\cdot y'(t) + \frac 1 2 h^2 y''(t+\theta h)$$
for some $0 \le \theta \le 1$.

Euler's method consists of those first 2 terms:
$$y^{k+1} = y^k + h \cdot y'(t^k)$$
That means that the global error $\Delta y^{k+1}$ in an iteration is the sum of the error in the previous iteration plus the remainder term of Taylor's expansion:
$$\Delta y^{k+1} = \Delta y^k + \frac 1 2 h^2 y''(t+\theta h)$$
It allows you to find lower and upper bounds for the global error. (Nerd)
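Unrolling that recursion gives a concrete bound for this particular problem (my own worked step, not from the thread, using $\Delta y^0 = 0$ and $y''(t)=e^t \le e$ on $[0,1]$):

```latex
\Delta y^N \;=\; \sum_{k=0}^{N-1} \tfrac{1}{2} h^2\, y''(t^k+\theta_k h)
\;\le\; N \cdot \tfrac{1}{2} h^2 \cdot e
\;=\; \frac{e\,h}{2} \;=\; \frac{e}{2N}.
```

For $N=102400$ this gives about $1.33 \cdot 10^{-5}$, which matches the double-precision column of the table, and halving $h$ halves the bound, which is exactly the halving behaviour observed.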
evinda said:
Isn't it the difference $|y(t^N)-y^N|$?

Yes. (Smile)
evinda said:
So is it as follows? Or am I wrong? (Blush)

$$ \epsilon^N=|e-y^{10^5}|=\left| e- \left( 1+ \frac{1}{10^5}\right)^{10^5}\right|=\left| e- e^{\ln \left( 1+ \frac{1}{10^5}\right)^{10^5} }\right|=\left| e- e^{10^5\ln \left( 1+ \frac{1}{10^5}\right) }\right|=\left| e- e^{10^5 \left( \frac{1}{10^5}- \frac{1}{2\xi^2}\left(\frac{1}{10^5}\right)^2 \right)}\right|, \quad \xi \in \left(1, 1+\frac{1}{10^5}\right)$$

I meant a binomial expansion:
$$(1+h)^n = 1 + nh + \binom n 2 h^2 + ... + h^n$$
(Wasntme)
 
  • #5
I like Serena said:
Any one of them.
The initial value for $y$ is $y(0) = y^0 = 1$.
The actual solution is $y=e^t$.
Furthermore, $t$ moves from 0 to 1 with steps of $\frac 1 N$, ending in $y=e$.
Since Euler's method is underestimating, every actual value of $y$ and any approximation of $y$ is between $1$ and $e$. (Thinking)

So, isn't it possible that the approximation $y$ gets values below $1$? (Thinking)
I like Serena said:
Let's start with $y^1$:
$$y^1 = y^0 + h \cdot y^0 = 1 + h \cdot 1 = 1 + h$$
FLT_EPSILON is defined as the smallest single-precision value such that $1 = 1 +$ FLT_EPSILON.
So if $h =$ FLT_EPSILON, then it follows that $y^1 = y^0$. (Mmm)

Isn't [m]FLT_EPSILON[/m] defined as the smallest single-precision value such that [m]1+FLT_EPSILON != 1[/m] ?

If so, then we have that $y^1=y^0$ if [m]h < FLT_EPSILON [/m].

At post #2, you said "we can tell that when h comes closer to FLT_EPSILON, we will get significant machine precision errors. This will become worse the smaller h becomes".
This holds when [m]h > FLT_EPSILON [/m], right?
But why do we get in this case significant machine precision errors and not when [m]h < FLT_EPSILON[/m], where $y^i=1, \forall i$ ($y$ is considered to be constant)?
I like Serena said:
I meant the global error.

According to Taylor, our function $y$ can be written as:
$$y(t+h) = y(t) + h\cdot y'(t) + \frac 1 2 h^2 y''(t+\theta h)$$
for some $0 \le \theta \le 1$.

Euler's method consists of those first 2 terms:
$$y^{k+1} = y^k + h \cdot y'(t^k)$$
That means that the global error $\Delta y^{k+1}$ in an iteration is the sum of the error in the previous iteration plus the remainder term of Taylor's expansion:
$$\Delta y^{k+1} = \Delta y^k + \frac 1 2 h^2 y''(t+\theta h)$$
It allows you to find lower and upper bounds for the global error. (Nerd)

According to my notes, the error $\epsilon^n:=y(t^n)-y^n, n=0, \dots, N$ is called discretization error or error of approximation of the method. It is also called global error.

$$\delta^n:=[y(t^n)+hf(t^n,y(t^n))]-y(t^{n+1})$$

is called local error of Euler method at the point $t^n, n=0, \dots, N-1$.
We suppose that the solution is known at the point $t^n$. Making with Euler method only one step, beginning from the point $(t^n, y(t^n))$, we get an approximation of the value $y(t^{n+1})$.
The difference of this approximation minus the exact solution $y(t^{n+1})$ is equal to the local error.

Using the Taylor expansion we have:

$$\delta^n=\left [y(t^n)+hf(t^n,y(t^n)) \right]- \left[ y(t^n)+hy'(t^n)+ \frac{h^2}{2} y''(\xi^n)\right]$$

with $\xi^n \in (t^n, t^{n+1})$. Consequently, we get $\delta^n=-\frac{h^2}{2}y''(\xi^n)$.

So do you mean the local error? :confused:
I like Serena said:
I meant a binomial expansion:
$$(1+h)^n = 1 + nh + \binom n 2 h^2 + ... + h^n$$
(Wasntme)
So, do we have to do it like that?$$\epsilon^N=|e-y^{10^5}|=\left |e-\left (1+\frac{1}{10^5}\right )^{10^5}\right |=\left |e-\left (1+10^5 \cdot \frac{1}{10^5}+\binom{10^5}{2}\left (\frac{1}{10^5}\right )^2+\dots +\left (\frac{1}{10^5}\right )^{10^5}\right )\right |=\left |e-\left (2+\binom{10^5}{2}\left (\frac{1}{10^5}\right )^2+\dots +\left (\frac{1}{10^5}\right )^{10^5}\right )\right |$$If so, how could we continue?
 
  • #6
evinda said:
So, isn't it possible that the approximation $y$ gets values below $1$? (Thinking)

Nope. Can you tell why? (Wondering)

evinda said:
Isn't [m]FLT_EPSILON[/m] defined as the smallest single-precision value such that [m]1+FLT_EPSILON != 1[/m] ?

If so, then we have that $y^1=y^0$ if [m]h < FLT_EPSILON [/m].

Yep. You're right. :eek:
evinda said:
At post #2, you said "we can tell that when h comes closer to FLT_EPSILON, we will get significant machine precision errors. This will become worse the smaller h becomes".
This holds when [m]h > FLT_EPSILON [/m], right?
But why do we get in this case significant machine precision errors and not when [m]h < FLT_EPSILON[/m], where $y^i=1, \forall i$ ($y$ is considered to be constant)?

We also get significant errors when [m]h < FLT_EPSILON[/m]. In that case the error reaches its maximum and stays there: however small we make $h$, every iteration just yields the same number instead of approaching the solution, so the error is stuck at $e-1$. (Nerd)
evinda said:
According to my notes, the error $\epsilon^n:=y(t^n)-y^n, n=0, \dots, N$ is called discretization error or error of approximation of the method. It is also called global error.

I had to look up what discretization error meant. Apparently it is exactly the same thing as global error. (Mmm)
evinda said:
$$\delta^n:=[y(t^n)+hf(t^n,y(t^n))]-y(t^{n+1})$$

is called local error of Euler method at the point $t^n, n=0, \dots, N-1$.
We suppose that the solution is known at the point $t^n$. Making with Euler method only one step, beginning from the point $(t^n, y(t^n))$, we get an approximation of the value $y(t^{n+1})$.
The difference of this approximation minus the exact solution $y(t^{n+1})$ is equal to the local error.

Using the Taylor expansion we have:

$$\delta^n=\left [y(t^n)+hf(t^n,y(t^n)) \right]- \left[ y(t^n)+hy'(t^n)+ \frac{h^2}{2} y''(\xi^n)\right]$$

with $\xi^n \in (t^n, t^{n+1})$. Consequently, we get $\delta^n=-\frac{h^2}{2}y''(\xi^n)$.

So do you mean the local error? :confused:

No. (Shake)

The local error is $\frac 1 2 h^2 y''(\xi)$.
The global error is $\Delta y^k + \frac 1 2 h^2 y''(\xi)$, where $\Delta y^k$ is the global error in iteration $k$.

evinda said:
So, do we have to do it like that?$$\epsilon^N=|e-y^{10^5}|=\left |e-\left (1+\frac{1}{10^5}\right )^{10^5}\right |=\left |e-\left (1+10^5 \cdot \frac{1}{10^5}+\binom{10^5}{2}\left (\frac{1}{10^5}\right )^2+\dots +\left (\frac{1}{10^5}\right )^{10^5}\right )\right |=\left |e-\left (2+\binom{10^5}{2}\left (\frac{1}{10^5}\right )^2+\dots +\left (\frac{1}{10^5}\right )^{10^5}\right )\right |$$If so, how could we continue?

This can be evaluated, even with the limited single precision, without making significant errors. (Wasntme)
 
  • #7
I like Serena said:
Nope. Can you tell why? (Wondering)

Because the real solution is increasing, so the approximation will also be increasing? Right? (Thinking)
I like Serena said:
We also get significant errors when [m]h < FLT_EPSILON[/m]. In that case the error is guaranteed to take on a maximum that will no longer increase, since however small we make $h$, the error will be the same since every iteration will just yield the same number instead of approaching the solution. (Nerd)

I still don't understand why we have significant machine precision errors when [m]h > FLT_EPSILON[/m].
Could you explain it further to me?
I like Serena said:
No. (Shake)

The local error is $\frac 1 2 h^2 y''(\xi)$.
The global error is $\Delta y^k + \frac 1 2 h^2 y''(\xi)$, where $\Delta y^k$ is the global error in iteration $k$.

With $\Delta y^{k+1}$ do you mean the difference $y(t+h)-y^{k+1}$ ?
I like Serena said:
This can be evaluated, even with the limited single precision, without making significant errors. (Wasntme)

So do we have to calculate the binomial coefficient of $10^5$ and $2$ and then round off, then calculate $\left( \frac{1}{10^5} \right)^2$ and round off and then find their product and round off and so on? (Thinking)
 
  • #8
evinda said:
Because the real solution is increasing, so the approximation will also be increasing? Right? (Thinking)

Yes. (Smile)

Moreover, since both the first and second derivative are greater than zero in the interval [0,1], Euler's method will be increasing everywhere, and underestimating everywhere. (Nerd)
evinda said:
I still don't understand why we have significant machine precision errors when [m]h > FLT_EPSILON[/m].
Could you explain it further to me?

Suppose FLT_EPSILON=0.0015, h=0.0020, and we start with y=1.
Then after the first step we will have 1.0015 instead of 1.0020.
That is, a local error that is already 25% away from what Euler's method should do. (Thinking)

And if we have reached y=2, in the next step we should get 2.0020, but it will be truncated to 2, which is 100% away from what Euler's method is supposed to do.
evinda said:
With $\Delta y^{k+1}$ do you mean the difference $y(t+h)-y^{k+1}$ ?

Yes. This is the global error. (Mmm)
evinda said:
So do we have to calculate the binomial coefficient of $10^5$ and $2$ and then round off, then calculate $\left( \frac{1}{10^5} \right)^2$ and round off and then find their product and round off and so on? (Thinking)

Yes. (Wasntme)
 
  • #9
The machine epsilon is $10^{-6}$.

It is the minimum positive number $\text{ EPSILON }$ such that $1.0+\text{ EPSILON } != 1.0$.

To execute Euler's algorithm, we have to calculate: $$y^{k+1}=y^k+hf(t^k, y^k) \Rightarrow y^{k+1}=y^k+hy^k \Rightarrow y^{k+1}=(1+h)y^k$$

$\text{ EPSILON }$ is the smallest number such that $1+\text{ EPSILON } !=1$, that means that when $h<\text{ EPSILON }=10^{-6} \Rightarrow \frac{1}{N}<\frac{1}{10^6} \Rightarrow N>10^6$ we have that $1+h=1$.

So we have that $$y^{k+1}=y^k=y^{k-1}= \dots =y^1=y^0=1 \ \ , \ \ \text{ for } N>10^6$$

The error is $$\epsilon^N=|e-y^N|=|e-1| \underset{N\rightarrow \infty}{ \rightarrow } e-1=2.71828-1=1.71828$$

Is this right?

But how do we get the number $10^5$ to conclude that for $N>10^5$ the errors increase? (Thinking)
 
  • #10
evinda said:
Is this right?

But how do we get the number $10^5$ to conclude that for $N>10^5$ the errors increase? (Thinking)

Yep. Looks right. (Smile)

I think $10^5$ is just an arbitrary number that is somewhat close to the $10^6$ where approximations break down completely. (Wasntme)
 

