Solving an IVP: Finding Euler Approximations & Errors

In summary, the conversation revolved around a challenging calculus problem: using Euler's method to approximate the solution of a first-order initial value problem. The participants discussed various methods and techniques, including the Fundamental Theorem of Calculus, approximate integration, and a numeric scheme for solving the problem. The conversation also touched on the difficulty of evaluating definite integrals and the usefulness of approximation methods in various fields. It ended with a summary of the problem and a reminder to use the correct variables in calculations.
  • #1
alane1994
Here is my problem, I have been trying this for a couple of hours. I have sought help with a professor, and yet we still couldn't get it. Here is the question in full.

Consider the initial value problem below to answer the following.
a) Find the approximations to [tex]y(0.2)[/tex] and [tex]y(0.4)[/tex] using Euler's method with time steps of [tex]\Delta{t}=0.2,\ 0.1,\ 0.05,\ \text{and}\ 0.025[/tex]
b)Using the exact solution given, compute the errors in the Euler approximations at [tex]t=0.2[/tex] and [tex]t=0.4[/tex].
c)Which time step results in the more accurate approximation? Explain your observations.
d)In general, how does halving the time step affect the error at [tex]t=0.2[/tex] and [tex]t=0.4[/tex]?

[tex]y^{\prime}(t)=-2y[/tex], [tex]y(0)=1[/tex], [tex]y(t)=e^{-2t}[/tex]

I am approaching the point of crying because nothing that I do seems to work... I would put my work so far... but I have about 4 pages of it, and that would just be a waste of time for me to type all of that out. Any and all help would be appreciated...

I am confused as to how to start, and I don't expect work to be done for me... I just need liberal amounts of guidance to get me on my way.
 
  • #2
Using the variables defined in this problem, you have a point $(t, y(t))$. The derivative is expressed in terms of $y$ but for calculations you'll need to write it in terms of $t$ I believe. So I would rewrite the derivative as $y'(t)=-2e^{-2t}$. This is the derivative you get by differentiating $y(t)=e^{-2t}$ so everything makes sense so far.

Let's say $s_0=y(0)$, $s_1=s_0+(\Delta t)\, y'(0)$ and more generally $s_{n+1}=s_n+(\Delta t)\, y'(t_n)$. I hope that makes sense. These aren't normally the variables I see used in books.

Let's get from $t=0$ to $t=0.2$ with $\Delta t = 0.2$.

$s_0=y(0)=1$

$s_1=y(0)+(0.2)\,y'(0)=1+(0.2)(-2)=0.6$

I believe that's how it's done, but it's been a while. Where are you having problems?
 
  • #3
Here are some excerpts taken from notes I wrote on the topic when I was a student:

The Fundamental Theorem of Calculus (FTOC) provides the vital connecting link between the two tools of calculus: differentiation and integration. Use of this link offers a way to demonstrate that the methods for approximating definite integrals and the numerical methods for approximating solutions to first order ordinary initial value problems are mathematically equivalent. To illustrate this, let's begin with the so-called derivative form of the FTOC:

The existence of an indefinite integral of $\displaystyle f(x)$, i.e., of a function $\displaystyle F(x)$ whose derivative is $\displaystyle f(x)$ is given by:

(1) $\displaystyle \frac{d}{dx}\int_a^x f(u)\,du=f(x)$ or $\displaystyle F(x)+C=\int_a^x f(u)\,du$, where $C$ is a constant.

It should be remarked that the variable u in the symbol $\displaystyle f(u)\,du$ given in (1) is only a "dummy variable." For the variable of integration, whatever it may be called, is "integrated out" and disappears. Having established this, let's go on to define:

$\displaystyle F(x_1)+C=\int_a^{x_1} f(u)\,du$ and $\displaystyle F(x_2)+C=\int_a^{x_2} f(u)\,du$ where $\displaystyle x_1<x_2$

Then we have:

$\displaystyle \left(F(x_1)+C \right)-\left(F(x_2)+C \right)=\left(\int_a^{x_1} f(u)\,du \right)-\left(\int_a^{x_2} f(u)\,du \right)$

$\displaystyle F(x_1)-F(x_2)=\left(\int_a^{x_1} f(u)\,du \right)-\left(\int_a^{x_1} f(u)\,du+\int_{x_1}^{x_2} f(u)\,du \right)=-\int_{x_1}^{x_2} f(u)\,du$

Negating both sides gives:

(2) $\displaystyle \int_{x_1}^{x_2} f(u)\,du=F(x_2)-F(x_1)$

This is the anti-derivative form of the FTOC, and from this we may also conclude:

$\displaystyle \int_a^b f(u)\,du=-\int_b^a f(u)\,du$

We have tacitly assumed that $\displaystyle f(x)$ is continuous on $\displaystyle [a,b]$.

While the existence of indefinite and definite integrals of continuous functions is established, the technique of finding them may be far from simple. In fact in many cases the integrals of elementary functions cannot be expressed in terms of elementary functions themselves. For example, consider the simple functions:

$\displaystyle f(x)=\sqrt{x^3+1}$ and $\displaystyle f(x)=e^{x^2}$

While there are substitutions and techniques to transform many into forms found in a finite table of integrals, the evaluation of a definite integral may prove exceedingly difficult if not impossible. It therefore becomes useful to develop methods for approximating definite integrals, as they find wide application in physical problems. Besides its use for plane areas, the definite integral is used for volumes, arc lengths, surface areas, rectilinear motion, center of mass, moments of inertia, electrostatic and gravitational potentials, liquid pressure, biology, economics, etc.

Approximate Integration

Suppose we have some approximation or numeric scheme A such that:

(4) $\displaystyle A\approx\int_a^b f(x,y)\,dx$, where $\displaystyle \frac{dy}{dx}=f(x,y)$

Since the slope of y is assumed to be continuous on the interval $\displaystyle [a,b]$, given some initial value for y, we may suppose that an explicit relationship between x and y can be found within the rectangle $\displaystyle (a,y(a))-(b,y(b))$ by the uniqueness and existence theorem.

Now, also suppose we have divided the interval $\displaystyle [a,b]$ into n sub-intervals of equal width where:

$\displaystyle a\le x_k<x_{k+1}\le b$ and $\displaystyle x_0=a,\,x_n=b,\,0\le k\le n-1$.

Since by the additive property we have:

$\displaystyle \int_a^b f(x,y)\,dx=\sum_{k=0}^{n-1}\left[\int_{x_k}^{x_{k+1}} f(x,y)\,dx \right]$ we may define:

(5) $\displaystyle A=\sum_{k=0}^{n-1}A_k$ where $\displaystyle A_k\approx\int_{x_k}^{x_{k+1}} f(x,y)\,dx$

If the approximation converges, then we must have:

$\displaystyle \lim_{n\to\infty}\left[\sum_{k=0}^{n-1}A_k \right]=\int_a^b f(x,y)\,dx$

By defining $\displaystyle y_k\equiv y(x_k)$ and applying (2) we have:

$\displaystyle A_n\approx y_{n+1}-y_n$

Solving for $\displaystyle y_{n+1}$ gives rise to the numeric scheme:

(6) $\displaystyle y_{n+1}\approx y_n+A_n$

Thus, the approximating summation for the definite integral has become an approximation to the first order IVP:

(6a) $\displaystyle \frac{dy}{dx}=f(x,y)$, $\displaystyle y(x_0)=y_0$

We now have a means of approximating the solution $\displaystyle y(x)$ at $\displaystyle x=x_n$ where $\displaystyle a\le x_n\le b$.

Riemann Sums and the Approximation Method of Euler

In my opinion, one of the most straightforward approximation methods available for definite integrals is the Riemann sum with regular partitions, which approximates the definite integral with a series of rectangles of equal width whose heights are the function's values at the left endpoints $\displaystyle x_n$. But, as usual, the tradeoff for the simplicity of the method is that it does not converge very rapidly.

First, approximate the following definite integral using a Riemann sum of one partition:

(10) $\displaystyle \int_{x_n}^{x_{n+1}} f(x,y)\,dx\approx\Delta x\cdot f(x_n,y_n)$, where $\displaystyle \Delta x=x_{n+1}-x_n$

which leads to the numeric scheme:

(11) $\displaystyle y_{n+1}\approx y_n+\Delta x\cdot f(x_n,y_n)$, $\displaystyle y_0=y(x_0)$

where $\displaystyle A_n=\Delta x\cdot f(x_n,y_n)$, which is base $\displaystyle \Delta x$ times height $\displaystyle f(x_n,y_n)$. This is an example of an explicit scheme, i.e., it can be solved directly for $\displaystyle y_{n+1}$.

Approximation Method of Euler (tangent line method)

When we use the direction field method to sketch a particular solution to an IVP, we try to visualize the intermediate directions between the isoclines we have drawn. If we follow a finite number of these directions, the sketch becomes a polygonal curve or chain of line segments. This polygonal curve is, visually speaking, an approximation to the solution. We can construct values $\displaystyle y_n$ that approximate the solution values $\displaystyle y(x_n)$ as follows:

One method we may use to demonstrate the derivation of Euler's method is through the use of the differential to obtain a linear approximation (the tangent line). Another method would be to use the point-slope formula or Taylor formula of order 1. At the point $\displaystyle (x_n,y_n)$, the slope of the solution is given by:

$\displaystyle \frac{dy}{dx}=f(x_n,y_n)$

Recall that the definition of the differential of the dependent variable is $\displaystyle \Delta y\approx\Delta x\frac{dy}{dx}$

Using $\displaystyle \Delta y=y_{n+1}-y_{n}$ and $\displaystyle \frac{dy}{dx}=f(x_n,y_n)$, this yields the recursive formula:

(12) $\displaystyle y_{n+1}\approx y_n+\Delta x\cdot f(x_n,y_n)$

Thus, the solution at $\displaystyle x=x_n$ may be approximated by (12). This is equivalent to (11), showing that the Riemann sums and approximation method of Euler are equivalent.
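The equivalence can be checked numerically. Below is a minimal Python sketch (not from the notes; the function names are mine) for the special case where $f$ depends only on $x$, so the Euler iterates and the left-endpoint Riemann sum can be compared directly:

```python
# Sketch: for dy/dx = f(x), Euler's method accumulates exactly the
# left-endpoint Riemann sum of f, illustrating the equivalence above.

def left_riemann(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] with n equal sub-intervals."""
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

def euler(f, a, b, y0, n):
    """Euler's method for y' = f(x), y(a) = y0; returns the approximation to y(b)."""
    dx = (b - a) / n
    y = y0
    for k in range(n):
        y += dx * f(a + k * dx)  # y_{k+1} = y_k + dx * f(x_k)
    return y

# Example: y' = 2x, y(0) = 0, so y(1) = 1 exactly.
print(euler(lambda x: 2 * x, 0.0, 1.0, 0.0, 100))    # ≈ 0.99
print(left_riemann(lambda x: 2 * x, 0.0, 1.0, 100))  # ≈ 0.99, same value
```

Both calls return the same number because each Euler update adds exactly one rectangle of the Riemann sum.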

In a similar fashion, the trapezoidal method and the improved Euler's method are analogs, as are the mid-point rule and the second order Runge-Kutta method, and Simpson's Rule and the fourth order Runge-Kutta method, but I will leave these for later. (Happy)
 
  • #4
CORRECT
[tex]\Delta{t}[/tex] | Approximation to y(0.2) | Approximation to y(0.4)
0.2 | 0.60000 | 0.36000
0.1 | 0.64000 | 0.40960
0.05 | 0.65610 | 0.43047
0.025 | 0.66342 | 0.44013

This is the "correct" answer... however, I got this answer...
MY ANSWER
[tex]\Delta{t}[/tex] | Approximations to y(0.2) | Approximations to y(0.4)
0.2 | 0.80000 | 0.64000
0.1 | 0.81000 | 0.65610
0.05 | 0.81451 | 0.66342
0.025 | 0.81665 | 0.66692

I cannot tell where I went wrong... I am so confused right now. As well as a little angry that I have to do this crap for Calculus I...

My Calculus class is a little messed up... We have covered the trapezoidal rule, Simpson's rule, and other things that I have been told are typically in Calculus II...
 
  • #5
alane1994 said:
CORRECT
[tex]\Delta{t}[/tex] | Approximation to y(0.2) | Approximation to y(0.4)
0.2 | 0.60000 | 0.36000
0.1 | 0.64000 | 0.40960
0.05 | 0.65610 | 0.43047
0.025 | 0.66342 | 0.44013

This is the "correct" answer... however, I got this answer...
MY ANSWER
[tex]\Delta{t}[/tex] | Approximations to y(0.2) | Approximations to y(0.4)
0.2 | 0.80000 | 0.64000
0.1 | 0.81000 | 0.65610
0.05 | 0.81451 | 0.66342
0.025 | 0.81665 | 0.66692

I cannot tell where I went wrong... I am so confused right now. As well as a little angry that I have to do this crap for Calculus I...

My Calculus class is a little messed up... We have covered the trapezoidal rule, Simpson's rule, and other things that I have been told are typically in Calculus II...
Consider \(\Delta t=0.2\), then \(y(0.2) = y(0)+\Delta t\times y'(0) = 1-0.2 \times 2=0.6\) and \(y(0.4)=y(0.2)+\Delta t \times y'(0.2)=0.6-0.2\times 1.2=0.36\)

CB
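That recursion is easy to automate. Here is a minimal Python sketch (illustrative only; the function name is mine) that reproduces the table of "correct" answers above:

```python
# Euler's method for the thread's IVP: y'(t) = -2y, y(0) = 1.
def euler_y(t_end, dt):
    """Approximate y(t_end) using Euler steps of size dt."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-2.0 * y)  # y_{n+1} = y_n + dt * y'(t_n), with y' = -2y
    return y

for dt in (0.2, 0.1, 0.05, 0.025):
    print(dt, round(euler_y(0.2, dt), 5), round(euler_y(0.4, dt), 5))
# The dt = 0.2 row gives 0.6 and 0.36, matching the hand calculation above.
```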
 
  • #6
CaptainBlack said:
Consider \(\Delta t=0.2\), then \(y(0.2) = y(0)+\Delta t\times y'(0) = 1-0.2 \times 2=0.6\) and \(y(0.4)=y(0.2)+\Delta t \times y'(0.2)=0.6-0.2\times 1.2=0.36\)

CB

How do you calculate [tex]y^{\prime}(0)[/tex]?
I am confused as to how it is =2
 
  • #7
OK, I have a similar problem to try.
Instead of y(0.2) & y(0.4), it is y(0.4) & y(0.8).
[tex]\Delta{t}[/tex] | Approximations of y(0.4) | Approximations of y(0.8)
0.4 | 0.20000 | ?
0.2 | |
0.1 | |
0.05 | |

So... for the first blank I have 0.2. Is that correct? And how would you set up the second blank? I have this started.

[tex]y(0.8)=y(0.4)+(\Delta{t} \times y^{\prime}(0.4))[/tex]
[tex]y(0.8)=0.2-(0.4 \times ...[/tex]

I am unsure how to calculate [tex]y^{\prime}(0.4)[/tex]
 
  • #8
alane1994 said:
How do you calculate [tex]y^{\prime}(0)[/tex]?
I am confused as to how it is =2

It comes from the statement of the problem: the ODE we are solving is \(y'(t)=-2y(t)\) with initial condition \(y(0)=1\), so \(y'(0)=-2y(0)=-2 \times 1=-2\)

CB
 
  • #9
Jameson said:
Ok, then $y'(t)$ can be expressed as $y'(t)=-2y$ or $y'(t)=-2e^{-2t}$. Use the second one to plug in values. You want to find $y'(0.4)$. Plugging that in, you get \(\displaystyle y'(0.4)=-2e^{-0.8}\). You'll want a decimal approximation of that.

Make sense?

You should not be using the \(y(t)=e^{-2t}\) information at all; it is the solution to the initial value problem and is only there for you to compare the numerical solution against.

CB
 
  • #10
CaptainBlack said:
You should not be using the \(y(t)=e^{-2t}\) information at all, it is the solution to the initial value problem and is only there for you to use to compare the numerical solution with.

CB

Fair enough, and good point. I am used to seeing these problems with the derivative explicitly given, but I agree that using $y(t)$ to help with the computations is too much. Thanks for clearing that up, guys.
 

FAQ: Solving an IVP: Finding Euler Approximations & Errors

What is an Initial Value Problem (IVP)?

An Initial Value Problem (IVP) is a type of mathematical problem that involves finding a solution to a differential equation with a given set of initial conditions. It is often used in physics, engineering, and other fields to model real-world phenomena.

What is an Euler approximation?

An Euler approximation is a numerical method used to approximate the solution to an IVP. It involves breaking the interval of interest into smaller subintervals and using a linear approximation to estimate the value of the solution at each point. The smaller the subintervals, the more accurate the approximation will be.

How do you find Euler approximations?

To find Euler approximations, you need a differential equation and an initial condition. Then you can use the Euler method to calculate approximate values of the solution at each point via the formula $y_{n+1} = y_n + h\,f(x_n, y_n)$, where $h$ is the step size and $f(x,y)$ is the right-hand side of the differential equation $y' = f(x,y)$.
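As a minimal sketch (names are illustrative, not from the thread), the formula translates directly into code:

```python
# Apply y_{n+1} = y_n + h * f(x_n, y_n) for n steps.
def euler_solve(f, x0, y0, h, n):
    """Return the list [y_0, y_1, ..., y_n] of Euler iterates."""
    x, ys = x0, [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(x, ys[-1]))
        x += h
    return ys

# The thread's IVP: y' = -2y, y(0) = 1, h = 0.2; two steps reach t = 0.4.
print(euler_solve(lambda x, y: -2 * y, 0.0, 1.0, 0.2, 2))  # roughly [1.0, 0.6, 0.36]
```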

What is the truncation error in Euler approximations?

The truncation error in Euler approximations is the difference between the exact solution and the approximate solution calculated using the Euler method. It is caused by the linear approximation used in the method and can be reduced by using smaller step sizes.

How can you minimize the error in Euler approximations?

The error in Euler approximations can be minimized by using smaller step sizes. This means breaking the interval into more subintervals and calculating the approximate solution at more points. However, this also increases computation time and effort. Alternatively, more accurate numerical methods, such as the Runge-Kutta method, can be used to minimize the error.
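A quick numerical check (a sketch using this thread's IVP, not a general proof) shows the first-order behavior: halving the step size roughly halves the error.

```python
import math

def euler_error(t, dt):
    """Absolute Euler error at time t for y' = -2y, y(0) = 1, versus y = exp(-2t)."""
    y = 1.0
    for _ in range(round(t / dt)):
        y += dt * (-2.0 * y)
    return abs(y - math.exp(-2.0 * t))

e_coarse = euler_error(0.4, 0.05)
e_fine = euler_error(0.4, 0.025)
print(e_coarse / e_fine)  # close to 2: halving dt about halves the error
```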
