Finite difference method-convergence

In summary, the conversation discusses implementing a finite difference method for a fourth order problem in C and getting stuck at the rate of convergence. The formula for the rate of convergence is provided, as well as the functions and conditions for the problem. The conversation then covers finding the solutions for $y$ and $w$, and how the error behaves when the step size is halved. A sign discrepancy in the provided $f(x)$ function is uncovered, which the professor later confirms to be an error.
  • #1
mathmari
Hello! :)
I am implementing the finite difference method in a program in C and I got stuck at the rate of convergence. The formula is [tex] \frac{\log(e_{1}/e_{2})}{\log(J_{2}/J_{1})} [/tex], right? Here [tex] e_{i}=\max_{0 \leq j \leq J_{i}}|y_{j}^{J_{i}}-y(x_{j}^{J_{i}})| [/tex]. How can I find [tex] J_{1} [/tex] and [tex] J_{2} [/tex]?
 
  • #2
I would also like to know how the error changes as the step size gets smaller. Does it decrease or increase?
 
  • #3
mathmari said:
Hello! :)
I am implementing the finite difference method in a program in C and I got stuck at the rate of convergence. The formula is [tex] \frac{\log(e_{1}/e_{2})}{\log(J_{2}/J_{1})} [/tex], right? Here [tex] e_{i}=\max_{0 \leq j \leq J_{i}}|y_{j}^{J_{i}}-y(x_{j}^{J_{i}})| [/tex]. How can I find [tex] J_{1} [/tex] and [tex] J_{2} [/tex]?

Which finite difference method do you mean?
I looked it up on wiki and found a family of methods.
In particular wiki does not refer to $J_i$ or $y_{j}^{J_{i}}$.
Can you provide some more information?
 
  • #4
I like Serena said:
Which finite difference method do you mean?
I looked it up on wiki and found a family of methods.
In particular wiki does not refer to $J_i$ or $y_{j}^{J_{i}}$.
Can you provide some more information?

I want to implement a finite difference method for a fourth order problem.
$$y''''(x)-q(x)y''(x)=f(x), \forall x \in [0,1]$$
$$y(0)=y(1)=y''(0)=y''(1)=0$$
where $f:[0,1] \rightarrow \mathbb{R}$, $q:[0,1] \rightarrow \mathbb{R}$, $q(x) \geq 0, \forall x \in [0,1]$

To find the solution $y$, we consider the function $w:[0,1] \rightarrow \mathbb{R}$, where $w(x)=y''(x)$, so we have the following problem:
$w''(x)-q(x)w(x)=f(x), \forall x \in [0,1]$ (1)
$w(0)=w(1)=0$
and
$y''(x)=w(x), \forall x \in [0,1]$
$y(0)=y(1)=0$

For problem (1), when we halve the step, the error gets divided by 4, right?

But what happens for the whole problem??
 
  • #5
mathmari said:
I want to implement a finite difference method for a fourth order problem.
$$y''''(x)-q(x)y''(x)=f(x), \forall x \in [0,1]$$
$$y(0)=y(1)=y''(0)=y''(1)=0$$
where $f:[0,1] \rightarrow \mathbb{R}$, $q:[0,1] \rightarrow \mathbb{R}$, $q(x) \geq 0, \forall x \in [0,1]$

To find the solution $y$, we consider the function $w:[0,1] \rightarrow \mathbb{R}$, where $w(x)=y''(x)$, so we have the following problem:
$w''(x)-q(x)w(x)=f(x), \forall x \in [0,1]$ (1)
$w(0)=w(1)=0$
and
$y''(x)=w(x), \forall x \in [0,1]$
$y(0)=y(1)=0$

For problem (1), when we halve the step, the error gets divided by 4, right?

That sounds reasonable, but how did you get it?
Seems to me you have to write it as a difference equation and solve it before you can say anything...
 
  • #6
I like Serena said:
That sounds reasonable, but how did you get it?
Seems to me you have to write it as a difference equation and solve it before you can say anything...

The solution of the original differential equation is given: $y(x)=\sin(\pi x)$.
The functions $f,q$ are also given: $f(x)=\pi^4 \sin(\pi x)-\pi^2 q(x) \sin(\pi x)$, $q(x)=\sin^2(x)$.

The finite difference method for $w$ is:
$$\frac{W_{j-1}-2W_j+W_{j+1}}{h^2}-q(x_j)W_j=f(x_j), j=1,...,J-1$$
$$W_0=W_J=0$$

where $W_j$ is the approximation of $w(x_j)$.

The finite difference method for $y$ is:
$$\frac{Y_{j-1}-2Y_j+Y_{j+1}}{h^2}=W_j, j=1,...,J-1$$
$$Y_0=Y_J=0$$

where $Y_j$ is the approximation of $y(x_j)$.

In another exercise there was a finite difference method for a second order two-point problem, and when I halved the step, the error got divided by 4. So I thought the same would hold for all second order problems.
 
  • #7
mathmari said:
The solution of the original differential equation is given: $y(x)=\sin(\pi x)$.
The functions $f,q$ are also given: $f(x)=\pi^4 \sin(\pi x)-\pi^2 q(x) \sin(\pi x)$, $q(x)=\sin^2(x)$.

The finite difference method for $w$ is:
$$\frac{W_{j-1}-2W_j+W_{j+1}}{h^2}-q(x_j)W_j=f(x_j), j=1,...,J-1$$
$$W_0=W_J=0$$

where $W_j$ is the approximation of $w(x_j)$.

The finite difference method for $y$ is:
$$\frac{Y_{j-1}-2Y_j+Y_{j+1}}{h^2}=W_j, j=1,...,J-1$$
$$Y_0=Y_J=0$$

where $Y_j$ is the approximation of $y(x_j)$.

In another exercise there was a finite difference method for a second order two-point problem, and when I halved the step, the error got divided by 4. So I thought the same would hold for all second order problems.

Can it be that it should be $f(x)=\pi^4 \sin(\pi x)+\pi^2 q(x) \sin(\pi x)$?
With a $+$ instead of a $-$?

Assuming it is and rewriting the difference equation, I get:
$$W_{j+1}=h^2f(x_j) + (2 + h^2 q(x_j))W_j-W_{j-1}$$

If I solve it for h=0.1, h=0.05, and h=0.025, I get increasing errors.
I think it is numerically unstable.

That makes sense because an error in $W_j$ is blown up by at least a factor 2.
 
  • #8
I like Serena said:
Can it be that it should be $f(x)=\pi^4 \sin(\pi x)+\pi^2 q(x) \sin(\pi x)$?
With a $+$ instead of a $-$?
No, it is like I wrote it, with a $-$.
 
Last edited by a moderator:
  • #9
mathmari said:
No, it is like I wrote it, with a $-$.

Well, let's verify.
We have:
\begin{array}{rcll}
y(x)&=&\sin(\pi x) & (1)\\
f(x)&=&\pi^4 \sin(\pi x)-\pi^2 q(x) \sin(\pi x) & (2)\\
q(x)&=&\sin^2(x) & (3)\\
w(x)&=&y''(x) & (4) \\
w''(x)-q(x)w(x)&=&f(x) & (5)
\end{array}

Substitute (1) in (4) to get:
$$w(x) = -\pi^2\sin(\pi x) \qquad\qquad\qquad (6)$$
Another 2 derivatives to get:
$$w''(x) = \pi^4\sin(\pi x) \qquad\qquad\qquad (7)$$
Substitute (3), (6) and (7) in (5) to get:
$$w''(x)-q(x)w(x) = \pi^4\sin(\pi x) - \sin^2(x) \cdot \left(-\pi^2\sin(\pi x)\right)
= \pi^4\sin(\pi x) + \pi^2 \sin^2(x) \sin(\pi x)$$
That looks like $f(x)$ except for the $+$ sign...
It looks like either $f(x)$ is wrong or your differential equation is wrong... :eek:
 
  • #10
I like Serena said:
Well, let's verify.
We have:
\begin{array}{rcll}
y(x)&=&\sin(\pi x) & (1)\\
f(x)&=&\pi^4 \sin(\pi x)-\pi^2 q(x) \sin(\pi x) & (2)\\
q(x)&=&\sin^2(x) & (3)\\
w(x)&=&y''(x) & (4) \\
w''(x)-q(x)w(x)&=&f(x) & (5)
\end{array}

Substitute (1) in (4) to get:
$$w(x) = -\pi^2\sin(\pi x) \qquad\qquad\qquad (6)$$
Another 2 derivatives to get:
$$w''(x) = \pi^4\sin(\pi x) \qquad\qquad\qquad (7)$$
Substitute (3), (6) and (7) in (5) to get:
$$w''(x)-q(x)w(x) = \pi^4\sin(\pi x) - \sin^2(x) \cdot \left(-\pi^2\sin(\pi x)\right)
= \pi^4\sin(\pi x) + \pi^2 \sin^2(x) \sin(\pi x)$$
That looks like $f(x)$ except for the $+$ sign...
It looks like either $f(x)$ is wrong or your differential equation is wrong... :eek:

I asked the professor and he told me that the function $f$ was wrong. It is with a $+$.
 
  • #11
mathmari said:
I asked the professor and he told me that the function $f$ was wrong. It is with a $+$.

Good!
So do you know how to solve the difference equation?
 
  • #12
I like Serena said:
Good!
So do you know how to solve the difference equation?

I implemented the method to find the approximations $Y_j$ and found the following errors:
for $J=5$ $\text{error}=0.064097$
for $J=10$ $\text{error}=0.016395$

Could you tell me if these errors are right?
 
  • #13
mathmari said:
I implemented the method to find the approximations $Y_j$ and found the following errors:
for $J=5$ $\text{error}=0.064097$
for $J=10$ $\text{error}=0.016395$

Could you tell me if these errors are right?

Which method did you implement?
What is the reference point that you are using for your errors? :confused:

What I did is use the shooting method to find the solution to your difference equation.
I compared that with the original function.
My error is the largest difference between the two (that occurs in the middle of the interval).
Chances are that you effectively did the same...
 
  • #14
I like Serena said:
Which method did you implement?
The finite difference method.

I like Serena said:
What is the reference point that you are using for your errors? :confused:

What I did is use the shooting method to find the solution to your difference equation.
I compared that with the original function.
My error is the largest difference between the two (that occurs in the middle of the interval).
Chances are that you effectively did the same...

I wrote a program in C that first calculates the approximations $W_j$ using Gaussian elimination for tridiagonal systems. Having found these approximations, it then calculates the approximations $Y_j$ in the same way... The error is $\max_{1 \leq j \leq J-1}|Y_j-y(x_j)|$, where $y$ is the exact solution.
 
  • #15
mathmari said:
The finite difference method.

I wrote a program in C that first calculates the approximations $W_j$ using Gaussian elimination for tridiagonal systems. Having found these approximations, it then calculates the approximations $Y_j$ in the same way... The error is $\max_{1 \leq j \leq J-1}|Y_j-y(x_j)|$, where $y$ is the exact solution.

I have for J=10 an error in y of 0.1514.
 
  • #16
I like Serena said:
I have for J=10 an error in y of 0.1514.

Oh.. :confused: Ok.. I will check my program... And for J=20? When you double J, what happens to the error?
 
  • #17
mathmari said:
Oh.. :confused: Ok.. I will check my program... And for J=20? When you double J, what happens to the error?

With J=20, the maximum error in y that I get is 0.67848.
 
  • #18
I like Serena said:
With J=20, the maximum error in y that I get is 0.67848.

Maybe I've computed the approximations incorrectly..

For $J=4$ the approximations $W_j$ are:
69.305796
-10.379758
-7.337134

and the approximations $Y_j$:
69.305796
-1.2877786
-0.414607

The first value of each isn't right, is it?
Could you tell me if your results are different?
 
  • #19
mathmari said:
Maybe I've computed the approximations incorrectly..

For $J=4$ the approximations $W_j$ are:
69.305796
-10.379758
-7.337134

and the approximations $Y_j$:
69.305796
-1.2877786
-0.414607

The first value of each isn't right, is it?
Could you tell me if your results are different?

My previous result for $J=20$ was wrong, the error in $Y$ should have been $0.05877$.

With $J=4$, I'm getting for $W$:
-6.708713243
-7.283950347
-3.563132688

And for $Y$:
0.452783069
0.450319241
0.225159621

for an error in $Y$ of $0.54968$.
 
  • #20
I like Serena said:
My previous result for $J=20$ was wrong, the error in $Y$ should have been $0.05877$.

With $J=4$, I'm getting for $W$:
-6.708713243
-7.283950347
-3.563132688

And for $Y$:
0.452783069
0.450319241
0.225159621

for an error in $Y$ of $0.54968$.

I cannot find my error.. :(

To find the approximations $W_j$ I did the following:

Code:
for(j=0; j<J-1; j++){
    x=(j+1)*h;
    b[j]=f(x);
}

For the Gauss elimination for tridiagonal systems, the system is $Aw=b$, where $b$ is calculated above and the matrix $A$ is tridiagonal which has the form:
Diagonal:-2.0/(h*h)-q(x), where x=(j+1)*h
Upper diagonal:1.0/(h*h)
Lower diagonal:1.0/(h*h)

Do you see something wrong? Did you find these approximations in a different way?
 
  • #21
mathmari said:
I cannot find my error.. :(

To find the approximations $W_j$ I did the following:

Code:
for(j=0; j<J-1; j++){
    x=(j+1)*h;
    b[j]=f(x);
}

For the Gauss elimination for tridiagonal systems, the system is $Aw=b$, where $b$ is calculated above and the matrix $A$ is tridiagonal which has the form:
Diagonal:-2.0/(h*h)-q(x), where x=(j+1)*h
Upper diagonal:1.0/(h*h)
Lower diagonal:1.0/(h*h)

Do you see something wrong? Did you find these approximations in a different way?

That looks correct.
However, your value of 69.305796 for $W_1$ is clearly way too large.

I used a different method: the shooting method.
I wrote each value as a recursive relation depending on the previous 2 values.
Then I changed $W_1$ just as long until $W_J$ became 0.

Either way, you should get the same values.
How did you solve $Aw=b$?
Did you verify by evaluating $Aw$ with the result for $w$ you found?
 
  • #22
I like Serena said:
However, your value of 69.305796 for $W_1$ is clearly way too large.
Yes...I ran the program for several $J$, and for $J>4$ the values for $W_j$ are not so large...
For example for $J=5$ I get:
for $W_j$:
-5.671676
-8.580075
-7.553415
-3.738234

for $Y_j$:
0.538176
0.849485
0.817591
0.483560

I like Serena said:
How did you solve $Aw=b$?
$A$ is stored as a matrix of dimension $(J-1) \times 3$, one row per equation holding the three band entries.
It has the form:
$\begin{bmatrix}
0 & d_1 & u_1\\
l_1 & d_2 &u_2 \\
l_2 & d_3 &u_3 \\
... & ... & ...\\
l_{J-3} & d_{J-2} & u_{J-2}\\
l_{J-2} & d_{J-1} & 0
\end{bmatrix}$
where $l_1,...,l_{J-2}$ are the elements of the lower diagonal, $u_1,...,u_{J-2}$ the elements of the upper diagonal, and $d_1,...,d_{J-1}$ the elements of the main diagonal.
Then we shift the elements of the first row one place to the left and implement Gaussian elimination with pivoting. We do that at each step.

I like Serena said:
Did you verify by evaluating $Aw$ with the result for $w$ you found?
No, I haven't done it yet..
 
  • #23
mathmari said:
$A$ is stored as a matrix of dimension $(J-1) \times 3$, one row per equation holding the three band entries.
It has the form:
$\begin{bmatrix}
0 & d_1 & u_1\\
l_1 & d_2 &u_2 \\
l_2 & d_3 &u_3 \\
... & ... & ...\\
l_{J-3} & d_{J-2} & u_{J-2}\\
l_{J-2} & d_{J-1} & 0
\end{bmatrix}$
where $l_1,...,l_{J-2}$ are the elements of the lower diagonal, $u_1,...,u_{J-2}$ the elements of the upper diagonal, and $d_1,...,d_{J-1}$ the elements of the main diagonal.
Then we shift the elements of the first row one place to the left and implement Gaussian elimination with pivoting. We do that at each step.

That sounds about right... but it's easy to make a mistake there. :eek:

mathmari said:
No, I haven't done it yet..

So will you? (Wasntme)
 
  • #24
I like Serena said:
So will you? (Wasntme)

For $J=5$ the matrix $A$ is $\begin{bmatrix}
-50.039470& 25 & 0 & 0\\
25& -50.151647 & 25 &0 \\
0 & 25 &-50.318821 & 25\\
0& 0 & 25& -50.514600
\end{bmatrix}
$

and $b=\begin{bmatrix}
57.484598\\
94.064990 \\
95.634182\\
60.240927
\end{bmatrix}$

Using the Matlab command w=A\b, I get the same $w$ as I get with my program:
$w=\begin{bmatrix}
-5.992187\\
-9.694450 \\
-9.692919\\
-5.989633
\end{bmatrix}$
 
  • #25
That means that the solution of the system is right, isn't it ?
 
  • #26
mathmari said:
Yes...I ran the program for several $J$, and for $J>4$ the values for $W_j$ are not so large...
For example for $J=5$ I get:
for $W_j$:
-5.671676
-8.580075
-7.553415
-3.738234

mathmari said:
For $J=5$ the matrix $A$ is $\begin{bmatrix}
-50.039470& 25 & 0 & 0\\
25& -50.151647 & 25 &0 \\
0 & 25 &-50.318821 & 25\\
0& 0 & 25& -50.514600
\end{bmatrix}
$

and $b=\begin{bmatrix}
57.484598\\
94.064990 \\
95.634182\\
60.240927
\end{bmatrix}$

Using the Matlab command w=A\b, I get the same $w$ as I get with my program:
$w=\begin{bmatrix}
-5.992187\\
-9.694450 \\
-9.692919\\
-5.989633
\end{bmatrix}$

mathmari said:
That means that the solution of the system is right, isn't it ?

Yep.
Did you fix something?
Because in your previous post you had a different solution for w...
 
  • #27
I like Serena said:
Yep.
Did you fix something?
Because in your previous post you had a different solution for w...

At the following part I replaced $J-1$ with $J$.
mathmari said:
Code:
for(j=0; j<J-1; j++){
    x=(j+1)*h;
    b[j]=f(x);
}

Code:
for(j=0; j<J; j++){
    x=(j+1)*h;
    b[j]=f(x);
}
 
  • #28
mathmari said:
At the following part I replaced $J-1$ with $J$.
Code:
for(j=0; j<J; j++){
    x=(j+1)*h;
    b[j]=f(x);
}

Oh?
That should not make a difference?? :confused:
Can there be a $\pm 1$ mistake afterwards?
 
  • #29
I like Serena said:
Can there be a $\pm 1$ mistake afterwards?

What do you mean? :confused:
 
  • #30
mathmari said:
What do you mean? :confused:

You have grid values at $x_0, x_1, ..., x_J$.
Those are $(J+1)$ values.
However, you can leave out $x_0$ and $x_J$ since they are both given as boundary conditions: they are 0.
That leaves you with $(J-1)$ unknowns.
So your matrix should be $(J-1) \times (J-1)$, your $b$ should have $(J-1)$ entries, and your resulting $w$ should have $(J-1)$ entries.

However, your "fix" gives you $J$ entries instead of $(J-1)$ entries.
That is $1$ off.
If it helps you to get the right answer, it means that later on you are also $1$ off.
So before you were probably calculating with an uninitialized variable.
 
  • #31
I like Serena said:
You have grid values at $x_0, x_1, ..., x_J$.
Those are $(J+1)$ values.
However, you can leave out $x_0$ and $x_J$ since they are both given as boundary conditions: they are 0.
That leaves you with $(J-1)$ unknowns.
So your matrix should be $(J-1) \times (J-1)$, your $b$ should have $(J-1)$ entries, and your resulting $w$ should have $(J-1)$ entries.

However, your "fix" gives you $J$ entries instead of $(J-1)$ entries.
That is $1$ off.
If it helps you to get the right answer, it means that later on you are also $1$ off.
So before you were probably calculating with an uninitialized variable.

Yes, you are right! :eek:

I looked at my code again and realized that I hadn't fixed that $J$.
I have written the following in a function:
Code:
fun(b, J){
    for(j=0; j<J-1; j++)
        b[j] = f(X(j, J));
}

At the beginning I called the function as fun(b, J-1), and that is wrong, since x is calculated in the function
Code:
X(j, J){
    x = (j+1)/J;
}
Then I fixed it by calling the function as fun(b, J).
 
  • #32
mathmari said:
Yes,you are right! :eek:

I looked again my code and realized that I didn't fix that $J$.
I have written the following in a function, fun(b,J):
Code:
for(j=0; j<J-1; j++){
    x=(j+1)*h;
    b[j]=f(x);
}

At the beginning I called the function as fun(b, J-1), and in the function it was for(j=0; j<J; j++), which was wrong.
Then I fixed it by calling the function by fun(b,J).

Aha!
 

FAQ: Finite difference method-convergence

What is the finite difference method and how does it work?

The finite difference method is a numerical technique used to solve differential equations. It works by approximating the derivatives of a function at discrete points, and then using these approximations to solve the differential equation. This is done by dividing the domain into a finite number of smaller subdomains, and then using the Taylor series expansion to approximate the derivatives at each point.

What is the significance of convergence in the finite difference method?

Convergence in the finite difference method refers to the accuracy of the numerical solution. It is important because it determines how closely the numerical solution approximates the true solution of the differential equation. A higher level of convergence indicates a more accurate solution.

How is convergence measured in the finite difference method?

Convergence is typically measured by comparing the numerical solution to the true solution of the differential equation. This is done by calculating the error between the two solutions, and then analyzing the behavior of the error as the grid size (or time step) is decreased. A smaller error and a more consistent trend as the grid size decreases indicate a higher level of convergence.

What factors can affect the convergence of the finite difference method?

The convergence of the finite difference method can be affected by various factors, such as the choice of grid size, the order of the finite difference scheme, and the smoothness of the solution. A very coarse grid size or a low-order scheme can lead to a lower level of convergence, while a smooth solution can result in a higher level of convergence.

How can one improve the convergence of the finite difference method?

To improve the convergence of the finite difference method, one can use a finer grid size, a higher-order scheme, or a smoother initial guess. It is also important to choose an appropriate numerical method for the specific problem at hand, as some methods may have better convergence properties for certain types of differential equations.
