What conditions guarantee convergence of Newton's method for approximating pi/2?

In summary, the author has calculated an approximation to $\frac{\pi}{2}$ using Newton's method. After 2 iterations they get $1.5707$. They ask what conditions the starting point must satisfy for the sequence of Newton iterates to converge to $\frac{\pi}{2}$.
  • #1
mathmari
Hey! 😊

I have calculated an approximation to $\frac{\pi}{2}$ using Newton's method on $f(x)=\cos (x)$ with starting value $1$. After 2 iterations we get $1.5707$.
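
For reference, here is a minimal Python sketch (my own illustration, not part of the original post) that reproduces this computation with $f(x)=\cos(x)$ and starting value $1$:

```python
import math

def newton_cos(x, steps):
    """Newton's method for f(x) = cos(x): x <- x - cos(x)/(-sin(x))."""
    for _ in range(steps):
        x = x - math.cos(x) / (-math.sin(x))  # equivalently x + cot(x)
    return x

print(newton_cos(1.0, 1))  # ~1.6421 (overshoots pi/2)
print(newton_cos(1.0, 2))  # ~1.5707, close to pi/2 = 1.5707963...
```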
What conditions does the starting point have to satisfy so that convergence of the sequence of Newton iterates to $\frac{\pi}{2}$ is guaranteed? :unsure:
 
  • #2
Suppose we start at a point $x_0$ with $0<x_0<\frac\pi 2$. Then the first iteration $x_1$ will be greater than $\frac\pi 2$.
We need $x_1$ to be closer to $\frac\pi 2$ than $x_0$ is. And if it is, the sequence will be convergent, won't it? 🤔
 
  • #3
Klaas van Aarsen said:
Suppose we start at a point $x_0$ with $0<x_0<\frac\pi 2$. Then the first iteration $x_1$ will be greater than $\frac\pi 2$.

Why will $x_1$ be greater than $\frac\pi 2$ ? :unsure:
Klaas van Aarsen said:
We need $x_1$ to be closer to $\frac\pi 2$ than $x_0$ is. And if it is, the sequence will be convergent, won't it? 🤔

We need that because that would mean that the sequence $(x_i)$ converges to $\frac{\pi}{2}$, or not? :unsure:
 
  • #4
mathmari said:
Why will $x_1$ be greater than $\frac\pi 2$ ?

Geometrically, the method takes the intersection of the tangent line with the x-axis.
Since $\cos$ has a decreasing slope on $(0,\frac\pi 2)$ (it is concave there), the tangent line lies above the graph, so its intersection with the x-axis must be on the other side of $\frac\pi 2$. 🤔

Alternatively, we can look at it algebraically.
Wikipedia explains that we can write the successive errors as:
$$\varepsilon_{n+1} = \frac {- f'' (\xi_n)}{2 f'(x_n)} \cdot {\varepsilon_n}^2$$
where $x_n$ are the successive approximations of a root $a$, $\varepsilon_n=a-x_n$ is the error in that approximation, and $\xi_n$ is a point between $x_n$ and the root $a$.

So if $\varepsilon_1$ is negative, then $x_1$ lies to the right of the root. Can we deduce that? 🤔

mathmari said:
We need that because that would mean that the sequence $(x_i)$ converges to $\frac{\pi}{2}$, or not?
We're trying to find which starting value we need so that the sequence converges.
We get that if the remaining error converges to zero, don't we? 🤔
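
As a quick numerical sanity check (my own sketch, not from the thread), one can verify the overshoot claim and the quadratic error behaviour that the formula predicts:

```python
import math

def newton_step(x):
    """One Newton step for f(x) = cos(x)."""
    return x - math.cos(x) / (-math.sin(x))

# Overshoot: starting anywhere in (0, pi/2), the first iterate exceeds pi/2.
for x0 in [0.5, 1.0, 1.3, 1.5]:
    print(x0, newton_step(x0) > math.pi / 2)  # True for each of these

# Error behaviour: each new error is (much) smaller than the square of the old one.
x, root = 1.0, math.pi / 2
for _ in range(3):
    old_err = root - x
    x = newton_step(x)
    print(old_err, root - x)
```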
 
  • #5
Klaas van Aarsen said:
Alternatively, we can look at it algebraically.
Wikipedia explains that we can write the successive errors as:
$$\varepsilon_{n+1} = \frac {- f'' (\xi_n)}{2 f'(x_n)} \cdot {\varepsilon_n}^2$$
where $x_n$ are the successive approximations of a root $a$, $\varepsilon_n=a-x_n$ is the error in that approximation, and $\xi_n$ is a point between $x_n$ and the root $a$.

So if $\varepsilon_1$ is negative, then $x_1$ lies to the right of the root. Can we deduce that? 🤔

We have that
$$\varepsilon_{1} = \frac {- f'' (\xi_0)}{2 f'(x_0)} \cdot {\varepsilon_0}^2= \frac {\cos (\xi_0)}{-2 \sin (1)} \cdot \left (\frac{\pi}{2}-1\right )^2 \approx \frac {\cos (\xi_0)}{-1.6829} \cdot 0.3258 = -0.1936 \cdot \cos (\xi_0)$$
So it is negative when $ \cos (\xi_0)$ is positive, right? :unsure:
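
A quick check of the constants used in this step (just an illustrative computation, not part of the post):

```python
import math

print(2 * math.sin(1))                              # ~1.6829
print((math.pi / 2 - 1) ** 2)                       # ~0.3258
print((math.pi / 2 - 1) ** 2 / (2 * math.sin(1)))   # ~0.1936
```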
 
  • #6
mathmari said:
We have that
$$\varepsilon_{1} = \frac {- f'' (\xi_0)}{2 f'(x_0)} \cdot {\varepsilon_0}^2= \frac {\cos (\xi_0)}{-2 \sin (1)} \cdot \left (\frac{\pi}{2}-1\right )^2 \approx \frac {\cos (\xi_0)}{-1.6829} \cdot 0.3258 = -0.1936 \cdot \cos (\xi_0)$$
So it is negative when $ \cos (\xi_0)$ is positive, right?
You have picked $x_0=1$ in which case $\xi_0$ must be between $1$ and $\frac\pi 2$. So for each possible value of $\xi_0$ we have indeed that $\cos (\xi_0)$ is positive.
But aren't we trying to find the $x_0$ that is 'just' sufficient to get convergence?
We have already found that for $x_0=1$ we get convergence. So the $x_0$ that we are trying to find should be smaller, shouldn't it? 🤔
 
  • #7
Klaas van Aarsen said:
You have picked $x_0=1$ in which case $\xi_0$ must be between $1$ and $\frac\pi 2$. So for each possible value of $\xi_0$ we have indeed that $\cos (\xi_0)$ is positive.
But aren't we trying to find the $x_0$ that is 'just' sufficient to get convergence?
We have already found that for $x_0=1$ we get convergence. So the $x_0$ that we are trying to find should be smaller, shouldn't it? 🤔

So we want to find the boundary of all possible $x_0$'s that give convergence, right?

Do we have to use for that the formula for the remaining error?

:unsure:
 
  • #8
That's how I interpret the OP.

The formula is for the general case. In our case it should suffice to solve for the case where, after the first iteration, the error has the same magnitude as before. 🤔
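
One way to make this concrete (my own sketch, not the thread's derivation): if $x_0<\frac\pi 2$ and the first iterate $x_1 = x_0 + \cot(x_0)$ overshoots to the right, then requiring the new error to have the same magnitude as the old one, $x_1-\frac\pi 2 = \frac\pi 2 - x_0$, amounts to solving $2x_0+\cot(x_0)=\pi$, which can be done numerically, e.g. with bisection:

```python
import math

def g(x):
    """(x1 - pi/2) - (pi/2 - x), where x1 = x + cot(x) is the Newton iterate."""
    return 2 * x + math.cos(x) / math.sin(x) - math.pi

# Bisection on (0.1, 1.0): g > 0 means the error grows, g < 0 means it shrinks.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
print(hi)  # roughly 0.405 in this sketch: the crossover starting value
```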
 
  • #9
Klaas van Aarsen said:
The formula is for the general case. In our case it should suffice to solve for the case where, after the first iteration, the error has the same magnitude as before. 🤔

I got stuck right now. Why does the error have to be the same as the previous error? :unsure:
 
  • #10
mathmari said:
I got stuck right now. Why does the error have to be the same as the previous error?
Then we should have the critical starting value.
A bigger starting value (that is less than $\frac\pi 2$) should guarantee convergence, shouldn't it? 🤔
 
  • #11
Klaas van Aarsen said:
A bigger starting value (that is less than $\frac\pi 2$) should guarantee convergence, shouldn't it? 🤔

Yes. The nearer the starting value is to $\frac{\pi}{2}$, the faster it converges, right? :unsure:
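
To illustrate this (again my own sketch, not from the thread), one can count the Newton steps needed from different starting values; values closer to $\frac\pi 2$ tend to need fewer steps:

```python
import math

def steps_to_converge(x0, tol=1e-12, max_iter=100):
    """Count Newton steps for f(x) = cos(x) until within tol of pi/2."""
    x = x0
    for n in range(1, max_iter + 1):
        x += math.cos(x) / math.sin(x)  # Newton step
        if abs(x - math.pi / 2) < tol:
            return n
    return None  # did not converge within max_iter

for x0 in [0.5, 1.0, 1.3, 1.5]:
    print(x0, steps_to_converge(x0))
```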
 

FAQ: What conditions guarantee convergence of Newton's method for approximating pi/2?

What is the Newton method?

The Newton method, also known as the Newton-Raphson method, is an iterative algorithm used to find the roots of a function. It uses the derivative of the function to approximate the root and converges quickly to the actual root.

How does the Newton method work?

The Newton method starts with an initial guess for the root and then uses the function value and derivative at that point to compute a better approximation, $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. This process is repeated until the desired level of accuracy is achieved.
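
A generic sketch of this iteration (illustrative names only, not taken from the thread):

```python
import math

def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step size drops below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the root of cos(x) near x0 = 1 is pi/2.
print(newton(math.cos, lambda x: -math.sin(x), 1.0))  # ~1.5707963..., i.e. pi/2
```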

What is the convergence of Newton method?

The convergence of Newton's method refers to the rate at which the iterates approach the actual root of the function. Near a simple root the convergence is quadratic, meaning that with each iteration the number of correct digits roughly doubles.

What are the advantages of using the Newton method?

The Newton method is known for its fast convergence rate, making it an efficient algorithm for finding roots. It can also be applied to complicated functions, as long as their derivatives can be calculated.

What are the limitations of the Newton method?

The Newton method may fail to converge if the initial guess is not close enough to the actual root, and when the function has several roots it may converge to a different root than intended. It also requires the calculation of the derivative, which can be time-consuming for more complicated functions.
