Functional iteration and convergence

In summary: the equation y = (x+10)/(x+1) and its associated sequence s_(n+1) = (s_n + 10)/(s_n + 1) converge to the positive square root of 10 for any initial value except -√10 itself, which is a fixed point and stays there (and except points such as s_n = -1 where the iteration is undefined). This behavior can be explained by studying the cobweb diagram and the derivative of the function: for a fixed point to be a stable attractor under fixed point iteration, the absolute value of the derivative must be less than one on an interval containing the fixed point. This criterion is essentially the Banach fixed point theorem (contraction mapping theorem) applied to a neighbourhood of the fixed point.
  • #1
bbb999
Hey,

I am working with the equation y=(x+10)/(x+1), and have been calculating iterations of the sequence s_(n+1)=(s_n + 10)/(s_n + 1).

I find that whatever value of s(1) is chosen (the initial value) the sequence converges to root 10. However I am now trying to prove why this happens, and I have been told I should be able to prove it using just a standard test for convergence. Can anyone help?
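
For reference, here is a minimal Python sketch of this experiment (the helper name step, the starting value, and the iteration count are illustrative only):

[code]
import math

# One run of the iteration s_(n+1) = (s_n + 10) / (s_n + 1)
def step(s):
    return (s + 10.0) / (s + 1.0)

s = 1.0                              # illustrative starting value s_1
for n in range(1, 21):
    s = step(s)
    print(n, s)

print("sqrt(10) =", math.sqrt(10))   # the iterates settle near 3.16227766...
[/code]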

thanks in advance
 
  • #2
Try this:

First suppose the limit exists and call it "S". Then taking the limit of both sides of [itex]s_{n+1}= (s_n+10)/(s_n+1)[/itex], both sides tend to S: [itex]S= (S+10)/(S+1)[/itex]. Multiplying both sides by S + 1, [itex]S(S+1)= S^2+ S= S+10[/itex], so [itex]S^2= 10[/itex] and [itex]S= \pm\sqrt{10}[/itex].

Now, suppose [itex]s_0[/itex] is some number less than [itex]\sqrt{10}[/itex]. Then show, perhaps by induction, that [itex]\{s_n\}[/itex] is an increasing sequence and has an upper bound. By the "monotone convergence" property, then, it does converge. Similarly, if [itex]s_0> \sqrt{10}[/itex], you can show it is a decreasing sequence having a lower bound.

Are you sure that the limit is [itex]\sqrt{10}[/itex] for any initial value or only for positive initial values? It seems to me that it might happen that if [itex]s_0[/itex] is negative, the sequence converges to [itex]-\sqrt{10}[/itex]. If this were my problem, I'd want to check that!
 
  • #3
Thanks. OK, I understand the part about negative initial values converging to -root 10, and I understand how to find the limit using the fraction you gave.

But I am struggling with what you mean about finding an upper and lower bound to show it converges. What standard test would that use?

thanks in advance
 
  • #4
Oh, and I have just checked: even picking negative initial values, the sequence converges to positive root 10, not negative root 10.
 
  • #5
1) Halls is referring you to the Monotone Convergence theorem: if a sequence of real numbers is increasing and has an upper bound M, then the sequence converges.

2) Which negative values did you try? Try some more.
 
  • #6
I have tried a range of negative values such as -0.5, -5.5, -100 and they all result in convergence to positive root 10
 
  • #7
That's weird. Just to be certain, try -3, -3.5, -4, -3.16, and -rt10.
 
  • #8
I have tried all of these values and they all converge to positive root 10 except for negative root 10 which stays at that point
 
  • #9
Can anyone confirm that this is correct: that for every initial value other than -root 10, the sequence converges to root 10 and in the case of -root 10 it remains at that point?
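
A quick numerical sweep is consistent with this. Here is a minimal Python sketch (the list of starting values and the iteration count are arbitrary choices):

[code]
import math

# Sweep over several starting values and see where each trajectory ends up.
def step(s):
    return (s + 10.0) / (s + 1.0)

root = math.sqrt(10)
starts = [-100.0, -5.5, -4.0, -3.5, -3.16, -3.0, -0.5, 0.0, 2.0, 50.0, -root]
for s0 in starts:
    s = s0
    note = ""
    for _ in range(200):
        if abs(s + 1.0) < 1e-12:     # the map is undefined at s = -1
            note = "(hit the pole at -1)"
            break
        s = step(s)
    print(s0, "->", s, note)
# In exact arithmetic -sqrt(10) stays put, but in floating point the rounding
# error is amplified each step (the negative root is repelling), so even that
# trajectory eventually drifts over to +sqrt(10).
[/code]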
 
  • #10
Hey, I had the same problem as bbb999. I have already tried all sorts of positive and negative values and the sequence converges to the positive square root of ten. How about the negative square root of ten? huhu
 
  • #11
It definitely looks like the positive root is the only stable fixed point. The strategy HallsofIvy suggested earlier in this thread won't work because the sequence [itex]\{s_n\}[/itex] isn't monotonically increasing or decreasing, as is easy to see from inspecting the cobweb diagram. See this site for an applet: http://www.emporia.edu/math-cs/yanikjoe/Chaos/CobwebPlot.htm

Even near the negative root the cobweb diagram trajectories end up shooting back over towards the positive root after a few iterations. This is by no means a proof, but it is suggestive.

I suspect part of the underlying reason for the strange behavior is the divergence at x = -1. It looks like this gives rise to a geometry that makes the negative root completely unstable and the positive root universally stable.

Perhaps some understanding can be found by studying the more general

[tex]s_{n+1} = \frac{s_n + a}{s_n + b + 1}[/tex]
which has fixed points [itex]s^\ast = (1/2)(-b \pm \sqrt{b^2 + 4a})[/itex]. Note that when b^2 + 4a < 0 there are no real fixed points.
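
A small sketch of those fixed points (assuming Python; the helper name fixed_points is just illustrative):

[code]
import math

# Real fixed points of the general map f(s) = (s + a) / (s + b + 1),
# i.e. the real solutions of s^2 + b*s - a = 0.
def fixed_points(a, b):
    disc = b * b + 4.0 * a
    if disc < 0:
        return []                    # no real fixed points
    r = math.sqrt(disc)
    return [0.5 * (-b + r), 0.5 * (-b - r)]

print(fixed_points(10.0, 0.0))       # the original map: [+sqrt(10), -sqrt(10)]
print(fixed_points(-1.0, 0.0))       # b^2 + 4a < 0: []
[/code]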
 
  • #12
Yes Mute, only the positive root is numerically stable for the given function under fixed point iteration.

Fixed point iteration of the form [itex]x = f(x)[/itex] is not numerically stable when [itex]|f'(x)|>1[/itex] at the given solution.

For the function in question we have,

[tex]f(x) = \frac{x+10}{x+1}[/tex]

[tex]f'(x) = \frac{-9}{(x+1)^2}[/tex]

[tex]f'(+\sqrt{10}) \simeq -0.52[/tex]

[tex]f'(-\sqrt{10}) \simeq -1.93[/tex]
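
A quick numerical check of these values (a Python sketch, nothing more):

[code]
import math

# f(x) = (x + 10)/(x + 1)  =>  f'(x) = -9/(x + 1)^2
fprime = lambda x: -9.0 / (x + 1.0) ** 2
r = math.sqrt(10)
print(fprime(+r))   # about -0.52: |f'| < 1, so +sqrt(10) is an attractor
print(fprime(-r))   # about -1.93: |f'| > 1, so -sqrt(10) is repelling
[/code]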
 
  • #13
Izdihar, here are some other ways we could find a function that converges to sqrt(10) under fixed point iteration.

Take [itex](x+1)(x-1) = 9[/itex]. It's easy to show that this has solutions [itex]x = \pm \sqrt{10}[/itex].

Case 1.

Rearrange the above to

[tex]x = \frac{9}{x+1} + 1[/tex]

You can show that only the positive root is numerically stable for this case.

Case 2.

You can also rearrange the original equation to

[tex]x = \frac{9}{x-1} - 1[/tex]

Only the negative root is numerically stable for this case.
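
A minimal Python sketch comparing the two rearrangements under iteration (the starting value 2.0 and the iteration count are arbitrary):

[code]
import math

# Case 1: x = 9/(x + 1) + 1      Case 2: x = 9/(x - 1) - 1
g1 = lambda x: 9.0 / (x + 1.0) + 1.0
g2 = lambda x: 9.0 / (x - 1.0) - 1.0

for label, g in [("case 1", g1), ("case 2", g2)]:
    x = 2.0                          # arbitrary starting value
    for _ in range(100):
        x = g(x)
    print(label, "->", x)            # case 1 -> +sqrt(10), case 2 -> -sqrt(10)
print("sqrt(10) =", math.sqrt(10))
[/code]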
 
  • #14
Thanks everybody! :smile:

uart:

"Fixed point iteration of the form x=f(x) is not numerically stable when |f′(x)|>1 at the given solution."
Is this a theorem?
 
  • #15
izdihar said:
"Fixed point iteration of the form x=f(x) is not numerically stable when |f′(x)|>1 at the given solution."
Is this a theorem?

Yes, the absolute value of the derivative must be less than one on an interval containing the fixed point for it to be an "attractor" fixed point.
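
For the record, here is a one-line sketch of why that works, using the mean value theorem (assuming f is differentiable on an interval around the fixed point [itex]s^\ast[/itex] with [itex]|f'| \le k < 1[/itex] there, and [itex]\xi_n[/itex] some point between [itex]s_n[/itex] and [itex]s^\ast[/itex]):

[tex]|s_{n+1} - s^\ast| = |f(s_n) - f(s^\ast)| = |f'(\xi_n)| \, |s_n - s^\ast| \le k \, |s_n - s^\ast|[/tex]

so once an iterate lands in that interval, the distance to [itex]s^\ast[/itex] shrinks by at least a factor of k at every step.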
 

FAQ: Functional iteration and convergence

What is functional iteration?

Functional iteration is a mathematical method used to find the fixed point of a function. It involves repeatedly applying a function to an initial value until the output converges to a specific value, known as the fixed point.

How does functional iteration work?

Functional iteration works by taking an initial value and plugging it into a function. The resulting output is then plugged back into the function, and this process is repeated until the output converges to a fixed point. This fixed point is a solution of the equation x = f(x).

What is the purpose of functional iteration?

The purpose of functional iteration is to find numerical solutions to equations that cannot easily be solved algebraically. It is also used to approximate solutions to complex problems or to find the roots of a function.

What is the difference between functional iteration and numerical methods?

Functional iteration is a type of numerical method, but it is the most direct one: the function itself is applied repeatedly to an initial value until the values converge. Other numerical methods, such as Newton's method, iterate a modified function built from the original problem (for Newton's method, g(x) = x - f(x)/f'(x)), which is designed to converge much faster near the solution.

How do you know when functional iteration has converged?

In functional iteration, convergence is reached when the output of the function is very close to the fixed point. In practice this is determined by setting a tolerance and stopping when successive iterates differ by less than that tolerance.
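
For example, here is a minimal sketch of such a stopping rule (Python; the function name, tolerance, and iteration cap are illustrative choices):

[code]
# Stop when successive iterates agree to within a chosen tolerance.
def fixed_point_iterate(f, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:     # successive iterates are close enough
            return x_new
        x = x_new
    raise RuntimeError("did not converge within max_iter iterations")

print(fixed_point_iterate(lambda x: (x + 10.0) / (x + 1.0), 1.0))   # ~3.16227766
[/code]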
