Can Taylor series be used to get the roots of a polynomial?

In summary: the thread discusses a method of finding the value of ##f^{-1}(x)## using Taylor series and the fact that ##f^{-1'}(x)=\frac{1}{f'(f^{-1}(x))}##. The method involves evaluating the derivatives of ##f^{-1}(x)## at ##x=0## and then substituting the values into the Taylor series. The method turns out to be equivalent to the Lagrange inversion theorem, but the important question is whether the value ##x=c## lies within the radius of convergence of the resulting series.
  • #1
Kumar8434
I'm using this method:

First, write the polynomial in this form:
$$a_nx^n+a_{n-1}x^{n-1}+\dots+a_2x^2+a_1x=c$$
Let the LHS of this expression be the function ##f(x)##. I'm going to write the Taylor series of ##f^{-1}(x)## around ##x=0## and then put ##x=c## in it to get ##f^{-1}(c)##, which will be the value of ##x##.

Since ##f^{-1}(0)=0## here, we've got the first term of our Taylor series as 0.

Now, the only thing that remains is calculating the derivatives of ##f^{-1}(x)## at ##x=0##.

I'm using the fact that $$\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f^{-1}(x))}$$ which follows from differentiating ##f(f^{-1}(x))=x## with the chain rule.

By differentiating this equation, we can get the second derivative of ##f^{-1}(x)## as:
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{1}{\left(f'(f^{-1}(x))\right)^2}\cdot f''(f^{-1}(x))\cdot f^{-1'}(x)$$

Similarly, we can get the other derivatives by further differentiation of this equation. Then we can evaluate all the derivatives at ##x=0## to get the Taylor series of ##f^{-1}(x)## and evaluate it at ##x=c## to get the value of ##x##.
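Explicitly, since ##f^{-1}(0)=0##, the series I want to evaluate at ##x=c## is:
$$x=f^{-1}(c)=f^{-1'}(0)\,c+\frac{f^{-1''}(0)}{2!}c^2+\frac{f^{-1'''}(0)}{3!}c^3+\dots$$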

1. Is this method correct?

2. Can something be done to make it better and remove the limitations?
 
Last edited:
  • #2
Kumar8434 said:
I'm using the fact that $$\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f(x))}$$
##\frac{d(f^{-1}(x))}{dx} = \frac{1}{f'(f^{-1}(x))}##
 
  • #3
Stephen Tashi said:
##\frac{d(f^{-1}(x))}{dx} = \frac{1}{f '(f^{-1}(x))}##
Oh, I got it wrong. I'm going to edit it. Is there still some hope in this approach?
 
  • #4
Kumar8434 said:
Is there still some hope in this approach?

It appears that in order to find ##f^{-1}(x)## by that method, you'd need to know ##f^{-1}(x)## to begin with.

So, if there is any hope for that method, you need to discover a way to evaluate expressions like ##f'(f^{-1}(x))## at ##x = 0## without knowing the explicit formula for ##f^{-1}(x)##.

There is also the technicality that ##f^{-1}(x)## doesn't necessarily exist (as a function). A polynomial of degree n can have up to n distinct roots. Of course, if you could develop an algorithm that found just one of the roots, that would be progress.
 
  • #5
Stephen Tashi said:
It appears that in order to find ##f^{-1}(x)## by that method, you'd need to know ##f^{-1}(x)## to begin with.

So, if there is any hope for that method, you need to discover a way to evaluate expressions like ##f'(f^{-1}(x))## at ##x = 0## without knowing the explicit formula for ##f^{-1}(x)##.

There is also the technicality that ##f^{-1}(x)## doesn't necessarily exist (as a function). A polynomial of degree n can have up to n distinct roots. Of course, if you could develop an algorithm that found just one of the roots, that would be progress.
But I know the value of ##f^{-1}(x)## at ##x=0##. After working through all the formulas, all I have to do in the end is evaluate the expression at ##x=0##, and I already know that ##f^{-1}(0)=0##.
For example, $$f^{-1'}(x)=\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f^{-1}(x))}$$
$$=\frac{1}{n\,a_{n}\left(f^{-1}(x)\right)^{n-1}+(n-1)a_{n-1}\left(f^{-1}(x)\right)^{n-2}+\dots+2a_2f^{-1}(x)+a_1}$$
which gives ##\frac{1}{a_1}## at ##x=0## since ##f^{-1}(0)=0##.
Similarly,
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{1}{\left(f'(f^{-1}(x))\right)^2}\cdot f''(f^{-1}(x))\cdot f^{-1'}(x)$$
We already have the value of the first derivative at ##x=0##, so we can substitute it here, along with ##f''(f^{-1}(0))=f''(0)=2a_2##:
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{1}{a_1^2}\cdot 2a_2\cdot\frac{1}{a_1}$$
$$=-\frac{2a_2}{a_1^3}$$
I think this process can be continued to get more derivatives by the product rule.
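A minimal C++ sketch of this check (the sample quadratic ##x^2+5x=c## and the value ##c=0.3## are arbitrary choices for illustration):

```cpp
#include <cstdio>
#include <cmath>

// Two-term inverse-series approximation for the root of a2*x^2 + a1*x = c
// near x = 0, using the derivatives computed above:
//   f^-1'(0) = 1/a1,   f^-1''(0) = -2*a2/a1^3.
double inverse_series_root(double a1, double a2, double c) {
    double d1 = 1.0 / a1;                 // first derivative of f^-1 at 0
    double d2 = -2.0 * a2 / (a1*a1*a1);   // second derivative of f^-1 at 0
    return d1 * c + 0.5 * d2 * c * c;     // Taylor series truncated after c^2
}

int main() {
    double a1 = 5.0, a2 = 1.0, c = 0.3;
    double approx = inverse_series_root(a1, a2, c);
    // Exact root of x^2 + 5x = 0.3 nearest 0, from the quadratic formula:
    double exact = (-a1 + std::sqrt(a1*a1 + 4.0*a2*c)) / (2.0*a2);
    std::printf("series approx: %.10f\nexact root:    %.10f\n", approx, exact);
    return 0;
}
```

For small ##c## the two values agree closely, which is consistent with the derivatives computed above.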
 
Last edited:
  • #6
I think what you are doing is equivalent to: https://en.wikipedia.org/wiki/Lagrange_inversion_theorem.

The resulting infinite series has some radius of convergence. So an important question is whether ##x = c## lies within that radius of convergence.

Try the method on a quadratic equation and see if you get a series that terminates - or perhaps a series whose sum has a closed-form expression.
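The statement on that page, for ##f## analytic at a point ##a## with ##f'(a)\neq 0## and ##f(a)=b##, gives the inverse ##g## as:
$$g(w)=a+\sum_{n=1}^{\infty}\frac{(w-b)^n}{n!}\,\lim_{z\to a}\frac{d^{\,n-1}}{dz^{\,n-1}}\left[\left(\frac{z-a}{f(z)-b}\right)^{n}\right]$$
With ##a=b=0##, the ##n=1## term reproduces the ##\frac{1}{a_1}## coefficient found in post #5.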
 
  • #7
Stephen Tashi said:
I think what you are doing is equivalent to: https://en.wikipedia.org/wiki/Lagrange_inversion_theorem.

The resulting infinite series has some radius of convergence. So an important question is whether ##x = c## lies within that radius of convergence.

Try the method on a quadratic equation and see if you get a series that terminates - or perhaps a series whose sum has a closed-form expression.
I looked up this Lagrange inversion theorem. I didn't understand much, but it was also about getting Taylor series of inverse functions. Are the coefficients obtained from the Lagrange inversion theorem the same as those from my method?

And I'm new to Taylor series. I haven't studied them in school yet; I've just read about them on the internet. So I still don't understand what this radius of convergence means. I mean, we have regularly increasing factorials in the denominators of the Taylor series, so I think the series will always converge. Even if ##(x-c)## is large, then after a large number of terms I think the factorials will become dominating, and the denominators will be so large that even the large exponent of ##(x-c)## won't contribute much to the answer. I think the Taylor series will always converge even if ##c## is not in some required range.
 
Last edited:
  • #8
Kumar8434 said:
So I still don't understand what this radius of convergence means.
A series expressed as a sum of terms that are functions of a variable ##x## may not converge for all possible values of ##x##. If it converges for only those values of ##x## in an interval ##(a,b)##, it is customary to write that interval as ##(c - r, c + r)## where ##r## is the "radius of convergence" and ##c## is the center of the interval. The term "radius" makes more sense when we are dealing with functions of a complex variable. Then there can be a circle of convergence in the complex plane. That circle has a radius.

Taylor series are an example of a "power series" since the terms involve sums of powers of the variable ##x##. Power series have some interval or circle of convergence. (If they converge for all values of ##x##, we count that as an "infinite" radius of convergence.) A non-power series might not have an interval or circle of convergence. The values at which it converges might form some irregular shape.
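For instance, the power series
$$\sum_{n=0}^{\infty}\frac{x^n}{2^n}=\frac{1}{1-x/2}$$
converges exactly for ##x## in the interval ##(-2,2)##: the center is ##0## and the radius of convergence is ##2##.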

Kumar8434 said:
I mean, we have regularly increasing factorials in the denominators of the Taylor series, so I think the series will always converge.
No, it won't. I suggest you make a post on the topic of "Are there smooth functions whose Taylor series diverges?". That is a specific mathematical question - and maybe someone will answer your post besides me!

There are also examples of functions ##f## whose Taylor series converges at a value ##x=a##, but does not converge to ##f(a)##.
Kumar8434 said:
Even if ##(x-c)## is large, then after a large number of terms I think the factorials will become dominating, and the denominators will be so large that even the large exponent of ##(x-c)## won't contribute much to the answer.

More is involved than the factorials and the powers of ##(x-c)##; you also have to consider the other factors in the terms.
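For example, the ##n##-th derivative of ##f(x)=\frac{1}{1-x}## at ##0## is ##n!##, so the factorials in the denominators are cancelled exactly:
$$\frac{1}{1-x}=\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n=\sum_{n=0}^{\infty}x^n,$$
and the series diverges for ##|x|\ge 1## even though every term has an ##n!## in its denominator.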
 
  • #9
Kumar8434 said:
Are the coefficients obtained from the Lagrange inversion theorem the same as those from my method?
That's a good question. I don't know the answer. You like examples. Try the quadratic equation both ways.
 
  • #10
Stephen Tashi said:
That's a good question. I don't know the answer. You like examples. Try the quadratic equation both ways.
I evaluated the first and second coefficients for ##f(x)=px^2+qx##. They were the same, so I think both methods give the same series.
Except the Lagrange formula for the coefficients seems to be a better version of my formula. I think my formula can be simplified to get that one. So, I re-discovered this Lagrange inversion on my own!
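For the record, here is the check against the closed form. The root of ##px^2+qx=c## near ##0## is
$$f^{-1}(c)=\frac{-q+\sqrt{q^2+4pc}}{2p}=\frac{c}{q}-\frac{p\,c^2}{q^3}+\frac{2p^2c^3}{q^5}-\dots$$
and with ##a_1=q##, ##a_2=p## the first two terms match the derivatives from post #5: ##f^{-1'}(0)\,c=\frac{c}{q}## and ##\frac{f^{-1''}(0)}{2!}c^2=-\frac{p\,c^2}{q^3}##.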
Thanks. I'll stop asking these types of questions now if they resemble crackpotism.
 
  • #11
Kumar8434 said:
So, I re-discovered this Lagrange inversion on my own!
Yes, the real variable version.

Is there a way the series expansion can reveal two unequal roots of a quadratic?

Kumar8434 said:
Thanks. I'll stop asking these types of questions now if they resemble crackpotism.

They only resemble crackpotism if you assert things are true just because of numerical examples.

When people first learn some concepts of calculus, this can result in a kind of mental intoxication. The possibilities for doing algebraic manipulations suddenly explode. One day it seems hard just to solve quadratic equations and then suddenly the sky's the limit. You should enjoy the intoxication while it lasts.

What you will eventually encounter is that algebraic manipulations are not completely reliable - especially when the subject matter deals with "infinitely" small or large things. In particular, you need to be wary of thinking of a function as an algorithm that can be expressed by a concise formula. The definition of a function is permitted to have all sorts of complicated "if...then..." conditions.

There is a branch of mathematics called "The Calculus of Finite Differences" where algebra is less inhibited. (e.g.
https://www.physicsforums.com/threa...sequences-recursive-reps.892720/#post-5619687
https://www.physicsforums.com/threa...interpolation-polynomial.903217/#post-5688309 )
 
  • #12
Stephen Tashi said:
So an important question is whether ##x = c## lies within that radius of convergence.
Can we first use Newton's method to get an approximate root, and then apply Lagrange inversion around that point? I don't know Newton's method, but I've heard it's also a method for getting polynomial roots. So, what would be better: to use Newton's method all along, or to use Newton's method first to get an approximate root and then apply Lagrange inversion?
 
  • #13
Stephen Tashi said:
Of course, if you could develop an algorithm that found just one of the roots, that would be progress.

I think that in general there can be no such algorithm; that is, you cannot find just one root unless there is something special about the polynomial, or you have been given some extra fact about it, like knowing for some reason that one root is twice another, which is tantamount to having two equations to eliminate between. Obviously, if you could find one root by a general algorithm, you could find them all. But in general, trying to pick out one root is like trying to pin down a globule of mercury. Every expression you come up with involves all the roots in an indistinguishable, i.e. symmetrical, fashion.
 
  • #14
Kumar8434 said:
Can we first use Newton's method to get an approximate root, and then apply Lagrange inversion around that point?

Newton's method requires an initial estimate (which can be a guess) for a root. So Newton's method does not locate all the roots of an equation.

Are you interested in computer programming? Solving problems with numerical methods involves many approximations, and you seem to be interested in numerical approximations. Programming also makes it easier to study methods like Newton's method, because putting the steps of an algorithm into computer code makes them memorable.
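A minimal C++ sketch of Newton's iteration ##x \leftarrow x - \frac{f(x)}{f'(x)}## (the sample polynomial ##x^2+5x-0.3## and the starting guess are arbitrary choices for illustration):

```cpp
#include <cstdio>
#include <cmath>

// Newton's method: repeatedly replace the guess by the x-intercept
// of the tangent line, x <- x - f(x)/f'(x).
int main() {
    auto f  = [](double x) { return x*x + 5.0*x - 0.3; }; // f(x) = x^2 + 5x - 0.3
    auto fp = [](double x) { return 2.0*x + 5.0; };       // f'(x)
    double x = 0.0;                                       // initial guess
    for (int i = 0; i < 50; ++i) {
        double step = f(x) / fp(x);
        x -= step;
        if (std::fabs(step) < 1e-12) break;               // converged
    }
    std::printf("root near the starting guess: %.12f\n", x);
    return 0;
}
```

Which root the iteration converges to depends on the starting guess, which is exactly why it does not locate all the roots by itself.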
 
  • #15
Stephen Tashi said:
Newton's method requires an initial estimate (which can be a guess) for a root. So Newton's method does not locate all the roots of an equation.

Are you interested in computer programming? Solving problems with numerical methods involves many approximations, and you seem to be interested in numerical approximations. Programming also makes it easier to study methods like Newton's method, because putting the steps of an algorithm into computer code makes them memorable.
Yeah, I've been learning it for a month now. So far I can only do something like prime factorization in C++. I'll try to develop my skills.
 
  • #16
Kumar8434 said:
Yeah, I've been learning it for a month now. So far I can only do something like prime factorization in C++. I'll try to develop my skills.

If you started with C++ without first learning C, you began your swimming lessons by jumping into deep water!
 
  • #17
Stephen Tashi said:
If you started with C++ without first learning C, you began your swimming lessons by jumping into deep water!
I searched on the internet before I started learning, and many people said that you need not learn C first and that they're much the same thing. I'm learning from an eBook and I've had no problem learning C++ so far. I've gotten up to writing functions.
 

FAQ: Can Taylor series be used to get the roots of a polynomial?

Can Taylor series be used to find all the roots of a polynomial?

No, Taylor series can only approximate the roots of a polynomial and may not be able to find all of them. It depends on the degree and complexity of the polynomial.

How accurate is using Taylor series to find the roots of a polynomial?

The accuracy of using Taylor series to find the roots of a polynomial depends on the degree of the polynomial and the number of terms used in the series. The more terms included, the more accurate the approximation will be, provided the value being inverted lies within the series' radius of convergence.

Can Taylor series be used to find complex roots of a polynomial?

Yes, Taylor series can be used to find complex roots of a polynomial. However, it may require a large number of terms in the series to achieve a high level of accuracy.

What are the limitations of using Taylor series to find the roots of a polynomial?

The limitations of using Taylor series to find the roots of a polynomial include the inability to find all roots, the need for a large number of terms for complex polynomials, and the possibility of encountering convergence issues.

Are there other methods that are more effective in finding the roots of a polynomial?

Yes, there are other methods that may be more effective in finding the roots of a polynomial, such as the Newton-Raphson method or the bisection method. These methods may provide more accurate results or be more efficient for certain types of polynomials.
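For comparison, a minimal sketch of the bisection method mentioned above; it needs no derivatives, only an interval on which the polynomial changes sign (the sample polynomial ##x^2+5x-0.3## and the bracket ##[0,1]## are arbitrary choices for illustration):

```cpp
#include <cstdio>

// Bisection: repeatedly halve an interval [lo, hi] on which f changes sign,
// keeping the half that still contains the sign change.
int main() {
    auto f = [](double x) { return x*x + 5.0*x - 0.3; };
    double lo = 0.0, hi = 1.0;  // f(0) = -0.3 < 0 and f(1) = 5.7 > 0
    while (hi - lo > 1e-12) {
        double mid = 0.5 * (lo + hi);
        if (f(lo) * f(mid) <= 0.0) hi = mid;  // root is in [lo, mid]
        else                       lo = mid;  // root is in [mid, hi]
    }
    std::printf("bracketed root: %.12f\n", 0.5 * (lo + hi));
    return 0;
}
```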
