A question about Taylor and Maclaurin series

In summary: if you know the power series for a function, you can use it to approximate the function wherever the series converges. Relatedly, the Weierstrass approximation theorem states that every continuous function on a closed interval can be uniformly approximated by polynomials.
  • #1
fleazo
Hello, I have a kind of general question. I understand that the goal of a Taylor polynomial is to approximate a transcendental function with a polynomial, which makes things easier to deal with sometimes. This works by choosing a polynomial P that behaves like the transcendental function f, at least in a localized area: for example f(c) = P(c). As we add additional constraints, like f'(c) = P'(c) and f''(c) = P''(c), the approximation becomes better and better, until eventually we can come up with a power series that equals the function exactly.

My question is: WHY does matching higher-order derivatives make the polynomial a better and better approximation? Intuitively it makes sense: the two functions have more in common, so it seems natural that they should agree more closely. I just don't understand what it is that makes the approximation better. Why are the higher-order derivatives so important? I feel like I understand how it works; I just don't understand why.
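A quick numerical illustration of the phenomenon being asked about (a hedged sketch in Python, my own addition rather than part of the thread; sin(x), the degrees, and the interval are arbitrary choices):

```python
import math

def taylor_sin(x, n):
    # Maclaurin polynomial of sin(x) keeping terms up to degree 2n + 1:
    # sin(x) = x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n + 1))

# Matching more derivatives at 0 shrinks the worst-case error on [-1, 1].
xs = [i / 100 for i in range(-100, 101)]
for n in range(4):
    err = max(abs(taylor_sin(x, n) - math.sin(x)) for x in xs)
    print(f"degree {2*n + 1}: max error on [-1, 1] is about {err:.2e}")
```

Each extra matched derivative drops the worst-case error on the interval by several orders of magnitude, which is the "better and better" the question is about.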
 
  • #2
The way I look at it is just intuition. Wouldn't it make sense that if the Taylor polynomial were as similar as possible to the function at one point, then it would be a good approximation in the surrounding area?

Being equal at higher derivatives just increases the level of similarity.
 
  • #3
Nice question. I don't have an actual theorem/result to cite, but I would say that, since f is differentiable at x = c, the change in f is locally linear near c, and f'(c) describes with some precision the line that describes this change. The second derivative gives a more precise description of how f'(x) changes, i.e., how the linear approximation changes, and so on.
 
  • #4
Thanks for the responses, guys! I think my problem was a fundamental lack of understanding of what exactly higher-order derivatives represent. It's been easy for me to go through life just knowing that a derivative measures the rate of change, without thinking much about or visualizing what a second derivative, or third derivative, and so on, tells us. Looking into it now, I can see that on a 2D graph, for example, the second derivative tells us, like you said, how f'(x) changes, i.e., the curvature of the graph at that point. So it makes sense now why a higher-order Taylor polynomial approximates a function better: as we keep adding terms, the graph conforms (locally) more and more to the function we are trying to approximate.

I guess my problem now is that I can visualize a first derivative and what it means, and a second derivative and what it means, but is it possible to visualize higher-order derivatives? I imagine we get to a point where, instead of thinking "first derivative means this, second derivative means that," a general pattern emerges in terms of visualizing things. Thanks for the responses, this really got me thinking! Also, Taylor polynomials are cool!
 
  • #5
I don't know if this is 100% an explanation, but there is also a result called the Weierstrass approximation theorem: every continuous function on a closed interval can be approximated arbitrarily well by polynomials (the polynomials are dense in the continuous functions on that interval).

http://en.wikipedia.org/wiki/Stone–Weierstrass_theorem
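The constructive proof of the Weierstrass theorem on [0, 1] uses Bernstein polynomials, which sample the function at n + 1 evenly spaced points. A minimal Python sketch (my own illustration, not from the thread) approximating the non-smooth function |x - 1/2|, which has no Taylor series at its kink:

```python
from math import comb

def bernstein(f, n, x):
    # n-th Bernstein polynomial of f on [0, 1], evaluated at x:
    # B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# |x - 1/2| is continuous but not differentiable at 1/2, so it has no
# Taylor expansion there; Bernstein polynomials still approximate it.
f = lambda x: abs(x - 0.5)
xs = [i / 50 for i in range(51)]
for n in (5, 20, 80):
    err = max(abs(bernstein(f, n, x) - f(x)) for x in xs)
    print(f"n = {n:3d}: max error on [0, 1] is about {err:.3f}")
```

The convergence is slow near the kink, but it is uniform, which is exactly what the theorem promises and what Taylor series cannot deliver for non-smooth functions.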
 
  • #6
A derivation of the Taylor coefficients can be presented as follows. Assume that a function [itex]f(x)[/itex] has a power series in a given interval, in other words, assume it is analytic there. Then
[tex]f(x)=\sum_{k=0}^{\infty}c_k (x-a)^k[/tex]
where [itex]c_k[/itex] are coefficients and a is a constant. Differentiating once gives
[tex]f'(x)=\sum_{k=1}^{\infty}c_k k(x-a)^{k-1}[/tex]
It can be shown by induction that differentiating n times gives
[tex]f^{(n)}(x)=\sum_{k=n}^{\infty}c_k \frac{k!}{(k-n)!} (x-a)^{k-n}=c_n n!+\sum_{k=n+1}^{\infty}c_k \frac{k!}{(k-n)!} (x-a)^{k-n}[/tex]
Setting [itex]x=a[/itex] kills every term except the constant one and gives [itex]\displaystyle c_n=\frac{f^{(n)}(a)}{n!}[/itex]. Note that this rests on the assumption that the power series exists, i.e., that f is analytic in the region where the expansion holds.
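This term-by-term differentiation can be sanity-checked with a short Python sketch that operates on a finite coefficient list, exactly as in the derivation above (the particular coefficient values below are arbitrary):

```python
from math import factorial

# A polynomial in (x - a), represented by its coefficient list [c_0, c_1, ...].
def deriv(coeffs):
    # d/dx of sum c_k (x - a)^k is sum c_k * k * (x - a)^(k - 1)
    return [k * c for k, c in enumerate(coeffs)][1:]

def value_at_a(coeffs):
    # Setting x = a kills every term except the constant one.
    return coeffs[0] if coeffs else 0

c = [3, -1, 4, 1, -5, 9]       # arbitrary coefficients c_0 .. c_5
for n in range(len(c)):
    p = c[:]
    for _ in range(n):
        p = deriv(p)           # differentiate n times
    assert value_at_a(p) == c[n] * factorial(n)
print("f^(n)(a) = n! * c_n holds for every n")
```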
 
  • #7
For example, consider the function e^x. All of its derivatives equal 1 at x = 0. When you know a function's derivative is 1 at a point, you can estimate the function by a straight line, and that line is reasonably accurate on some interval h around 0. Now suppose you know the tenth derivative is 1 at zero. Then the 9th derivative, reconstructed from its value at that single point, is reasonably accurate on the interval h; and just as the value at one point was accurate over an interval, the values near the endpoints of h are themselves accurate over some further interval, so the 8th derivative is accurate on a larger interval, and so on.

Basically, you can consider a single point to be an accurate enough approximation of its antiderivative on an interval; since the antiderivative is accurate on that interval, you can construct an accurate antiderivative of the antiderivative there, plus expect accuracy on an additional interval around the endpoints, as with the first antiderivative, and so on.
 
  • #8
fleazo said:
My question is, WHY is it that as the higher order derivatives match, the function becomes a better and better approximation?

Maybe it doesn't.

You could probably invent a smooth function that has sharp curvature at the point x = x0, while a little further to the right its graph becomes asymptotic to the line that is tangent to the function at x = x0. That would create places where the linear approximation is a better approximation than a higher-degree polynomial approximation.

You did indicate that your focus was on points close to x = x0, so perhaps that's an "unfair" example. But it shows that you'd have to make your question more precise: not for the sake of a formal proof, I mean you have to make it precise just to get a correct intuitive answer.
 
  • #9
Stephen Tashi said:
Maybe it doesn't.

You could probably invent a smooth function that has sharp curvature at the point x = x0, while a little further to the right its graph becomes asymptotic to the line that is tangent to the function at x = x0. That would create places where the linear approximation is a better approximation than a higher-degree polynomial approximation.
...

Is it possible for a smooth function to have sharp curvature?
 
  • #10
Bacle2 said:
Is it possible for a smooth function to have sharp curvature?

It can't have a point of non-differentiability, but for this example it just needs to curve away from the tangent line and then curve back to it.

Here are some ways of making the question of the original post precise:

Let f(x) be a function (from the reals to the reals) that is infinitely differentiable in some open interval O containing the point x = c.
Let T[n,x] be the nth-degree Taylor polynomial formed from the terms up through degree n of the Taylor series for f(x) expanded about x = c.

Version 1: ("uniformly" pointwise better) There exists an open interval S containing c such that for each pair of non-negative integers M and N with M > N and for each point p in S, | T[M,p] - f(p) | <= | T[N,p] - f(p) |.

Version 2: (pointwise better) For each pair of non-negative integers M and N with M > N there exists an interval S containing c such that for each point p in S, | T[M,p] - f(p) | <= | T[N,p] - f(p) |.

Version 3: ("uniformly" least-squares better) There exists a closed interval S containing c such that for each pair of non-negative integers M and N with M > N, the integral of (T[M,p] - f(p))^2 dp over S is less than or equal to the integral of (T[N,p] - f(p))^2 dp over S.

Version 4: (least-squares better) For each pair of non-negative integers M and N with M > N there exists a closed interval S containing c such that the integral of (T[M,p] - f(p))^2 dp over S is less than or equal to the integral of (T[N,p] - f(p))^2 dp over S.
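For the concrete case f = sin and c = 0, a short Python check is consistent with Version 2 behavior near c (this is an illustration for one function at a few sample points, not a proof of any of the versions):

```python
import math

def t_sin(x, deg):
    # Taylor polynomial of sin about c = 0, keeping terms of degree <= deg.
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(deg // 2 + 1) if 2*k + 1 <= deg)

# Near c = 0 the higher-degree polynomial T7 beats the lower-degree T3
# pointwise; for the "unfair" functions described above, this ordering
# can fail far from c.
for x in (0.1, 0.5, 1.0):
    e3 = abs(t_sin(x, 3) - math.sin(x))
    e7 = abs(t_sin(x, 7) - math.sin(x))
    print(f"x = {x}: |T7 - sin| = {e7:.2e}  vs  |T3 - sin| = {e3:.2e}")
```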
 

FAQ: A question about Taylor and Maclaurin series

What is the difference between a Taylor series and a Maclaurin series?

A Taylor series represents a function as an infinite sum built from the function's derivatives at a single point, while a Maclaurin series is the special case of a Taylor series where the point of expansion is x = 0.

How do you find the Taylor and Maclaurin series of a function?

To find the Taylor series of a function, you can use the formula:
f(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2! + ... + f^(n)(a)(x-a)^n/n! + ...
To find the Maclaurin series, use the same formula with a = 0.
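For example, every derivative of e^x equals 1 at x = 0, so the Maclaurin coefficients of e^x are 1/k!. A minimal Python sketch of the truncated series (the number of terms is an arbitrary choice):

```python
import math

def maclaurin_exp(x, n_terms):
    # Every derivative of e^x is e^x, which equals 1 at x = 0,
    # so the k-th Maclaurin coefficient is 1/k!.
    return sum(x**k / math.factorial(k) for k in range(n_terms))

print(maclaurin_exp(1.0, 15))   # approximates e = 2.71828...
print(math.e)
```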

Can both series be used to approximate any function?

Not every function: the function must be infinitely differentiable at the expansion point, and even then the series need not converge to the function everywhere. Where the series does converge to the function, the accuracy of a truncated approximation depends on the function and the number of terms used.

What is the importance of Taylor and Maclaurin series in mathematics?

Taylor and Maclaurin series are important because they let us approximate complicated functions with simple polynomials. They are also used throughout calculus and analysis, for example to evaluate limits and to approximate integrals that have no elementary antiderivative.

What is the difference between an infinite and a finite Taylor or Maclaurin series?

An infinite series includes infinitely many terms and, where it converges to the function, represents it exactly. A finite (truncated) series includes only a limited number of terms and gives an approximation, which is often accurate enough near the expansion point.
