What's the point of Taylor/Maclaurin series?

In summary, Taylor series are a useful tool for approximating functions as an infinite sum of terms calculated from the values of the function's derivatives at a single point. They show up in many practical applications, such as solving difficult differential equations, finding roots of functions, and approximating functions in real-world problems like the electric field of a dipole. In such cases the binomial expansion, a special case of the Taylor series, is often the most convenient form.
  • #1
Zack K
We were informally introduced to Taylor series in my physics class as a method to derive an equation for the electric field at a point far away from a dipole (both the dipole and the point lie on the same axis). Basically, for the electric field: $$\vec E_{axis}=\frac{q}{4\pi\varepsilon_0}\left[\frac{1}{(x-\frac s2)^2}- \frac{1}{(x+\frac s2)^2}\right]$$
where ##s## is the separation of the charges in the dipole and ##x\pm\frac s2##, or ##r_\pm##, are the distances from each charge to the point a distance ##x## away (you can see that one charge will be farther from the point than the other).

Well, we learned that if ##r_\pm\gg s##, then the equation becomes just: $$\vec E_{axis}=\frac{1}{4\pi\varepsilon_0}\frac{2qs}{r^3}$$ We used Taylor series to explain this phenomenon. I understood the mathematics of Taylor series, and I get the usual definition; my problem is I'm struggling to understand how a Taylor series actually works and what the point of it is.

What are you actually approximating in a Taylor series, and why is it even useful if you have the equation in the first place? What's so special about wanting to choose a random point on a graph and make it look like the graph itself?

Also, I don't understand how you can apply this in physics, for example in the dipole derivation, and why it works. To me, without using a Taylor series, it made sense that if the distance to the point is much larger than the separation of the charges, then obviously that separation would be so insignificant for the electric field that you don't even have to include it.
 
  • #2
Short answer: Taylor series give approximations of important functions (sine, cosine, exponentials, etc.) by polynomials. Computers can handle polynomials pretty well.

For example, since ##\exp(z) = \sum_{n=0}^\infty \frac{z^n}{n!}##, we can approximate ##\exp(z)## (uniformly on bounded sets!) by the polynomials ##\sum_{n=0}^k \frac{z^n}{n!}##, where you can make the approximation error as small as you like by choosing ##k## sufficiently large (how large exactly can be determined by a precise error bound in this case).
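To make that concrete, here is a minimal Python sketch (my own illustration, not from the post) that truncates the series at ##k## terms and compares against the library exponential:

```python
import math

def exp_taylor(z, k):
    """Partial sum of the Maclaurin series for exp(z): terms n = 0..k."""
    total, term = 0.0, 1.0  # term starts at z^0/0! = 1
    for n in range(k + 1):
        total += term
        term *= z / (n + 1)  # turns z^n/n! into z^(n+1)/(n+1)!
    return total

z = 1.5
for k in (2, 5, 10):
    err = abs(exp_taylor(z, k) - math.exp(z))
    print(f"k={k:2d}: partial sum={exp_taylor(z, k):.10f}, error={err:.2e}")
```

The error drops rapidly as ##k## grows, exactly as the remainder bound predicts.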
 
  • #3
Math_QED said:
Short answer: Taylor series give approximations of important functions (sine, cosine, exponentials, etc.) by polynomials. Computers can handle polynomials pretty well.

For example, since ##\exp(z) = \sum_{n=0}^\infty \frac{z^n}{n!}##, we can approximate ##\exp(z)## (uniformly on bounded sets!) by the polynomials ##\sum_{n=0}^k \frac{z^n}{n!}##, where you can make the approximation error as small as you like by choosing ##k## sufficiently large (how large exactly can be determined by a precise error bound in this case).
I understand what a Taylor series does. I don't understand its practicalities and how to use it practically.
 
  • #4
Zack K said:
What are you actually approximating in a Taylor series

You are approximating a function as an infinite sum of terms which are calculated from the values of the function's derivatives at a single point.
So, given that a function ##f## is infinitely differentiable at a real or complex number ##a##, its Taylor series is

##\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n## where ##f^{(n)}(a)## is the ##n##th derivative of ##f## at the point ##a##.

Zack K said:
and why is it even useful if you have the equation in the first place?

You may, for instance, have a differential equation which is difficult to solve directly, where a Taylor series can give a good approximation around some specific point. Other example uses are evaluating infinite sums and limits.
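As a concrete illustration of the differential-equation use (a sketch of my own, using the textbook example ##y' = y## with ##y(0) = 1##): substituting a power series ##y = \sum_n c_n x^n## into the equation and matching coefficients gives the recurrence ##c_{n+1} = c_n/(n+1)##, which is trivial to evaluate:

```python
def series_solve(n_terms):
    """Power-series coefficients for y' = y with y(0) = 1.

    Matching coefficients of x^n on both sides of y' = y
    gives (n + 1) * c_{n+1} = c_n, i.e. c_{n+1} = c_n / (n + 1).
    """
    c = [1.0]  # c_0 = y(0) = 1
    for n in range(n_terms - 1):
        c.append(c[n] / (n + 1))
    return c

coeffs = series_solve(12)
x = 0.5
print(sum(cn * x**n for n, cn in enumerate(coeffs)))  # ~1.64872 = exp(0.5)
```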
 
  • #5
QuantumQuest said:
You are approximating a function as an infinite sum of terms which are calculated from the values of the function's derivatives at a single point.
So, given that a function ##f## is infinitely differentiable at a real or complex number ##a##, its Taylor series is

##\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n## where ##f^{(n)}(a)## is the ##n##th derivative of ##f## at the point ##a##.
You may, for instance, have a differential equation which is difficult to solve directly, where a Taylor series can give a good approximation around some specific point. Other example uses are evaluating infinite sums and limits.
So would this be like Newton's method for finding the roots of a function, where the more terms ##n## you include, the closer the approximation you get?
 
  • #6
Zack K said:
So would this be like Newton's method for finding the roots of a function, where the more terms ##n## you include, the closer the approximation you get?

Newton's method, a.k.a. the Newton-Raphson method, is basically a root-finding algorithm that uses the first two terms (the linearization) of the Taylor series of a function ##f## in the neighborhood of a suspected root.
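A minimal sketch of that idea (my own, with ##f(x) = x^2 - 2## as an assumed example): truncate the Taylor series at the linear term, ##f(x) \approx f(x_n) + f'(x_n)(x - x_n)##, set it to zero, and solve for the next guess:

```python
def newton(f, df, x, steps=6):
    """Newton-Raphson: each step solves the first-order Taylor model
    f(x_n) + f'(x_n)(x - x_n) = 0 for x."""
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2) = 1.41421356...
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0))
```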
 
  • #7
Zack K said:
I understand what a Taylor series does. I don't understand its practicalities and how to use it practically.

I just gave you an example of how you can use it practically...
 
  • #8
Math_QED said:
I just gave you an example of how you can use it practically...
I meant more how to use it on a real-world level rather than using it for a graph. For example, in my dipole problem, I don't understand how a Taylor series is used in that sense.
 
  • #9
Zack K said:
I meant more how to use it on a real-world level rather than using it for a graph. For example, in my dipole problem, I don't understand how a Taylor series is used in that sense.

The second equation that you gave in your original post:

Zack K said:
$$\vec E_{axis}=\frac{1}{4\pi\varepsilon_0}\frac{2qs}{r^3}$$

Is clearly easier to study than your first:

Zack K said:
$$\vec E_{axis}=\frac{q}{4\pi\varepsilon_0}\left[\frac{1}{(x-\frac s2)^2}- \frac{1}{(x+\frac s2)^2}\right]$$

It's not obvious from this form that the field falls off as ##1/r^3##.

And, what about for points not on the x-axis? Can you describe the behaviour of the dipole for those points?

Note that studying an electric field, for example, is different from being able to calculate the field at a given coordinate. You may be interested in how the field varies generally with distance and polar angle. For that, an approximation is often the key. And the Taylor series is often the best method of approximating a function; in this case it takes the form of the binomial expansion.
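A quick numerical check (my own sketch, with made-up values ##q = 1##, ##s = 0.01##, in units where ##1/(4\pi\varepsilon_0) = 1##) shows the exact on-axis field approaching the ##2qs/x^3## approximation as ##x## grows:

```python
q, s, k = 1.0, 0.01, 1.0  # k stands in for 1/(4*pi*eps0); illustrative units

def e_exact(x):
    """Exact on-axis field of the two point charges."""
    return k * q * (1 / (x - s / 2) ** 2 - 1 / (x + s / 2) ** 2)

def e_dipole(x):
    """Leading-order (binomial/Taylor) approximation, valid for x >> s."""
    return k * 2 * q * s / x ** 3

for x in (0.1, 1.0, 10.0):
    exact, approx = e_exact(x), e_dipole(x)
    print(f"x={x:5.1f}: exact={exact:.6e}, approx={approx:.6e}, "
          f"rel. error={abs(exact - approx) / exact:.1e}")
```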
 
  • #10
Zack K said:
What are you actually approximating in a Taylor series, and why is it even useful if you have the equation in the first place?
You have the function, but you often cannot easily get its derivative or integral. It's easy to do both of those with the powers of ##x## in a Taylor series. There is a lot more to the theory of power series than you can imagine at this point. The entire study of analytic functions of a complex variable is built around it, and the consequences are profound.
 
  • #11
Zack K said:
I meant more how to use it on a real-world level rather than using it for a graph. For example, in my dipole problem, I don't understand how a Taylor series is used in that sense.

Did you learn in elementary physics that gravitational potential energy is ##mgh##? Did you also learn elsewhere that gravitational potential energy is ##-\frac {GM m}{r}##? Why are those both true?

Answer: Because the first one is derived from a first-order Taylor series approximation to the exact formula. It's good so long as ##h## is small compared to ##r##, the radius of the Earth. What is "small compared to"? The theory of Taylor series answers that question precisely. If you need an answer accurate to 1 part in 100, then you know how small ##h/r## needs to be. If you need an answer accurate to 1 part in 1000, then you know how small ##h/r## needs to be for that. If you need a slightly more accurate answer, but still not the exact answer, you can include a second term of the Taylor series.
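Spelling out that expansion (my own addition for clarity), with ##R## the radius of the Earth and ##h## the height above the surface:
$$U(R+h) = -\frac{GMm}{R+h} = -\frac{GMm}{R}\left(1 + \frac{h}{R}\right)^{-1} \approx -\frac{GMm}{R}\left(1 - \frac{h}{R}\right) = -\frac{GMm}{R} + mgh,$$
where ##g = GM/R^2##. The constant first term can be dropped, since only differences in potential energy matter, leaving the familiar ##mgh##.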

That's not done often, but what IS done is a power series of several terms when modeling the precise gravitational field of the Earth with all its irregularities, mountain ranges, etc. You need that to accurately predict how things in orbit behave and for ballistic calculations like where long-range missiles will land. There is no exact functional form. There is only an experimentally-measured power series approximation, with enough terms to give the desired accuracy.

Very very often in physics what we're using is a first-order or occasionally second-order Taylor approximation to something else, because it's much easier to calculate with and gives good enough results. Pendulum motion is one that comes to mind. Diffraction patterns are another. And dipoles are another. You say you don't know how Taylor series are used, yet you quote the ##1/r^3## Taylor series approximation. That ##1/r^3## is not in the original equation. It's from Taylor series.
 
  • #12
Zack K said:
my problem is I'm struggling to understand how a Taylor series actually works and what the point of it is.

Particular examples of series are less puzzling if you understand the general ideas associated with series. To generalize your question:

Why express a "known" function ##w(x)## as any sort of series ##w(x) = c_0 f_0(x) + c_1f_1(x) + c_2f_2(x) + ...## ?

Famous examples: Taylor series, Fourier series, series of orthogonal polynomials

The utility of series stems from the fact that many important (in the practical sense) mathematical operations are linear operations:
##L(f(x) +g(x)) = Lf(x) + Lg(x)##
## L( c f(x)) = c L(f(x))##

Examples (for "nice" functions):

Taking Limits:
##\lim_{x \rightarrow a} (f(x) + g(x)) = \lim_{x \rightarrow a} f(x) + \lim_{x \rightarrow a} g(x)##
##\lim_{x \rightarrow a} c\, f(x) = c\, \lim_{x \rightarrow a} f(x)##

(A limit of the type ##\lim_{x \rightarrow \infty}## is relevant to your particular question.)

Differentiation:
##D( f(x) + g(x) ) = f'(x) + g'(x) = D f(x) + D g(x)##
##D( c f(x)) = c f'(x) = c D f(x)##

Integration:
##\int_a^b (f(x) + g(x)) dx = \int_a^b f(x) dx + \int_a^b g(x) dx##
##\int_a^b c f(x) dx = c \int_a^b f(x) dx ##

Multiplication by a fixed function, such as ##e^x##:
##M f(x) = e^x f(x)##
##M (f(x) + g(x)) = e^x f(x) + e^x g(x) = M(f(x)) + M(g(x))##
##M( c f(x)) = e^x c f(x) = c M f(x)##

Of course not all significant operations are linear. Example: let ##S f(x) = (f(x))^2##. Then ##S( f(x) + g(x)) \ne S f(x) + S g(x)##.

And there are ways of expressing functions in terms of other functions that are not series.
Example, continued fractions: ##w(x) = c_0 + c_1/( f_1(x) + c_2/( f_2(x) + c_3/ (f_3(x) + ... ))) ##

It is the frequent practical significance of linear operations that makes series expansions important. It is often easier to analyze, or actually to compute, the effect of a linear operation by applying it term-by-term to the series expansion of a function ##w(x)##:
##L(w(x)) = L( c_0 + c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + ...) = L(c_0) + c_1 L(f_1(x)) + c_2 L(f_2(x)) + c_3 L(f_3(x)) + ...##

The oft-occurring practical situation is that we are able to ignore the trailing terms of the series.

------

What makes some series famous and useful and others obscure? For example, why not have things like
##w(x) = c_0 + c_1 x^{1/2} + c_2 x^{2/3} + c_3 x^{3/4} + ...##
or
##w(x) = c_0 + c_1 x \sin(x) + c_2 x^2 \sin(x^2) + c_3 x^3 \sin(x^3) + ...## ?

Famous series of functions are introduced by teaching students how to find the coefficients ##c_0, c_1, c_2,..##. It's natural that students take for granted that unique values for these coefficients can be found! But famous series are actually special cases in this regard.

For a series expansion of a function ##w(x)## in terms of functions ##f_1, f_2, f_3,...## to be useful, we need:
1) Solutions for the coefficients ##c_0, c_1, c_2,...## to exist
2) Solutions for the coefficients to be unique (i.e. we don't have possibilities like ##w(x) = 1 + 3.8 f_1(x) + 9.2 f_2(x) + ... = 7 + 0.6 f_1(x) + 12.4 f_2(x) + ...##)
3) There are convenient procedures for finding the coefficients.

The details of how to find the coefficients vary from famous series to famous series. However, the general idea is analogous to the concept of expressing a vector ##w## as a sum of other vectors ##f_0, f_1, f_2## as ##w = c_0 f_0 + c_1 f_1 + c_2 f_2##.
We find the coefficient ##c_j## by projecting ##w## onto ##f_j## and looking at the magnitude of the projected vector.

For example, to find the ##n##-th coefficient of the Maclaurin series for ##w(x)## we have ##c_n = \frac{(D^n w(x))|_{x=0}}{n!}##, which "picks out" the coefficient ##c_n## from the set of coefficients ##\{c_0,c_1,c_2,...\}##.

A notable feature of the projection of vectors is that it is a linear operation. i.e. Let ##L_{f_1} (w) ## represent the vector that results from projecting vector ##w## onto vector ##f_1##. We have ## L_{f_1}(v + w) = L_{f_1}(v) + L_{f_1}(w)## and ##L_{f_1}(c w) = c L_{f_1}( w)## where ##c## is a scalar.

The procedures for finding the terms of famous series are also linear operations. This is a significant contribution to making them convenient. For example, let ##L_n(w(x))## represent the operation of finding the ##n##-th term of the Maclaurin series for ##w(x)##, i.e. ##L_n(w(x)) = \frac{ ( D^n w(x))|_{x=0}}{n!} x^n##.
We have ##L_n(v(x) + w(x)) = L_n(v(x)) + L_n(w(x))## and ##L_n( c w(x)) = c L_n(w(x))##
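As a small illustration of operating on a series term-by-term (a sketch of my own, not from the post above): differentiating the Maclaurin series of ##\sin x## one term at a time reproduces the series of ##\cos x##.

```python
import math

def sin_coeffs(n_terms):
    """Maclaurin coefficients of sin x: 0, 1, 0, -1/3!, 0, 1/5!, ..."""
    return [0.0 if n % 2 == 0 else (-1) ** ((n - 1) // 2) / math.factorial(n)
            for n in range(n_terms)]

def diff_series(c):
    """Apply D term-by-term: D(sum c_n x^n) = sum n*c_n x^(n-1)."""
    return [n * cn for n, cn in enumerate(c)][1:]

def eval_series(c, x):
    return sum(cn * x ** n for n, cn in enumerate(c))

cos_coeffs = diff_series(sin_coeffs(15))
x = 0.7
print(eval_series(cos_coeffs, x), math.cos(x))  # both ~0.764842
```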
 
  • #13
Almost nothing in physics can be calculated exactly. The next best thing is to approximate. Enter Taylor series, turning complicated functions into easily manageable power series, allowing us to calculate stuff.
 
  • #14
Zack K said:
We used Taylor series to explain this phenomenon. I understood the mathematics of Taylor series, and I get the usual definition; my problem is I'm struggling to understand how a Taylor series actually works and what the point of it is.
I remember asking myself the same question about the same example a long time ago when I first saw the Taylor expansion. That was before I saw much more physics in later years. Electric dipoles are usually of atomic or molecular size. If you wish to look at the electric field due to such a dipole (or a collection of them), the distance from you to the dipole(s) is usually much larger than molecular size. If nothing else, you can't get closer than the size of your nose :smile:. As others have remarked, both the exact and the approximate expressions are valid at that distance; however, the Taylor expansion makes it easier to interpret the algebraic expression. Try calculating the torque on one dipole due to the presence of another dipole at distance ##r \gg s## using the exact expression and you will see what I mean. You will end up with an expression with four different vectors (e.g. ##\vec r_{++},\vec r_{+-},\vec r_{-+},\vec r_{--}##) that most people will find difficult to interpret. Then try it using the dipole approximation, in which a single separation vector ##\vec r## and two dipoles ##\vec p_1## and ##\vec p_2## are involved. Why attempt to kill a fly with a sledgehammer if a fly swatter does the job just as well?
 
  • #15
Actually, I met the great-great ... -grandson of Colin Maclaurin last year! I don't know if that's another good reason to use the series, but there you go.
 
  • #16
PeroK said:
Actually, I met the great-great ... -grandson of Colin Maclaurin last year! I don't know if that's another good reason to use the series, but there you go.
Being a great-great ... -grandson, the person you met must be considered a much higher-order term in the series of Maclaurins than Colin. It was nice of you not to ignore him as, I am sure, others may have been doing. :smile:
 
  • #17
I met his great-great-great grandson. Actually, he was just the great-great grandson. But he was really great!
 
  • #18
One reason that series approximations are very useful in physics is that you very often have situations where the true solution to a differential equation is an extremely complicated infinite series but high-order terms don't contribute that much to the actual physics of the system that you're analyzing.

One example that comes very readily to mind is the small angle approximation ##\sin(\theta) \approx \theta##. The Taylor series for the sine function is ##\sin(\theta) = \theta -\frac{\theta^3}{3!} +\frac{\theta^5}{5!} - \frac{\theta^7}{7!} +...## But suppose that ##\theta## is very small. As ##\theta## goes to 0, the terms with power greater than 1 go to zero much more quickly than the linear term, so for small values of ##\theta## you can assume that ##\sin(\theta) \approx \theta##. Note that you should only do this when you're taking ##\theta## to be in radians.
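A quick numerical check of how good this is (my own numbers): even at ##\theta = 0.2## rad, about 11°, the linear term alone is off by less than 1%.

```python
import math

for theta in (0.05, 0.1, 0.2, 0.5):
    rel_err = abs(math.sin(theta) - theta) / math.sin(theta)
    print(f"theta={theta:4.2f} rad: sin(theta)={math.sin(theta):.6f}, "
          f"relative error: {rel_err:.2e}")
```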

A good example of this is the pendulum with small oscillations.

[Figure: a simple pendulum of length ##L## displaced through a small angle ##\theta##]
(Image source: https://opentextbc.ca/physicstestbook2/chapter/the-simple-pendulum/)

The equation of motion for ##\theta## that we get from Newton's Second Law turns out to be ##\frac{d^2\theta}{dt^2} +\frac{g}{L}\sin(\theta) = 0##. In this form, this differential equation is nonlinear, and we prefer to avoid nonlinear differential equations wherever possible. However, if we make the assumption that the oscillations are small, then the DE becomes ##\frac{d^2\theta}{dt^2} + \frac{g}{L}\theta = 0##, which is very easy to solve, and we find that the solution is ##\theta(t) = \theta_{max}\sin(\omega t)##, where ##\theta_{max}## is the greatest angle reached by the pendulum and ##\omega = \sqrt{\frac{g}{L}}##. This accurately describes the motion for a very large class of important physical systems (grandfather clocks work on this principle, for instance), and you'll be seeing systems like this a lot in your advanced mechanics classes. So to answer your question, that is what makes Taylor series so special.
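To see how good the small-angle solution is, here is a rough numerical sketch (my own, with assumed values ##g = 9.8\,\mathrm{m/s^2}##, ##L = 1\,\mathrm{m}##, and release from rest at ##\theta_{max} = 0.2## rad, for which the linear solution is ##\theta_{max}\cos(\omega t)##) comparing it against a direct integration of the full nonlinear equation:

```python
import math

g, L = 9.8, 1.0                # assumed illustrative values
omega = math.sqrt(g / L)
theta_max, t_end, dt = 0.2, 2.0, 1e-4

# Euler-Cromer integration of the nonlinear equation theta'' = -(g/L) sin(theta)
theta, vel = theta_max, 0.0    # released from rest
for _ in range(int(t_end / dt)):
    vel += -(g / L) * math.sin(theta) * dt
    theta += vel * dt

linear = theta_max * math.cos(omega * t_end)  # small-angle solution at t_end
print(f"nonlinear: {theta:.5f} rad, small-angle: {linear:.5f} rad")
```

For ##\theta_{max} = 0.2## the two agree closely; increase ##\theta_{max}## and watch them drift apart.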

You'll also want to remember the binomial approximation ##(1+x)^\alpha \approx 1+\alpha x##, which is obtained by taking the first two Taylor series terms of ##f(x) = (1+x)^\alpha##, since ##f(x) \approx f(0) + f^\prime (0) x = 1 + \alpha x##. This is valid for ##x## very close to 0.
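The dipole result from earlier in the thread is exactly this approximation at work with ##\alpha = -2##; a quick check (my own sketch):

```python
alpha = -2
for x in (0.01, 0.05, 0.1):
    exact = (1 + x) ** alpha
    approx = 1 + alpha * x
    print(f"x={x}: exact={exact:.6f}, 1+alpha*x={approx:.6f}")
```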

As for the specific case of electric dipoles: the dipole is the second term that appears in what is called the multipole expansion for the electric potential, which expands the solution ##\phi## of Laplace's equation ##\frac{\partial ^2 \phi}{\partial x^2} + \frac{\partial ^2 \phi}{\partial y^2} + \frac{\partial ^2 \phi}{\partial z^2} = 0## as a series in inverse powers of the distance from the charges (essentially a Taylor expansion of ##1/|\vec r - \vec r\,'|##). You'll learn plenty about this when you get to your E&M classes.
 


FAQ: What's the point of Taylor/Maclaurin series?

What are Taylor/Maclaurin series used for?

Taylor/Maclaurin series are used to approximate complicated functions by breaking them down into sums of simple polynomial terms.

How do Taylor and Maclaurin series differ from each other?

A Maclaurin series is simply a Taylor series centered at ##x = 0##, while a general Taylor series can be centered at any point ##a##. Centering the series near the region of interest allows more accurate approximations for functions evaluated away from 0.

What is the significance of the remainder term in Taylor/Maclaurin series?

The remainder term in Taylor/Maclaurin series represents the difference between the actual value of the function and the approximation given by the series. It allows us to determine the accuracy of the approximation and also helps in determining the number of terms needed for a desired level of accuracy.

Can Taylor/Maclaurin series be used for all functions?

No. For a Taylor/Maclaurin series to exist, the function must be infinitely differentiable at the point of expansion, meaning it must have derivatives of all orders there. Even then, the series does not necessarily converge to the function away from that point; functions for which it does are called analytic.

How are Taylor/Maclaurin series related to calculus?

Taylor/Maclaurin series are closely related to calculus as they are used to approximate functions and their derivatives. They also involve the use of limits and derivatives in their derivation and application.
