The Strange Behaviour of Numbers Close to Unity

In summary, the thread describes how to approximate a function near a point with the first terms of its Taylor series, and how the error shrinks as x approaches 0.
  • #1
Perplexed
I have been looking at material properties such as thermal expansion of metals which usually involves very small coefficients. The general equation of thermal expansion is usually
\(\displaystyle L_\theta = L_0 ( 1 + \alpha \theta)\)
where L is the length and theta is the temperature change. The coefficient alpha is usually pretty small, 11E-6 for steel, so one ends up with a lot of numbers like 1.000011.

This is where I seem to have entered a strange world, where
\(\displaystyle \sqrt{(1 + x)} \rightarrow 1 + x/2\)
\(\displaystyle \dfrac{1}{ \sqrt{(1 - x)}} \rightarrow 1 + x/2\)
\(\displaystyle (1 - x)^3 \rightarrow 1-3x\)
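A quick numerical check (Python, with an arbitrary thermal-expansion-sized x chosen for illustration) shows how close these approximations are:

```python
import math

# Compare each exact expression with its small-x approximation.
# x is chosen to be roughly alpha * theta for steel over ~100 degrees.
x = 1.1e-3

print(math.sqrt(1 + x), 1 + x / 2)      # sqrt(1+x)    vs 1 + x/2
print(1 / math.sqrt(1 - x), 1 + x / 2)  # 1/sqrt(1-x)  vs 1 + x/2
print((1 - x) ** 3, 1 - 3 * x)          # (1-x)^3      vs 1 - 3x
```

In each case the two printed values agree to several more decimal places than a typical measurement would resolve.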

Is there a name for this area of maths, and somewhere I can look up more about it?

Thanks for any help.

Perplexed
 
  • #2
Taylor series is one topic where (infinite) polynomials are used to approximate functions. For example,
\[
(1+x)^{1/2}=1+\frac{x}{2}+R_1(x)
\]
where $R_1(x)$ is called the remainder and is infinitely small compared to $x$ when $x$ is small. More precisely,
\[
(1+x)^{\alpha }=1+\alpha x+{\frac {\alpha (\alpha -1)}{2!}}x^{2}+\cdots+
\frac{\alpha\cdot\ldots\cdot(\alpha-n+1)}{n!}x^n+R_n(x)
\]
where $R_n(x)$ is infinitely small compared to $x^n$ when $x$ tends to $0$.
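The partial sums of this binomial series can be computed directly; here is a minimal Python sketch (the helper name `binomial_series` is mine, not a library function):

```python
def binomial_series(alpha, x, n):
    """Partial sum 1 + alpha*x + ... + [alpha(alpha-1)...(alpha-n+1)/n!] x^n."""
    total, coeff = 1.0, 1.0
    for k in range(1, n + 1):
        coeff *= (alpha - k + 1) / k  # build the binomial coefficient incrementally
        total += coeff * x ** k
    return total

x = 0.01
exact = (1 + x) ** 0.5
for n in (1, 2, 3):
    print(n, binomial_series(0.5, x, n), exact)
```

Each extra term shrinks the error by roughly a factor of x, matching the statement that the remainder $R_n(x)$ is small compared to $x^n$.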
 
  • #3
"Linear approximation". Any function, f, having a derivative at x= a, can be approximated by the "tangent line" [tex]y= f'(a)(x- a)+ f(a)[/tex]. The error will be proportional to [tex](x- a)^2[/tex] and f''(a).

For example, if [tex]f(x)= \sqrt{1+ x}= (1+ x)^{1/2}[/tex] then [tex]f'(x)= (1/2)(1+ x)^{-1/2}[/tex] so that with x= 0, [tex]f(0)= \sqrt{1+ 0}= 1[/tex] and [tex]f'(0)= (1/2)/\sqrt{1+ 0}= 1/2[/tex]. So y= f(x) is approximated, around x= 0, by [tex]y= (1/2)x+ 1[/tex] or [tex]1+ x/2[/tex].

If [tex]f(x)= \frac{1}{\sqrt{1+ x}}= (1+ x)^{-1/2}[/tex] then [tex]f'(x)= -(1/2)(1+ x)^{-3/2}[/tex] so that [tex]f(0)= \frac{1}{\sqrt{1+ 0}}= 1[/tex] and then [tex]f'(0)= -(1/2)(1+ 0)^{-3/2}= -1/2[/tex]. So y= f(x) is approximated, around x= 0, by [tex]y= -(1/2)x+1[/tex] or [tex]1- x/2[/tex]. Notice the negative sign- what you have is NOT correct.

If [tex]f(x)= (1- x)^3[/tex] then [tex]f'(x)= 3(1- x)^2(-1)= -3(1- x)^2[/tex]. [tex]f(0)= (1- 0)^3= 1[/tex] and [tex]f'(0)= -3(1- 0)^2= -3[/tex]. So y= f(x) is approximated by -3x+ 1 or 1- 3x.

You could also do the last one by actually multiplying it out: [tex](1- x)^3= 1- 3x+ 3x^2- x^3[/tex]. If x is small enough (i.e. close enough to 0) that the higher powers of x can be ignored, y= 1- 3x.

Again, these are all first order or linear approximations to the functions, not exact values.
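The tangent-line recipe above is easy to check numerically; a minimal Python sketch (the helper name `linear_approx` is mine) for the [tex](1+x)^{-1/2}[/tex] example:

```python
import math

def linear_approx(f, df, a, x):
    """Tangent-line approximation f(a) + f'(a)*(x - a)."""
    return f(a) + df(a) * (x - a)

# f(x) = 1/sqrt(1+x): f(0) = 1, f'(0) = -1/2, so f(x) ~ 1 - x/2 near 0.
x = 1e-3
approx = linear_approx(lambda t: 1 / math.sqrt(1 + t),
                       lambda t: -0.5 * (1 + t) ** -1.5, 0.0, x)
exact = 1 / math.sqrt(1 + x)
print(approx, exact, abs(approx - exact))  # error is on the order of x**2
```

The printed error is far below x itself, consistent with the error being proportional to (x - a)^2.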

(You can get the Taylor polynomial and series that Evgeny-Makarov refers to by extending those same ideas to higher powers.)
 
  • #4
HallsofIvy said:
If [tex]f(x)= \frac{1}{\sqrt{1+ x}}= (1+ x)^{-1/2}[/tex] then [tex]f'(x)= -(1/2)(1+ x)^{-3/2}[/tex]

so that [tex]f(0)= \frac{1}{\sqrt{1+ 0}}= 1[/tex] and then [tex]f'(0)= -(1/2)(1+ 0)^{-3/2}= -1/2[/tex].

So y= f(x) is approximated, around x= 0, by [tex]y= -(1/2)x+1[/tex] or [tex]1- x/2[/tex]. Notice the negative sign- what you have is NOT correct.
Thank you for your reply, it is very helpful.

Just to clear things up so that someone else looking at this doesn't get confused, in my second approximation I had [tex]f(x)= \frac{1}{\sqrt{1 - x}}[/tex] rather than the [tex]f(x)= \frac{1}{\sqrt{1+ x}}[/tex] that you started with: notice the "-" rather than "+" in the square root. It was the simple change of sign in arriving at the reciprocal that first intrigued me on this one, and your explanation makes the reason why this works clear.

Less Perplexed now
 
  • #5
Allow me to make another observation regarding this:

Scientific measurements are often given in "significant figures", the reasoning being, we can only take measurements up to a certain degree of accuracy.

So, suppose our input data can only give 6 decimal places.

If we expect we can model a function (and for many functions this is true) by:

$f(x) = a_0 + a_1x + a_2x^2 +\cdots$

And that the coefficients $a_k$ either stay "about the same size" or, even better, decrease, then if we measure $x$ to 6 decimal places, the "correction term" for $x^2$ is around 12 decimal places, in other words, much much smaller than our standards of accuracy allow.
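A quick Python illustration of this (the measurement value is hypothetical):

```python
import math

x = 1.2e-6                   # a hypothetical 6-decimal-place measurement
exact = math.sqrt(1 + x)
approx = 1 + x / 2           # first-order approximation
print(abs(exact - approx))   # the dropped correction is of order x**2
```

The discarded correction term is around 10^-13, far below what a 6-decimal measurement can resolve.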

For certain classes of "well-behaved" functions, there are means to estimate (or "bound") the size of the error, which in turn lets us know "how many terms to go out".

For small enough $x$, this kind of reasoning lets us use the approximation:

$\sin(x) \approx x$

often used in simplifying wave equations that govern oscillators, and if more accuracy is needed, the approximation:

$\sin(x) \approx x - \dfrac{x^3}{6}$ is pretty darn good.
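Comparing the two approximations numerically (angles chosen arbitrarily for illustration):

```python
import math

# Error of sin(x) ~ x (first order) vs sin(x) ~ x - x^3/6 (third order).
for x in (0.1, 0.3, 0.5):
    exact = math.sin(x)
    print(x, abs(x - exact), abs((x - x ** 3 / 6) - exact))
```

Even at x = 0.5 radians the third-order approximation is accurate to a few parts in ten thousand, while the first-order one has drifted noticeably.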
 

FAQ: The Strange Behaviour of Numbers Close to Unity

What is "The Strange Behaviour of Numbers Close to Unity"?

"The Strange Behaviour of Numbers Close to Unity" is an informal description of how numbers very close to 1 behave under common operations such as powers, roots, and reciprocals, where simple first-order approximations become surprisingly accurate and small differences can have outsized effects.

What are some examples of this strange behavior?

One example is the fact that when a number very close to 1 is raised to a large power, the result can be significantly larger or smaller than the original number. Another example is the fact that when these numbers are added or multiplied together, the result can be very different from what would be expected.
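The "raised to a large power" point can be illustrated in a couple of lines of Python:

```python
import math

# A number barely above 1, raised to a large power, drifts far from 1:
# (1 + 1/n)^n approaches e ~ 2.71828 as n grows.
n = 1_000_000
print((1 + 1 / n) ** n, math.e)
```

The base differs from 1 by only one part in a million, yet the result is nearly 2.72.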

Why does this phenomenon occur?

This behavior occurs because of the nature of the mathematical operations involved. When numbers are very close to 1, even small differences in their values can have a big impact on the outcome of the operation. Additionally, the way that computers represent and process numbers can also contribute to this phenomenon.

What are the practical applications of studying this strange behavior?

Studying this phenomenon can provide insights into the limitations and potential errors in mathematical calculations, particularly in fields such as physics, engineering, and finance where precision is crucial. It can also lead to the development of more accurate and efficient algorithms for performing mathematical operations.

Is there a mathematical explanation for this behavior?

Yes. The approximation results come from Taylor and binomial series expansions, and the computational effects come from rounding errors in floating-point arithmetic. Both are well understood and are studied systematically in numerical analysis.
