Will we get an infinitesimal x when we neglect ##x^2## in ##x+x^2##?

In summary: ##\Delta x## denotes a small but finite change in the variable ##x##, while ##dx## is a differential that, at this level, only has meaning inside expressions like ##\frac{dy}{dx}##; it is not obtained by simply letting ##\Delta x## shrink on its own.
  • #1
Mike_bb
Hello.

Let's assume that we have ##2x \Delta x + \Delta x^2##. When ##\Delta x## tends to zero we can neglect ##\Delta x^2## and we'll get ##2xdx##.
Let's assume that we have ##x + x^2##. When ##x## tends to zero we can neglect ##x^2##. Will we get an infinitesimal ##x## as such as ##dx##?

Thanks.
 
  • #2
[tex]d(x+x^2)=dx+d(x^2)=(1+2x)dx[/tex]
At ##x=0## the term ##2x## vanishes, so the formula gives ##d(x+x^2)=dx## there.
 
  • #3
anuttarasammyak said:
[tex]d(x+x^2)=dx+d(x^2)=(1+2x)dx[/tex]
Sorry, I forgot to say that ##x+x^2## is just an expression in the variable ##x##, not a differential.
 
  • #4
You would seem to like to have
[tex]\lim_{x\rightarrow 0} \frac{x+x^2}{x}=1[/tex]
But that is just the derivative
[tex]\frac{d(x+x^2)}{dx}=1+2x [/tex]
evaluated at ##x=0##.
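This limit is easy to check numerically. A quick sketch in Python (the helper name `ratio` is mine, not from the thread):

```python
# Numerical check: (x + x^2)/x = 1 + x, which tends to 1 as x -> 0,
# matching the derivative 1 + 2x of x + x^2 evaluated at x = 0.
def ratio(x):
    return (x + x**2) / x

for x in [0.1, 0.01, 0.001]:
    print(x, ratio(x))  # equals 1 + x, approaching 1
```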
 
  • #5
anuttarasammyak said:
You would seem to like to have
[tex]\lim_{x\rightarrow 0} \frac{x+x^2}{x}=1[/tex]
But it is not different from differentiation at any x
[tex]\frac{d(x+x^2)}{dx}=1+2x [/tex]
for the case of x=0.
Ok. We have two similar expressions: 1. ##2x \Delta x + \Delta x^2##, where ##\Delta x## is the variable, and 2. ##x+x^2##, where ##x## is the variable.
In case 1 we get ##2x\,dx## when ##\Delta x## tends to zero.
In case 2 we get ##x## when ##x## tends to zero.
But in the first case we end up with the infinitesimal ##dx##, while in the second case we end up with just ##x##, not an infinitesimal ##x##. Why is that?
Thanks.
 
  • #6
The definition of the derivative is
[tex]f^{'}(x):=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}[/tex]
At ##x=0##,
[tex]f^{'}(0)=\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h}[/tex]
Further, when ##f(0)=0##,
[tex]f^{'}(0)=\lim_{h\rightarrow 0}\frac{f(0+h)}{h}[/tex]
We can replace ##h## with any letter we like. If we happen to choose ##x## instead of ##h##, though it is confusing and even clashes with '##x=0##':
[tex]f^{'}(0)=\lim_{x\rightarrow 0}\frac{f(0+x)}{x}=\lim_{x\rightarrow 0}\frac{f(x)}{x}[/tex]
This corresponds to the first formula in my post with ##f(x) = x+x^2##. I wrote it imagining what you had in mind, but I do not recommend it, because having ##x=0## and ##x\rightarrow 0## at the same time invites confusion.
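The difference quotient in the definition above can be evaluated numerically for ##f(x) = x+x^2##. A sketch in Python (function names are mine):

```python
# Difference-quotient approximation of f'(x) for f(x) = x + x^2;
# the exact derivative is 1 + 2x, and at x = 0 this is the same
# limit as lim_{h->0} f(h)/h = 1.
def f(x):
    return x + x**2

def diff_quotient(x, h=1e-6):
    return (f(x + h) - f(x)) / h

for x in [0.0, 0.5, 1.0]:
    print(x, diff_quotient(x))  # close to 1 + 2x
```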
 
  • #7
anuttarasammyak said:
Definition formula of differentiation,
[tex]f^{'}(x):=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}[/tex]
when x=0
[tex]f^{'}(0)=\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h}[/tex]
Further when f(0)=0
[tex]f^{'}(0)=\lim_{h\rightarrow 0}\frac{f(0+h)}{h}[/tex]
we can replace h with any alphabet we like. When we happen to choose ’x’ instead of h, though it is confusing and even contradicts with 'x=0',
[tex]f^{'}(0)=\lim_{x\rightarrow 0}\frac{f(0+x)}{x}=\lim_{x\rightarrow 0}\frac{f(x)}{x}[/tex]
It corresponds to the first formula in my post with f(x) = x+x^2.  I wrote it imagining what you think but I do not recommend it because it would cause confusion by x=0 and x##\rightarrow 0## at the same time.
Thank you, it's helpful!
 
  • #8
  • #9
Mike_bb said:
Let's assume that we have ##2x \Delta x + \Delta x^2##. When ##\Delta x## tends to zero we can neglect ##\Delta x^2## and we'll get ##2xdx##.
If "##\Delta x## tends to zero", then the whole expression ##2x \Delta x + \Delta x^2## "tends to" zero. In more precise terms, ##\lim_{\Delta x \to 0} (2x \Delta x + \Delta x^2) = 0##. It is true that ##\Delta x^2## approaches zero more rapidly than ##\Delta x## does, but that's not really relevant to what you're asking.
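A quick numerical illustration of both points (a sketch in Python; the fixed point ##x = 3## is my arbitrary choice):

```python
# The whole expression 2*x*dx + dx**2 tends to 0 as dx -> 0 (x fixed),
# while the dx**2 term's share of the total also shrinks.
x = 3.0  # an arbitrary fixed point
for dx in [0.1, 0.01, 0.001]:
    total = 2*x*dx + dx**2
    share = dx**2 / total  # fraction contributed by the quadratic term
    print(dx, total, share)
```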
 
  • #10
Mark44 said:
However, ##\Delta x^2## approaches zero more rapidly than does ##\Delta x##
I didn't fully understand that. What does it mean, and what is it used for?
 
  • #11
Mike_bb said:
I didn't fully understand what does it mean?
Let ##\Delta x = 0.1##; then ##(\Delta x)^2 = 0.01##.
Make ##\Delta x## ten times smaller, ##\Delta x = 0.01##; then ##(\Delta x)^2 = 0.0001##.
 
  • #12
Mike_bb said:
I didn't fully understand what does it mean? What is it used for?
The expression ##\Delta x## is usually meant to represent a small (close to zero) number, but not an infinitesimal, which is what dx represents in some contexts.

If we start with ##\Delta x = 0.1## then ##(\Delta x)^2 = 0.01##. If we decrease ##\Delta x## to 0.01, then ##(\Delta x)^2 = 0.0001##.

As to what it's used for, you can often ignore very small numbers raised to high powers to get a decent approximation.
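Here is a small sketch in Python of that idea, using the expansion of ##(1+\Delta x)^2## as an example of my own:

```python
# Dropping the quadratic term: (1 + dx)**2 = 1 + 2*dx + dx**2, and for
# small dx the linear part 1 + 2*dx is already a good approximation.
for dx in [0.1, 0.01, 0.001]:
    exact = (1 + dx)**2
    approx = 1 + 2*dx  # (dx)**2 neglected
    print(dx, exact, approx, exact - approx)  # error equals dx**2
```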
 
  • #13
Mark44 said:
The expression ##\Delta x## is usually meant to represent a small (close to zero) number, but not an infinitesimal, which is what dx represents in some contexts.

If we start with ##\Delta x = 0.1## then ##(\Delta x)^2 = 0.01##. If we decrease ##\Delta x## to 0.01, then ##(\Delta x)^2 = 0.0001##.

As to what it's used for, you can often ignore very small numbers raised to high powers to get a decent approximation.
What is the relation between ##\Delta x## and ##dx##? If we want to get a term with ##dx##, do we just discard ##\Delta x^2## and the other higher-order terms in the expression? Is that right?
 
  • #14
Mike_bb said:
What is relation between ##\Delta x## and ##dx##?
There is none. Don't let the guys fool you. ##dx## is not defined at the level of this thread. We have
$$\dfrac{dx}{dt}=\lim_{\Delta t \to 0}\dfrac{\Delta x}{\Delta t} \stackrel{!}{\neq}\dfrac{\lim_{\Delta t \to 0}\Delta x}{\lim_{\Delta t \to 0}\Delta t}$$ but not ##dx## alone. Read post #8.
This talk of infinitesimals doesn't serve you well. An "infinitely small" number, taken literally, would have to be zero, but ##dx## isn't zero. So either take ##dx## as part of the notation, just as written, or learn differential forms.

Mike_bb said:
If we want to get term with dx then we discard ##\Delta x^2## and other high-order terms in expression. Is it true?
When you discard something, you make an error. Whether that is allowed depends on whether we are only chatting on the internet or you are building bridges.

Again, read the link in post #8.
 
  • #15
Mike_bb said:
Hello.

Let's assume that we have ##2x \Delta x + \Delta x^2##. When ##\Delta x## tends to zero we can neglect ##\Delta x^2## and we'll get ##2xdx##.
Let's assume that we have ##x + x^2##. When ##x## tends to zero we can neglect ##x^2##. Will we get an infinitesimal ##x## as such as ##dx##?

Thanks.
It seems to me you are missing the basics of (standard) real analysis. In modern terminology "infinitesimal" is part of non-standard analysis.

If you are using an archaic textbook, then it may be difficult for us to help.
 
  • #16
PeroK said:
It seems to me you are missing the basics of (standard) real analysis. In modern terminology "infinitesimal" is part of non-standard analysis.

If you are using an archaic textbook, then it may be difficult for us to help.
I read in this source: http://www.bndhep.net/Lab/Math/Calculus.htm

The fact that x^2 becomes insignificant compared to x for very small values of x is a fundamental principle of infinitesimal calculus. We say x is infinitesimal when we allow its value to approach zero, but never actually reach zero, and we write x→0. To express the behavior of x + x^2 as x→0 we say, "The limit of x + x^2 as x→0 is x."
 
  • #17
Mike_bb said:
I read in this source: http://www.bndhep.net/Lab/Math/Calculus.htm

The fact that x^2 becomes insignificant compared to x for very small values of x is a fundamental principle of infinitesimal calculus. We say x is infinitesimal when we allow its value to approach zero, but never actually reach zero, and we write x→0. To express the behavior of x + x^2 as x→0 we say, "The limit of x + x^2 as x→0 is x."
Well, that source is wrong! Assuming it says what you say it says. The link you posted is broken.

There is no such thing as an infinitesimal in standard real analysis. Although the term is often thrown around loosely or erroneously.
 
  • #19
malawi_glenn said:
http://www.bndhep.net/Lab/Math/Calculus.html

should be html and not htm as posted ;) standard hack
The question for the author is at what size ##x## stops being a bona-fide real number and becomes an infinitesimal!
 
  • #20
In Loomis & Sternberg's Advanced Calculus an infinitesimal is defined to be a function ##f## such that ##f(0)=0## and ##f## is continuous at zero (so its limit at zero is zero as well). I think I've encountered this (or similar) definition in some other textbooks (in Russian). Two further types of infinitesimal defined in the book are ##O## large and ##o## small (respectively, an infinitesimal that is Lipschitz continuous at zero, and an infinitesimal that goes to zero faster than the argument). I really liked this notation and how it's used to define the differential and derive its properties.
 
  • #21
The reason why, in many derivations involving infinitesimals, we drop higher powers of ##dx## (or ##\Delta x##) is that somewhere in the derivation we divide by ##dx## (though this is not always explicitly stated) and then take the limit of the resulting term as ##dx \to 0##. The higher-order powers then yield 0, because for example ##\frac{(dx)^2}{dx}=dx\to 0## and ##\frac{(dx)^3}{dx}=(dx)^2\to 0## as ##dx\to 0##. But we can't ignore a term linear in ##dx##, such as ##2x\,dx##: after division by ##dx## it becomes ##2x##, and ##2x\to 2x## as ##dx\to 0##, whereas ##2x(dx)^2## after division by ##dx## becomes ##2x\,dx##, which tends to ##0## as ##dx\to 0##.
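This division step can be watched numerically. A sketch in Python (the fixed point ##x = 1.5## is my arbitrary choice):

```python
# After dividing by dx, only terms linear in dx survive the limit:
# (2*x*dx + dx**2)/dx = 2*x + dx -> 2*x as dx -> 0.
x = 1.5  # arbitrary fixed point
for dx in [0.1, 0.01, 0.001]:
    quotient = (2*x*dx + dx**2) / dx
    print(dx, quotient)  # equals 2*x + dx, tending to 2*x = 3.0
```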
 
  • #22
I think the distinction in question is quite subtle: the two expressions ##2x\Delta x +\Delta x^2## for ##\Delta x \rightarrow 0## and ##x+x^2## for ##x\rightarrow 0## are conceptually different because they are two different objects. The first, approximated to first order, gives ##2x\,dx##; the second gives ##x##. One is a differential, the other is a function ...
Ssnow
 
  • #23
The only way to resolve the 'paradoxes' of calculus is with the rigorous arguments found in real analysis. But at an intuitive level Δx, Δf(x) = f(x + Δx) - f(x), etc., are simply small changes in x or f(x), while dx and df(x) are infinitesimal changes in x and f(x). The difference between small and infinitesimal is simply a 'for all practical purposes' thing. Δx is small but different from 0. dx is also small, but for all practical purposes it can be considered zero when you want it to be, even though it isn't. Exactly how small depends on the problem being considered, but in an intuitive treatment of calculus it is assumed such a value can always be found. The idea is to get around the divide-by-zero problem: if dy and dx were 0, dy/dx would have no meaning, but if they are infinitesimal, everything is fine.

Consider y = x^2. Then dy/dx = ((x + dx)^2 - x^2)/dx = (x^2 + 2x dx + dx^2 - x^2)/dx = 2x + dx. This is where we assume that for all practical purposes dx = 0 and get dy/dx = 2x. If we had used Δx instead, we would get Δy/Δx = 2x + Δx, which is close to, but not for all practical purposes the same as, 2x. In this way of doing calculus, when we say limit x→c f(x) = z, we mean that f(x) is infinitesimally close to z when x is infinitesimally close to c but not exactly c, even though f(x) may not even be defined at c.

Why not do calculus using real analysis from the start? Real analysis requires some familiarity with rigorous formal proof. For its use in engineering, economics, etc., you don't need to study this (for a thinking student, it does resolve Zeno's paradox, for example; start a new thread if interested), so the intuitive approach is taught first and the rigorous approach later, for those who need it or are interested in it. It's needed for advanced topics like Rigged Hilbert Spaces, which those interested in mathematical physics often want to know in order to make rigorous sense of things like the Dirac delta function. Still, it is surprising how far you can go with the intuitive approach; even the Dirac delta can be given an intuitive treatment, and it usually is. Very few people study Rigged Hilbert Spaces. Just nuts like me :DD:DD:DD:DD.

Thanks
Bill
 
  • #24
bhobba said:
The only way to solve the 'paradoxes' of calculus is by using rigorous arguments found in real analysis. But at an intuitive level Δx, Δ f(x) = f(x + Δx) - f(x) etc, are simply small changes in x or f(x). dx and df(x) are infinitesimal changes in x and f(x). The difference between small and infinitesimal is simply 'a for all practical purposes' thing. Δx is small but different from 0. dx is also small, but for all practical purposes, it can be considered zero when you want it to be, even though it isn't. Exactly how small depends on the problem being considered, but is it assumed in an intuitive treatment of calculus, such can always be found. The idea is to get around the divide-by-zero thing. If dy and dx were 0, dy/dx would have no meaning. But if infinitesimal, everything is ok. Consider y = x^2. dy/dx = ((x + dx)^2 - x^2)/dx = (x^2 +2xdx +dx^2 - x^2)/dx = 2x + dx. This is where we assume that for all practical purposes dx =0 and get dy/dx = 2x. If we had used Δx instead, you would get Δy/Δx = 2x + Δx - close to but not for all practical purposes the same as 2x. In this way of doing calculus, when we say limit x→c f(x) = z, we mean when x is infinitesimally close to c, but not exactly c even though f(x) may not even be defined at c.

Why not do calculus using real analysis from the start? Real analysis requires some familiarity with rigorous formal proof. For its use in engineering, economics etc., you don't need to study this (for a thinking student, it does solve Zeno's paradox, for example. Start a new thread if interested), so the intuitive approach is done first, and the rigorous approach later for those that need to know it or are interested in knowing it. It's needed for advanced topics like Rigged Hilbert Spaces that those interested in mathematical physics often want to know to make rigorous sense of things like the Dirac Delta function. Still, it is surprising how far you can go with the intuitive approach - even that can be given an intuitive treatment. It usually is - very few people study Rigged Hilbert Spaces. Just nuts like me :DD:DD:DD:DD.

Thanks
Bill
In my understanding, dy is the linear change that approximates the actual change of the function. So if y = x^2, dy = 2x dx is the linear approximation to the local change of f(x) = x^2. I guess we can also formulate it in terms of the exterior derivative of a differential form.
 
  • #25
WWGD said:
In my understanding, dy is the linear change that approximates the actual change of the function. So if y=x^2, dy =2x is the linear approximation to the local change of f(x)=x^2. I guess we can also formulate it in terms of the exterior derivative of a differential form.

Yes, that is another way of looking at it. Take the graph of any differentiable function, pick a point, and then zoom in closer and closer. Eventually, again for all practical purposes, it will be a straight line that passes through that point. You can look at dy and dx as the legs of a right-angled triangle with that line as the hypotenuse. Intuitive calculus can be presented in many ways, but its rigorous underpinning is real analysis.

Thanks
Bill
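The zooming-in picture can be sketched numerically; here is a minimal example in Python (the point a = 1 and the window widths are my choices), using a central secant on y = x^2:

```python
# "Zooming in" on y = x**2 at a point a: over a shrinking window the
# central secant slope stays at the tangent slope 2*a, i.e. the curve
# is indistinguishable from a straight line up close.
a = 1.0  # the point we zoom in on
for width in [0.1, 0.01, 0.001]:
    secant = ((a + width)**2 - (a - width)**2) / (2 * width)
    print(width, secant)  # the central difference for x**2 is exactly 2*a
```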
 

FAQ: Will we get an infinitesimal x when we neglect ##x^2## in ##x+x^2##?

What is an infinitesimal x?

An infinitesimal x is a value that is extremely small and approaching zero, but not exactly equal to zero. It is often used in calculus and other mathematical fields to represent a value that is infinitely close to zero.

Why do we neglect ##x^2## in ##x+x^2##?

In many mathematical calculations, the value of ##x^2## becomes negligible when compared to the value of x. Neglecting ##x^2## allows for simpler and more manageable calculations without significantly affecting the overall result.

What is the significance of neglecting ##x^2## in ##x+x^2##?

Neglecting ##x^2## in ##x+x^2## can help simplify equations and make them easier to solve. It is often used in approximations and estimations, where the value of ##x^2## is not significant enough to affect the overall outcome.

Are there any situations where neglecting ##x^2## in ##x+x^2## would not be appropriate?

Yes, there are certain situations where neglecting ##x^2## in ##x+x^2## would not be appropriate. For example, in cases where the value of ##x^2## is significant and cannot be ignored, neglecting it would result in a significantly different and inaccurate answer.

How does neglecting ##x^2## in ##x+x^2## affect the overall result?

Neglecting ##x^2## in ##x+x^2## can slightly alter the overall result, but it should not significantly change the outcome. In most cases, the difference between including and neglecting ##x^2## is very small and can be disregarded for practical purposes.
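A small sketch in Python of how that difference behaves (values chosen by me for illustration):

```python
# Relative error of approximating x + x**2 by x: the absolute error is
# x**2, so the relative error x**2/(x + x**2) = x/(1 + x) shrinks with x.
for x in [0.1, 0.01, 0.001]:
    exact = x + x**2
    rel_err = (exact - x) / exact
    print(x, rel_err)  # roughly x for small x
```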
