Uncovering the Mystery of an Unlikely Limit Result

In summary, the limit equals 1/2, not 0: naively replacing x^2+x by its leading term x^2 (so that the square root becomes just x) gives the wrong answer, while working with an equivalent form (rationalizing with the conjugate, or using x + 1/2) gives the right one. The thread works out why.
  • #1
Gib Z
Homework Helper
I just read the following fact and it somewhat defies my usual logic.

[tex]\lim_{x\to\infty} \sqrt{x^2+x} - x =\frac{1}{2}[/tex].

Now my usual logic tells me that as x becomes large, the leading term of a polynomial dominates, and therefore [itex]x^2+x[/itex] effectively becomes [itex]x^2[/itex], so [itex]\sqrt{x^2+x}[/itex] becomes just x.

The same logic seems to work for this limit here: https://www.physicsforums.com/showthread.php?t=166906

However, the limit is obviously not equal to zero; where is the error in this logic?
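(A quick numerical check makes the stated value plausible: at x = 100,

[tex]\sqrt{100^2+100} - 100 = \sqrt{10100} - 100 \approx 0.4988,[/tex]

which is already close to 1/2, so the limit is certainly not 0.)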
 
  • #2
divide top and bottom by x after you get rid of the radical on the top
 
  • #3
I know perfectly well how to evaluate the limit that way: multiply by the conjugate, use the difference of two squares, simplify, and so on. What I wanted to know is why the other method does not work.
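Spelled out, that conjugate computation is

[tex]\sqrt{x^2+x} - x = \frac{(x^2+x)-x^2}{\sqrt{x^2+x}+x} = \frac{x}{\sqrt{x^2+x}+x} = \frac{1}{\sqrt{1+1/x}+1} \to \frac{1}{1+1} = \frac{1}{2}.[/tex]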
 
  • #4
Gib Z said:
I just read the following fact and it somewhat defies my usual logic.

[tex]\lim_{x\to\infty} \sqrt{x^2+x} - x =\frac{1}{2}[/tex].

Now my usual logic tells me that as x becomes large, the leading term of a polynomial dominates, and therefore [itex]x^2+x[/itex] effectively becomes [itex]x^2[/itex], so [itex]\sqrt{x^2+x}[/itex] becomes just x.

I'm sure your logic will tell you that
[tex]\sqrt{x^2+x} [/tex]
is the same as
[tex]\sqrt{x^2+x + \frac{1}{4} } [/tex]
when x becomes large.
And this is the same as
[tex]x + \frac{1}{2}[/tex] .
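Making the hint explicit: completing the square under the radical gives

[tex]\sqrt{x^2+x} = \sqrt{\left(x+\tfrac{1}{2}\right)^2 - \tfrac{1}{4}},[/tex]

which differs from x + 1/2 by an amount that shrinks to 0 as x grows, so subtracting x leaves 1/2 in the limit.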
 
  • #5
In https://www.physicsforums.com/showthread.php?t=166906
the logic you have just mentioned doesn't work either:
[tex]\lim_{x\rightarrow\infty} \sqrt{x^2+x+1} - \sqrt{x^2-3x}[/tex] also gives you zero if you blindly replace every polynomial with just its term of highest degree. What makes the eventual "substitution" work in that other example was not that you replaced everything using that logic of yours, but that you looked at the limit as x goes large, namely:

[tex]\lim_{x\rightarrow\infty} \sqrt{x^2+x+1} - \sqrt{x^2-3x}
= \lim_{x\rightarrow\infty} \frac{4x+1}{\sqrt{x^2+x+1} + \sqrt{x^2-3x}}[/tex]
then divide top and bottom by x to get

[tex]\lim_{x\rightarrow\infty} \frac{4+1/x}{\sqrt{\frac{x^2}{x^2}+\frac{x}{x^2}+\frac{1}{x^2}} + \sqrt{\frac{x^2}{x^2}-\frac{3x}{x^2}}}[/tex]

then if you now take the limit, each 1/x and 1/x^2 term goes to 0 and you get (4+0)/(1+1) = 2, which is what you want.

the lesson here is that, as far as the limit is concerned, you cannot simply replace [tex]\sqrt{x^2+x}[/tex] by [tex]\sqrt{x^2}=x[/tex]
 
  • #6
It does work; you can solve this by inspection. Take the conjugate, and in the denominator the highest-order term under the radical is x^2, so ignore the others. Voila, same as in the other thread. :wink:
 
  • #7
Rogerio said:
I'm sure your logic will tell you that
[tex]\sqrt{x^2+x} [/tex]
is the same as
[tex]\sqrt{x^2+x + \frac{1}{4} } [/tex]
when x becomes large.
And this is the same as
[tex]x + \frac{1}{2}[/tex] .

Yes it does. In fact, I usually complete the square when questions like that come up. mjsd - But look at Hurkyl's comment in that thread... Sure, he was not replacing the polynomials with the leading term right from the start, but he did do it on an equivalent expression! The fact that he couldn't do it at the start but can on an equivalent expression confuses me even more!

Oh, and completing the square works for that limit as well.
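Explicitly, completing both squares there gives

[tex]\sqrt{x^2+x+1} - \sqrt{x^2-3x} = \sqrt{\left(x+\tfrac{1}{2}\right)^2+\tfrac{3}{4}} - \sqrt{\left(x-\tfrac{3}{2}\right)^2-\tfrac{9}{4}} \approx \left(x+\tfrac{1}{2}\right)-\left(x-\tfrac{3}{2}\right) = 2,[/tex]

matching the value obtained by rationalizing.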
 
Last edited:
  • #8
mathPimpDaddy said:
It does work; you can solve this by inspection. Take the conjugate, and in the denominator the highest-order term under the radical is x^2, so ignore the others. Voila, same as in the other thread. :wink:

Yes exactly, so it does work once we have changed the form. How come it works like that, and what form does it need to be in for it to work?
 
  • #9
Gib Z said:
Yes exactly, so it does work once we have changed the form. How come it works like that, and what form does it need to be in for it to work?

Remember when you were first learning limits? You were told that you get a finite fraction as the answer if you focus only on the highest-order terms, provided the orders are the same on top and bottom. Are the orders the same on top and bottom here? You should say yes, because once you rationalize, everything else goes to zero in the limit. Also, you cannot directly evaluate that limit, so you have to change its identity to legally find the limit that is in disguise.
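Concretely, for the limit in this thread: after rationalizing, the numerator is x (degree 1) and the denominator [itex]\sqrt{x^2+x}+x[/itex] behaves like 2x (also degree 1), so the ratio of the leading coefficients gives

[tex]\lim_{x\to\infty}\frac{x}{\sqrt{x^2+x}+x} = \frac{1}{2}.[/tex]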
 
Last edited:
  • #10
Ahh I see. So for this method to work, the degree of the numerator must be the same as that of the denominator. Ok, thanks guys!
 
  • #11
As a zeroth-order approximation, [itex]\sqrt{x^2 + x}[/itex] is equal to x, plus some terms less important than x.

Therefore, [itex]\sqrt{x^2+x} - x[/itex] is equal to 0, plus some terms less important than x.


Alas, this doesn't tell us the limit! So, we try a first-order approximation.

[itex]\sqrt{x^2 + x} \approx x + 1/2[/itex], plus some terms less important than 1. (using the fact [itex](A + \epsilon)^n \approx A^n + nA^{n-1} \epsilon[/itex], plus terms less important than [itex]A^{n-1}\epsilon[/itex])

Therefore, [itex]\sqrt{x^2+x} - x \approx 1/2[/itex], plus some terms less important than 1.

Happily, terms less important than 1 don't matter here, so the answer is 1/2. (It would matter if, say, we had a fraction where both the numerator and denominator were zero + terms less important than 1)
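Spelling out that first-order step with [itex]A = x^2[/itex], [itex]\epsilon = x[/itex] and [itex]n = 1/2[/itex] (for x > 0):

[tex]\sqrt{x^2 + x} = (x^2+x)^{1/2} \approx (x^2)^{1/2} + \tfrac{1}{2}(x^2)^{-1/2}\,x = x + \tfrac{1}{2}.[/tex]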
 
Last edited:
  • #12
Where are you getting these nth order approximations from? What would the 2nd order approximation be?
 
  • #13
Gib Z said:
Where are you getting these nth order approximations from? What would the 2nd order approximation be?
Here, I'm using the binomial theorem:
[tex]
(A + \epsilon)^n = A^n + n A^{n-1} \epsilon +
\frac{n(n-1)}{2} A^{n-2} \epsilon^2 + \cdots
[/tex]
including the fact that if A dominates [itex]\epsilon[/itex], then each term in this series dominates the tail.

In general, you would use a differential approximation, or a Taylor series if you needed even more terms.
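For the second-order approximation asked about above: keeping one more term of this series with [itex]A = x^2[/itex], [itex]\epsilon = x[/itex], [itex]n = 1/2[/itex] gives

[tex]\sqrt{x^2+x} \approx x + \frac{1}{2} + \frac{(1/2)(-1/2)}{2}(x^2)^{-3/2}x^2 = x + \frac{1}{2} - \frac{1}{8x},[/tex]

with an error that is now on the order of 1/x^2.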
 
Last edited:
  • #14
Hey Hurkyl, what class did you get that theorem from? I only have limited knowledge of Calc, so I use the obvious method. :redface:
 
Last edited:
  • #15
Umm, ok, but how did we know the first approximation didn't give us the limit and that it's zero? How did you know it wasn't zero and that you had to get a better approximation?
 
  • #16
Gib Z said:
Umm, ok, but how did we know the first approximation didn't give us the limit and that it's zero? How did you know it wasn't zero and that you had to get a better approximation?

The error in the approximation [itex]\sqrt{x^2 + x} \approx x[/itex] is asymptotically less than x. Actually, because it's differentiable, we can do better: we know it's asymptotically on the order of 1 or smaller.

So, we know the error in the approximation [itex]\sqrt{x^2 + x} - x \approx 0[/itex] is on the order of 1. So, all this tells us is that when x gets big, [itex]\sqrt{x^2 + x} - x[/itex] remains bounded.


The error in the approximation [itex]\sqrt{x^2 + x} \approx x + 1/2[/itex] is asymptotically less than 1. Actually, because it's differentiable, we can do better: we know it's asymptotically on the order of 1/x or smaller.


So, we know the error in the approximation [itex]\sqrt{x^2 + x} - x \approx 1/2[/itex] is on the order of 1/x. So we know the limit is 1/2 as x gets big.



Note that if we were instead looking at

[tex]\lim_{x \rightarrow +\infty} \sqrt{4x^2 + x} - x[/tex]

we only need the zeroth-order approximation: [itex]\sqrt{4x^2 + x} - x \approx 2x - x = x[/itex]. The error is still asymptotically on the order of 1, so x dominates, and we know the limit is [itex]+\infty[/itex].
 
  • #17
Hurkyl said:
The error in the approximation [itex]\sqrt{x^2 + x} \approx x[/itex] is asymptotically less than x. Actually, because it's differentiable, we can do better: we know it's asymptotically on the order of 1 or smaller.
Formally, this means
[tex]\exists c: \forall x:|\sqrt{x^2 + x} - x| < c \cdot 1[/tex]
Or, if you know big-Oh notation,
[tex]\sqrt{x^2 + x} = x + O(1)[/tex]

The less strict condition is simpler:
[tex]
\lim_{x \rightarrow +\infty} \frac{\sqrt{x^2 + x} - x}{x} = 0
[/tex]
or
[tex]\sqrt{x^2 + x} = x + o(x)[/tex]


The error in the approximation [itex]\sqrt{x^2 + x} \approx x + 1/2[/itex] is asymptotically less than 1. Actually, because it's differentiable, we can do better: we know it's asymptotically on the order of 1/x or smaller.
And this means
[tex]\exists c: \forall x:|\sqrt{x^2 + x} - (x + 1/2)| < c \cdot (1/x)[/tex]
or
[tex]\sqrt{x^2 + x} = x + 1/2 + O(1/x)[/tex]

The less strict one is
[tex]
\lim_{x \rightarrow +\infty} \frac{\sqrt{x^2 + x} - (x + 1/2)}{1} = 0
[/tex]
or
[tex]\sqrt{x^2 + x} = x + 1/2 + o(1)[/tex]
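One way to see the O(1/x) claim concretely (for x > 0) is to rationalize against x + 1/2:

[tex]\left|\sqrt{x^2 + x} - \left(x + \tfrac{1}{2}\right)\right| = \frac{\left|(x^2+x)-\left(x+\tfrac{1}{2}\right)^2\right|}{\sqrt{x^2+x}+x+\tfrac{1}{2}} = \frac{1/4}{\sqrt{x^2+x}+x+\tfrac{1}{2}} < \frac{1}{8x},[/tex]

so the error really is bounded by a constant times 1/x.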
 
Last edited:
  • #18
You seem like you've got the proofs down pat, Hurkyl. Are you a professor? Where did you learn to master proofs? Advanced Calc?
 
  • #19
I'm sorry if I seem slow, but what do you mean by "asymptotically on the order of"?
 
  • #20
The intuition from asymptotic analysis (also given by nonstandard analysis) is that by asking for the limit, you want the value of the function "rounded" to the nearest constant. (or to the nearest standard number, in the nonstandard picture)

The error in [itex]\sqrt{x^2 + x} \approx x[/itex] is on the order of a constant, which is not enough precision for the answer we desire.

The error in [itex]\sqrt{x^2 + x} \approx x + 1/2[/itex] is on the order of 1/x, which is enough precision to get the desired answer.

P.S. I wrote another post before your reply. :wink:
 
  • #21
Why is the error in [itex]\sqrt{x^2 + x} \approx x + 1/2[/itex] on the order of 1/x, how do we know that? And why is that enough precision for the desired answer but the order of a constant is not?
 
  • #22
Gib Z said:
And why is that enough precision for the desired answer but the order of a constant is not?
That one's easy: if I told you

"the value of that function is zero, plus some function of x that's bounded by c"

would you be able to figure out the limit? No. The best you could say is that, if the limit exists, it's somewhere between -c and c.

But if I told you

"the value of that function is 1/2, plus some function of x that's bounded by c/x"

would you be able to find the limit? Yes, because the limit of that error term is zero.
 
  • #23
Thank you soo soo very much, I understand now. I'll read over the posts again tomorrow to see if I get it; otherwise I'll post here again. Thanks!
 
  • #24
Ok, I re-read it and I need help again :(

As to post 21, you answered the 2nd bit but not the 1st, which I am still completely lost on. How do we know what order the error is for a certain approximation, and how does its being differentiable help us do better?
 
  • #25
The case where x goes to 0 is easier to describe, I think.

Suppose we make the approximation f(x) ~ f(0). The error term E(x) is given by
f(x) = f(0) + E(x)

Exercise 1: Prove that if f is continuous, then [itex]\lim_{x \rightarrow 0} E(x) = 0[/itex].

Exercise 2: Prove that if f is differentiable, and L > 0, then there exists a C such that whenever [itex]|x| < L[/itex], we have [itex]|E(x)| < C|x|[/itex].


Suppose f is differentiable, and we make the approximation f(x) ~ f(0) + x f'(0). The error term E(x) is given by
f(x) = f(0) + x f'(0) + E(x)

Exercise 3: Prove that [itex]\lim_{x \rightarrow 0} E(x) / x = 0[/itex].

Exercise 4: Figure out what the next exercise should be. :smile:


You can keep iterating to higher derivatives, and you're essentially led to Taylor's theorem to cover the general case.


Exercise 5a: Suppose f is continuous and strictly positive, g is continuous, and [itex]\lim_{x \rightarrow +\infty} g(x) / f(x) = 0[/itex]. Consider the approximation [itex]\left( f(x) + g(x) \right)^n \approx f(x)^n[/itex]. Using exercise 1, show that [itex]\lim_{x \rightarrow +\infty} E(x) / f(x)^n = 0[/itex].

Exercise 5b: If the f and g are also differentiable, use exercise 2 to show there is an L and a C such that for all x > L, [itex]E(x) < C |f(x)^{n-1} g(x)|[/itex].
 
Last edited by a moderator:
  • #26
1. From [itex]f(x)= f(0) + E(x)[/itex] we get [itex]E(x) = f(x) - f(0)[/itex], which goes to f(0) - f(0) = 0 as x goes to 0 because f is continuous, so that limit follows.

2. This one completely loses me. All I can reduce it to is that proving that, for some C, |E(x)| < C|x| is equivalent to the condition that there exists a C where |E(x)| < CL. So I guess we can just choose a really large C, but I doubt I am correct.

I would also like to say that I really appreciate your help, Hurkyl; it has been a long time since I've actually learned any new mathematics. For the last year I've been learning a new fact or some tricks here and there, but I'm actually doing something now. Thanks a lot.
 
  • #27
Gib Z said:
mjsd - But look at Hurkyl's comment in that thread... Sure, he was not replacing the polynomials with the leading term right from the start, but he did do it on an equivalent expression! The fact that he couldn't do it at the start but can on an equivalent expression confuses me even more!

With all those subsequent posts after this, I don't think I need to add much more, except to say that I understood what your concern was; I just wasn't clear enough in my response. My point was that I believe what master Hurkyl actually did in that part was to divide top and bottom by x and then look at the limit as x->infty for each individual term ... and those terms that disappeared did so because 1/x -> 0, and NOT because x^2 >> x would justify approximating (x^2 + x) by x^2, etc.
 
  • #28
Gib Z said:
2. This one completely loses me. All I can reduce it to is that proving that, for some C, |E(x)| < C|x| is equivalent to the condition that there exists a C where |E(x)| < CL. So I guess we can just choose a really large C, but I doubt I am correct.
I thought this might give some trouble, but I figured I'd give you a shot at it before giving the big hints.

The key is this theorem:
Let f be continuous on the interval I = [a, b]. Then f has a maximum and a minimum value on I.​

I don't remember if it's taught in elementary calculus. It might be tough to prove, since it makes essential use of the completeness of the reals. It's false for the rationals; [itex]f(x) = 1 / (x^2 - 2)[/itex] is an example of a continuous function on the rationals that is unbounded on [1, 2]. So if you haven't seen this theorem, it might be worth simply accepting it for now, and then worrying about proving it as a different exercise.

The desired inequality then follows from the mean value theorem. Alternatively, you could prove |E(x) / x| < C from the definition of the derivative.

(Drawing a picture might be interesting too)


I would also like to say that I really appreciate your help, Hurkyl; it has been a long time since I've actually learned any new mathematics. For the last year I've been learning a new fact or some tricks here and there, but I'm actually doing something now. Thanks a lot.
I had that problem once; I had no idea that there was mathematics beyond trigonometry, so all I could do was study that. Alas, I know of too many things to study now. :frown:


mjsd said:
My point was that I believe what master Hurkyl actually did in that part was to divide top and bottom by x
If I were to write a formal proof, that's probably what I would do. But it's not how I think of it: my actual thought processes are actually what I said in that other post. And one can write a proof along those lines too. Typically, this is all you need:

[tex]\frac{f + o(f)}{g + o(g)}
= \frac{f}{g}\left(1 + o(1) \right),[/tex]

though if necessary, you can improve the error term on the r.h.s. if you have better bounds on the errors on the l.h.s.
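Applied to the limit in this thread: after rationalizing, the numerator is exactly x and the denominator is [itex]\sqrt{x^2+x}+x = 2x + o(x)[/itex], so

[tex]\sqrt{x^2+x}-x = \frac{x}{2x+o(x)} = \frac{x}{2x}\left(1+o(1)\right) = \frac{1}{2}+o(1).[/tex]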
 
Last edited:
  • #29
Hurkyl said:
Alternatively, you could prove |E(x) / x| < C from the definition of the derivative.

Ok, from f(x) = f(0) + E(x), I get that proving |E(x) / x| < C is the same as proving [itex]\left|\frac{f(x)-f(0)}{x}\right| < C[/itex] for some C. Taking the limit as x approaches zero, it becomes [itex]|f'(0)| \le C[/itex].

I'm a tiny bit lost as to what that actually achieves... Does L mean anything in particular, or is it just any number?

I don't really understand what I'm trying to prove... I know f is continuous, therefore all values are finite, as also shown by your theorem. Since f(x) is finite, and so is f(0), the error must also be finite. So to show that it is less than C|x| for some x, we just have to choose a really large C; is that somewhat correct, even if not rigorous?
 
  • #30
I prefer to see this from the standpoint of Newton's generalized binomial formula:

[tex]\sqrt{x^2+x} = (x^2)^{1/2} + \frac{1}{2}(x^2)^{-1/2}x + \frac{1}{2!}\cdot\frac{1}{2}\left(-\frac{1}{2}\right)(x^2)^{-3/2}x^2 + \cdots = x+\frac{1}{2}-\frac{1}{8x}+\frac{1}{16x^2}-\cdots [/tex]

The formula is valid for [tex]\sqrt{Y+a}[/tex] anytime |a| < Y.
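From that expansion the original limit can be read off directly:

[tex]\sqrt{x^2+x} - x = \frac{1}{2}-\frac{1}{8x}+\frac{1}{16x^2}-\cdots \to \frac{1}{2}[/tex]

as x goes to infinity.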
 
Last edited:
  • #31
Gib Z said:
I don't really understand what I'm trying to prove... I know f is continuous, therefore all values are finite, as also shown by your theorem. Since f(x) is finite, and so is f(0), the error must also be finite. So to show that it is less than C|x| for some x, we just have to choose a really large C; is that somewhat correct, even if not rigorous?

You have to use the fact that [itex]f(x)[/itex] is differentiable as well. Take the function [itex]f(x)=x^{1/3}[/itex]. All values of this function are finite, but the function is not less than [itex]C|x|[/itex] for any [itex]C[/itex] whatsoever when [itex]x[/itex] is sufficiently close to 0.
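To see the failure concretely: here [itex]E(x) = x^{1/3} - 0 = x^{1/3}[/itex], and

[tex]\frac{|E(x)|}{|x|} = \frac{|x|^{1/3}}{|x|} = |x|^{-2/3},[/tex]

which blows up as x approaches 0, so no single C works near 0; this is exactly the failure of differentiability at x = 0.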
 

