Proving Limits: Rigorous Definition & Difficulties

In summary, the conversation discusses the difficulty of proving limits rigorously and of finding suitable values of epsilon and delta. The textbook's approach is to restrict attention to positive values of delta less than or equal to 1 and to use the triangle inequality to find an appropriate delta. The participants also note that the main usefulness of proving limits by hand is that it builds understanding of the definition of a limit, and they suggest further limit problems to try. The thread also includes a proof attempt for the limit of the square root function and the original poster's struggle with proving lim(x->1/3) 1/x = 3.
  • #1
agro
I think I understand the formal/rigorous definition of limit, but I find proving various limits (or following proofs of them) extremely difficult. I hope you all will help me. I think I won't advance my calculus study until I really get this limit proving thing btw...

Here's one limit proof that baffles me...

Prove that lim(x->3) x^2 = 9

We must show that given any ε > 0 there exists a δ > 0 such that

|x^2 - 9| < ε if 0 < |x - 3| < δ (1)

OK here...

|x + 3| |x - 3| < ε if 0 < |x - 3| < δ (2)

Using triangle inequality, we see that

|x + 3| = |(x - 3) + 6| ≤ |x - 3| + 6

No problem here...

Therefore if 0 < |x - 3| < δ

|x + 3| |x - 3| ≤ (|x - 3| + 6) |x - 3| < (δ + 6)δ

Fine... Now it's getting more difficult

It follows that (2) will be satisfied for any positive value of δ such that (δ + 6)δ ≤ ε. Let us agree to restrict our attention to positive values of δ such that δ ≤ 1. With this restriction, (δ + 6)δ ≤ 7δ, so that (2) will be satisfied as long as it is also the case that 7δ ≤ ε.

Understandable...

We can achieve this by taking δ to be the minimum of the numbers ε/7 and 1.

Whoa... Now I didn't really understand this last part... The writer just states it like magic (to me)... Anyone can give a more detailed explanation? I attached an image relating to my understanding btw...

Thanks a lot (other limit questions will follow).

PS: I also found it is possible (and more straightforward for me) to prove the existence of the limit by solving (δ + 6)δ ≤ ε with the abc formula (the quadratic formula), which gives δ ≤ sqrt(9 + ε) - 3...
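Here's a quick numerical sanity check I wrote in Python (just my own sketch, not part of the book's argument; the function names are arbitrary). It probes both choices of delta, the book's δ = min(1, ε/7) and the quadratic-formula value δ = sqrt(9 + ε) - 3 from the PS, and confirms that |x^2 - 9| < ε on the whole punctured interval.

def delta_book(eps):
    # the book's choice: delta = min(1, eps/7)
    return min(1.0, eps / 7.0)

def delta_quadratic(eps):
    # positive root of (delta + 6) * delta = eps, i.e. the "abc formula" route
    return (9.0 + eps) ** 0.5 - 3.0

def works(delta, eps, samples=100001):
    # sample points with 0 < |x - 3| < delta and test |x^2 - 9| < eps
    for i in range(1, samples):          # skip the endpoints of the interval
        x = 3.0 - delta + 2.0 * delta * i / samples
        if x != 3.0 and not abs(x * x - 9.0) < eps:
            return False
    return True

for eps in (0.001, 0.5, 1.0, 7.0, 100.0):
    print(eps, works(delta_book(eps), eps), works(delta_quadratic(eps), eps))

Both columns print True for every ε tried, which is at least reassuring that neither formula is off.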
 

Attachments

  • limit01.gif
  • #2
"We can achieve this by taking to be the minimum of the numbers /7 and 1.

Whoa... Now I didn't really understand this last part... The writer just states it like magic (to me)... Anyone can give a more detailed explanation? I attached an image relating to my understanding btw..."

There have been two inequalities set up: delta < 1 and delta < epsilon/7. Delta must satisfy BOTH of them. Of course, if delta is smaller than the SMALLER of 1 and epsilon/7, then it will be smaller than both.
 
  • #3
"Let us agree to restrict our attention to positive values of such that [<=] 1."

this is the standard trick when looking at such limits. it is because of this statement that we take delta to be the min of the two stated values. think real carefully about the fact that delta is smaller than both epsilon/7 and 1. it allows you to restrict your attention in the way described.

here's the next limit for you to prove:
lim (x->0+) SQRT(x) = 0. it's a nice exercise in right hand limits. a related limit, a two-sided limit, is this: lim (x->9) SQRT(x) = 3. then try lim (x->a) SQRT(x) = SQRT(a), where a > 0.

i think the main usefulness of proving such limits is not the actual work but that it helps you understand the definition of limits. it's quite a clever way to avoid using infinitesimal quantities. however, in nonstandard analysis, limits turn into algebra and it's quite cool.

may your journey be graceful,
phoenix
 
  • #4
I'll use "e" to denote epsilon, "d" for delta.
If e/7>1, then d may possibly be greater than one and still satisfy (2). The author has decided to ignore values of d that are >1. So either d=1<e/7 or d<e/7<1.

In this step:
With this restriction, (d + 6)d ≤ 7d, so that (2) will be satisfied as long as it is also the case that 7d ≤ e.

The author makes a calculation based on the assumption that d ≤ 1. So when choosing d, he must make sure that this assumption is not violated. Of course, he could have used any arbitrary bound on d for this process. If he had chosen d ≤ 2, then d would have to be less than the minimum of e/8 and 2.

For example, suppose e = 8. Then in order to guarantee that |x^2 - 9| < 8, you need to choose d such that if |x - 3| < d then |x^2 - 9| < 8. Consider d = 2. Then 1 < x < 5. But |4.5^2 - 9| = 11.25, which is not less than 8. So if d = 2, then (2) is not satisfied. Now consider d < e/8 = 1, so 2 < x < 4. Try x = 3.99: |3.99^2 - 9| = 6.9201 < 8. Since e/8 < 2 you should choose d < e/8 to guarantee that (2) is satisfied. Hope that example helps.
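Just to make those numbers concrete, here is the same example as a tiny Python sketch (my own check, nothing more):

# Target: |x^2 - 9| < 8 whenever 0 < |x - 3| < d, with e = 8.

x = 4.5                      # allowed when d = 2, since then 1 < x < 5
print(abs(x * x - 9.0))      # 11.25, which is not < 8, so d = 2 is too big

x = 3.99                     # allowed when d = e/8 = 1, since then 2 < x < 4
print(abs(x * x - 9.0))      # about 6.9201 < 8, consistent with d = min(e/8, 2) = 1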

BTW, read what I write carefully. I had (and still do have) a tough time with this topic.
 
  • #5
Thanks for all! I'll try to comprehend your replies...

Btw I'll try the f(x)=sqrt(x) limits also...
 
  • #6
Hey, I proved all of them, phoenixthoth!

I'll show the proof that lim(x->a) sqrt(x) = sqrt(a) with a > 0

We must show that for every e > 0 we can find d > 0 such that
if 0 < |x - a| < d then |sqrt(x) - sqrt(a)| < e (1)

Since |x - a| = |sqrt(x) - sqrt(a)| |sqrt(x) + sqrt(a)|, the condition 0 < |x - a| < d can be rewritten as

0 < |sqrt(x) - sqrt(a)| |sqrt(x) + sqrt(a)| < d (2)

By triangle inequality,

|sqrt(x) + sqrt(a)| = |{sqrt(x) - sqrt(a)} + 2sqrt(a)| ≤ |sqrt(x) - sqrt(a)| + 2sqrt(a) (3)

Since |sqrt(x) - sqrt(a)| < e, it follows that

|sqrt(x) - sqrt(a)| + 2sqrt(a) < e + 2sqrt(a)

By (3), we get

|sqrt(x) + sqrt(a)| < e + 2sqrt(a) (4)

Combining (4) with |sqrt(x) - sqrt(a)| < e gives

|sqrt(x) + sqrt(a)||sqrt(x) - sqrt(a)| < (e + 2sqrt(a))e (5)

By looking at (5) and (2), we can see that (1) will be fulfilled if

d ≤ (e + 2sqrt(a))e

Which means the limit exists!
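For comparison (this is my own addition, not part of the proof above): another δ that is often used for this limit is δ = min(a, ε·sqrt(a)); since |sqrt(x) - sqrt(a)| = |x - a| / (sqrt(x) + sqrt(a)) ≤ |x - a| / sqrt(a), that choice keeps the difference below ε. A rough numerical check in Python (the function name and sample counts are arbitrary):

import math, random

def check_sqrt_limit(a, eps, trials=100000):
    # candidate delta = min(a, eps * sqrt(a)); delta <= a keeps x >= 0, and
    # |sqrt(x) - sqrt(a)| = |x - a| / (sqrt(x) + sqrt(a)) <= |x - a| / sqrt(a) < eps
    delta = min(a, eps * math.sqrt(a))
    for _ in range(trials):
        x = random.uniform(a - delta, a + delta)
        if 0 < abs(x - a) < delta and not abs(math.sqrt(x) - math.sqrt(a)) < eps:
            return False
    return True

for a, eps in ((9.0, 0.5), (0.01, 1.0), (2.0, 1e-4)):
    print(a, eps, check_sqrt_limit(a, eps))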

btw I'm stuck at a particular limit problem but I think I'll play around with it some more before asking hints here...
 
  • #7
excellent job!

so the internet can give an education after all. heck, what has this world come to?

i like your "btw" comment, by the way; i think that's the right attitude. now that you have some experience leading to confidence you want to test yourself a little bit more. i bet you're suddenly working on a limit i having a master's degree in math would have trouble with. but i bet i'd see it in 24 hours. now, someone with a phd would see it in about five seconds. God already knows the answer before the question is asked. just signposts and measuring sticks...

cheers,
phoenix
 
  • #8
Hello everyone.

I have begun partial derivatives in my maths class and we have been talking about functions of more than one variable. Now, for a function f(x,y) with two variables the limit laws are similar.

The limit of f(x,y) is L as (x,y) approaches (a,b) if for every e > 0 there is a d > 0 such that |f(x,y) - L| < e whenever (x,y) ∈ D and 0 < sqrt((x - a)^2 + (y - b)^2) < d.

Right, now with that out of the way I have some questions.

If we want to show whether (x^2 - y^2)/(x^2 + y^2) has a limit as (x,y) approaches (0,0), we would have to perform our limit operations along enough lines to show whether it has a limit or not. In other words we take the limit along the x-axis...
f(x,0) = x^2/x^2 = 1
Now this is not enough to prove that it has a limit. We should try for the y-axis...
f(0,y) = -y^2/y^2 = -1
Hence the limit does not exist.
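A quick numerical illustration of the path argument (my own sketch, not part of the coursework):

def f(x, y):
    return (x * x - y * y) / (x * x + y * y)

# approach (0, 0) along the x-axis, the y-axis, and (for good measure) the line y = x
for t in (0.1, 0.01, 0.001):
    print(f(t, 0.0), f(0.0, t), f(t, t))

The values are identically 1 along the x-axis and -1 along the y-axis (and 0 along y = x), so no single number L can serve as the limit.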

Can anyone help me with this reasoning? Is there more to it?

Thanks.
 
  • #9
no, that's it. that limit doesn't exist.

proving a limit doesn't exist is always easier than proving it exists. proving it exists usually entails using the epsilon/delta business.

i forget the minutiae. there was a name for this rule in one-dimensional limits where, under the right conditions, the limit was the same as the limit of the ratio of the derivatives of the numerator and denominator. oh yes! l'hospital's rule. is there an analogue to that for two-dimensional limits? [my teachers didn't know one way or the other.] that would make life easy.

cheers,
phoenix
 
  • #10
Guys, I tried to prove that lim[x->1/3] 1/x = 3, but the final result is invalid! I couldn't spot what went wrong, so please point it out for me...

To prove that lim[x->1/3] 1/x = 3, we must show that for every e>0 we can find d>0 such that

if 0 < |x - 1/3| < d then |1/x - 3| < e

multiplying the first inequality by 3 and modifying the left term in the second inequality, we get

if 0 < |3x - 1| < 3d then |(1 - 3x)/x| < e

but |3x - 1| = |1 - 3x|, so we get

if 0 < |1 - 3x| < 3d then |(1 - 3x)/x| < e

multiplying the middle term in the first inequality by |x|/|x| = 1, we get

if 0 < |(1 - 3x)/x| |x| < 3d then |(1 - 3x)/x| < e

Here's my trick...

|x| = |(x - 1/3) + 1/3| ≤ |x - 1/3| + 1/3

But |x - 1/3| < d so |x - 1/3| + 1/3 < d + 1/3

That means |x| < d + 1/3

Multiplying the above inequality by |(1 - 3x)/x| < e, we get

|(1 - 3x)/x| |x| < e(d + 1/3)

But we also have the inequality

|(1 - 3x)/x| |x| < 3d

That means both of the inequalities will be satisfied if 3d = e(d + 1/3), or if
d = e/(3(3 - e))

which means that the limit exists...

But wait a moment, that formula for d implies that we can have values ≤ 0 (which contradicts d > 0). It will even be undefined for e = 3 (which doesn't follow the limit definition: "... for every value of e > 0, we can find a number d > 0..."). That means that we have a contradiction (damn that 1/x function, I spent days to get to this point and the solution still has a problem)...

Let's address the above problem later... Let's try it for e = 1.49 first... That means d = e/(3(3 - e)) ≈ 0.33.

Do we have 1.51 < f(x) < 4.49 if 0 < |x - 1/3| < 0.33?

Heck, for x = 1/3 - 0.329 ≈ 0.00433 we have f(x) ≈ 230.95, which invalidates f(x) < 4.49
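Here is that probe written out as a small Python sketch (my own check of the same thing; the step size is arbitrary):

eps = 1.49
d = eps / (3.0 * (3.0 - eps))            # the formula derived above, about 0.33

# scan 0 < |x - 1/3| < d and report the first x where |1/x - 3| < eps fails
step = d / 1000.0
x = 1.0 / 3.0 - d + step
while x < 1.0 / 3.0 + d:
    if x != 1.0 / 3.0 and abs(1.0 / x - 3.0) >= eps:
        print("fails at x =", x, "where 1/x =", 1.0 / x)
        break
    x += step

It fails right away near the left end of the interval, just like the x ≈ 0.00433 example.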

That means d = e/(3(3 - e)) is a wrong formula... That means I made a mistake somehow, somewhere... Could anyone point out in what step?

Thank you very very very much!
 
  • #11
The first problem is easy.

Nothing says you have to use the same formula for all ε! Suppose you have a formula for δ that works for ε < 3... then all you need to do is pick a single value of δ that works for all ε >= 3! A good candidate is, say, the value of δ corresponding to ε = 2.



As for the other problem... I think you lost your way when you multiplied the two inequalities; it tends to be difficult to work backwards after multiplying two inequalities. You have freedom in setting δ, but I think that operation obliterates any chance of forcing the inequality |(1 - 3x)/x| < e.


My hunch is to do this. From here:

0 < |(1 - 3x)/x| |x| < 3d

divide through by |x|, and then find an upper bound for 3d / |x|
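In case it's useful, here is one way that hint can be carried out, with a quick numerical check in Python (my own sketch; the particular constants are just one workable choice, not the only one). If we also insist that d ≤ 1/6, then every x with |x - 1/3| < d satisfies |x| > 1/6, so 3d/|x| < 18d, and d = min(1/6, e/18) forces |(1 - 3x)/x| < e.

def delta_for_reciprocal(eps):
    # keeping delta <= 1/6 bounds x away from 0, so 3*delta/|x| < 18*delta <= eps
    return min(1.0 / 6.0, eps / 18.0)

def works(eps, samples=100001):
    d = delta_for_reciprocal(eps)
    a = 1.0 / 3.0
    for i in range(1, samples):          # interior points of (1/3 - d, 1/3 + d) only
        x = a - d + 2.0 * d * i / samples
        if x != a and not abs(1.0 / x - 3.0) < eps:
            return False
    return True

for eps in (0.01, 1.0, 3.0, 100.0):
    print(eps, works(eps))

Note that this delta is defined (and positive) for every e > 0, including e = 3, which sidesteps the problem with the earlier formula.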
 
  • #12
1/(x+h) - 1/x = -h/(x(x+h)) = -h/[x^2 + xh], so if |h| is less than |x/2|, then

|xh| < |x^2/2|, so |x^2 + xh| ≥ |x^2| - |xh| > |x^2/2|. Hence if |h| is less than |x/2|, then

|h/[x^2 + xh]| < |h|/|x^2/2| = 2|h|/|x^2|. now if also |h| < e |x^2|/2,

then |1/(x+h) - 1/x| = |h/[x^2 + xh]| < 2|h|/|x^2| < e.

Hence if x ≠ 0, then 1/(x+h) approaches 1/x as h approaches 0.
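A numerical check of that bound (my own Python sketch; the test points are arbitrary). The two restrictions above amount to taking |h| < min(|x|/2, e*x^2/2):

import random

def reciprocal_bound_holds(x, eps, trials=100000):
    # restrictions from the argument above: |h| < |x|/2 and |h| < eps * x^2 / 2
    h_max = min(abs(x) / 2.0, eps * x * x / 2.0)
    for _ in range(trials):
        h = random.uniform(-h_max, h_max)
        if h != 0.0 and not abs(1.0 / (x + h) - 1.0 / x) < eps:
            return False
    return True

for x, eps in ((0.5, 0.1), (-2.0, 0.01), (10.0, 1e-6)):
    print(x, eps, reciprocal_bound_holds(x, eps))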

I'll bet this is a big help!

but the point is that to show something approaches something else, subtract them, and show the difference approaches zero. i.e. to show f(a+h) approaches f(a) as h approaches zero, express the difference f(a+h) - f(a) as an expression that has h in it, hopefully as a factor, and then see how small h has to be to make the whole expression smaller than e.

just trying to wake the dead, i.e. the archives.
 

FAQ: Proving Limits: Rigorous Definition & Difficulties

What is the rigorous definition of a limit?

The rigorous definition of a limit is the epsilon-delta definition: lim(x->a) f(x) = L means that for every ε > 0 there exists a δ > 0 such that |f(x) - L| < ε whenever 0 < |x - a| < δ. Informally, it says that f(x) can be made as close as we like to L by taking x sufficiently close, but not equal, to a.

What are the difficulties in proving limits?

There are several difficulties in proving limits, including handling the algebra of the particular function, finding a workable delta for an arbitrary epsilon, and showing that the function approaches a single value. The proof may also involve tools such as the triangle inequality and requires a solid grasp of the definition itself.

How can we prove a limit exists?

To prove that a limit exists, we must produce, for every ε > 0, a δ > 0 satisfying the definition above. Evaluating the function at values closer and closer to the specified value can suggest what the limit is, but the epsilon-delta argument is what actually proves it.

What are the different methods used to prove limits?

There are several methods used to prove limits, including working directly from the epsilon-delta definition, using the squeeze theorem, and applying limit laws to limits that are already known. These methods involve different techniques, but they all aim to show that the function approaches a single value as the input gets arbitrarily close to the specified value.

Why is proving limits important?

Proving limits is important because it allows us to understand the behavior of a function, particularly near points where it is undefined or discontinuous. It is also a crucial tool in calculus and other branches of mathematics, as it underlies derivatives and integrals and helps in determining the convergence or divergence of a series.
