Precise (Or Epsilon-Delta) Definition of a Limit

In summary, the Precise Definition of a Limit states that the limit of f(x) as x approaches c is L if for every value of ε > 0 there is a δ > 0 such that every x satisfying 0 < |x - c| < δ has an output satisfying the inequality |f(x) - L| < ε. As ε decreases, the range of x values that satisfy the inequality typically shrinks as well, capturing the idea that as x gets arbitrarily close to c, the value of the function f(x) approaches L. This can also be thought of in terms of open balls, where the tolerance ε defines an open ball around L and δ is the radius of a ball around c whose image fits inside it. A continuous function will have an allowable δ-range for every tolerance ε.
  • #1
ProPM
Hello guys!

I am trying to get a solid grasp of the Precise Definition of a Limit. I am having a particularly hard time linking the intuition of the limit I developed a while ago to the Epsilon-Delta definition.

I understand the basics: a limit exists if and only if for every value of ε > 0 there is a δ > 0 that "encloses" a range of x values whose outputs satisfy the inequality |f(x) - L| < ε.

Now, I simply can't understand how on Earth that attests that the value of a function, f (x), approaches L as x gets infinitely close to, e.g. c ...

Here is my take on it (I hope it is at least mildly correct!):

Delta is a function of epsilon. Namely, if epsilon decreases (if we close-in on L from both sides), Delta decreases (meaning the x values approach c from both sides)

If the limit is true/exists, we can make epsilon as small as we want (get as close as we wish to L from both sides) thereby making Delta increasingly small (making the x values get closer and closer to c.) This shows that as f (x) approaches L, x approaches c from both sides: the limit is correct/true.

Am I on the right track?

Thank you very much in advance for any help whatsoever (this thing is really bothering me)!
 
  • #2
Do you know what an open ball is? Try to formulate it conceptually in terms of open balls; I find the ##\epsilon##-##\delta## definition of a limit to be very intuitive if I think of it in terms of open balls (motivated by topology).
 
  • #3
You're on the right track.

Epsilon-delta definitions actually have a lot to do with basic physics. It's the same idea of an approximation.

Let's say that you're in a shower. You like your shower to have a temperature of 40°C, or thereabouts. You can regulate the temperature by turning some knob. Of course, you can never get exactly 40°C by turning the knob (you don't have that precision), but you can get close. In fact, you can get arbitrarily close.

For example, let's say that you like 40°C, but ±5°C is OK too. Then you get a certain number of knob positions that are OK; in fact, you get an entire range of positions that is OK. The 5°C is called the ε, while the range of positions has to do with the δ.
However, if you're more sensitive and only ±0.5°C is OK, then there are still some positions of the knob that work, but significantly fewer of them.

In general, let's say that you like 40°C with a tolerance of ε°C. Then there is a certain range of positions of the knob that are OK. This is the δ-range. The smaller you take ε, the smaller the δ-range will be. But whatever ε we pick, there will always be a δ-range.

So choosing your temperature in the shower is continuous.

An example of a discontinuous function would be the following. Consider the following graph and imagine that it is a landscape: (the following graph IS continuous, but that's not the function I'm talking about)



So your landscape is flat between 1 and 3, and decreases outside that range. Let's say that you have a ball that you want to place on the landscape. Your goal is to place the ball exactly on 3. Of course, this is impossible to do, so we will allow some degree of tolerance. Let's say we want to place the ball within distance 1 of the spot 3. We can always do this by placing it to the left of 3. But we can't do it by placing it to the right: if we place it on the right, the ball will just roll away, outside of the allowable range. So we see that there is no allowable interval of tolerance around 3: the space to the left of 3 is OK, but the space to the right of 3 is not. This means that the process is not continuous.

Abstractly, you work with functions ##f:\mathbb{R}\rightarrow \mathbb{R}##. You should see the domain as some kind of knob in the shower. We can put the knob on any value we please, but only approximately. The function ##f## regulates the temperature: if we put the knob on ##0##, then we will feel temperature ##f(0)##. The entire point of epsilon-delta definitions is to get as close as we want to some specific temperature by turning the knob, and thus getting within a certain allowable range of the knob. If, no matter how close we want to get to the temperature, there is always an allowable interval of the knob to the left and to the right, then the function is continuous.
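The shower analogy can be sketched numerically. Here is a toy model of my own (a hypothetical linear knob-to-temperature map, not anything from the thread) showing that each tolerance ε admits a δ-range of knob positions, and that the range shrinks as ε shrinks:

```python
# Hypothetical model: knob position p maps linearly to a temperature in °C.
def temperature(p):
    return 10 + 60 * p

# Target 40 °C corresponds to knob position 0.5.
target, knob_at_target = 40.0, 0.5

for eps in (5.0, 0.5, 0.05):
    # The map is linear with slope 60, so delta = eps / 60 works.
    delta = eps / 60
    # Sample knob positions strictly inside the delta-range and check the tolerance.
    positions = [knob_at_target + delta * t for t in (-0.9, -0.5, 0.5, 0.9)]
    assert all(abs(temperature(p) - target) < eps for p in positions)
    print(f"eps={eps}: delta-range of width {2 * delta:.5f} around the knob works")
```

Smaller ε forces a narrower band of acceptable knob positions, but some band always exists: that is exactly the continuity of this process.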
 
  • #4
ProPM said:
I understand the basics: a limit exists if and only if for every value of ε > 0 there is a δ > 0 that "encloses" a range of x values whose outputs satisfy the inequality |f(x) - L| < ε.

Now, I simply can't understand how on Earth that attests that the value of a function, f (x), approaches L as x gets infinitely close to, e.g. c ...

In symbols :

$$\forall \epsilon > 0, \exists \delta > 0 \space | \space 0 < |x-a| < \delta \Rightarrow |f(x) - L| < \epsilon$$

Considering ##0 < |x-a| < \delta##, we know that |x-a| must always be positive, so we never actually consider what happens at x=a, only what happens as we approach it. That's where ##|x-a| < \delta## comes into play.

Expanding, we get ##-\delta < x-a < \delta##, i.e. ##a - \delta < x < a + \delta##.

So there exists a ##\delta > 0## such that ##x## is confined between ##a - \delta## and ##a + \delta##, and ##x## can get arbitrarily close to ##a## without ever equaling it.

Using this, what does it say about ##|f(x) - L| < \epsilon##?

Well first, let's consider ##-\epsilon < f(x) - L < \epsilon## which yields ##f(x) + \epsilon > L > f(x) - \epsilon##. So ##\forall \epsilon > 0## we can make ##f(x)## as close to ##L## as we like. How close do we need to be you might ask? Sufficiently close.

What defines sufficiently close? Well, ##f(x)## varies according to the values of ##x##, and how far ##x## is from ##a## is controlled by ##\delta##. So how far ##f(x)## ends up from ##L## (which must be smaller than ##\epsilon##) also varies with ##\delta##. Hence the ##\delta## we choose will depend on ##\epsilon##.

So in conclusion we know we can choose a ##\delta(\epsilon)## as to make ##f(x)## as close to ##L## as we like.
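To make the ##\delta(\epsilon)## idea concrete, here is a toy example of my own (not from the post): for ##f(x) = 2x + 1## with ##a = 1## and ##L = 3##, we have ##|f(x) - L| = 2|x - a|##, so ##\delta = \epsilon/2## always works. A quick numeric check:

```python
def f(x):
    return 2 * x + 1

a, L = 1.0, 3.0

def delta(eps):
    # |f(x) - L| = 2|x - a|, so delta = eps / 2 guarantees |f(x) - L| < eps.
    return eps / 2

for eps in (1.0, 0.1, 0.001):
    d = delta(eps)
    # Check points inside the deleted neighborhood 0 < |x - a| < d.
    for t in (-0.99, -0.5, 0.5, 0.99):
        x = a + d * t
        assert 0 < abs(x - a) < d
        assert abs(f(x) - L) < eps
print("delta = eps/2 works for every tested eps")
```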
 
  • #5
ProPM said: "This shows that as f (x) approaches L, x approaches c from both sides: the limit is correct/true."
This seems backwards: we generally say "as x approaches c, f(x) approaches L."

I think of a limit as stating that, for x values "close" to c, we can make f(x) as close to L as we want. [itex]0 < |x - c| < \delta[/itex] denotes a deleted neighborhood of c - basically an interval centered on c, but with c removed. Then, the epsilon-delta definition of a limit simply states that [itex]\lim_{x\to c}f(x) = L[/itex] whenever, for every epsilon, there exists a (nonempty) deleted neighborhood of c (call it N) such that for every x in N, f(x) is within epsilon of L.

To better understand this definition, let's consider the rational indicator function, [itex]I_Q[/itex]. [itex]I_Q(x)[/itex] is defined as 1 if x is rational, and zero otherwise. We can see that the limit [itex]\lim_{x\to 1}I_Q(x)[/itex] does not exist: whenever [itex]\epsilon \le 1/2[/itex], every deleted neighborhood of 1 contains both rationals (where [itex]I_Q = 1[/itex]) and irrationals (where [itex]I_Q = 0[/itex]), and no single number L can be within [itex]\epsilon[/itex] of both 0 and 1. No matter how "close" our x values are to 1, there is a limit to how close [itex]I_Q(x)[/itex] can be to any given number; thus, the limit fails to exist.

I'd also like to point out that, technically, there is no requirement that delta decrease with epsilon. Consider, for example, the function f(x) = 0: [itex]|f(x) - 0| < \epsilon[/itex] for ANY choice of delta.
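The constant-function point can be checked directly (a toy check of my own): one absurdly large δ is valid for every ε, so δ need not shrink at all.

```python
def f(x):
    return 0.0  # the constant function f(x) = 0

L = 0.0
delta = 1e9  # one huge delta, reused for every eps

for eps in (1.0, 1e-12):
    for x in (-delta * 0.999, -1.0, 2.5, delta * 0.999):
        if 0 < abs(x - 0):  # deleted neighborhood of c = 0
            # |f(x) - L| = 0 < eps no matter how small eps is
            assert abs(f(x) - L) < eps
print("the same huge delta works for every eps")
```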
 
  • #6
ProPM said:
Am I on the right track?

I agree with micromass that you're on the right track and I agree with Strants that it's more in the spirit of the formal definition to think of delta being small as "causing" (i.e. implying) that |f(x) - L | is small instead of thinking about epsilon making delta small.

When a proof about a limit is written, the person writing the proof usually provides a way to state a suitable delta by making it a function of epsilon. But this is a symptom of the fact that some people accept mathematical proofs when the reasoning is done in a backward manner. (For example, in "proving" trig identities, most teachers accept writing the identity to be proven and then performing steps till we reach an identity already known to be true. This looks like you assume the thing that is to be proven as the first step! A proper proof would consist of writing the steps in reverse order so that you begin with an identity known to be true and derive the identity to be proven.)

The forwards order for a limit proof could begin something like "Given epsilon > 0, pick delta = epsilon/3". However, it would seem that "delta = epsilon/3" was pulled out of thin air. So often these proofs are written as if we are working backwards and trying to "solve for delta" as a function of epsilon. But the formal reasoning goes in the reverse order. It says that making delta small causes |f(x) -L| to be smaller than epsilon. The fact that making epsilon small forces us to search for smaller deltas is generally true, but it isn't the fact that makes the proof work.
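To make that concrete, here is my own filling-in of the "##\delta = \epsilon/3##" template (not from the thread): a forward-order proof that ##\lim_{x\rightarrow a} 3x = 3a## reads

$$\text{Given } \epsilon > 0, \text{ pick } \delta = \frac{\epsilon}{3}. \text{ If } 0 < |x - a| < \delta, \text{ then } |3x - 3a| = 3|x - a| < 3\delta = \epsilon.$$

The choice "##\delta = \epsilon/3##" looks pulled out of thin air precisely because it was found by working backwards from ##|3x - 3a| < \epsilon##, but the proof itself runs forwards.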
 
  • #7
Thank you very much for all the replies! They even helped me understand parts of the definition I thought I had already comprehended!

But I think I need to give myself some more time to digest the content of the responses; I am still having a hard time convincing myself of some things.

I think what confused me was that I thought that the purpose of the precise definition was to find the limit - show that, as x approaches, e.g. c, f (x) approaches L.

I think I got what is the function of the definition now:

The true "role" of the precise definition is to prove/confirm a limit is in fact L. So, if someone claims that the limit of f (x) as x approaches, e.g. c, is L, then the precise definition can be used to prove that right or wrong.

How does it do that?

By testing whether for every ε > 0 there is a δ > 0 that "houses" a range of x values whose outputs satisfy |f(x) - L| < ε. Meaning that we can get as close as we want to the limit, L.

How does that sound?

I would like to thank all of you guys one more time! I will keep reading your posts one by one!
 
  • #8
Yes, that sounds good.

So indeed, the purpose of the epsilon-delta definition is to prove that certain limits are true. In order to prove things like ##\lim_{x\rightarrow a} f(x) = L##, you take an arbitrary ##\varepsilon>0## and then find a ##\delta## that works for that ##\varepsilon##. Here, "works" means that for all ##x## in the ##\delta##-range, ##f(x)## is in the ##\varepsilon##-range.

It might be good to talk a bit of history now. Because when limits and continuity were invented, they looked totally different and didn't use epsilon-delta at all. Let's say we want to calculate

[tex]\lim_{h\rightarrow 0} \frac{(x+h)^2 - x^2}{h}[/tex]

Historically, this was solved by infinitesimals. These are not real numbers. Infinitesimals are "things" that lie extremely close to 0 (closer than any real nonzero number), but aren't 0. So let ##e## be an infinitesimal, then we calculate

[tex]\frac{(x+e)^2 - x^2}{e} = \frac{x^2 + 2xe + e^2 - x^2}{e} = \frac{2xe + e^2}{e} = 2x + e[/tex]

But since ##e## is close to ##0##, we can set it to ##0##. So we get that the limit is ##2x##.
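The same algebra can be watched numerically with a small but finite ##e## in place of an infinitesimal (a quick floating-point sketch of my own): the quotient is exactly ##2x + e##, and it drifts toward ##2x## as ##e## shrinks.

```python
def difference_quotient(x, e):
    # ((x + e)^2 - x^2) / e, which simplifies algebraically to 2x + e
    return ((x + e) ** 2 - x ** 2) / e

x = 3.0
for e in (0.1, 0.001, 1e-6):
    q = difference_quotient(x, e)
    # The quotient equals 2x + e up to float rounding.
    assert abs(q - (2 * x + e)) < 1e-6
    print(f"e={e}: quotient = {q}")
# As e shrinks, the quotient approaches 2x = 6.
```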

This is how limits were done in the past. And everything worked fine. If you want to calculate limits, then doing things like this will give you the right answer.

But there are problems. For example, why can we set ##e=0##, what justification is there? And what is an infinitesimal anyway?
Furthermore, we have a real function that we want to calculate the limit of. And the answer is a real number. But the relation between them requires things that aren't real numbers, but infinitesimals! It would be much more elegant if we can find limits by only working with real numbers or properties of real numbers.

These issues plagued calculus for over 100 years. No answer was found. Not until they invented the epsilon-delta definition. This was a satisfactory answer and was much more rigorous than the infinitesimal approach.

So if you want to calculate limits, then you don't need epsilon-delta at all. You will rarely ever need it for calculating specific limits. But you need it to put calculus on solid ground.

That said, the approach with infinitesimals was made rigorous too, but only very recently. This is the approach of hyperreal or surreal numbers.
 
  • #9
Here's how I like to think of it. Say we have a function ##f## continuous at some ##x_0## and let's say for starters that you give me any ##\epsilon > 0## whatsoever; what you have glibly done is given me an open interval around ##f(x_0)## with radius ##\epsilon##. I can then guarantee you an open interval of radius ##\delta## around ##x_0## whose image under ##f## will fit into the open interval that you prescribed me. Now let's say you pinch the open interval around ##f(x_0)## to make it even smaller (i.e. choose a smaller ##\epsilon##) then I can contest you and sufficiently pinch the open interval around ##x_0## so that its image once again fits into your newly pinched open interval around ##f(x_0)##. We can then keep doing this indefinitely in the sense that you can keep pinching the open interval around ##f(x_0)## to arbitrarily small sizes and I will always be able to pinch the open interval around ##x_0## to sufficiently small sizes so that under ##f## it fits into your open interval around ##f(x_0)##. This tells me that no matter how small an open interval you make around ##f(x_0)##, I can always find an open interval around ##x_0## which can be fit into your interval under ##f##.

Consider the function ##f(x) = \begin{cases}
0 \text{ if } x\leq 0 \\
1 \text{ if } x> 0
\end{cases}## and let's say we want to evaluate continuity at ##x = 0##. So say you give me an open interval of radius ##\epsilon = \frac{1}{2}## about ##f(0) = 0##; if you imagine this function as a graph in ##\mathbb{R}^{2}## then said open interval can be pictured as being centered on the origin and lying along the ##y##-axis. Now can I manage to find you an open interval of some radius ##\delta## such that under ##f## this interval fits into yours (my interval can be pictured as being centered on the origin and lying along the ##x##-axis)? Well note that no matter how small an open interval I take around ##x = 0##, it will always contain some ##x < 0## and some ##x > 0## so the image will always be ##\{0,1\}##. But there is no way this can fit inside your original open interval so this function can't be continuous in the above sense, as we would expect. More explicitly, if we assume ##f## is continuous at ##x = 0## then for ##\epsilon = \frac{1}{2}## there exists a ##\delta > 0## such that for all ##x\in (-\delta,\delta)##, ##f(x)\in (-\frac{1}{2},\frac{1}{2}) ## which is a contradiction as ##f(x) = 1## for ##x > 0##. I hope that helps!
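The failure at ##x = 0## can also be checked mechanically (a small sketch of my own): for ##\epsilon = 1/2##, every candidate ##\delta## admits a point just to the right of 0 whose image escapes the ##\epsilon##-band.

```python
def f(x):
    # The step function from the post: 0 for x <= 0, 1 for x > 0
    return 0.0 if x <= 0 else 1.0

eps = 0.5
L = f(0)  # = 0; continuity at 0 would need |f(x) - L| < eps near 0

for delta in (1.0, 0.1, 1e-9):
    x = delta / 2  # inside (0, delta), so within the delta-range
    # ... but its image escapes the eps-band: |f(x) - L| = 1 >= 1/2
    assert abs(f(x) - L) >= eps
print("no delta works for eps = 1/2: f is discontinuous at 0")
```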

EDIT: Here's an animation that depicts what I was talking about above: http://www2.seminolestate.edu/lvosbury/calculusI_folder/EpsilonDelta.htm
 

FAQ: Precise (Or Epsilon-Delta) Definition of a Limit

What is the precise definition of a limit?

The precise definition of a limit is a mathematical concept that describes the behavior of a function as its input approaches a certain value. It involves the use of epsilon and delta, where epsilon represents a small allowed distance of the output f(x) from the limit value L, and delta represents a small distance of the input x from the point being approached.

Why is the precise definition of a limit important?

The precise definition of a limit is important because it allows us to rigorously prove the existence and value of a limit, as well as understand the behavior of a function at a specific point. Without this definition, we would not have a formal way to determine the limit of a function.

How do you use the epsilon-delta definition to prove a limit?

To use the epsilon-delta definition to prove a limit, we must show that for any given value of epsilon (representing a small distance from the limit value), there exists a corresponding value of delta such that whenever the input is within delta of the point being approached (but not equal to it), the output is within epsilon of the limit value.

What is the difference between a one-sided limit and a two-sided limit?

A one-sided limit only considers the behavior of a function as the input approaches the limit point from one direction (either the left or the right). A two-sided limit, on the other hand, considers the behavior of a function as the input approaches the limit point from both the left and right sides.

Can a function have a limit at a point where it is not defined?

Yes, a function can have a limit at a point where it is not defined. This can happen when the function has a "hole" or point of discontinuity at that point. In this case, the limit may exist and be well-defined, but the function itself is not defined at that point.
