Is understanding limits necessary for senior-level real analysis?

  • Thread starter: SweatingBear
In summary, the conversation discusses epsilon-delta proofs and addresses two specific questions: the use of strict versus non-strict inequalities, and the rationale behind choosing the smaller of two candidate values for delta. It also includes helpful explanations and a short proof that the product of two larger positive numbers exceeds the product of two smaller ones.
  • #1
SweatingBear
Regarding the post below in http://www.mathhelpboards.com/f10/epsilon-delta-proof-confusion-4745/:

Prove It said:
I suggest the OP reads these PDFs.

https://www.physicsforums.com/attachments/817 and View attachment 818

There are just two things left I need to wrap my mind around; after that, I think I will have comprehended the epsilon-delta concept.

In example 3 in the document epsilon-delta1.pdf where the task is to show that \(\displaystyle \lim_{x \to 5} \, (x^2) = 25\), they assume that there exists an \(\displaystyle M\) such that \(\displaystyle |x + 5| \leqslant M\).

(1) Is it not supposed to be a strict inequality i.e. \(\displaystyle |x+5| < M\) and not \(\displaystyle |x+5| \leqslant M\)? Why would the eventual equality between \(\displaystyle M\) and \(\displaystyle |x+5|\) ever be interesting?

They make the aforementioned requirement when one arrives at

\(\displaystyle |x-5| < \frac {\epsilon}{|x+5|} \, .\)

We somehow, normally through algebraic manipulations, wish to arrive at \(\displaystyle |x-5| < \frac{\epsilon}{M}\) and in their procedure, they write

\(\displaystyle |x-5||x+5| < \epsilon \iff |x-5|M < \epsilon \, .\)

(2) The steps above have overlooked something. Sure, I can buy that \(\displaystyle |x-5||x+5| < |x-5|M\) because we stipulated an upper bound for \(\displaystyle |x+5|\) but just because \(\displaystyle |x-5|M\) is greater than \(\displaystyle |x-5||x+5|\) does not mean that it also must be less than epsilon, right?

Drawing a number line, one can readily conclude that having a < c and a < b does not imply b < c.

What is going on?
 
  • #2
sweatingbear said:
Regarding the post below in http://www.mathhelpboards.com/f10/epsilon-delta-proof-confusion-4745/:
There are just two things left I need to wrap my mind around; after that, I think I will have comprehended the epsilon-delta concept.

In example 3 in the document epsilon-delta1.pdf where the task is to show that \(\displaystyle \lim_{x \to 5} \, (x^2) = 25\), they assume that there exists an \(\displaystyle M\) such that \(\displaystyle |x + 5| \leqslant M\).

(1) Is it not supposed to be a strict inequality i.e. \(\displaystyle |x+5| < M\) and not \(\displaystyle |x+5| \leqslant M\)? Why would the eventual equality between \(\displaystyle M\) and \(\displaystyle |x+5|\) ever be interesting?

You can just make them all strict. I don't think it matters a whole lot in these cases.

They make the aforementioned requirement when one arrives at

\(\displaystyle |x-5| < \frac {\epsilon}{|x+5|} \, .\)

We somehow, normally through algebraic manipulations, wish to arrive at \(\displaystyle |x-5| < \frac{\epsilon}{M}\) and in their procedure, they write

\(\displaystyle |x-5||x+5| < \epsilon \iff |x-5|M < \epsilon \, .\)

(2) The steps above have overlooked something. Sure, I can buy that \(\displaystyle |x-5||x+5| < |x-5|M\) because we stipulated an upper bound for \(\displaystyle |x+5|\) but just because \(\displaystyle |x-5|M\) is greater than \(\displaystyle |x-5||x+5|\) does not mean that it also must be less than epsilon, right?

Well, I'm not sure I agree that the steps have overlooked something. You let $\delta= \min(1, \epsilon/11)$. Therefore, $\delta \le 1$, which implies that $|x+5|<11$. So you can control that piece. But, $\delta \le \epsilon/11$, which means that $|x-5|<\epsilon/11$. Therefore, the product
$$|x^{2}-25| = |x-5| |x+5| < 11( \epsilon/11) = \epsilon.$$
So the answer to your question is that it does imply that $|x-5||x+5| < \epsilon$, because of the way you chose your $\delta$.
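Spelled out as a single chain (just restating the argument above, with $\delta = \min(1, \epsilon/11)$ and $0<|x-5|<\delta$):
$$0<|x-5|<\delta \;\Longrightarrow\; |x-5|<1 \ \text{and} \ |x-5|<\frac{\epsilon}{11} \;\Longrightarrow\; |x+5|<11 \ \text{and} \ |x-5|<\frac{\epsilon}{11} \;\Longrightarrow\; |x^{2}-25| = |x-5|\,|x+5| < \frac{\epsilon}{11}\cdot 11 = \epsilon.$$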

Remember that $\epsilon$ is always chosen first. Then your goal is to find a formula $\delta = \delta( \epsilon)$ so that the condition of the limit holds.

Drawing a number line, one can readily conclude that having a < c and a < b does not imply b < c.

What is going on?

I would agree that $a<c$ and $a<b$ does not imply that $b<c$. However, that's not what's going on here. Instead, you have something more like $0<a<b$ and $0<c<d$, and therefore $ac<bd$: that is true.

By the way, I would like to take this moment to write that I really appreciate the fact that you take the time to write meticulously crafted questions! Your $\LaTeX$ is impeccable, and really adds to the clarity of your questions.
 
  • #3
@Ackbach: First off, thank you very much for your kind remarks and your helpful reply.

Ackbach said:
You can just make them all strict. I don't think it matters a whole lot in these cases.

All right, I understand.
Ackbach said:
Well, I'm not sure I agree that the steps have overlooked something. You let $\delta= \min(1, \epsilon/11)$.

This reminds me: I have always wondered why we want \(\displaystyle \delta\) to be the smallest of those two quantities (i.e. the two upper bounds). What is the rationale behind that? My guess is that since we are making \(\displaystyle x\) very close to the point of interest, we wish to choose as small a distance as possible to that point. Is this a valid conclusion?

Ackbach said:
I would agree that $a<c$ and $a<b$ does not imply that $b<c$. However, that's not what's going on here. Instead, you have something more like $0<a<b$ and $0<c<d$, and therefore $ac<bd$: that is true.

All right, I see that requiring the numbers to be positive allows us to conclude that \(\displaystyle ac < bd\). However, I do not think that I have seen this one before:

\(\displaystyle (0 < a < b) \ \wedge \ (0 < c < d) \implies ac < bd \, .\)

Could you perhaps direct me to a source where I can read up on it and/or eventually find some kind of proof? I checked out this Wikipedia article but to no avail.
 
  • #4
sweatingbear said:
@Ackbach: First off, thank you very much for your kind remarks and your helpful reply.

You're very welcome. (Nod)

This reminds me: I have always wondered why we want \(\displaystyle \delta\) to be the smallest of those two quantities (i.e. the two upper bounds). What is the rationale behind that?

The reason is that we're going to need both conditions met simultaneously (a logical AND operation); that is, we need BOTH the condition $|x+5|\le 11$ AND $|x-5|< \epsilon/11$ to be true simultaneously. If you defined $\delta= \max (1, \epsilon/11)$, one of those conditions might fail. So you're banking on the property of the minimum, that if $a= \min(b,c)$, then $a\le b$ AND $a\le c$.
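For a concrete instance (numbers chosen purely for illustration): take $\epsilon = 5.5$. Then
$$\delta = \min\left(1, \tfrac{5.5}{11}\right) = \min(1, 0.5) = 0.5,$$
and any $x$ with $0<|x-5|<0.5$ satisfies both $|x-5|<1$ (hence $|x+5|<11$) and $|x-5|<0.5 = \epsilon/11$ at the same time.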

My guess is that since we are making \(\displaystyle x\) very close to the point of interest, we wish to choose as small a distance as possible to that point. Is this a valid conclusion?

The answer is that there is no smallest distance to the point of interest, if you are in the real number system. The idea of a limit is not that you get as close as possible to a point of interest, but that you get as close as you want.

All right, I see that requiring the numbers to be positive allows us to conclude that \(\displaystyle ac < bd\). However, I do not think that I have seen this one before:

\(\displaystyle (0 < a < b) \ \wedge \ (0 < c < d) \implies ac < bd \, .\)

Could you perhaps direct me to a source where I can read up on it and/or eventually find some kind of proof? I checked out this Wikipedia article but to no avail.

From the wiki, you get that if $a<b$, and $c>0$, then $ac<bc$. So, apply this logic twice, along with the transitivity of inequalities.

Theorem: If $0<a<b$ and $0<c<d$, then $ac<bd$.

Proof:

1. Assume that $0<a<b$, and that $0<c<d$.
2. Because $c>0$, it follows that $ac<bc$.
3. By the transitivity of inequalities, it must be that $b>0$.
4. Therefore, applying our logic again, it is the case that $bc<bd$.
5. Therefore, $ac<bc<bd$.
6. By the transitivity of inequalities, it follows that $ac<bd$. QED.

The same logic follows through if you use non-strict inequalities.
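As a quick sanity check with concrete numbers (chosen purely for illustration): $a=1$, $b=2$, $c=3$, $d=4$ gives $ac = 3 < bc = 6 < bd = 8$, which is exactly the chain in steps 2–5.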
 
  • #5
When I arrive at \(\displaystyle |x-5||x+5| < \epsilon \)

I usually do the following \(\displaystyle |x+5|= |x-5+10|\leq |x-5|+10\)

So we have the following

\(\displaystyle |x-5||x+5| \leq |x-5| \, ( |x-5|+10 )\)

which can be made arbitrarily small.
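One way to finish from here (a sketch; this particular choice of \(\displaystyle \delta\) is only one possibility): take \(\displaystyle \delta = \min(1, \epsilon/11)\). Then \(\displaystyle 0 < |x-5| < \delta\) gives
$$|x-5|\left(|x-5|+10\right) < \delta(\delta + 10) \le \delta(1+10) = 11\delta \le \epsilon.$$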
 
  • #6
Thanks again for such a thorough reply!

Ackbach said:
If you defined $\delta= \max (1, \epsilon/11)$, one of those conditions might fail.

Honestly, I do not see why/how it would fail; would you mind elaborating?

Ackbach said:
So you're banking on the property of the minimum, that if $a= \min(b,c)$, then $a\le b$ AND $a\le c$.

Intuitively, I cannot grasp that having $a$ equal the smallest of the quantities $b$ and $c$ allows $a$ to equal $b$ and $c$ at the same time. I do not see how that makes sense for $\min(b,c)$, nor why $\max(b,c)$ could not allow $a$ to equal $b$ and $c$ simultaneously.

Ackbach said:
The answer is that there is no smallest distance to the point of interest, if you are in the real number system. The idea of a limit is not that you get as close as possible to a point of interest, but that you get as close as you want.

Ah! Thank you for that clarification, helped me see things differently.

Ackbach said:
From the wiki, you get that if $a<b$, and $c>0$, then $ac<bc$. So, apply this logic twice, along with the transitivity of inequalities.

Theorem: If $0<a<b$ and $0<c<d$, then $ac<bd$.

Proof:

1. Assume that $0<a<b$, and that $0<c<d$.
2. Because $c>0$, it follows that $ac<bc$.
3. By the transitivity of inequalities, it must be that $b>0$.
4. Therefore, applying our logic again, it is the case that $bc<bd$.
5. Therefore, $ac<bc<bd$.
6. By the transitivity of inequalities, it follows that $ac<bd$. QED.

The same logic follows through if you use non-strict inequalities.

Thanks for that proof, I will have a thorough look at it very soon.
 
  • #7
sweatingbear said:
Honestly, I do not see why/how it would fail; would you mind elaborating?

Well, suppose $1>\epsilon/11$, and so you let $\delta= \max(1,\epsilon/11)$. Then you'd have $\delta=1$. But you could have a small epsilon, say, $\epsilon=1$. Then $x$ could range up to, say, $5.9$, and you'd have
$$|x+5||x-5|=10.9(0.9)=9.81> \epsilon = 1.$$

Intuitively, I cannot grasp that having $a$ equal the smallest of the quantities $b$ and $c$ allows $a$ to equal $b$ and $c$ at the same time.

It doesn't. It lets $a \le b$ and $a \le c$ hold at the same time. $a\le b$ is the same thing as saying $a$ is less than OR equal to $b$. Review your logic!
 
  • #8
@Ackbach: I must thank you for your patience replying to my posts, much appreciated.

Ackbach said:
Well, suppose $1>\epsilon/11$, and so you let $\delta= \max(1,\epsilon/11)$. Then you'd have $\delta=1$. But you could have a small epsilon, say, $\epsilon=1$. Then $x$ could range up to, say, $5.9$, and you'd have
$$|x+5||x-5|=10.9(0.9)=9.81> \epsilon = 1.$$

I do not see the "problem" or "contradiction"; sorry, but I cannot quite wrap my mind around this concept. Do you care to elaborate further?

Ackbach said:
It doesn't. It lets $a \le b$ and $a \le c$ hold at the same time. $a\le b$ is the same thing as saying $a$ is less than OR equal to $b$. Review your logic!

What if for example \(\displaystyle b = 4\) and \(\displaystyle c = 1\)? If we use the "AND" operator, then that means we have two requirements for \(\displaystyle a\) that must be met: \(\displaystyle a\) must be less than or equal to \(\displaystyle 4\) and, simultaneously, less than or equal to \(\displaystyle 1\). Effectively, this implies that \(\displaystyle a\) simply must be less than or equal to \(\displaystyle 1\).

Hm, wait a minute... So – effectively – thanks to the "AND" operator, the upper bound for \(\displaystyle a\) became the smaller of \(\displaystyle b\) and \(\displaystyle c\). So this is why we let \(\displaystyle \delta\) equal the minimum of the two different bounds and not the maximum?

I think I am on the verge of having a Eureka moment; come on, somebody please push me over the line!
 
  • #9
sweatingbear said:
@Ackbach: I must thank you for your patience replying to my posts, much appreciated.

You're quite welcome! I wouldn't say, though, that replying to your posts is requiring all that much patience. It's the bad attitudes of some posters I've dealt with in the past that are tiresome to work with.

I do not see the "problem" or "contradiction"; sorry, but I cannot quite wrap my mind around this concept. Do you care to elaborate further?

Well, we want to guarantee that if $\delta$ equals something as a function of $\epsilon$, then $|f(x)-L|<\epsilon$. But I just provided an example where $\delta$ was some function of $\epsilon$ (in this case, just the constant function $1$), and that did not imply that $|f(x)-L|<\epsilon$. So that $\delta$ didn't work.
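If it helps, here is a quick numerical check of that example (a minimal Python sketch added for illustration; the helper name `violates` is made up):

```python
# Check numerically that delta = max(1, eps/11) fails for lim_{x->5} x^2 = 25,
# while delta = min(1, eps/11) works, for the sample tolerance eps = 1.

def violates(delta, eps, x):
    """True if x is within delta of 5 (but not equal to 5) yet x^2 is NOT within eps of 25."""
    return 0 < abs(x - 5) < delta and abs(x**2 - 25) >= eps

eps = 1.0
bad_delta = max(1, eps / 11)    # = 1
good_delta = min(1, eps / 11)   # = 1/11

# The counterexample from above: x = 5.9 is allowed by bad_delta but breaks the eps bound.
print(violates(bad_delta, eps, 5.9))    # True  (|5.9^2 - 25| = 9.81 >= 1)

# Scan the whole punctured neighbourhood allowed by good_delta: no x breaks the bound.
step = 1e-4
xs = [5 - good_delta + i * step for i in range(int(2 * good_delta / step) + 1)]
print(any(violates(good_delta, eps, x) for x in xs))    # False
```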

What if for example \(\displaystyle b = 4\) and \(\displaystyle c = 1\)? If we use the "AND" operator, then that means we have two requirements for \(\displaystyle a\) that must be met: \(\displaystyle a\) must be less than or equal to \(\displaystyle 4\) and, simultaneously, less than or equal to \(\displaystyle 1\). Effectively, this implies that \(\displaystyle a\) simply must be less than or equal to \(\displaystyle 1\).

Precisely.

Hm, wait a minute... So – effectively – thanks to the "AND" operator, the upper bound for \(\displaystyle a\) became the smaller of \(\displaystyle b\) and \(\displaystyle c\). So this is why we let \(\displaystyle \delta\) equal the minimum of the two different bounds and not the maximum?

You've got it.

I think I am on the verge of having a Eureka moment; come on, somebody please push me over the line!

You've fallen off the Eureka cliff... er... maybe I need to come up with a better analogy.
 
  • #10
Ackbach said:
You're quite welcome! I wouldn't say, though, that replying to your posts is requiring all that much patience. It's the bad attitudes of some posters I've dealt with in the past that are tiresome to work with.
Well, we want to guarantee that if $\delta$ equals something as a function of $\epsilon$, then $|f(x)-L|<\epsilon$. But I just provided an example where $\delta$ was some function of $\epsilon$ (in this case, just the constant function $1$), and that did not imply that $|f(x)-L|<\epsilon$. So that $\delta$ didn't work.
Precisely.
You've got it.
You've fallen off the Eureka cliff... er... maybe I need to come up with a better analogy.

Maybe "You've climbed to the Eureka peak"? :P
 
  • #11
Ackbach said:
You're quite welcome! I wouldn't say, though, that replying to your posts is requiring all that much patience. It's the bad attitudes of some posters I've dealt with in the past that are tiresome to work with.

:)

Ackbach said:
Well, we want to guarantee that if $\delta$ equals something as a function of $\epsilon$, then $|f(x)-L|<\epsilon$. But I just provided an example where $\delta$ was some function of $\epsilon$ (in this case, just the constant function $1$), and that did not imply that $|f(x)-L|<\epsilon$. So that $\delta$ didn't work.

All right, thank you for elaborating; now I know what to look for in your stated example. I will reflect upon it for some time and hopefully have an epiphany. But meanwhile, I just have two last questions (see below).

Ackbach said:
Precisely.
Ackbach said:
You've got it.
Ackbach said:
You've fallen off the Eureka cliff... er... maybe I need to come up with a better analogy.
Prove It said:
Maybe "You've climbed to the Eureka peak"? :P

That is so lovely to hear; I finally understand! Thank you guys very much.
_________________________

My final questions before I "master" the epsilon-delta concept:

(1) In these proofs we have \(\displaystyle 0 < |x - a| < \delta\) and \(\displaystyle |f(x) - \text{L}|<\epsilon\). Why do we not also require \(\displaystyle 0 < |f(x) - \text{L}| < \epsilon\) i.e. that the distance between \(\displaystyle f(x)\) and \(\displaystyle L\) is also greater than zero?

If we do not, then that means that \(\displaystyle f(x)\) can equal the limit \(\displaystyle \text{L}\) but how can that ever be if \(\displaystyle x\) never equals the point which yields that limit-value? How can \(\displaystyle f(x) := x^2\) equal \(\displaystyle 25\) without \(\displaystyle x\) ever being \(\displaystyle 5\)?

Should we not also require that the function output never actually reaches the limit-value, just as the input never really equals the approached point? Surely the function will never reach that limit exactly, because \(\displaystyle x\) never equals the approached point exactly, so why do we not say \(\displaystyle 0 < |f(x) - \text{L}| < \epsilon\)?

(2) I am still wrestling with this equivalence:

\(\displaystyle |x-5|\cdot |x+5| < \epsilon \iff |x-5|\cdot M < \epsilon \, ,\)

where \(\displaystyle M\) is an arbitrary upper bound for \(\displaystyle |x+5|\). I tried applying

\(\displaystyle (0 < a < b) \wedge (0 < c < d) \implies ac < bd\)

but to no avail. I would really appreciate if somebody could show me an algebraic argument which justifies that \(\displaystyle |x-5|\cdot M\) must also be less than \(\displaystyle \epsilon\) whenever \(\displaystyle |x-5|\cdot |x+5| < \epsilon\).

I read somewhere that for \(\displaystyle |x-5| < \frac {\epsilon}{|x+5|}\) one wishes to make the upper bound \(\displaystyle \frac{\epsilon}{|x+5|}\) "really small", since we are closing in on \(\displaystyle x = 5\), and one way to achieve that is to make \(\displaystyle |x+5|\) close to its upper bound \(\displaystyle M\). So a "really small" upper bound would in that case be \(\displaystyle \frac{\epsilon}{M}\), hence the additional requirement that \(\displaystyle |x-5| < \frac {\epsilon}{M}\). But I am dubious about this argument; can somebody help me see things more clearly?
 
  • #12
sweatingbear said:
:)
All right, thank you for elaborating; now I know what to look for in your stated example. I will reflect upon it for some time and hopefully have an epiphany. But meanwhile, I just have two last questions (see below).
That is so lovely to hear; I finally understand! Thank you guys very much.
_________________________

My final questions before I "master" the epsilon-delta concept:

(1) In these proofs we have \(\displaystyle 0 < |x - a| < \delta\) and \(\displaystyle |f(x) - \text{L}|<\epsilon\). Why do we not also require \(\displaystyle 0 < |f(x) - \text{L}| < \epsilon\) i.e. that the distance between \(\displaystyle f(x)\) and \(\displaystyle L\) is also greater than zero?

If we do not, then that means that \(\displaystyle f(x)\) can equal the limit \(\displaystyle \text{L}\) but how can that ever be if \(\displaystyle x\) never equals the point which yields that limit-value? How can \(\displaystyle f(x) := x^2\) equal \(\displaystyle 25\) without \(\displaystyle x\) ever being \(\displaystyle 5\)?

Should we not also require that the function output never actually reaches the limit-value, just as the input never really equals the approached point? Surely the function will never reach that limit exactly, because \(\displaystyle x\) never equals the approached point exactly, so why do we not say \(\displaystyle 0 < |f(x) - \text{L}| < \epsilon\)?

(2) I am still wrestling with this equivalence:

\(\displaystyle |x-5|\cdot |x+5| < \epsilon \iff |x-5|\cdot M < \epsilon \, ,\)

where \(\displaystyle M\) is an arbitrary upper bound for \(\displaystyle |x+5|\). I tried applying

\(\displaystyle (0 < a < b) \wedge (0 < c < d) \implies ac < bd\)

but to no avail. I would really appreciate if somebody could show me an algebraic argument which justifies that \(\displaystyle |x-5|\cdot M\) must also be less than \(\displaystyle \epsilon\) whenever \(\displaystyle |x-5|\cdot |x+5| < \epsilon\).

I read somewhere that for \(\displaystyle |x-5| < \frac {\epsilon}{|x+5|}\) one wishes to make the upper bound \(\displaystyle \frac{\epsilon}{|x+5|}\) "really small", since we are closing in on \(\displaystyle x = 5\), and one way to achieve that is to make \(\displaystyle |x+5|\) close to its upper bound \(\displaystyle M\). So a "really small" upper bound would in that case be \(\displaystyle \frac{\epsilon}{M}\), hence the additional requirement that \(\displaystyle |x-5| < \frac {\epsilon}{M}\). But I am dubious about this argument; can somebody help me see things more clearly?

To answer 1, the reason it is not required that \(\displaystyle \displaystyle \begin{align*} 0 < | f(x) - L | \end{align*}\) is that your function may not necessarily be one-to-one. If your function is many-to-one, then it is possible for your function to have the same value at different values of x. This is possible even in your small \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) neighbourhood, and if this is the case, then the difference between your function value at x and our ideal value (the limit) L WOULD be 0.
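For a concrete example (my own, not from the PDFs): take the constant function \(\displaystyle f(x) = 25\). Its limit as \(\displaystyle x \to 5\) is clearly \(\displaystyle 25\), yet \(\displaystyle |f(x) - 25| = 0\) for every \(\displaystyle x\), so if we also demanded \(\displaystyle 0 < |f(x) - L|\), even this simplest of limits would fail to exist.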

As for 2, it might help with a metaphor. When I do \(\displaystyle \displaystyle \begin{align*} \epsilon - \delta \end{align*}\) proofs, I think of myself pulling pizzas out of an oven (I used to work in a pizza shop). Think of there being an "ideal" level of cooking for your pizza. Obviously, it is not going to be possible to get this "ideal" amount of cooking for every pizza (or possibly even any pizza), but there is a certain "tolerance" you can have for over-cooking or under-cooking before you consider it raw or burnt. As long as you are reasonably close to the right amount of time needed, then your level of cooking will be considered acceptable. Then as you gain more experience, you should be able to get closer and closer to keeping the pizzas in the oven for the ideal amount of time, thereby making your pizzas closer and closer to the ideal level of cooking, which means you would expect that your tolerance would decrease as you'd be getting used to your pizzas being cooked properly.

So if we were to call the amount of time in the oven \(\displaystyle \displaystyle \begin{align*} x \end{align*}\), then the level of cooking is some function of x \(\displaystyle \displaystyle \begin{align*} f(x) \end{align*}\). We said there is an ideal level of cooking, we could call that \(\displaystyle \displaystyle \begin{align*} L \end{align*}\), which means there is a point in time \(\displaystyle \displaystyle \begin{align*} x = c \end{align*}\) which gives this ideal level of cooking. Remember we said that as long as we have kept the pizzas in the oven for an amount of time reasonably close to \(\displaystyle \displaystyle \begin{align*} c \end{align*}\), say \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) units of time away from it, then our level of cooking would be considered acceptable, or within some tolerance which we could call \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\). So we need to show that \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) and \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) are related, so that you are guaranteed that as you get experience and keep your pizzas in the oven closer to the right amount of time ( i.e. \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) gets smaller) then so will your tolerance \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) get smaller and closer to the ideal level of cooking.

Do you see now what it means to show \(\displaystyle \displaystyle \begin{align*} 0 < |x - c| < \delta \implies |f(x) - L | < \epsilon \end{align*}\)? It means if you have set a tolerance around your ideal limiting value, then as long as you are reasonably close to \(\displaystyle \displaystyle \begin{align*} x = c \end{align*}\), then you are guaranteed that your function value is within your tolerance away from the limiting value, and by showing the relationship between \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) and \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\), you are guaranteed that as your \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) gets smaller and you close in on \(\displaystyle \displaystyle \begin{align*} x = c \end{align*}\), then your tolerance will get smaller and your \(\displaystyle \displaystyle \begin{align*} f(x) \end{align*}\) will close in on \(\displaystyle \displaystyle \begin{align*} L \end{align*}\).

So, to answer your question 2: the reason you are struggling with finding an upper bound and then saying it is guaranteed to be less than \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is that you are forgetting that \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is INDEPENDENT; it's the tolerance that YOU set. So as long as you CAN find an upper bound, you are ALLOWED to set \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) greater than it, because \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is controlled by YOU. Hope that helped :)
 
  • #13
Prove It said:
To answer 1, the reason it is not required that \(\displaystyle \displaystyle \begin{align*} 0 < | f(x) - L | \end{align*}\) is that your function may not necessarily be one-to-one. If your function is many-to-one, then it is possible for your function to have the same value at different values of x. This is possible even in your small \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) neighbourhood, and if this is the case, then the difference between your function value at x and our ideal value (the limit) L WOULD be 0.

Yes of course, it makes perfect sense! Thank you for that enlightenment.

Prove It said:
As for 2, it might help with a metaphor. When I do \(\displaystyle \displaystyle \begin{align*} \epsilon - \delta \end{align*}\) proofs, I think of myself pulling pizzas out of an oven (I used to work in a pizza shop). Think of there being an "ideal" level of cooking for your pizza. Obviously, it is not going to be possible to get this "ideal" amount of cooking for every pizza (or possibly even any pizza), but there is a certain "tolerance" you can have for over-cooking or under-cooking before you consider it raw or burnt. As long as you are reasonably close to the right amount of time needed, then your level of cooking will be considered acceptable. Then as you gain more experience, you should be able to get closer and closer to keeping the pizzas in the oven for the ideal amount of time, thereby making your pizzas closer and closer to the ideal level of cooking, which means you would expect that your tolerance would decrease as you'd be getting used to your pizzas being cooked properly.

So if we were to call the amount of time in the oven \(\displaystyle \displaystyle \begin{align*} x \end{align*}\), then the level of cooking is some function of x \(\displaystyle \displaystyle \begin{align*} f(x) \end{align*}\). We said there is an ideal level of cooking, we could call that \(\displaystyle \displaystyle \begin{align*} L \end{align*}\), which means there is a point in time \(\displaystyle \displaystyle \begin{align*} x = c \end{align*}\) which gives this ideal level of cooking. Remember we said that as long as we have kept the pizzas in the oven for an amount of time reasonably close to \(\displaystyle \displaystyle \begin{align*} c \end{align*}\), say \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) units of time away from it, then our level of cooking would be considered acceptable, or within some tolerance which we could call \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\). So we need to show that \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) and \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) are related, so that you are guaranteed that as you get experience and keep your pizzas in the oven closer to the right amount of time ( i.e. \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) gets smaller) then so will your tolerance \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) get smaller and closer to the ideal level of cooking.

Do you see now what it means to show \(\displaystyle \displaystyle \begin{align*} 0 < |x - c| < \delta \implies |f(x) - L | < \epsilon \end{align*}\)? It means if you have set a tolerance around your ideal limiting value, then as long as you are reasonably close to \(\displaystyle \displaystyle \begin{align*} x = c \end{align*}\), then you are guaranteed that your function value is within your tolerance away from the limiting value, and by showing the relationship between \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) and \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\), you are guaranteed that as your \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) gets smaller and you close in on \(\displaystyle \displaystyle \begin{align*} x = c \end{align*}\), then your tolerance will get smaller and your \(\displaystyle \displaystyle \begin{align*} f(x) \end{align*}\) will close in on \(\displaystyle \displaystyle \begin{align*} L \end{align*}\).

So, to answer your question 2: the reason you are struggling with finding an upper bound and then saying it is guaranteed to be less than \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is that you are forgetting that \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is INDEPENDENT; it's the tolerance that YOU set. So as long as you CAN find an upper bound, you are ALLOWED to set \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) greater than it, because \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is controlled by YOU. Hope that helped :)

I must say that I appreciate you taking the time to write out such a thorough reply. The pizza analogy was quite a good one.

All right, the concept of taking the liberty to choose \(\displaystyle \epsilon\) to be greater than the expression in question was rather unfamiliar to me. But evidently that has something to do with \(\displaystyle \epsilon\) being independent. Although I understand what "independent" means linguistically, I am not entirely sure what exactly you mean – in a mathematical context – when you write that \(\displaystyle \epsilon\) is independent. How exactly does this particular property free \(\displaystyle \epsilon\) from the constraints that I seemingly assumed it had?

It would be much appreciated if you would care to elaborate; I think I will have understood everything once I comprehend the independence property of \(\displaystyle \epsilon\).
 
  • #14
Have you not heard of the terms "Independent Variable" and "Dependent Variable"?

A function can be thought of like a computer program, where numbers go in and the function does something to them to spit numbers out. The number going in is the independent variable (we usually denote this as "x") and the number coming out is the dependent variable (we usually denote this as "y").

Do you see why we would call them independent and dependent? Clearly the dependent variable is so named because the numbers coming out DEPEND on the numbers going in and on the function itself. The independent variable is so named because you CHOOSE the numbers to go in, everything going in is completely independent of the function.

It's the same thing here, \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is FREELY chosen, and we try to write \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) as a function of \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\), so \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is an independent variable and \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) is a dependent variable.
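To tie this back to the earlier example (just plugging in numbers): with \(\displaystyle \delta(\epsilon) = \min(1, \epsilon/11)\), choosing \(\displaystyle \epsilon = 22\) gives \(\displaystyle \delta = \min(1, 2) = 1\), while choosing \(\displaystyle \epsilon = 0.11\) gives \(\displaystyle \delta = \min(1, 0.01) = 0.01\). The output \(\displaystyle \delta\) depends on the freely chosen input \(\displaystyle \epsilon\), never the other way around.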
 
  • #15
Prove It said:
Have you not heard of the terms "Independent Variable" and "Dependent Variable"?

A function can be thought of like a computer program, where numbers go in and the function does something to them to spit numbers out. The number going in is the independent variable (we usually denote this as "x") and the number coming out is the dependent variable (we usually denote this as "y").

Do you see why we would call them independent and dependent? Clearly the dependent variable is so named because the numbers coming out DEPEND on the numbers going in and on the function itself. The independent variable is so named because you CHOOSE the numbers to go in, everything going in is completely independent of the function.

It's the same thing here, \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is FREELY chosen, and we try to write \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) as a function of \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\), so \(\displaystyle \displaystyle \begin{align*} \epsilon \end{align*}\) is an independent variable and \(\displaystyle \displaystyle \begin{align*} \delta \end{align*}\) is a dependent variable.

Ah, well yes of course! So this is what it actually means when one says "for every \(\displaystyle \epsilon\), there exists a \(\displaystyle \delta\)"? I.e. that \(\displaystyle \epsilon\) is arbitrary but \(\displaystyle \delta\) is not? Therefore it is not so much what \(\displaystyle \epsilon\) is that is interesting, but rather what \(\displaystyle \delta\) becomes and, furthermore, what property it exhibits for a given \(\displaystyle \epsilon\)?

I think I got it now, but I am still not certain why exactly we can claim

\(\displaystyle |x-5|\cdot |x+5| < \epsilon \implies |x-5|\cdot M < \epsilon \, ,\)

simply due to the independence property of \(\displaystyle \epsilon\).
 
  • #16
sweatingbear said:
Ah, well yes of course! So this is what it actually means when one says "for every \(\displaystyle \epsilon\), there exists a \(\displaystyle \delta\)"? I.e. that \(\displaystyle \epsilon\) is arbitrary but \(\displaystyle \delta\) is not? Therefore it is not so much what \(\displaystyle \epsilon\) is that is interesting, but rather what \(\displaystyle \delta\) becomes and, furthermore, what property it exhibits for a given \(\displaystyle \epsilon\)?

Pretty much.

I think I got it now, but I am still not certain why exactly we can claim

\(\displaystyle |x-5|\cdot |x+5| < \epsilon \implies |x-5|\cdot M < \epsilon \, ,\)

simply due to the independence property of \(\displaystyle \epsilon\).

You are reading too much into this, probably because of the misuse of notation. The only reason they use the logical implication symbol there is that the way these limit proofs are set up dictates what you take as your \(\displaystyle \epsilon \) bound; it is not meant as a literal mathematical implication.
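Spelled out, the direction that is actually used in the proof (with \(\displaystyle M\) an upper bound for \(\displaystyle |x+5|\) on the \(\displaystyle \delta\)-neighbourhood) is
$$|x-5| < \frac{\epsilon}{M} \ \text{ and } \ |x+5| \le M \;\Longrightarrow\; |x-5|\,|x+5| \le |x-5|\, M < \epsilon ;$$
you arrange for \(\displaystyle |x-5|\cdot M < \epsilon\) in order to conclude \(\displaystyle |x-5|\,|x+5| < \epsilon\), not the other way around.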
 
  • #17
Thanks for all the replies. I cannot claim that I have fully comprehended it, but I have definitely discovered a different way of thinking about these proofs.
 
  • #18
Don't sweat it. I didn't understand limits until I had had Calculus I and II three years in a row (yes, it did take me that long to get it), followed by Multi-variable Calculus, followed by complex variables, followed by senior-level real analysis. Then I finally understood limits. The good news is that if you are not continuing on in your mathematical studies, the $\epsilon-\delta$ proofs, while good to see, are not essential for doing most day-to-day computations in calculus.
 
  • #19
Ackbach said:
Don't sweat it.

Loving that pun!

Ackbach said:
I didn't understand limits until I had had Calculus I and II three years in a row (yes, it did take me that long to get it), followed by Multi-variable Calculus, followed by complex variables, followed by senior-level real analysis. Then I finally understood limits. The good news is that if you are not continuing on in your mathematical studies, the $\epsilon-\delta$ proofs, while good to see, are not essential for doing most day-to-day computations in calculus.

Well, at least it's nice to hear that I am not the only one who needs a lot of time to fully comprehend this, so thanks for sharing. Happy math! :)
 
  • #20
I don't think it's a good idea to ever tell a student not to sweat something. Nothing is ever achieved without hard work, and the OP should be encouraged for working hard and asking very thoughtful and thought-provoking questions because he really wants to understand the logic and mathematics behind \(\displaystyle \epsilon - \delta\) proofs. Yes it is difficult to get your head around, but it's not worth putting aside just because it's difficult and is an area of mathematics that took a long time to develop originally.

Mathematics is 1% inspiration and 100% perspiration, with a 1% margin of error :P
 
  • #21
Prove It said:
I don't think it's a good idea to ever tell a student not to sweat something. Nothing is ever achieved without hard work, and the OP should be encouraged for working hard and asking very thoughtful and thought-provoking questions because he really wants to understand the logic and mathematics behind \(\displaystyle \epsilon - \delta\) proofs. Yes it is difficult to get your head around, but it's not worth putting aside just because it's difficult and is an area of mathematics that took a long time to develop originally.

Mathematics is 1% inspiration and 100% perspiration, with a 1% margin of error :P
I hope you guys don't mind me barging in the thread, but I would like to address this. :)

I do not think Ackbach has told him not to "sweat it" with that intention, but rather as an attempt to reassure sweatingbear by communicating something along the lines of "don't worry if you're not understanding it all now; as you continue your study, things will fall into perspective, and everything is going to be much clearer when you have a greater vision of how it all fits together". It's not putting it aside never to come back; it's giving it time to sink in. I firmly believe that some topics are learned best when you don't overwork yourself over every little stopping point. We must try hard to understand things as we go, but not halt at each one until it's all comprehensible. :)

Cheers!
 
  • #22
I will have my Eureka moment in due time (hopefully)!
 
  • #23
Fantini said:
I hope you guys don't mind me barging in the thread, but I would like to address this. :)

I do not think Ackbach has told him not to "sweat it" with that intention, but rather as an attempt to reassure sweatingbear by communicating something along the lines of "don't worry if you're not understanding it all now; as you continue your study, things will fall into perspective, and everything is going to be much clearer when you have a greater vision of how it all fits together". It's not putting it aside never to come back; it's giving it time to sink in. I firmly believe that some topics are learned best when you don't overwork yourself over every little stopping point. We must try hard to understand things as we go, but not halt at each one until it's all comprehensible. :)

Cheers!

Couldn't have put it better myself. I think Prove It has a point as well, though. There are things that you do need to put in the effort to learn cold at the time, and you should put in the necessary effort. My point here was more that, while it certainly is nice if you understand limits when you take Calculus, it's not necessary. It is necessary to understand them if you take senior-level real analysis.
 

FAQ: Is understanding limits necessary for senior-level real analysis?

What is "Epsilon-delta confusion #2"?

"Epsilon-delta confusion #2" refers to a common issue that arises in calculus and real analysis, where students struggle to understand the formal definition of a limit using epsilon and delta notation.

How does the concept of "Epsilon-delta confusion #2" relate to calculus?

The concept of "Epsilon-delta confusion #2" is directly related to calculus as it involves understanding the formal definition of a limit, which is a fundamental concept in calculus.

Why is understanding "Epsilon-delta confusion #2" important in mathematics?

Understanding "Epsilon-delta confusion #2" is important in mathematics because it lays the foundation for more advanced concepts, such as continuity and differentiability, which are crucial in many fields like physics, engineering, and economics.

What are some common mistakes students make when dealing with "Epsilon-delta confusion #2"?

Some common mistakes students make when dealing with "Epsilon-delta confusion #2" include confusing the roles of epsilon and delta, not fully understanding the concept of a limit, and making incorrect substitutions when using the formal definition of a limit.

How can students overcome "Epsilon-delta confusion #2"?

Students can overcome "Epsilon-delta confusion #2" by practicing and understanding the concept of a limit, working through examples and proofs, and seeking clarification or help from their teacher or peers if needed.
