How does the delta ε definition prove derivatives?

In summary, the conversation discusses proving limits with the epsilon-delta definition and finding a suitable δ for a given ε. The discussion also touches on the difficulty of working with nonlinear functions in this context. A specific example is given for the function f(x) = x² and its difference quotient, with an explanation of how to graph and understand the epsilon-delta definition in this case. The conversation also includes a question about the use of (2xh + h²)/h instead of 2x + h in the proof, and whether δ is equal to ε.
  • #1
INTP_ty
The exercises in my imaginary textbook are giving me an ε, say .001, & are making me find a delta such that the corresponding values of f(x) fall within that ε range of .001. The section that I'm working on is called "proving limits." Well, that is not proving a limit. All that's doing is finding values of what f(x) could be for whatever ε was given, in our case .001. Any reasonable person could understand that by making ε smaller, the values of what f(x) could be are going to be closer to the limit, L. But still, that isn't proving the limit. So if I can't make ε out to be any small discrete value, how am I supposed to prove a limit? Well, if you could prove that delta was a function of ε, then this would work. Why? Because you won't have to put yourself in that position → you don't actually have to pick an ε, so you won't be stuck with just a range of possible f(x)'s. Take, for example, f(x) = 5x. Given ε > 0, δ = ε/5 will satisfy. δ = ε/5 says that for ANY ε given (smaller than any number you can think of), delta will always be the ε given divided by 5. The fraction is irrelevant. It's simply the fact that delta is a function of ε that proves the limit.

Here's a new one, & a relevant one to what differential calculus is supposed to be about. Suppose f(x) = x². The difference quotient reads 2x + h, and ##\lim_{h \to 0}\frac{(x + h)^2 - x^2}{h} = 2x##. I understand the whole 0/0 thing & the need for a limit, but I don't understand how the delta-ε definition works here.

First of all, how am I supposed to graph this? In the previous example, where f(x) = 5x, it was graphed as such, & ε was the range of f(x)'s around the limit point, L, & the deltas were simply some unknown range around x. Pretty straightforward. Well, how does that work in our new example? I think I'm going to stop here. Delta is no longer just an arbitrary range around x. It's an arbitrary range around x + h. And ε is, of course, the range of f(x)'s around the limit point L, but in this case the output, f(x), is going to be the slope over the interval from x to x + h. Am I supposed to be graphing h as a function of 2x + h? That doesn't even make sense...

:/
 
  • #2
INTP_ty said:
The exercises in my imaginary textbook are giving me an ε, say .001, & are making me find a delta such that the corresponding values of f(x) fall within that ε range of .001. The section that I'm working on is called "proving limits." Well, that is not proving a limit. All that's doing is finding values of what f(x) could be for whatever ε was given, in our case .001. Any reasonable person could understand that by making ε smaller, the values of what f(x) could be are going to be closer to the limit, L. But still, that isn't proving the limit. So if I can't make ε out to be any small discrete value, how am I supposed to prove a limit? Well, if you could prove that delta was a function of ε, then this would work. Why? Because you won't have to put yourself in that position → you don't actually have to pick an ε, so you won't be stuck with just a range of possible f(x)'s. Take, for example, f(x) = 5x. Given ε > 0, δ = ε/5 will satisfy. δ = ε/5 says that for ANY ε given (smaller than any number you can think of), delta will always be the ε given divided by 5. The fraction is irrelevant. It's simply the fact that delta is a function of ε that proves the limit.
Finding the ##\delta## that works for a given ##\epsilon## is easier if you're working with linear functions. It's quite a bit harder if you're working with nonlinear functions, such as ##f(x) = x^2##.
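To spell out why ##\delta = \epsilon/5## works for ##f(x) = 5x## near a point ##a## (a minimal check): if ##0 < |x - a| < \delta = \epsilon/5##, then
$$|f(x) - 5a| = |5x - 5a| = 5|x - a| < 5 \cdot \frac{\epsilon}{5} = \epsilon.$$
The factor 5 is just the slope. For a nonlinear function such as ##x^2##, no single rescaling of ##\epsilon## works at every point, which is why those proofs take more effort.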
INTP_ty said:
Here's a new one, & a relevant one to what differential calculus is supposed to be about. Suppose f(x) = x². The difference quotient reads 2x + h, and ##\lim_{h \to 0}\frac{(x + h)^2 - x^2}{h} = 2x##. I understand the whole 0/0 thing & the need for a limit, but I don't understand how the delta-ε definition works here.
Same as before. The derivative is the limit of the difference quotient: ##\lim_{h \to 0}\frac{f(x + h) - f(x)}{h} = \lim_{h \to 0}\frac{(x + h)^2 - x^2}{h} = \lim_{h \to 0}\frac{2xh + h^2}{h}##.
You could use the definition of a limit to evaluate this limit, or you could make life easier by using some properties of limits that are derived from the limit definition, and determine that the last limit works out to 2x. If you choose to follow the more rigorous path, you should realize that the variable here is h, and that x is assumed to be fixed. You want to show that ##\left|\frac{2xh + h^2}{h} - 2x\right|## can be made smaller than any given ##\epsilon## whenever h is close enough to 0 (how close is what ##\delta## specifies).
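As a quick numerical sanity check (a minimal sketch in Python; the choices x = 3 and ε = 0.001 are arbitrary), you can watch the quotient settle within ε of 2x once |h| is small enough:

```python
# A numerical sanity check, not a proof: for f(x) = x^2 at a fixed x,
# the quotient (2xh + h^2)/h should land within epsilon of 2x once
# |h| is small enough. Here x = 3 and epsilon = 0.001 are arbitrary choices.
x = 3.0
epsilon = 0.001

for h in (0.1, 0.01, 0.0009, 0.0001):
    quotient = (2 * x * h + h ** 2) / h      # equals 2x + h for h != 0
    error = abs(quotient - 2 * x)
    print(f"h = {h}: |quotient - 2x| = {error:.6f}, within epsilon: {error < epsilon}")
```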
INTP_ty said:
First of all, how am I supposed to graph this? In the previous example, where f(x) = 5x, it was graphed as such, & ε was the range of f(x)'s around the limit point, L, & the deltas were simply some unknown range around x. Pretty straightforward. Well, how does that work in our new example? I think I'm going to stop here. Delta is no longer just an arbitrary range around x. It's an arbitrary range around x + h. And ε is, of course, the range of f(x)'s around the limit point L, but in this case the output, f(x), is going to be the slope over the interval from x to x + h. Am I supposed to be graphing h as a function of 2x + h? That doesn't even make sense...

:/
 
  • #3
How come you write (2xh + h²)/h instead of 2x + h? Aren't they equivalent? If so, then the proof should be,

For any ε>0, there exists a δ>0 such that |2x+h - 2x|<ε whenever |h - 0|<δ.

So, δ=ε?

And do I really have to say "for any ε>0, there exists a δ>0 such that" or can I just put 0<|f(x)-L|<ε whenever 0<|x-a|<δ as my proof?
 
  • #4
INTP_ty said:
How come you write (2xh + h²)/h instead of 2x + h? Aren't they equivalent?
The two expressions are equal for every h ≠ 0; I just didn't carry the algebra that far. Also, since the limit is as ##h \to 0##, you need to convince yourself that ##\frac{2xh + h^2}{h}## can be simplified to 2x + h. If h = 0, ##\frac{2xh + h^2}{h}## is undefined.
INTP_ty said:
If so, then the proof should be,

For any ε>0, there exists a δ>0 such that |2x+h - 2x|<ε whenever |h - 0|<δ.

So, δ=ε?
Yes, in this case, since the function involved is linear. I.e., 2x + h is a linear function of h. Things would be more difficult if you were calculating the derivative of sin(x) or some other nonlinear function.
INTP_ty said:
And do I really have to say "for any ε>0, there exists a δ>0 such that" or can I just put 0<|f(x)-L|<ε whenever 0<|x-a|<δ as my proof?
You don't need to say "for any ε > 0, there exists..." In your proof you are showing that, given an ε > 0, you can exhibit a δ that works. However, you should start your proof off with "Given an ε > 0, ..." and it should end with a δ that works.
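Following that advice, a minimal write-up for this particular limit might read: Given ##\epsilon > 0##, let ##\delta = \epsilon## (x is held fixed). If ##0 < |h - 0| < \delta##, then
$$\left|\frac{2xh + h^2}{h} - 2x\right| = |(2x + h) - 2x| = |h| < \delta = \epsilon,$$
so ##\lim_{h \to 0}\frac{2xh + h^2}{h} = 2x##.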
 
  • #5
Mark44 said:
Yes, in this case, since the function involved is linear. I.e., 2x + h is a linear function of h. Things would be more difficult if you were calculating the derivative of sin(x) or some other nonlinear function.

I remember trying to calculate the derivative of sin x using the method included in the link below (see Post #1). I never really figured it out; it kept changing on me. The only reason I got the answer is that I just so happened to have a table of cos x sitting beside me. I was wondering when this was going to surface again.

https://www.physicsforums.com/threa...ial-equation-using-logic.880255/#post-5532720

Few things,

It doesn't seem at all obvious that the 2x, the limit, would be preserved the smaller you made h. No wonder it took so long for someone to figure this out. I have mixed feelings as to when the δ-ε definition should be included in a Calculus course/textbook. I think most people could agree on guessing what the derivative may be, but understanding just exactly how the algebra works out is a completely different story, & this is where I got hung up. You can make h smaller indefinitely & still end up with 2x + h, for there is an infinite set of numbers between 0 & 0 + h. If you wanted the slope at the instant x itself, you'd get 0/0. The δ-ε definition cleared all this up, & I never would have understood Calculus without it.

Lastly, I have my own reservations as to whether or not my "elementary" method for computing derivatives is legitimate [see link again]. I can understand why it fails from a purely mathematical standpoint, but I don't think some of these functions that I'm seeing in my Calculus textbook are applicable to the real world, at least not when you define the variables to be so & so. Maybe there exists a unit of time so small that it wouldn't make sense to make h any smaller. Maybe there exists a limit to just exactly how much a body can be accelerated over some given time interval. Unfortunately, I don't know enough about physics to support either of these arguments, but if I ever find anything, I will be taking another look at my old method.
I have a new question.

f(x) = sin(Bx). Is it possible to make B infinitely large, so that when you go to graph the function, it'd be a solid rectangle from y = -1 to y = 1 & from x = -∞ to x = ∞? If so, how would you express that?
 
  • #6
INTP_ty said:
I have a new question.

f(x) = sin(Bx). Is it possible to make B infinitely large, so that when you go to graph the function, it'd be a solid rectangle from y = -1 to y = 1 & from x = -∞ to x = ∞? If so, how would you express that?
Something like this, I guess...
##g(x) = \lim_{B \to \infty}\sin(Bx)##
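One informal way to see the "filled band" effect is simply to plot sin(Bx) for a few increasingly large B; a rough matplotlib sketch (the viewing window and the values of B are arbitrary choices):

```python
# Rough sketch: plot sin(Bx) for a few increasingly large B to see the
# graph visually fill the band between y = -1 and y = 1 on a fixed window.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 20000)        # arbitrary fixed viewing window
for B in (10, 100, 1000):            # arbitrary, increasingly large values of B
    plt.plot(x, np.sin(B * x), linewidth=0.3, label=f"B = {B}")

plt.legend()
plt.title("sin(Bx) for increasing B")
plt.show()
```

Strictly speaking, the pointwise limit ##\lim_{B \to \infty}\sin(Bx)## does not exist for any ##x \neq 0##; the "solid rectangle" describes how the graph looks at any finite resolution rather than an actual limiting function.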
 
  • #7
Mark44 said:
Something like this, I guess...
##g(x) = \lim_{B \to \infty}\sin(Bx)##
Are Fourier Transforms limited to expressing functions of one variable, or can they be used to express 3-dimensional bodies described by multivariable calculus?
 
  • #8
INTP_ty said:
Are Fourier Transforms limited to expressing functions of one variable, or can they be used to express 3-dimensional bodies described by multivariable calculus?
I think you're really asking about Fourier Series, rather than Fourier Transforms. An ordinary Fourier Series (see https://en.wikipedia.org/wiki/Fourier_series) expresses a function (of one variable) in terms of an infinite sum of sine and cosine terms. These series can also be extended to functions of two or more variables. See https://en.wikipedia.org/wiki/Fourier_series#Extensions.
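For instance, a sufficiently well-behaved function of two variables that is ##2\pi##-periodic in each variable can be written (following the article linked above) as a double sum
$$f(x, y) = \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} c_{m,n}\, e^{i(mx + ny)},$$
where the coefficients ##c_{m,n}## are obtained by integrating ##f## against ##e^{-i(mx + ny)}## over one period in each variable.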
 
  • #9
Mark44 said:
Something like this, I guess...
##g(x) = \lim_{B \to \infty}\sin(Bx)##

Suppose we were modeling a piece of paper through time. Now suppose I added the constraint that B was not infinite at, say, t = 30 seconds. What would happen at t = 30 seconds? Would the piece of paper cease to exist at that instant of time? And just ignore the fact that I'm allowing a body to exist without any breadth. I'm sure another function can be used to describe it, but I think you get the point.



See 0:37

The resultant waveform is only 2-dimensional. I'm having a difficult time trying to visualize how it'd be set up if we were trying to model a sphere over time ...not just another curve. All of the axes are already used up when modeling in 2D.
 
  • #10
Hey INTP_ty.

Understanding delta-epsilon "methods" has to do with continuity.

Derivatives assume continuity, and analytic behaviour (which involves continuity holding in particular ways, corresponding to a "constraint") uses limits, which are based on two things approaching the same point as you "shrink" the region you are investigating.

With calculus, the limits always exist at each point and the mappings are both continuous (i.e. the limits equal the function values everywhere) and differentiable (assume continuity and then make sure the derivative limit exists and that its function is essentially continuous). If the limits for both the function and the derivative exist, and both functions (the original and the derivative) are continuous, then you have a function that can be differentiated.

In fact, the main theorem of complex analysis is that if you can meet a specific condition on differentiation and limits, then a function is analytic in the entire plane.

Geometrically, I would advise you to shrink the region far enough that you know that, as you continue to shrink it further, you will get even closer to the limit (without getting there, of course).

The multi-variable approach does this with a hyper-sphere: you are looking at a circular (ball-shaped) region in n dimensions instead of an interval in one dimension.
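Written out, the n-dimensional version of the limit definition replaces the interval around a point with a punctured ball: ##\lim_{\mathbf{x} \to \mathbf{a}} f(\mathbf{x}) = L## means that for every ##\epsilon > 0## there is a ##\delta > 0## such that ##|f(\mathbf{x}) - L| < \epsilon## whenever ##0 < \|\mathbf{x} - \mathbf{a}\| < \delta##.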
 

Related to How does the delta ε definition prove derivatives?

1. What is the delta ε definition of derivatives?

The delta ε definition, also known as the limit definition of the derivative, is a mathematical formula used to calculate the slope of the tangent line to a curve at a specific point. It is written as ##\lim_{\Delta x \to 0}\frac{f(x + \Delta x) - f(x)}{\Delta x}##, where Δx represents a small change in the x-values and f(x) represents the function.

2. How does the delta ε definition prove derivatives?

The delta ε definition proves derivatives by making precise what it means for the difference quotient to approach a limit as Δx gets closer and closer to 0: for every ε > 0 there must be a δ > 0 such that the quotient stays within ε of the limit whenever 0 < |Δx| < δ. Geometrically, this means finding the slope of the tangent line at a specific point by zooming in on the curve and calculating the slopes of smaller and smaller secant lines. As Δx approaches 0, the secant lines come closer and closer to the tangent line, which establishes the value of the derivative at that point.

3. What is the significance of the delta ε definition in calculus?

The delta ε definition is significant in calculus because it is the fundamental concept used to define and calculate derivatives. It allows us to find the instantaneous rate of change of a function at a specific point, which has numerous applications in mathematics, physics, and engineering.

4. Are there any limitations to using the delta ε definition to prove derivatives?

One limitation of the delta ε definition is that it only yields a derivative where the limit actually exists; in particular, a function must be continuous at a point to be differentiable there. It also requires a lot of algebraic manipulation and can be time-consuming for more complex functions. Additionally, it may not always be possible to find the exact value of the limit, leading to approximations.

5. Can you provide an example of using the delta ε definition to prove a derivative?

Sure, let's say we have the function f(x) = x² and we want to find the derivative at x = 3 using the delta ε definition. We plug the values into the formula ##\lim_{\Delta x \to 0}\frac{f(3 + \Delta x) - f(3)}{\Delta x}## and simplify: ##\frac{(3 + \Delta x)^2 - 3^2}{\Delta x} = \frac{6\Delta x + (\Delta x)^2}{\Delta x} = 6 + \Delta x##. As Δx approaches 0, this expression approaches 6, which is the derivative of x² at x = 3.
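A quick numerical check of this example (a sketch only; the step sizes are arbitrary) shows the quotient settling toward 6:

```python
# Sketch: the difference quotient of f(x) = x^2 at x = 3 approaches 6
# as the step size shrinks (step sizes below are arbitrary).
def f(x):
    return x ** 2

a = 3.0
for dx in (0.1, 0.01, 0.001, 1e-6):
    quotient = (f(a + dx) - f(a)) / dx   # equals 6 + dx, up to rounding
    print(f"dx = {dx:g}: quotient = {quotient:.8f}")
```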
