Moving limits in and out of functions

In summary, the equivalence $$\lim_{x \to a} f(g(x)) = f(\lim_{x \to a} g(x))$$ is valid when ##\lim_{x \to a} g(x)## exists and ##f## is continuous at that limit. This fact is used in various proofs, for example in showing the equivalence of the limit definition of the number e to its definition via the inverse of the natural logarithm. In the thread below, a specific counterexample shows why continuity of ##f## is needed, and the general case is handled with the epsilon-delta definition of continuity, which is also shown to be equivalent to the sequential definition.
  • #1
Only a Mirage
When is the following equivalence valid?

$$\lim_{x \to a} f(g(x)) = f(\lim_{x \to a} g(x))$$

I was told that continuity of f is key here, but I'm not positive.

This question comes up, for instance in one proof showing the equivalence of the limit definition of the number e to the definition of the inverse of the natural logarithm.
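For reference, the step in that proof where the identity gets used looks roughly like this (a sketch, assuming the limit defining e exists and is positive, and using the continuity of ##\ln##):

$$\ln\left(\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n\right) = \lim_{n \to \infty} n\ln\left(1 + \frac{1}{n}\right) = \lim_{n \to \infty} \frac{\ln\left(1 + \frac{1}{n}\right) - \ln 1}{1/n} = 1,$$

where the first equality is exactly the interchange in question and the last limit is the derivative of ##\ln## at ##1##. The limit is therefore the number whose natural logarithm is 1, i.e. ##e##.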
 
  • #2
Only a Mirage said:
I was told that continuity of f is key here, but I'm not positive.
That's the key. Try comparing ##\lim_{x \to 0} f(g(x))## against ##f(\lim_{x \to 0} g(x))## with ##g(x)=x## and ##f(x)=0\,\forall\,x\ne 0,\, f(0)=1##.
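Working out both sides of this example (a quick check, not part of the original post):

$$\lim_{x \to 0} f(g(x)) = \lim_{x \to 0} f(x) = 0, \qquad f\left(\lim_{x \to 0} g(x)\right) = f(0) = 1,$$

since the limit on the left never uses the value of ##f## at ##0##, while the right-hand side uses exactly that value. The two sides disagree precisely because ##f## is not continuous at ##0##.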
 
  • #3
Thanks for the specific example.

Can you prove (or point me to the proof) of the general case?
 
  • #4
That's one very reasonable definition of continuity.
 
  • #5
Only a Mirage said:
Thanks for the specific example.

Can you prove (or point me to the proof) of the general case?

If you are using the [itex]\epsilon - \delta[/itex] definition of continuity, then the idea is that eventually [itex]x_n[/itex] will be within [itex]\delta[/itex] of [itex]x[/itex], and so [itex]f(x_n)[/itex] will be within [itex]\epsilon[/itex] of [itex]f(x)[/itex]. But this is exactly what it means for [itex]f(x_n)[/itex] to converge to [itex]f(x)[/itex].
 
  • #6
Robert1986 said:
If you are using the [itex]\epsilon - \delta[/itex] definition of continuity, then the idea is that eventually [itex]x_n[/itex] will be within [itex]\delta[/itex] of [itex]x[/itex], and so [itex]f(x_n)[/itex] will be within [itex]\epsilon[/itex] of [itex]f(x)[/itex]. But this is exactly what it means for [itex]f(x_n)[/itex] to converge to [itex]f(x)[/itex].

What exactly do you mean by the sequence [itex]x_n[/itex] here?
 
  • #7
economicsnerd said:
That's one very reasonable definition of continuity.

Interesting. But how would you show that this definition is equivalent to, for example, the epsilon-delta definition?
 
  • #8
If [tex] \lim_{x\to a} g(x) [/tex] exists and f is continuous at that limit, then the statement is true. If [tex]\lim_{x\to a} g(x)[/tex] does not exist, then the right-hand side does not make sense as written, so the statement cannot be true; and if f is not continuous there, then the statement can fail, as D H's example in post #2 shows.

As for showing that [tex]\lim_{x\to a} g(x) = L[/tex] implies that [tex] \lim_{x\to a} f(g(x)) = f(L)[/tex] when f is continuous at L, you should just slam it with epsilons and deltas until it works - I don't think there's a particularly clever trick.
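For reference, a sketch of that epsilon-delta argument (assuming ##\lim_{x\to a} g(x) = L## exists and ##f## is continuous at ##L##): given ##\epsilon > 0##, continuity of ##f## at ##L## gives a ##\delta_1 > 0## such that ##|y - L| < \delta_1## implies ##|f(y) - f(L)| < \epsilon##, and ##g(x) \to L## gives a ##\delta > 0## such that ##0 < |x - a| < \delta## implies ##|g(x) - L| < \delta_1##. Chaining the two,

$$0 < |x - a| < \delta \;\implies\; |g(x) - L| < \delta_1 \;\implies\; |f(g(x)) - f(L)| < \epsilon,$$

which is precisely the statement ##\lim_{x \to a} f(g(x)) = f(L)##.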
 
  • #9
Only a Mirage said:
Interesting. But how would you show that this definition is equivalent to, for example, the epsilon-delta definition?
Suppose that ##f## is continuous at ##x## in the epsilon-delta sense. Let ##\epsilon > 0##. Then there is a ##\delta > 0## such that ##|f(y) - f(x)| < \epsilon## for all ##y## satisfying ##|y - x| < \delta##. Let ##(x_n)## be a sequence converging to ##x##. Then there is an ##N## such that ##|x_n - x| < \delta## for all ##n > N##. Thus for all ##n > N## we have ##|f(x_n) - f(x)| < \epsilon##. We can do this for any ##\epsilon > 0##, so this means that ##f(x_n) \rightarrow f(x)##.

Conversely, suppose that ##f(x_n) \rightarrow f(x)## for any sequence ##(x_n)## such that ##x_n \rightarrow x##. Let ##\epsilon > 0##. We claim that there is a ##\delta > 0## such that ##|f(y) - f(x)| < \epsilon## whenever ##|y - x| < \delta##. Suppose this were not the case. Then it must be true that for every ##\delta > 0##, there is some ##y## satisfying ##|y - x| < \delta## but ##|f(y) - f(x)| \geq \epsilon##. Let ##(\delta_n)## be any sequence of positive numbers converging to zero. Then we can find a sequence ##(x_n)## satisfying ##|x_n - x| < \delta_n## and ##|f(x_n) - f(x)| \geq \epsilon##. The conditions ##|x_n - x| < \delta_n## and ##\delta_n \rightarrow 0## imply that ##x_n \rightarrow x##, so our hypothesis implies that ##f(x_n) \rightarrow f(x)##. But this contradicts ##|f(x_n) - f(x)| \geq \epsilon##.
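To connect this with the example from post #2 (a concrete instance of the construction, not part of the original argument): take ##f(x) = 0## for ##x \ne 0##, ##f(0) = 1##, the point ##x = 0##, ##\epsilon = \tfrac{1}{2}##, ##\delta_n = \tfrac{1}{n}##, and ##x_n = \tfrac{1}{2n}##. Then ##|x_n - 0| < \delta_n##, so ##x_n \to 0##, but

$$|f(x_n) - f(0)| = |0 - 1| = 1 \geq \tfrac{1}{2} \quad \text{for every } n,$$

so ##f(x_n) \not\to f(0)##, exactly the kind of sequence the contradiction step produces.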
 
  • #10
jbunniii said:
Suppose that ##f## is continuous at ##x## in the epsilon-delta sense. Let ##\epsilon > 0##. Then there is a ##\delta > 0## such that ##|f(y) - f(x)| < \epsilon## for all ##y## satisfying ##|y - x| < \delta##. Let ##(x_n)## be a sequence converging to ##x##. Then there is an ##N## such that ##|x_n - x| < \delta## for all ##n > N##. Thus for all ##n > N## we have ##|f(x_n) - f(x)| < \epsilon##. We can do this for any ##\epsilon > 0##, so this means that ##f(x_n) \rightarrow f(x)##.

Conversely, suppose that ##f(x_n) \rightarrow f(x)## for any sequence ##(x_n)## such that ##x_n \rightarrow x##. Let ##\epsilon > 0##. We claim that there is a ##\delta > 0## such that ##|f(y) - f(x)| < \epsilon## whenever ##|y - x| < \delta##. Suppose this were not the case. Then it must be true that for every ##\delta > 0##, there is some ##y## satisfying ##|y - x| < \delta## but ##|f(y) - f(x)| \geq \epsilon##. Let ##(\delta_n)## be any sequence of positive numbers converging to zero. Then we can find a sequence ##(x_n)## satisfying ##|x_n - x| < \delta_n## and ##|f(x_n) - f(x)| \geq \epsilon##. The conditions ##|x_n - x| < \delta_n## and ##\delta_n \rightarrow 0## imply that ##x_n \rightarrow x##, so our hypothesis implies that ##f(x_n) \rightarrow f(x)##. But this contradicts ##|f(x_n) - f(x)| \geq \epsilon##.

Ahh... Thank you for the detailed proof! I was able to use the ideas from your proof to prove the result in my original post (basically the exact same proof, but mine involved a function ##g(x)##, whereas yours involved the sequence ##(x_n)##).

Anyway, thanks again! And thanks to everyone else :)
 

FAQ: Moving limits in and out of functions

What is meant by "moving limits in and out of functions"?

Moving a limit in or out of a function means interchanging the limit operation with the application of an outer function, that is, replacing ##\lim_{x \to a} f(g(x))## with ##f(\lim_{x \to a} g(x))## or vice versa. When the interchange is justified, it often reduces a complicated limit to a straightforward evaluation of the outer function.

Why is it necessary to move limits in and out of functions?

The interchange lets us evaluate limits of composite expressions that would otherwise be awkward to handle directly: instead of analysing ##f(g(x))## as a whole, we can compute ##\lim_{x \to a} g(x)## first and then apply ##f## to the result. It also appears in proofs, such as the argument relating the limit definition of ##e## to the natural logarithm discussed in this thread.

Can limits be moved in and out of any type of function?

Not in general. The interchange is valid when the inner limit ##\lim_{x \to a} g(x)## exists and the outer function ##f## is continuous at that value. This covers polynomial, rational, exponential, and trigonometric functions at points where they are continuous, but it can fail at a discontinuity of the outer function, as the counterexample in post #2 shows.

What are some common techniques for moving limits in and out of functions?

Common techniques include recognising that the outer function is continuous at the relevant point, applying the limit laws for sums, products, and quotients, and simplifying or substituting so that the inner limit is easy to compute. Once continuity of the outer function is established, the limit can simply be passed inside, as in the example below.
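For instance (a routine illustration), since the square-root function is continuous at 25, the limit can be passed inside:

$$\lim_{x \to 4} \sqrt{x^2 + 9} = \sqrt{\lim_{x \to 4}\left(x^2 + 9\right)} = \sqrt{25} = 5.$$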

Are there any limitations to moving limits in and out of functions?

Yes. If the inner limit does not exist, the right-hand side ##f(\lim_{x \to a} g(x))## is not even defined, and if the outer function is not continuous at the limiting value, the two sides can disagree. In such cases other tools, such as one-sided limits, squeeze arguments, or L'Hôpital's rule for indeterminate forms, may be needed.
