Uniform convergence and derivatives -- difference between two theorems?

  • #1
psie
TL;DR Summary
I am comparing two theorems on uniform convergence and differentiability from two different texts; the first is from some lecture notes, the second from Rudin's PMA. Apart from one being stated on open and the other on closed intervals, what is the difference between these two theorems?
The first theorem is from here (page 9 in the pdf):

Theorem 9.18. Suppose that ##(f_n)## is a sequence of differentiable functions ##f_n:(a,b)\to\mathbb R## such that ##f_n\to f## pointwise and ##f_n'\to g## uniformly for some ##f,g:(a,b)\to\mathbb R##. Then ##f## is differentiable on ##(a,b)## and ##f'=g##.

The second theorem is from baby Rudin:

7.17 Theorem Suppose ##\{f_n\}## is a sequence of functions, differentiable on ##[a,b]## and such that ##\{f_n(x_0)\}## converges for some point ##x_0## on ##[a,b]##. If ##\{f_n'\}## converges uniformly on ##[a,b]##, then ##\{f_n\}## converges uniformly on ##[a,b]##, to a function ##f##, and $$f'(x)=\lim_{n\to\infty}f_n'(x)\quad (a\leq x\leq b).$$

Obviously, open versus closed intervals is one difference, but if we were to replace the open intervals with closed ones, would the first theorem be a special case of the second? Some have said they are even roughly equivalent, which I don't see. The way I see it, the first theorem assumes ##f_n(x)## converges for all ##x## in ##(a,b)## (or ##[a,b]## after the replacement), whereas the second theorem only assumes ##\{f_n(x_0)\}## converges at a single point ##x_0##. I'll be honest and say I have read neither proof so far. What I have read is a proof under additional assumptions (continuity of ##f_n##), but that's another theorem. Any comments are appreciated.
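To spell out the comparison on a closed interval (so that both statements apply to the same setting), the implication between the convergence hypotheses seems to go only one way:
$$
f_n\to f \text{ pointwise on } [a,b] \;\Longrightarrow\; \{f_n(x_0)\} \text{ converges for some } x_0\in[a,b],
$$
so every situation covered by the first theorem also satisfies the hypotheses of 7.17, while upgrading convergence at a single point to convergence everywhere is part of what 7.17 concludes.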
 
  • #2
I'm not sure I see a difference, except that 7.17 looks sloppy to me. How did they define differentiability at ##a## or ##b##? By the definition of manifolds with boundary? Since when is ##x\longmapsto |x|## defined on ##[0,1]## differentiable at ##x=0##? By what argument do we require the left and right limits to coincide on ##(0,1)## but are satisfied with a one-sided limit at ##x=0## or ##x=1##?

There is more work to do in 7.17. But differentiability of all ##f_n## plus uniform convergence of ##f'_n## are strong conditions. I could imagine they are sufficient to pass from convergence at one point ##x_0## to (pointwise!) convergence everywhere on ##(a,b)## by patching one neighborhood to the next. (I'm still not convinced how to deal with the boundary points, since differentiability is a local phenomenon, i.e. defined in an open neighborhood.) This is probably the main difference: how to get rid of ##x_0##:
$$
f_n(x_0)\to y_0 \wedge f_n \text{ diff. on } (a,b) \wedge f'_n \text{ conv. unif. on } (a,b) \stackrel{(*)}{\Longrightarrow} f_n(x)\to y \text{ for all } x\in (a,b)
$$
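One way to make ##(*)## plausible (a sketch, assuming the mean value theorem may be applied to ##f_n-f_m## on the segment between ##x_0## and ##x##) is the estimate
$$
|f_n(x)-f_m(x)| \;\leq\; |f_n(x_0)-f_m(x_0)| + |x-x_0|\,\sup_{t\in(a,b)}|f_n'(t)-f_m'(t)|,
$$
which makes ##\big(f_n(x)\big)## a Cauchy sequence at every ##x\in(a,b)## as soon as ##\big(f_n(x_0)\big)## converges and the ##f_n'## are uniformly Cauchy.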
Uniform convergence of the ##f'_n## is primarily needed so that limit and integral can be swapped: if ##f## is differentiable, then
$$
f(x)=\int f'\,dx = \int \lim_{n \to \infty}f'_n \,dx = \lim_{n \to \infty} \int f'_n\,dx = \lim_{n \to \infty} f_n(x) \quad (**)
$$
so it's probably not necessary to use it in ##(*)##.
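Under the extra assumption that each ##f_n'## is continuous, the integral argument can be anchored at ##x_0## and then gets rid of it: by the fundamental theorem of calculus and uniform convergence on the bounded interval,
$$
f_n(x)=f_n(x_0)+\int_{x_0}^x f_n'(t)\,dt \;\longrightarrow\; y_0+\int_{x_0}^x g(t)\,dt \qquad (n\to\infty),
$$
where ##g=\lim_{n\to\infty} f_n'##. The right-hand side then defines ##f(x)##, and since ##g## is continuous as a uniform limit of continuous functions, differentiating gives ##f'=g## on ##(a,b)##.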

Edit: I corrected ##(**)##.
 