Uniform convergence and pointwise convergence

  • #1
chwala
Gold Member
TL;DR Summary
What is the key difference between uniform convergence and pointwise convergence? I would appreciate an example that is easier to follow and understand. Let me be the student here.
I am trying to follow this link:

https://people.math.wisc.edu/~angenent/521.2017s/UniformConvergence.html

...not getting it... Of course I know what convergence is. Just to mention, for example: given the sequence ##\dfrac{1}{n}##, I know that the sequence tends to ##0##, or rather converges to ##0##, as ##n## increases, i.e. from my knowledge of limits.

Also,
consider the series ##1, 0.5, 0.25, ...## I know that the series converges to ##2##.

Maybe an easier example would point me in the right direction. Do note that analysis was not an area that I focused on in my studies...

Digging further by looking at wikipedia,

https://en.wikipedia.org/wiki/Uniform_convergence

The bottom example seems clear to me. Uniform convergence simply means convergence in literal terms. Correct?

 
  • #2
The distinction is one which is only applicable to sequences of functions.

A sequence of functions [itex]f_n : [a,b] \to \mathbb{R}[/itex] converges pointwise to [itex]f : [a,b] \to \mathbb{R}[/itex] if and only if for each [itex]x \in [a,b][/itex] and each [itex]\epsilon > 0[/itex], there exists an [itex]N \in \mathbb{N}[/itex] such that if [itex]n > N[/itex] then [itex]|f_n(x) - f(x)| < \epsilon[/itex]. Note that here [itex]N[/itex] is allowed to vary with [itex]x[/itex].

The sequence converges uniformly if and only if for each [itex]\epsilon > 0[/itex] there exists an [itex]N \in \mathbb{N}[/itex] such that for all [itex]x \in [a,b][/itex], if [itex]n > N[/itex] then [itex]|f_n(x) - f(x)| < \epsilon[/itex]. Here [itex]N[/itex] cannot vary with [itex]x[/itex]; the same [itex]N[/itex] must work for each [itex]x[/itex]. This is equivalent to the condition that for every [itex]\epsilon > 0[/itex] there exists [itex]N \in \mathbb{N}[/itex] such that if [itex]n > N[/itex] then [itex]\sup |f_n - f| < \epsilon[/itex].
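These two definitions can be illustrated numerically. Below is a minimal Python sketch (my own illustration, not from the post) that estimates ##\sup|f_n - f|## on a grid for two sequences: ##f_n(x) = x/n## converges uniformly to ##0## on ##[0,1]##, while ##g_n(x) = x^n## converges to ##0## only pointwise on ##[0,1)##.

```python
import numpy as np

# Sketch: estimate sup |f_n - f| on a grid for two sequences of functions.
# f_n(x) = x / n  converges uniformly to 0: sup |f_n - 0| = 1/n -> 0.
# g_n(x) = x ** n converges pointwise to 0 on [0, 1), but its supremum
# over [0, 1) is 1 for every n, so the convergence is not uniform.

x = np.linspace(0.0, 0.999, 1000)  # grid on [0, 1), endpoint excluded

for n in (1, 10, 100, 1000):
    sup_f = np.max(np.abs(x / n))   # shrinks like 1/n
    sup_g = np.max(np.abs(x ** n))  # stays large for all these n
    print(n, sup_f, sup_g)
```

On the grid, `sup_f` eventually falls below any given ##\epsilon##, while `sup_g` does not: that is exactly the failure of a single ##N(\epsilon)## to work for all ##x## at once.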
 
  • Like
  • Informative
Likes chwala, dextercioby and PeroK
  • #3
I like to imagine it like this...
(I use the words "line" and "function" interchangeably here, so load that into your mind before reading this!)

A *use case* (woah, a use case, imagine!) of _a sequence of functions_ may be "to estimate another function": pick any function you want, draw it in your head on a 2D graph, and call it f.

Now imagine a sequence of lines/functions that slowly get closer and closer to your desired line: f1, f2, f3, ..., fn, ...

You may wish to know:
If you stop the estimate at function n, will you be 'close enough' to your desired function?

Let's phrase this statement more mathematically, in words...

For this we need to understand what we mean by "close enough". That depends on the person and on the use case. It would be helpful, for example, to know what n you need to go to so that fn lies within a strip of width e around your desired function, for whatever e you might want, whether 0.1 mm or 0.1 nanometres.
Then you can calibrate your n so that fn is "close enough".

This strip idea is central to uniform convergence.

Go back to the last paragraph and read it again.


For concrete understanding...
The strip region (in 2D) is: the set of points (x, y) such that y is in the range [f(x)-e, f(x)+e].

---- Right ......
Remember our original question/task:
If you stop the estimate at function n, will you be 'close enough' to your desired function?

Let's make a test called The Strip Test (this is unofficialTM) that tests whether a function fn(x) is within a strip (of width e) about f(x)...
Then we can use it to say after which n the functions fn are close enough (pass the test).

The Strip Test says...
- Close enough, if the line "(x,y) such that y=fn(x)" lies fully inside the strip.
i.e. if for all x, fn(x) is between f(x)-e and f(x)+e

- Not Close enough, if any bit of the line y=fn(x) lies outside of _The Strip Region_.
i.e. if there's an x, such that fn(x) is not within e of f(x)
---------------
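As an aside, The Strip Test can be sketched in code. A minimal Python sketch (the name `strip_test` is my own and, of course, unofficial):

```python
import numpy as np

def strip_test(fn_vals, f_vals, eps):
    """Unofficial 'Strip Test': True if the graph of f_n lies entirely
    inside the strip of half-width eps around f (on the sampled grid)."""
    return bool(np.all(np.abs(fn_vals - f_vals) < eps))

# Example: desired f(x) = 0 on [0, 0.99], estimates f_n(x) = x**n,
# strip half-width e = 0.1.
x = np.linspace(0.0, 0.99, 500)
f = np.zeros_like(x)

print(strip_test(x ** 5, f, 0.1))    # False: 0.99**5 ~ 0.95 pokes out of the strip
print(strip_test(x ** 500, f, 0.1))  # True on this grid: 0.99**500 ~ 0.0066
```

Note the grid stops at 0.99; for x even closer to 1 a still larger n would be needed, which is the whole point of the uniform/pointwise distinction.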
Quote:
"Then we can use it to say after which n the functions fn are close enough (pass the test).
--- Me, a few lines up.

The functions fn(x) "converge to within e of f" after step n if, for all n2 from n onwards, f_n2(x) passes The Strip Test (for e and f).
(We need long-term behaviour, not just one odd function dipping inside the strip, hence the "for all n2 after"...)

Hey, you can now use this definition to find the n you need to make all the fn's lie within whatever strip of width e you want!
-----
If you want to know about the infinite behaviour, however, and whether the fn's converge to f completely, you can make this simple test:
The functions fn(x) "converge completely to f"
if there's an n after which the functions fn(x) "converge to within e of f", and not just for one e, but FOR ALL "strip widths e" THAT YOU CAN THINK OF!

This definition goes by the name of uniform convergence in the official mathematical community.

---------- Pause for tea --------
But while you drink it, pause for a moment and remember the guarantees that we've created: if your sequence passes the "converge to within e of f" test,
then you can be sure in which region your desired f lies, even if you don't know f exactly, but only estimate it with fn.
Isn't that great!

Now ask yourselves this, as you sip your Earl Grey or English Breakfast or ...
Would you have the same guarantees if you only knew that your sequence passes the "pointwise convergence test" you've so aptly defined? What guarantees would you even have for your estimates fn? Where could f be?

In this respect, uniform convergence, or even our "convergence to within e of f", is much stronger than simple pointwise convergence, for it tells us where f could be, given an fn that we have estimated. (Stronger as a test, that is.)

I'll leave some things out of this answer, but these are also interesting questions that may be answered in someone else's reply:

Why does uniform convergence imply pointwise convergence?

and

Why is uniform convergence related to the max norm of the difference between fn and f?

Max
 
  • Like
Likes chwala
  • #4
The difference is that "uniform" means: "at the same rate for every ##x##". In formulas, it is (edits by me):
pasmith said:
A sequence of functions [itex]f_n : [a,b] \to \mathbb{R}[/itex] converges pointwise to [itex]f : [a,b] \to \mathbb{R}[/itex] if and only if for each [itex]x \in [a,b][/itex] and each [itex]\epsilon > 0[/itex], there exists an [itex]N\mathbf{=N(\epsilon, x)} \in \mathbb{N}[/itex] such that if [itex]n > N[/itex] then [itex]|f_n(x) - f(x)| < \epsilon[/itex]. Note that here [itex]N[/itex] is allowed to vary with [itex]x[/itex].

The sequence converges uniformly if and only if for each [itex]\epsilon > 0[/itex] there exists an [itex]N\mathbf{=N(\epsilon)} \in \mathbb{N}[/itex] such that for all [itex]x \in [a,b][/itex], if [itex]n > N[/itex] then [itex]|f_n(x) - f(x)| < \epsilon[/itex]. Here [itex]N[/itex] cannot vary with [itex]x[/itex]; the same [itex]N[/itex] must work for each [itex]x[/itex]. This is equivalent to the condition that for every [itex]\epsilon > 0[/itex] there exists [itex]N \in \mathbb{N}[/itex] such that if [itex]n > N[/itex] then [itex]\sup |f_n - f| < \epsilon[/itex].

It is one of the reasons I don't like the bare "##N##" in this notation. It should be written as ##\mathbf{N(\epsilon, x)}## or ##\mathbf{N(\epsilon)}## to emphasize that it depends on the quantities in the existential quantifier. It's the same with ordinary sequences: it is always an ##\mathbf{N(\epsilon)}.##
 
  • Like
Likes Maxicl, chwala and dextercioby
  • #5
chwala said:
consider the series ##1, 0.5, 0.25, ...## I know that the series converges to ##2##.

Sorry, why does that converge to 2?
 
  • Like
Likes WWGD
  • #6
berkeman said:
Sorry, why does that converge to 2?
##\displaystyle{\sum_{k=0}^\infty 2^{-k}=2}##

series = sum
sequence = list
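The distinction can be checked in a couple of lines (a quick sketch of my own): the partial sums of the series approach ##2##, while the terms of the sequence approach ##0##.

```python
# Partial sums of the geometric series 1 + 1/2 + 1/4 + ... approach 2,
# while the terms of the sequence 1, 1/2, 1/4, ... approach 0.
s = 0.0
for k in range(50):
    s += 2.0 ** (-k)

print(s)             # ~ 2 (off by exactly 2**-49)
print(2.0 ** (-49))  # the 50th term, ~ 0
```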
 
  • Informative
  • Like
Likes chwala and berkeman
  • #7
Thank you fresh. I thought he meant sequence. :doh:
 
  • #8
berkeman said:
Thank you fresh. I thought he meant sequence. :doh:
Not to distract the thread, but shouldn't he have used ##+## signs in his statement instead of commas? Or is it just understood if you say "series" that "," = "+" ? Thanks.
 
  • Like
Likes PeroK
  • #9
berkeman said:
Not to distract the thread, but shouldn't he have used ##+## signs in his statement instead of commas? Or is it just understood if you say "series" that ##,## = ##+## ? Thanks.
I think this specific series is so well known (Zeno's paradox, von Neumann anecdotes, a standard example in maths) that ##1,\frac{1}{2},\frac{1}{4},\frac{1}{8},\ldots## together with ##2## is already sufficient for everybody to understand what he meant. The word "series" automatically means a sum in this context. Exception: time series (Brockwell / Davis), where the word "series" is used to emphasize the order rather than the summation, although they only informally "define" them as sets of pairs.
 
  • Informative
Likes berkeman
  • #10
If "the series ##1, \frac 1 2, \frac 1 4, \frac 1 8, \dots##" means ##1 + \frac 1 2 + \frac 1 4 + \frac 1 8 + \dots##, then what does "the sequence ##1 + \frac 1 2 + \frac 1 4 + \frac 1 8 + \dots##" mean? 🤔
 
  • Haha
Likes berkeman and fresh_42
  • #11
Fun fact: the example, regardless of whether we read it as ##\displaystyle{\lim_{n \to \infty}\sum_{k=0}^n 2^{-k}=2}## or as ##\displaystyle{\lim_{n \to \infty}2^{-n}=0}##, does not fit the question, as already noted in post #2.
pasmith said:
The distinction is one which is only applicable to sequences of functions.
 
  • #12
An example of a sequence of functions that converges pointwise but not uniformly is ##x^n##, ##n=1,2,\ldots##, on ##x \in [0,1]##. Points closer to ##1## go to zero at a much slower rate than those near ##0## (and at ##x=1## the value stays at ##1##, so the limit function is not even continuous).
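The ##x##-dependence of the rate can be made explicit: for ##0 < x < 1## we have ##x^n < \epsilon## exactly when ##n > \ln\epsilon/\ln x##, so the smallest workable ##N## blows up as ##x \to 1##. A quick sketch (the helper name `N_needed` is my own):

```python
import math

def N_needed(x, eps):
    """Smallest n with x**n < eps, for 0 < x < 1 and 0 < eps < 1.
    x**n < eps  <=>  n*ln(x) < ln(eps)  <=>  n > ln(eps)/ln(x),
    where dividing by ln(x) < 0 flips the inequality."""
    return math.floor(math.log(eps) / math.log(x)) + 1

eps = 0.01
for x in (0.5, 0.9, 0.99, 0.999):
    print(x, N_needed(x, eps))  # the required N grows without bound as x -> 1
```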
 
  • Like
Likes chwala
  • #13
My book gives, for ##n>1## and ##f_n\, : \,[0,1] \longrightarrow \mathbb{R}##,
$$
f_n(x) =\max\left(n-n^2\cdot \left|x-\dfrac{1}{n}\right|,0\right)
$$
as an example of pointwise, not uniform convergence.
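Numerically, this example looks as follows (a sketch based on my reading of the formula): each ##f_n## is a triangular spike of height ##n## centred at ##x=1/n##. For any fixed ##x## the spike eventually slides past it, so ##f_n(x) \to 0## pointwise, yet ##\sup|f_n| = n \to \infty##, so the convergence cannot be uniform.

```python
import numpy as np

def f(n, x):
    # Triangular spike of height n centred at x = 1/n, zero elsewhere.
    return np.maximum(n - n**2 * np.abs(x - 1.0 / n), 0.0)

x = np.linspace(0.0, 1.0, 100001)
for n in (2, 10, 100):
    print(n, np.max(f(n, x)))  # the sup is (about) n: no uniform convergence

# At the fixed point x = 0.5 the spike eventually slides past:
print([float(f(n, 0.5)) for n in (2, 10, 100)])  # nonzero only for n = 2
```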
 
  • #14
...a bit clearer now...

In general, looking at limits of functions... the 'epsilons' simply mean small changes in y values and the n>M simply means x values ... in the context of limits of functions...
 
  • #15
chwala said:
..a bit clear now...

I get that the 'epsilons' simply mean small changes in y values and the n>M simply means x values ... in the context of limits of functions...
You should try to prove the non-uniformity of the two examples @WWGD and I gave, to see that you cannot choose N independently of the location. Consider ##|f_n(x)-0| < 1## for all ##x\in [0,1]## in my example.
 
  • #16
I'll try to analyse the example given by @WWGD, which looks fairly straightforward, and then get back.
 
  • #17
I understand the distinction in this way; it's not hard to follow. My challenge is mainly with the semantics used in analysis...

To check for uniform convergence, for the case ##0\le x<1##:

## \lim_{n\to\infty} \sup |f_n (x) - f(x)| =0, ##
i.e. the maximum deviation between ##f_n (x)## and ##f(x)## approaches ##0## as ##n\to\infty##.

##\lim_{n\to\infty} x^n =0## on ##0\le x<1##.

This implies that the sequence ##x^n## converges pointwise to the function ##f(x)=0## on ##0\le x<1##.

For uniform convergence, we check

## \sup |x^n - 0| = \sup |x^n|. ##

On ##0\le x<1## this supremum equals ##1## for every ##n##, since ##x^n## gets arbitrarily close to ##1## as ##x## approaches ##1##.

##x^n## can be quite large near ##x=1## even though it approaches ##0## for each fixed ##x## strictly less than ##1##. For example, ##0.99^{100} \approx 0.366##. The rate at which ##x^n## approaches ##0## depends heavily on how close ##x## is to ##1##. In conclusion, the convergence is not uniform.

For the case ##x=1##,

##x^n## remains equal to ##1## for all ##n##, so the pointwise limit on ##[0,1]## is discontinuous; since each ##x^n## is continuous and a uniform limit of continuous functions is continuous, the convergence again cannot be uniform.
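This can also be confirmed numerically (a quick sketch of my own): for every ##n## the point ##x = (1/2)^{1/n}## lies in ##[0,1)## and satisfies ##x^n = 1/2##, so the supremum of ##|x^n - 0|## over ##[0,1)## never shrinks.

```python
# For every n there is an x in [0, 1) with x**n = 1/2, so
# sup over [0, 1) of |x**n - 0| never drops below 1/2 (in fact it is 1).
for n in (10, 100, 10000):
    x = 0.5 ** (1.0 / n)
    print(n, x, x ** n)  # x ** n stays at 1/2 (up to rounding), x -> 1
```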
 
