What is the difference between these two power series theorems?

  • #1
psie
TL;DR Summary
I'm studying probability generating functions (PGFs), and it is claimed that if two RVs have the same PGF, then they have the same distribution. The author of my book refers to the literature for this result and remarks that it follows from the "uniqueness theorem" for power series. I have been trying to look up this theorem and found two theorems regarding it, but with quite different proofs. This makes me think these theorems are not the same, but I can't tell the difference.
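To fix notation, for an RV ##X## taking values in ##\{0,1,2,\ldots\}## the PGF is $$G_X(s)=E[s^X]=\sum_{n=0}^\infty P(X=n)\,s^n,$$ so if ##G_X=G_Y## on a neighborhood of ##0##, a uniqueness theorem for power series would give ##P(X=n)=P(Y=n)## for all ##n##, i.e. the same distribution.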
The first theorem is from baby Rudin, i.e. his book PMA. It reads as follows:

8.5 Theorem Suppose the series ##\sum a_n x^n## and ##\sum b_n x^n## converge in the segment ##S=(-R,R)##. Let ##E## be the set of all ##x\in S## at which $$\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty b_n x^n.$$ If ##E## has a limit point in ##S##, then ##a_n=b_n## for ##n=0,1,2,\ldots##. Hence the above equation holds for all ##x\in S##.

I omit the proof, as it is a bit technical. But basically one needs to prove that ##A##, the set of all limit points of ##E## in ##S##, is open; combined with the connectedness of ##S##, it then follows that ##E=S##.

The second theorem is from these lecture notes:

Corollary 10.23. If two power series $$\sum_{n=0}^{\infty} a_n(x-c)^n, \quad \sum_{n=0}^{\infty} b_n(x-c)^n$$ have nonzero radius of convergence and are equal in some neighborhood of ##c##, then ##a_n=b_n## for every ##n=0,1,2, \ldots##.

Proof. If the common sum in ##|x-c|<\delta## is ##f(x)##, we have $$a_n=\frac{f^{(n)}(c)}{n!}, \quad b_n=\frac{f^{(n)}(c)}{n!},$$ since the derivatives of ##f## at ##c## are determined by the values of ##f## in an arbitrarily small open interval about ##c##, so the coefficients are equal.
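The coefficient formula here comes from differentiating the series term by term ##k## times, which is justified inside the radius of convergence: $$f^{(k)}(x)=\sum_{n=k}^{\infty} n(n-1)\cdots(n-k+1)\,a_n(x-c)^{n-k}, \qquad f^{(k)}(c)=k!\,a_k,$$ since at ##x=c## only the ##n=k## term survives; likewise ##f^{(k)}(c)=k!\,b_k##, so ##a_k=b_k##.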

The proof here is different from the one in Rudin's book, and seemingly much simpler. But I don't know whether the corollary is perhaps weaker than the theorem in Rudin's book. Does anyone know what the difference is between these two statements?
 
  • #2
I think I have spotted the difference. In Rudin's theorem, ##E## is simply a set that has a limit point, whereas ##E## in the corollary is a neighborhood around ##c##. Thus the corollary has a stronger hypothesis.
 
  • #3
psie said:
I think I have spotted the difference. In Rudin's theorem, ##E## is simply a set that has a limit point, whereas ##E## in the corollary is a neighborhood around ##c##. Thus the corollary has a stronger hypothesis.
As an example, ##E## may equal ##\{ 1/n : n \in \mathbb N \}##, which has the limit point ##0## in ##S## but is not a neighborhood of any point.
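For contrast, a function that is not given by a power series can vanish on such a set without being identically zero near the limit point, e.g. $$f(x)=\begin{cases} x^2\sin(\pi/x), & x\neq 0,\\ 0, & x=0, \end{cases}$$ which vanishes at every ##x=1/n## and at ##0##, yet is not the zero function on any neighborhood of ##0##. Since the conclusion of Rudin's theorem fails for it, ##f## cannot be represented by a convergent power series on an interval around ##0##.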
 
  • #4
You have spotted the difference. It may not help, but here is my take on the stronger version. One wants to show (by subtraction) that a power series whose set of zeroes has a limit point is the zero series, i.e. all ##a_n=0##. This is called the principle of isolated zeroes for analytic functions.

I.e. let ##c## be any point of the interval and re-expand the power series about ##c##. Either all coefficients are zero, or the series has the form ##(x-c)^k f(x)##, where ##f## is a power series in ##(x-c)## with nonzero constant term. Hence ##f## is nonzero near ##c##, and ##(x-c)^k## is zero only at ##c##, so ##c## is an isolated zero.
Hence if ##c## is a limit point of the set of zeroes, the re-expansion of the series about ##c## has all coefficients zero. (Hence the function is actually zero on a neighborhood of ##c##, and the set of limit points is open.) But then we can recover the original series by re-expanding back from the zero series, hence the original series is also zero. (You can also argue that the set of limit points is closed, and thus equals the whole interval, since a nonempty proper subset of an interval cannot be both open and closed.)
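To spell out the factorization step: if ##k## is the index of the first nonzero coefficient of the re-expansion about ##c##, then $$\sum_{n=k}^{\infty} c_n(x-c)^n=(x-c)^k\sum_{m=0}^{\infty}c_{k+m}(x-c)^m=(x-c)^k f(x),\qquad f(c)=c_k\neq 0,$$ and since ##f## is continuous near ##c##, it is nonzero on some interval about ##c##, so ##c## is an isolated zero.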
 
  • #5
Maybe, too, you can extend your real series to a complex analytic one, where the identity theorem would apply, making the difference of the series identically zero. Edit: I believe the existence of the extension may be justified by Cauchy-Riemann.
 
  • #6
Nice idea, but I think this is mostly the same story. I.e. a series with real coefficients that converges for real ##x## with ##|x| < R## also converges for complex ##z## with ##|z| < R##. This is essentially because absolute convergence implies convergence for both real and complex series, and absolute convergence depends only on the modulus ##|z|##. But we don't gain anything, since the proof of the identity theorem for the complex series is exactly the same as the one for the real series.
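Spelled out: inside its radius of convergence a power series converges absolutely, so for ##|z|<R## $$\sum_{n=0}^{\infty}|a_n z^n|=\sum_{n=0}^{\infty}|a_n|\,|z|^n<\infty,$$ since the right-hand side is just the real series at the point ##x=|z|<R## with coefficients replaced by their absolute values.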

I.e. the identity theorem is a characteristic of analytic functions, i.e. functions locally defined by power series, and rests on the principle of isolated zeroes for such functions. Whether the coefficients and arguments are real or complex is immaterial.

There is indeed a stronger theorem for complex functions, which says that if they are merely differentiable on a connected open set, then the identity theorem holds for them. The proof is first to show that such functions are actually analytic, i.e. locally defined by power series, and then to apply the identity principle for analytic functions.

So real differentiable functions can be very different from complex differentiable ones, but real analytic functions behave much like complex analytic ones. The problem with real differentiable functions is that we don't necessarily know whether they are analytic; but if we do know that, then we already have many of the good properties of complex analytic functions. Not all, of course: no open mapping property, and no path integral formulas. At least this is my take on it.
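A standard example of the gap is $$f(x)=\begin{cases} e^{-1/x^2}, & x\neq 0,\\ 0, & x=0, \end{cases}$$ which is infinitely differentiable on ##\mathbb R## with ##f^{(n)}(0)=0## for every ##n##. Thus ##f## and the zero function have the same Taylor coefficients at ##0## while being different functions, so ##f## is smooth but not analytic at ##0##, and no identity theorem can hold for merely smooth real functions.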
 
