On (real) entire functions and the identity theorem

In summary, the thread discusses real entire functions: functions that are holomorphic on the entire complex plane and take real values for real inputs. It explores the identity theorem, which states that if two entire functions agree on a set of points with a limit point in the plane, they must be identical everywhere. The discussion examines the implications of this theorem for real entire functions, highlighting uniqueness and properties of such functions in relation to their zeros and growth rates, and considers examples and applications within the broader context of complex analysis.
  • #1
psie
TL;DR Summary
In a footnote in Ordinary Differential Equations by Adkins and Davidson, I read about power series of infinite radius of convergence and that they are "determined completely by its values on ##[0,\infty)##". This claim confuses me.
In Ordinary Differential Equations by Adkins and Davidson, in a chapter on the Laplace transform (specifically, in a section where they discuss the linear space ##\mathcal{E}_{q(s)}## of input functions that have Laplace transforms that can be expressed as proper rational functions with a fixed polynomial ##q(s)## in the denominator), I read the following two sentences in a footnote:

In fact, any function which has a power series with infinite radius of convergence [...] is completely determined by its values on ##[0,\infty)##. This is so since ##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##.

Both of these sentences confuse me, but especially the latter one. ##f^{(n)}## evaluated at ##0## depends on the values of ##f^{(n-1)}## in an arbitrarily small neighborhood of ##0##. What do they mean by "##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##"?

For the first sentence, I suspect they are referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subset of ##\mathbb R## with a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so we can say that an analytic function is completely determined by its values on any subset with a limit point in ##\mathbb R##, e.g. ##[0,\infty)##.
 
  • #2
A function [itex]f[/itex] having an infinite radius of convergence means that there exists a sequence [itex](a_n)_{n \geq 0}[/itex] of real numbers such that [tex]
f(t) = \sum_{n=0}^\infty a_n t^n[/tex] is true for every [itex]t \in \mathbb{R}[/itex]. It follows by direct differentiation (which can be done term-by-term within the radius of convergence) that [tex]
f^{(n)}(0) = n!a_n[/tex] so that [itex]f^{(n)}(0)[/itex] exists.

Since [itex]f^{(n)}(0)[/itex] exists, it must be equal to the one-sided limit [tex]\lim_{t \to 0^{+}} \frac{f^{(n-1)}(t) - f^{(n-1)}(0)}{t}[/tex] which depends only on the values of [itex]f^{(n-1)}[/itex], and hence ultimately of [itex]f[/itex], on the interval [itex][0, \infty)[/itex].

(The converse does not hold: for [itex]f^{(n)}(0)[/itex] to exist we need the existence and value of the limit to be independent of the direction in which we approach the origin.)
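As a numerical sketch of the one-sided-limit argument above (helper names are illustrative, not from the book; using ##f(t)=e^t##, for which ##f'(0)=1##): a difference quotient built only from values of ##f## on ##[0,\infty)## already pins down the derivative at ##0##.

```python
import math

def f(t):
    # f(t) = e^t, whose Maclaurin coefficients are a_n = 1/n!
    return math.exp(t)

def right_derivative_at_zero(f, h=1e-6):
    # One-sided difference quotient: evaluates f only at t >= 0,
    # i.e. only on the interval [0, oo)
    return (f(h) - f(0.0)) / h

print(right_derivative_at_zero(f))  # close to f'(0) = 1
```

Since the series guarantees ##f'(0)## exists, the one-sided limit must equal the two-sided derivative, so shrinking `h` from the right alone recovers it.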
 
  • #3
psie said:
TL;DR Summary: In a footnote in Ordinary Differential Equations by Adkins and Davidson, I read about power series of infinite radius of convergence and that they are "determined completely by its values on ##[0,\infty)##". This claim confuses me.

In Ordinary Differential Equations by Adkins and Davidson, in a chapter on the Laplace transform (specifically, in a section where they discuss the linear space ##\mathcal{E}_{q(s)}## of input functions that have Laplace transforms that can be expressed as proper rational functions with a fixed polynomial ##q(s)## in the denominator), I read the following two sentences in a footnote:
Both of these sentences confuse me, but especially the latter one. ##f^{(n)}## evaluated at ##0## depends on the values of ##f^{(n-1)}## in an arbitrarily small neighborhood of ##0##. What do they mean by "##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##"?

For the first sentence, I suspect they are referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subset of ##\mathbb R## with a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so we can say that an analytic function is completely determined by its values on any subset with a limit point in ##\mathbb R##, e.g. ##[0,\infty)##.
I believe the identity theorem (that two functions which agree on a subset containing a limit point, i.e. not just a discrete set, are equal everywhere) applies only to complex-analytic functions, not real-analytic ones. Maybe it applies if the latter can be extended to a complex-analytic function (as its real part).
 

FAQ: On (real) entire functions and the identity theorem

What is an entire function?

An entire function is a complex function that is holomorphic (complex differentiable) at every point in the complex plane. These functions can be represented by a power series that converges for all complex numbers. Examples include polynomials and the exponential function.
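For instance, the power series of the exponential function converges for every complex argument. A small Python sketch (the helper `exp_partial_sum` is illustrative, not a library function) compares a truncated series against `cmath.exp`:

```python
import cmath

def exp_partial_sum(z, terms=60):
    # Partial sum of the everywhere-convergent series e^z = sum_n z^n / n!
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # z^(n+1)/(n+1)! from z^n/n!
    return total

z = 2 + 3j
print(abs(exp_partial_sum(z) - cmath.exp(z)))  # negligibly small
```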

What is the identity theorem for entire functions?

The identity theorem states that if two entire functions agree on a set of points with a limit point in the complex plane, then the two functions must be identical everywhere in the complex plane. This theorem highlights the strong rigidity and uniqueness properties of entire functions.
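A concrete illustration in Python (function names are illustrative): the entire functions ##\sin 2z## and ##2\sin z\cos z## agree on the real line, which has limit points, so the identity theorem forces them to agree at every complex point. A spot check at a non-real point is consistent with this:

```python
import cmath

# sin(2z) and 2 sin(z) cos(z) are entire and agree on the real axis (a set
# with limit points), so by the identity theorem they agree for all complex z.
def lhs(z):
    return cmath.sin(2 * z)

def rhs(z):
    return 2 * cmath.sin(z) * cmath.cos(z)

z = 1.5 - 0.7j
print(abs(lhs(z) - rhs(z)))  # ~0 up to floating-point error
```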

How do real entire functions differ from complex entire functions?

Real entire functions are the entire functions that take real values for real inputs; equivalently, all of their Maclaurin coefficients are real. Every real entire function is in particular a complex entire function, but the real-valuedness imposes extra structure: for example, the non-real zeros of a real entire function occur in complex-conjugate pairs.
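A cautionary sketch related to this distinction (the helper `partial_sum` is hypothetical): a function can be real-analytic on all of ##\mathbb R## without being the restriction of an entire function. The Maclaurin series of ##1/(1+t^2)## has radius of convergence only 1, because of the poles at ##\pm i##:

```python
# 1/(1+t^2) is real-analytic on all of R, but it is NOT the restriction of an
# entire function: the poles at t = +-i limit its Maclaurin series
# sum_n (-1)^n t^(2n) to radius of convergence 1.
def partial_sum(t, terms=50):
    return sum((-1) ** n * t ** (2 * n) for n in range(terms))

print(abs(partial_sum(0.5) - 1 / (1 + 0.5 ** 2)))  # tiny: inside radius 1
print(abs(partial_sum(1.5)))  # enormous: series diverges outside radius 1
```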

What is the significance of the zeros of entire functions?

The zeros of entire functions play a crucial role in their analysis. According to the Weierstrass factorization theorem, any entire function can be expressed as a product involving its zeros (together with an exponential factor). The distribution of these zeros also provides insight into the growth and order of the function.
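The classical example is the Euler product ##\sin \pi z = \pi z \prod_{n=1}^\infty \left(1 - z^2/n^2\right)##, built from the zeros at the integers. A truncated version can be checked numerically (the helper name `sin_pi_product` is illustrative):

```python
import math

def sin_pi_product(x, factors=100000):
    # Truncated Euler/Weierstrass product:
    # sin(pi x) = pi x * prod_{n>=1} (1 - x^2 / n^2)
    value = math.pi * x
    for n in range(1, factors + 1):
        value *= 1 - x ** 2 / n ** 2
    return value

x = 0.3
print(sin_pi_product(x), math.sin(math.pi * x))  # nearly equal
```

Convergence of the product is slow (the truncation error after ##N## factors behaves roughly like ##x^2/N##), which is why so many factors are used here.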

How can the identity theorem be applied in practical scenarios?

The identity theorem can be applied in fields such as engineering, physics, and applied mathematics, particularly in signal processing and control theory. It allows one to conclude that two analytic functions agree everywhere from their agreement on a comparatively small set, which can simplify analyses and justify schemes that rely on function approximation.
