Regarding approximation theorem

In summary, the discussion is about the effect of a periodic function and Parseval's Theorem. The effect of a function is defined as P(f) = 1/(2\pi) \int_{-\pi}^\pi |f(t)|^2 \, dt, and Parseval's Theorem states that P(f) = \sum_{n=-\infty}^\infty |c_n|^2. The conversation then moves on to discussing partial sums and how they can be defined in various ways. The main question is how to prove that the inequality P(S_N)/P(f) \geq \delta is satisfied only if \sum_{|n|>N} |c_n|^2 \leq (1-\delta)P(f).
  • #1
standardflop
Hello,
The effect of a 2π-periodic function f is defined as
[tex] P(f) = 1/(2\pi) \int_{-\pi}^\pi |f(t)|^2 \ dt[/tex]
and Parseval's Theorem tells us that
[tex] P(f) = \sum_{n=-\infty}^\infty |c_n|^2 [/tex]. Now, it seems rather intuitive that the effect of the N-th partial sum is
[tex] P(S_N) = \sum_{n=-N}^N |c_n|^2 [/tex] But what is the in-between math argument? And furthermore, how can I prove that the inequality [itex] P(S_N)/P(f) \geq \delta [/itex] is satisfied only if
[tex] \sum_{|n|>N} |c_n|^2 \leq (1-\delta)P(f) [/tex]

Thanks
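For what it's worth, both relations can be checked numerically. A minimal Python sketch, using the sawtooth f(t) = t as an illustrative example (the grid size M and cutoff N are arbitrary choices, not from the thread):

```python
import numpy as np

# Illustrative example: f(t) = t on [-pi, pi), extended 2*pi-periodically.
M = 20000                                    # grid points over one period
t = -np.pi + 2 * np.pi * np.arange(M) / M    # uniform grid on [-pi, pi)
dt = 2 * np.pi / M
f = t

# Effect: P(f) = 1/(2*pi) * integral of |f(t)|^2  (exact value: pi^2 / 3)
P_f = np.sum(np.abs(f) ** 2) * dt / (2 * np.pi)

# Fourier coefficients c_n = 1/(2*pi) * integral of f(t) e^{-int} dt
N = 200
n = np.arange(-N, N + 1)
c = np.array([np.sum(f * np.exp(-1j * k * t)) for k in n]) * dt / (2 * np.pi)

# Effect of the N-th partial sum via Parseval: P(S_N) = sum_{|n|<=N} |c_n|^2
P_SN = np.sum(np.abs(c) ** 2)

print(P_f, P_SN)   # P(S_N) <= P(f); the gap is the tail sum over |n| > N
```

As N grows, P(S_N) increases toward P(f), and the difference P(f) - P(S_N) is exactly the tail [itex] \sum_{|n|>N} |c_n|^2 [/itex] appearing in the inequality above.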
 
  • #2
First Part:
You can define "partial sum" (almost) any way you want here. If you have a series
[tex]
\sum_{n = A}^B \text{ (something)} \, ,
[/tex]
any of the following are partial sums:
[tex]
\begin{align*}
S_1 &= \sum_{n = A^\prime}^B \text{ (something)} \, , \quad (A^\prime > A) \, ;\\
S_2 &= \sum_{n = A}^{B^\prime} \text{ (something)} \, , \quad (B^\prime < B) \, ;\\
S_\text{crazy} &= \sum_{n = -1249742}^{197494614} \text{ (something)} \, .\\
\end{align*}
[/tex]
(The limits in the last expression are both between A and B.)
You have it right. All you've done is written down a nice, symmetric partial sum, which is the one that will solve the problem.
Second Part:
Notice
[tex]
P(f) = P(S_N) + \sum_{|n| > N} |c_n|^2 \, .
[/tex]
Take it from there.
 
  • #3
bigplanet401 said:
First Part:
You can define "partial sum" (almost) any way you want here. If you have a series
[tex]
\sum_{n = A}^B \text{ (something)} \, ,
[/tex]
any of the following are partial sums:
[tex]
\begin{align*}
S_1 &= \sum_{n = A^\prime}^B \text{ (something)} \, , \quad (A^\prime > A) \, ;\\
S_2 &= \sum_{n = A}^{B^\prime} \text{ (something)} \, , \quad (B^\prime < B) \, ;\\
S_\text{crazy} &= \sum_{n = -1249742}^{197494614} \text{ (something)} \, .\\
\end{align*}
[/tex]
(The limits in the last expression are both between A and B.)
You have it right. All you've done is written down a nice, symmetric partial sum, which is the one that will solve the problem.
Sorry, but I am not sure I understand what you're trying to say. Shouldn't one use the definition of the effect to show that
[tex] P(S_N) = \sum_{n=-N}^N |c_n|^2 \, ? [/tex]
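Indeed, that is the in-between argument: it follows from the definition of the effect together with orthogonality of the exponentials. A sketch, writing [itex] S_N(t) = \sum_{|n| \leq N} c_n e^{int} [/itex]:
[tex]
P(S_N) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \Big| \sum_{|n| \leq N} c_n e^{int} \Big|^2 \, dt = \sum_{|m|, |n| \leq N} c_m \overline{c_n} \cdot \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i(m-n)t} \, dt = \sum_{|n| \leq N} |c_n|^2 \, ,
[/tex]
since the inner integral equals 1 when m = n and 0 otherwise.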
bigplanet401 said:
Second Part:
Notice
[tex]
P(f) = P(S_N) + \sum_{|n| > N} |c_n|^2 \, .
[/tex]
Take it from there.
Yes. OK. I see that the greatest value of [tex] \sum_{|n| > N} |c_n|^2 [/tex] is bound to be no higher than P(f), and that only when [itex] \delta = 0 [/itex]. And of course when [itex] \delta = 1 [/itex], [tex] \sum_{|n| > N} |c_n|^2 \leq 0[/tex]. But I can't see why all the in-between values of delta must satisfy
[tex] \sum_{|n| > N} |c_n|^2 \leq (1-\delta ) P(f)[/tex]
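The remaining step is just rearrangement (assuming [itex] P(f) > 0 [/itex]). Substituting [itex] P(S_N) = P(f) - \sum_{|n|>N} |c_n|^2 [/itex] into the inequality gives
[tex]
\frac{P(S_N)}{P(f)} \geq \delta \iff P(f) - \sum_{|n|>N} |c_n|^2 \geq \delta P(f) \iff \sum_{|n|>N} |c_n|^2 \leq (1-\delta) P(f) \, ,
[/tex]
which holds for every [itex] 0 \leq \delta \leq 1 [/itex], not just the endpoints.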
 

FAQ: Regarding approximation theorem

What is the approximation theorem?

The approximation theorem, in this context the Weierstrass approximation theorem, states that any continuous function on a closed interval can be uniformly approximated, to any desired accuracy, by polynomial functions.
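As a concrete illustration, here is a short Python sketch using Bernstein polynomials, the construction behind one standard proof of the theorem; the target function and the degree 200 are arbitrary choices for demonstration:

```python
import math

def bernstein_approx(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at x in [0, 1]."""
    return sum(
        f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
        for k in range(n + 1)
    )

# A continuous but non-smooth target: f(x) = |x - 1/2|
f = lambda x: abs(x - 0.5)

# Worst-case error on a grid; it shrinks as the degree n grows
err = max(abs(bernstein_approx(f, 200, i / 100) - f(i / 100)) for i in range(101))
print(err)
```

Bernstein polynomials interpolate f exactly at the endpoints and converge uniformly on [0, 1] for any continuous f, though the convergence can be slow near points where f is not smooth.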

What is the significance of the approximation theorem?

The approximation theorem is significant because it allows polynomial functions, which are easy to evaluate, differentiate, and integrate, to stand in for more complicated continuous functions. This makes it a powerful tool in many areas of mathematics, including calculus and numerical analysis.

How is the approximation theorem different from the intermediate value theorem?

The intermediate value theorem states that if a function is continuous on a closed interval, it must take on every value between its values at the two endpoints. The approximation theorem, by contrast, deals with the ability to approximate continuous functions by polynomial functions; the two results answer different questions.

Are there any limitations to the approximation theorem?

Yes, there are limitations to the approximation theorem. It applies only to continuous functions on a closed, bounded interval, and it guarantees that a polynomial approximation exists without saying how large the degree must be; in practice, other approximation schemes may be more efficient.

Can the approximation theorem be extended to functions on open intervals?

Only partially. A continuous function on an open interval can be approximated uniformly on each compact subset, but uniform approximation on the entire open interval may fail; for example, no polynomial stays uniformly close to 1/x on (0, 1), since polynomials are bounded there.
