What does it mean when an integral is evaluated over a single limit?

In summary, the discussion revolved around the use of Fourier series to represent periodic functions, with the main criteria being that the function must be periodic and bounded. The concept of integration over the period was also mentioned, with the DC component being the integral over the period. Different types of Fourier series representations were discussed, including one for functions with a jump discontinuity. The conversation also touched on the application of Fourier series on an interval and how the starting frequency is determined by the function's period. The trigonometric form of the Fourier transform was compared to the exponential form, with the latter being more suitable for representing non-periodic functions. Finally, the integration of an even function was shown as an example of how the Fourier series integrals can be evaluated.
  • #1
PainterGuy
940
70
TL;DR Summary
Trying to understand the requirements for a function to be represented by a Fourier series
Hi,

A function which could be represented using Fourier series should be periodic and bounded. I'd say that the function should also integrate to zero over its period ignoring the DC component.

For many functions area from -π to 0 cancels out the area from 0 to π. For example, Fourier series representation #1 below approximates such a function.

For some functions area from -π to -π/2 cancels out area from -π/2 to 0, and then area from 0 to π/2 gets canceled by the area from π/2 to π. For example, Fourier series representation #2 below approximates such a function.

I'm not sure if the function needs to integrate to zero following these two patterns, or it should just amount to zero without actually following any pattern of area cancellation. Could you please let me know if I have it right?

Could you represent a function something like this using Fourier series? I'm just trying to get general concept about Fourier series right. Thank you for your help.
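As a rough numerical check of this area-cancellation idea, here is a sketch (my own, not from the attachments) that estimates the trigonometric Fourier coefficients of an odd square wave on ##[-\pi, \pi]##, whose area from ##-\pi## to ##0## cancels the area from ##0## to ##\pi##:

```python
import numpy as np

# Estimate Fourier coefficients of f on [-pi, pi] by a Riemann sum.
# The test function is an odd square wave: its area on (-pi, 0)
# cancels its area on (0, pi), so the DC term is zero.
t = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
dt = t[1] - t[0]
f = np.sign(np.sin(t))  # +1 on (0, pi), -1 on (-pi, 0)

def a_n(n):
    return np.sum(f * np.cos(n * t)) * dt / np.pi

def b_n(n):
    return np.sum(f * np.sin(n * t)) * dt / np.pi

# Known series: b_n = 4/(n*pi) for odd n; all a_n and even b_n vanish
print(a_n(0) / 2)          # DC component, ~0
print(b_n(1), 4 / np.pi)   # both ~1.2732
```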

Fourier series representations #1


Fourier series representations #2
 

Attachments

  • fs_121.jpg
  • fs_123.jpg
  • fs_func.jpg
  • #2
PainterGuy said:
I'd say that the function should also integrate to zero over its period ignoring the DC component.
That's saying the same thing twice -- the DC component IS (proportional to) the integral over the period. And a Fourier series starts with the coefficient for ##\cos(0)## -- a constant.

[edit] and, to answer your question: yes, your slightly pathological function can also be represented by a Fourier series
 
  • Like
Likes PainterGuy
  • #3
Thank you!

But don't there exist periodic functions which don't integrate to zero over their period? Thanks a lot for your help.
 
  • #4
You can add a constant to any periodic function and it remains periodic. And the only term that changes in the Fourier series is the ##a_0## term.
Your question seems very strange to me :rolleyes: .
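A quick numerical illustration of this point (my own sketch; the test function below is an arbitrary choice):

```python
import numpy as np

# Adding a constant c to a periodic function changes only the a0 (DC) term
# of its Fourier series; every a_n, b_n for n >= 1 is unchanged, because
# cos(n t) and sin(n t) integrate to zero over a full period.
t = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dt = t[1] - t[0]

def a_n(f_vals, n):
    return np.sum(f_vals * np.cos(n * t)) * dt / np.pi

f = np.sin(3 * t) + 0.5 * np.cos(2 * t)
g = f + 7.0  # the same function shifted by a constant

print(a_n(f, 0), a_n(g, 0))  # 0 vs 14: only the DC term moved
print(a_n(f, 2), a_n(g, 2))  # both 0.5: unchanged
```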
 
  • Like
Likes PainterGuy
  • #5
Thank you!

Yes, you are right it was a silly one! :)
 
  • #6
The original function doesn't need to be periodic. The more general application is that you want to represent the function on an interval [a, b], and you don't care about the function outside that interval.

The range of functions which can be approximated by a Fourier series on an interval is pretty broad. You can for instance have a jump discontinuity as in a step function. The series doesn't converge to f(x) at the point of discontinuity (for instance if you have a jump from 0 to 1 at x0, the series may converge to 1/2).
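The midpoint behaviour is easy to see numerically. A sketch (assuming the standard series for a unit step on ##[-\pi, \pi]##, which is ##1/2 + (2/\pi)\sum_{k \text{ odd}} \sin(kt)/k##):

```python
import numpy as np

# Partial Fourier sums of the unit step (0 for t < 0, 1 for t > 0) on [-pi, pi].
# At the jump t = 0 every partial sum equals exactly 1/2, the midpoint.
def partial_sum(t, n_terms):
    s = 0.5
    for k in range(1, 2 * n_terms, 2):          # odd harmonics only
        s = s + (2 / np.pi) * np.sin(k * t) / k
    return s

print(partial_sum(0.0, 50))    # 0.5, the midpoint of the jump
print(partial_sum(1.0, 200))   # close to 1 away from the jump
```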

I don't recall the precise conditions for pointwise convergence of a Fourier series, but they're mentioned in this thread:
https://math.stackexchange.com/ques...nction-can-be-represented-as-a-fourier-series "The deeper fact is Carleson's theorem, which was one of the most difficult achievements in 20th century analysis, and tells us about the precise conditions for pointwise (actually, "pointwise almost everywhere") convergence of Fourier series"
 
  • Like
Likes PainterGuy and DrClaude
  • #7
Hi,

I had a few questions related to this discussion, so I thought it'd be better to ask them here.

Question 1:

View attachment 246301


A sinusoid, e.g. a cosine, is usually given as
##\cos(\theta) = \cos(\omega t)##, where ##\omega = 2\pi f##.

In a Fourier series a sinusoid is usually written as ##\cos(nt)##. If ##n = 1## then
##\cos(1 \cdot t) = \cos(\omega t) \Rightarrow \omega = 1 \Rightarrow 2\pi f = 1 \Rightarrow f = \frac{1}{2\pi} \approx 0.15915##
It would mean that for the Fourier series of any function the starting frequency would be 0.15915 Hz, but why isn't the starting frequency 1 Hz?

Question 2:
I prefer trigonometric form of Fourier transform over the exponential form because it's easier to think of it as an extension of trigonometric form Fourier series. Given below are two excerpts about trigonometric form of Fourier transform.

Excerpt #1:
View attachment 246367


Excerpt #2:
View attachment 246368

Source: https://en.wikipedia.org/wiki/Fourier_transform#Sine_and_cosine_transforms

The Fourier transform of a unit pulse given below was found using the exponential form of the Fourier transform. Is it possible to find it using the trigonometric form? I'm sorry I didn't try it myself, but it looks like it's not possible, even though, considering the sufficient conditions given in excerpt #1 above, f(t) is a piecewise continuous function.

View attachment 246369


Thank you for your help and time.
 
  • #8
Question 1: That's not true. The series you give will reproduce a function which has a period of ##2\pi## seconds or which is defined on an interval of width ##2\pi## seconds, such as ##[0,2\pi]## or ##[-\pi,\pi]##

In general a function with period ##T## or defined on an interval of width ##T## will be represented by a series with fundamental frequency ##f = 1/T##, i.e. sums of ##\cos(2\pi n t/T)## and ##\sin(2\pi nt/T)##.
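A numerical sketch of that relationship (the sampling values below are arbitrary demo choices):

```python
import numpy as np

# A signal with period T = 2 s has fundamental frequency f = 1/T = 0.5 Hz;
# all of its Fourier components sit at integer multiples of 0.5 Hz.
T = 2.0
fs = 100.0                             # sampling rate in Hz (demo choice)
t = np.arange(int(10 * T * fs)) / fs   # ten full periods
x = np.cos(2 * np.pi * t / T) + 0.3 * np.cos(2 * np.pi * 3 * t / T)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # 0.5 Hz, the fundamental 1/T
```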

PainterGuy said:
It would mean that for Fourier series of any function the starting frequency would be 0.15915 Hz but why isn't the starting frequency "1 Hz"?

It would mean that the Fourier series of a function with period ##2 \pi## seconds was composed of sinusoids which are periodic over ##2\pi## seconds. Not 1 Hz because 1 Hz does not repeat itself in ##2 \pi## seconds.

In Question 2 you're looking at the continuous Fourier Transform, which is generalized from the series version. The series version as I said is appropriate for periodic functions or functions which are defined only over a finite interval. The continuous transform is defined for a different class of functions which are given in your excerpt.

They're not quite the same though obviously there's a connection.

Anyway in answer to your question, your ##f(t)## is an even function. That causes the integral for ##b(\lambda)## to be 0 for all ##\lambda##. The integral for ##a(\lambda)## just becomes an integral of ##\cos(2\pi\lambda t)## from ##-b## to ##+b## which gives the expression you're looking for.
 
  • Like
Likes PainterGuy
  • #9
Here are the integrals.
$$\int_{-\infty}^\infty f(t) \cos(2\pi \lambda t) dt = \int_{-b}^b \cos(2\pi \lambda t) dt \\
= \frac {1}{2\pi\lambda} \left [\sin(2\pi\lambda b) - \sin(-2\pi\lambda b) \right ] \\
= \frac {1}{2\pi\lambda} \left [\sin(2\pi\lambda b) + \sin(2\pi\lambda b) \right ] \\
= \frac {2 \sin(2 \pi \lambda b)} {2 \pi \lambda} \\
= \frac {2 \sin(\omega b)} {\omega} \text{ where } \omega = 2\pi\lambda$$

$$\int_{-\infty}^\infty f(t) \sin(2\pi \lambda t) dt = \int_{-b}^b \sin(2\pi \lambda t) dt \\
= \frac {1}{2\pi\lambda} \left [-\cos(2\pi\lambda b) + \cos(-2\pi\lambda b) \right ] \\
= \frac {1}{2\pi\lambda} \left [-\cos(2\pi\lambda b) + \cos(2\pi\lambda b) \right ] = 0 $$
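A quick numerical check of these two integrals, for arbitrary demo values of ##b## and ##\lambda##:

```python
import numpy as np

# Verify: for the unit pulse f(t) = 1 on [-b, b],
#   integral of cos(2*pi*lam*t) = 2*sin(2*pi*lam*b) / (2*pi*lam)
#   integral of sin(2*pi*lam*t) = 0   (odd integrand, symmetric interval)
b, lam = 1.5, 0.7
t = np.linspace(-b, b, 200001)
dt = t[1] - t[0]

cos_int = np.sum(np.cos(2 * np.pi * lam * t)) * dt
sin_int = np.sum(np.sin(2 * np.pi * lam * t)) * dt
closed_form = 2 * np.sin(2 * np.pi * lam * b) / (2 * np.pi * lam)
print(cos_int, closed_form, sin_int)
```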
 
Last edited:
  • Like
Likes PainterGuy
  • #10
This is a far more difficult question to answer completely than it may appear. It can be answered incompletely, including the following.
First suppose ##\int_0^{2\pi} |f(x)|^2\,dx < \infty.## (The absolute value sign is needed since ##f(x)## need not be a real number; it is a complex number, so its square need not be positive or even real.) That is enough to guarantee it has a Fourier series in which the sum of the squares of the absolute values of the coefficients is the same as the foregoing integral (up to a normalization factor of ##2\pi##). But does the series converge to ##f(x)##? In one sense it does: $$ \lim_{n\to\infty} \int_0^{2\pi} \left| f(x) - \sum_{k=-n}^n c_k e^{ikx}\right|^2 dx = 0. $$ But that falls short of saying that for every number ##x## you have $$\lim_{n\to\infty} \sum_{k=-n}^n c_k e^{ikx} = f(x) \qquad \text{(?)}$$ It was not until the 1960s that it was shown that for almost every value of ##x## that is true, and "almost every" means the measure of the set of exceptions is zero. That means that no matter how tiny you make a positive number ##\varepsilon,## the set of exceptions fits within a union of open intervals the sum of whose lengths is no more than ##\varepsilon.##
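The coefficient identity mentioned above (Parseval's identity, written here in the convention where the integral is divided by ##2\pi##) can be checked numerically; the test function below is an arbitrary choice:

```python
import numpy as np

# Check Parseval: (1/(2*pi)) * integral of |f|^2 over [0, 2*pi]
# equals the sum of |c_k|^2 over the exponential Fourier coefficients.
N = 4096
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = 3 * np.cos(x) + 4 * np.sin(2 * x)

c = np.fft.fft(f) / N            # coefficients c_k (negative k aliased to the top)
lhs = np.mean(np.abs(f) ** 2)    # (1/(2*pi)) * integral of |f|^2
rhs = np.sum(np.abs(c) ** 2)
print(lhs, rhs)  # both 12.5 = 3**2/2 + 4**2/2
```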
 
  • Like
Likes PainterGuy
  • #11
RPinPA said:
Question 1: That's not true. The series you give will reproduce a function which has a period of ##2\pi## seconds or which is defined on an interval of width ##2\pi## seconds, such as ##[0,2\pi]## or ##[-\pi,\pi]##

In general a function with period ##T## or defined on an interval of width ##T## will be represented by a series with fundamental frequency ##f = 1/T##, i.e. sums of ##\cos(2\pi n t/T)## and ##\sin(2\pi nt/T)##.

I believe that I understand it now. Actually "t" or "x" along the x-axis doesn't just represent time. Any periodic phenomenon could be stated in terms of degrees where 360° stands for one complete cycle.

View attachment 246526


So, "x" is implicitly given in terms of "2πt".
When t=0 seconds: x=0°.
When t=0.5 seconds: x=180°.
When t=1 seconds: x=360°.
When t=2 seconds: x=720°.

I'm sorry if I'm still having it wrong.

Note to self: If you are measuring two periodic phenomena along the same axis, the slower phenomenon takes 360° to complete one period, and the faster phenomenon, with double the frequency of the slower one, might apparently 'seem' to take just 180°. But the faster phenomenon also takes 360° to complete its own period; the "180°" just means that the faster one requires only 180° of the slower phenomenon's cycle to complete its period.

RPinPA said:
It would mean that the Fourier series of a function with period ##2 \pi## seconds was composed of sinusoids which are periodic over ##2\pi## seconds.

I'm sorry to split hairs, but wouldn't only fundamental frequency be periodic over 2π seconds and the harmonics would be periodic over multiples of 2π seconds?

RPinPA said:
Here are the integrals.
$$\int_{-\infty}^\infty f(t) \cos(2\pi \lambda t) dt = \int_{-b}^b \cos(2\pi \lambda t) dt \\
= \frac {1}{2\pi\lambda} \left [\sin(2\pi\lambda b) - \sin(-2\pi\lambda b) \right ] \\
= \frac {1}{2\pi\lambda} \left [\sin(2\pi\lambda b) + \sin(2\pi\lambda b) \right ] \\
= \frac {2 \sin(2 \pi \lambda b)} {2 \pi \lambda} \\
= \frac {2 \sin(\omega b)} {\omega} \text{ where } \omega = 2\pi\lambda$$

View attachment 246532


In your calculation, you used the formulae from Excerpt #2, but you didn't use the factor of 2 in front of the integral.

I did the calculation using the formulae from Excerpt #1 and didn't use the factor of 1/π. I reached the same solution as you, and apparently this is the correct solution. Then why would Excerpts #1 and #2 have those factors?

View attachment 246533

The most used form of Fourier transform is exponential form given as:

View attachment 246534


By comparing trigonometric form and exponential form, we can see that A(α)=F(α).

Both A(α) and F(α) give us the magnitudes of the involved frequencies, which implicitly means that if you add all the frequencies with the given magnitudes, you get f(x) back.

On the other hand, the trigonometric form does convey this information by explicitly stating how to get f(x) back, as shown below.

View attachment 246535

I was trying to find out why the exponential form of the Fourier transform is more popular than its trigonometric equivalent. I was able to find an answer, https://math.stackexchange.com/ques...er-series-versus-trigonometric-fourier-series, which gives the reasons in terms of the exponential and trigonometric forms of the Fourier series. It has more to do with the cleanliness, compactness, and easy manipulation of the exponential form compared to the trigonometric one, and it wouldn't be wrong to say that both are equally applicable mathematically.

Thanks a lot for your help and time!
 
  • #12
wouldn't only fundamental frequency be periodic over 2π seconds and the harmonics would be periodic over multiples of ##2\pi## seconds?

No, that's backwards. The harmonics would be over submultiples, i.e. divisors, of ##2\pi##, i.e. ##2\pi/2,\, 2\pi/3,\, 2\pi/4,\, 2\pi/5,\,\ldots## They have higher frequencies, hence shorter periods. Thus all of them would have ##2\pi## as a period, but not necessarily as a shortest period.
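This is quick to confirm numerically:

```python
import numpy as np

# Every harmonic cos(n*t) has shortest period 2*pi/n, yet 2*pi is still
# a period of all of them: cos(n*(t + 2*pi)) = cos(n*t + 2*pi*n) = cos(n*t).
t = np.linspace(0, 2 * np.pi, 1000)
for n in (1, 2, 3, 5):
    print(n, np.allclose(np.cos(n * (t + 2 * np.pi)), np.cos(n * t)))
```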
 
  • Like
Likes PainterGuy
  • #13
Hi

Check the Dirichlet conditions.
Any function that satisfies them has a Fourier series representation.
 
  • #14
Hi,

Michael Hardy said:
It was not until the 1960s that it was shown that for almost every value of ##x## that is true, and "almost every" means the measure of the set of exceptions is zero. That means that no matter how tiny you make a positive number ##\varepsilon,## the set of exceptions fits within a union of open intervals the sum of whose lengths is no more than ##\varepsilon.##

Thank you. This was also mentioned in post #6; see the quote below.

RPinPA said:
"The deeper fact is Carleson's theorem, which was one of the most difficult achievements in 20th century analysis, and tells us about the precise conditions for pointwise (actually, "pointwise almost everywhere") convergence of Fourier series"

Wikipedia article: https://en.wikipedia.org/wiki/Carleson's_theorem
AVBs2Systems said:
Hi

Check the dirichlet conditions
Any function that satisfies these, has a Fourier series rep.

Thanks. I agree with you.

View attachment 246589
 
  • #15
Hi again,

I'm sorry that the questions below aren't that clear, but I don't really know how to put them any other way.

The Fourier transform (or series) can be represented in two forms, exponential or trigonometric, as shown below.

View attachment 1563522424680.png


The exponential form uses all the frequencies from -∞ to +∞; in other words, it involves negative frequencies, which many people, including me, find quite weird. But it doesn't make sense to ask again what negative frequencies are when that has already been asked many, many times in many, many different places. On the other hand, the trigonometric form of the Fourier transform, which isn't used very often, does not use negative frequencies.

Well, one question does come to mind: which of the two, negative or positive frequencies, are more real 'physically and practically'? I'm not even sure it's a legitimate question. The answer could be that the exponential form is mathematically superior because it's symmetric around the origin.

This is another related question. Suppose that the frequency spectrum of a modulating signal is found using the trigonometric form of the Fourier transform, and, say, this spectrum extends from 0 Hz to 500 Hz. Now let's suppose that the frequency of the carrier wave is 2500 Hz. As there are no negative frequencies in the modulating signal, only the upper side band (USB) should be generated and there should be no lower side band (LSB). But I have never seen any picture of the spectrum of a modulated signal, such as AM, where the LSB is missing; mostly the modulated signal appears like the attached picture (lsb_usb.jpg). Why isn't it possible to avoid the LSB when using only positive frequencies, or is it just me?

Thank you for the help!
 

Attachments

  • lsb_usb.jpg
  • #16
PainterGuy said:
As there are no negative frequencies in the modulating signal therefore only upper side band (USB) should be generated

That's incorrect. Having only one frequency at baseband doesn't mean you'll have only one frequency after mixing up. Let's analyze a simple amplitude modulated carrier wave.

Let's say the carrier frequency is ##\Omega## (I'm going to use angular frequencies such as ##\Omega = 2\pi F## to avoid writing lots of ##2 \pi's##) so the carrier wave is ##\sin(\Omega t)##.

Now we modulate it at frequency ##\omega## so our signal is ##s(t) = \sin(\omega t) \sin(\Omega t)##

Let's derive a little trig identity that we'll need. Consider ##\cos(x + y) = \cos(x) \cos(y) - \sin(x) \sin(y)## and ##\cos(x - y) = \cos(x) \cos(y) + \sin(x) \sin(y)##. So ##\cos(x - y) - \cos(x + y) = 2\sin(x) \sin(y)##

Then ##s(t) = (1/2) \left [ \cos(\Omega - \omega)t - \cos(\Omega + \omega)t \right ]##

Mixing (multiplying by a sinusoid) produces both sum and difference frequencies, and removing one of those frequencies requires an extra filtering step.
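A numerical sketch of that identity, using the frequencies from the earlier AM example (2500 Hz carrier, 500 Hz modulation; the sample rate is an arbitrary demo choice):

```python
import numpy as np

# Multiplying sin(omega*t) by sin(Omega*t) puts energy at Omega - omega (LSB)
# and Omega + omega (USB): both sidebands appear, never just one.
fs = 10000.0                     # sample rate, Hz
t = np.arange(10000) / fs        # one second of samples
F, f_mod = 2500.0, 500.0         # carrier and modulating frequencies, Hz
s = np.sin(2 * np.pi * f_mod * t) * np.sin(2 * np.pi * F * t)

spec = np.abs(np.fft.rfft(s))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
top2 = sorted(freqs[np.argsort(spec)[-2:]])
print(top2)  # the two dominant lines: ~2000 Hz (LSB) and ~3000 Hz (USB)
```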
 
  • Like
Likes PainterGuy
  • #17
Thank you for correcting me.

For some reason I was wrongly under the impression that lower side band is a result of using negative frequencies of the exponential form.

I understand that mathematically it's more practical to use the exponential form of the Fourier transform. But at the same time one can do every calculation using the trigonometric form, which only uses the 'more sensible' positive frequencies, and end up with the same result as with the exponential form. So why is all this confusion about the 'physical significance' of negative frequencies so important?

A complex sinusoid, e^(iωt) or e^(iθ), is mostly thought of as a counterclockwise rotating wheel. If that wheel can theoretically rotate one way, then why not the other way, in the clockwise direction? In other words, it's like the flipping of a coin, where the probability of each side is 1/2, and 1/2 + 1/2 = 1. We know that the frequency spectrum found using the exponential form is symmetric around the y-axis, and the magnitudes of corresponding positive and negative frequencies are added to get the full magnitude. This adding up of magnitudes is much like adding the two "1/2" probabilities of a coin to get "1". In short, negative frequencies are a mathematical construct, or abstraction, to provide symmetry.

I have also read about time going backwards in the case of negative frequencies and forward in the case of positive frequencies. In the expression below, the integral from -∞ to 0 involves negative frequencies because you need to sum up sinusoids made up of negative frequencies. Why don't we just say that "ω" is a kind of vector, where +ω represents the counterclockwise direction and -ω the clockwise direction? I understand that strictly speaking calling "ω" a vector is a lame statement! (Edit: "ω" could be called a signed scalar, just like +θ is considered to be measured counterclockwise from the positive x-axis and -θ clockwise; θ=ωt, so it could be said that -θ={-ω}t.) But saying that time goes backwards is also a little bit of science fiction. Could you please let me know your opinion about this negative-frequency confusion, or do you think you could make it easier for me to understand? Thanks.

View attachment 1563603819266.png
Please have a look at the attachment, fourier_expo1.

I'm not sure how the author gets to step 16. I tried it but "-" sign stands in my way as shown below.

View attachment 1563594900040.png


Thank you for your help and time!
 

Attachments

  • fourier_expo1.jpg
Last edited:
  • #18
PainterGuy said:
This adding up of magnitudes is much like adding "1/2" probability of a coin to get "1". In short, negative frequencies are a mathematical construct or abstraction to provide symmetry.

I can see why you prefer the trigonometric form and are a little distrustful of the complex form. These are representations of real signals. As such, they only have positive frequencies, and their value had better turn out to be real.

The exponential form is much easier to deal with mathematically, but it can potentially lead to complex solutions. The reason there are negative frequencies, in my view, is simply because when we are constructing a real signal, every complex number must also be paired with its complex conjugate. Both the number and its conjugate must be present and added.

The negative frequencies add zero information when they arise from the transformation of a real signal. They don't have a physical meaning.

Another way I think of it is that there are two pieces of independent information needed to completely reconstruct a real signal. In the complex transform, they are contained in the real and imaginary parts of the transform (at positive frequencies). In the trigonometric version, they are the sine and cosine transforms.

PainterGuy said:
Why don't we just say that "ω" is a kind of vector where +ω represents counterclockwise direction and -ω shows clockwise direction?

I'm sorry but I don't really follow what point you're making in this paragraph.

A real-valued signal consisting of a modulated carrier wave can be thought of as having an instantaneous magnitude and phase relative to the carrier, that is as ##A(t) \sin [\Omega t + \phi(t)]##. Again, two pieces of information needed to describe it. In actual receiver logic I've often seen that what is measured is something like a sine and cosine transform which are then treated as the real and imaginary parts of the corresponding complex number.

When you transform to baseband, subtracting off the carrier frequency, you have an actual complex-valued signal which doesn't have conjugate symmetry. The negative frequencies have real physical meaning. But nothing exotic: what they really mean is a signal whose instantaneous frequency is less than the carrier. When you go the other way to put a complex modulation on a transmitted carrier, you're using real-valued amplitudes and phases. There's nothing actually complex or at "negative frequency" here.

I guess what I'm saying about negative frequencies is don't worry about it. Either think of them as a mathematical artifact from taking a complex transform of a real-valued thing, or think of them as relative to the carrier.
 
  • Like
Likes PainterGuy
  • #19
PainterGuy said:
Please have a look on the attachment, fourier_expo1.

I'm not sure how the author gets to step 16. I tried it but "-" sign stands in my way as shown below.

View attachment 246883

Thank you for your help and time!

The author says the integrand is an even function of ##\alpha##, so ##\int_{-\infty}^0 d\alpha## should be the same as ##\int_{0}^{\infty} d\alpha##. You have a sign error in your third line.

Let's define ##g(\alpha) = \int_{-\infty}^{\infty} f(t) \cos \alpha(t - x) dt##. Then ##g(-\alpha) = g(\alpha)## because ##\cos## is even, i.e., ##\cos [-\alpha(t - x)] = \cos \alpha(t - x)##.

So ##\int_{0}^{\infty} g(\alpha) d\alpha## = ##\int_{0}^{\infty} g(-\alpha) d\alpha## = -##\int_{0}^{-\infty} g(\beta) d\beta## where ##\beta = -\alpha, d\beta = -d\alpha##

Thus ##\int_{0}^{\infty} g(\alpha) d\alpha## = ##\int_{-\infty}^0 g(\beta) d\beta##

Intuitively, if you have an even function, so the graph to the left of the y-axis is the mirror image of the graph to the right of the y-axis, then the area under the left half should be the same as the area under the right half.
 
  • Like
Likes PainterGuy
  • #20
Thanks a lot for your comments about the negative frequencies.

RPinPA said:
The author says the integrand is an even function of ##\alpha##, so ##\int_{-\infty}^0 d\alpha## should be the same as ##\int_{0}^{\infty} d\alpha##. You have a sign error in your third line.

I did read that statement about the integrand being an even function. In the case of an even function, the Fourier sine terms are zero, and in the case of an odd function the cosine terms are zero.

I understand what you said about integral of an even function in general but in this specific case I'm confused.

I believe the author is saying, like you, that the part in yellow results in an even function of α.

View attachment 1563678743426.png


RPinPA said:
Let's define ##g(\alpha) = \int_{-\infty}^{\infty} f(t) \cos \alpha(t - x) dt##. Then ##g(-\alpha) = g(\alpha)## because ##\cos## is even, i.e., ##\cos [-\alpha(t - x)] = \cos \alpha(t - x)##.

I agree that cosα(t-x) is an even function, but the expression also involves f(t), and I don't think we really know whether it's even, odd, or neither. Also, g(α) is an integral expression where the integration variable is time, not α. My confusion stems from this point.
View attachment 1563697857080.png


The product of two even functions is an even function. The product of two odd functions is an even function. The product of an even function and an odd function is an odd function.

For example, in this thread, https://www.physicsforums.com/threads/ambiguous-results-for-two-fourier-transform-techniques.974660/ , Fourier transform of f(t)=a.e^(-bt).u(t) was found to be a/(b+jω) or a/(b+jα); u(t) is a step function. The plot shown below is for a=b=1.

View attachment 1563680212028.png

Source: http://pages.jh.edu/~signals/spectra/spectra.html

The magnitude of 1/(1+jα) is an even function, but the phase is odd. This function, 1/(1+jα), is the same as g(α), or equivalent to the expression in the yellow highlight.

So is g(α) or g(ω) an even function in this case?

Where am I going wrong? Could you please guide me?
 

Attachments

  • 1563697828118.png
Last edited:
  • #21
PainterGuy said:
I agree that cosα(t-x) is an even function but the expression also involves f(t)

Which is independent of ##\alpha##, and therefore is unaffected when you change ##\alpha## to ##-\alpha##. When you make that change, the equation for ##g(\alpha)## is completely unchanged and therefore results in exactly the same function. The question of whether ##g(\alpha)## is even is in terms of that change, relative to an integral over ##\alpha##. The only question is what happens to it when you change ##\alpha## to ##-\alpha##.

PainterGuy said:
Also, g(α) is an integral expression where integration variable is time and not α. My confusion stems from this point.

##t## is a dummy variable in ##g(\alpha)##. After you perform the integration, there is no ##t## there, which is why you can write ##g## as a function of ##\alpha## with no dependence on ##t##. There is no ##t##. You could call it ##x##. You could call it ##s##. You could call it anything you want, but whatever you call it, it no longer appears after you do the integration.
 
Last edited:
  • Like
Likes PainterGuy
  • #22
Here is a simpler example of what's happening here.

Define ##g(\alpha) = \int_1^2 (\alpha t)^2 dt##. That may look like a function of ##t## to you, but it's not. ##t## is a dummy variable which does not exist outside the integral sign. We can explicitly calculate the value of ##g(\alpha)## by doing the integral.
##g(\alpha) = \alpha^2 \int_1^2 t^2 dt = \alpha^2 \left( \frac{2^3 - 1^3}{3} \right) = \frac{7\alpha^2}{3}## and now you can see explicitly that (1) ##g## does not depend on ##t##, (2) ##g## is an even function of ##\alpha##, and (3) it doesn't make sense to ask whether ##g## is an even function of ##t## or an odd function of ##t## because it is not a function of ##t## at all.

That's what's happening in your expression. The integral of ##g(\alpha)## when ##\alpha## goes from ##0## to ##\infty## is exactly the same as the integral of ##g(\alpha)## when ##\alpha## goes from ##-\infty## to ##0##, because ##g(\alpha)## is a function of ##\alpha## which is unchanged when ##\alpha## is changed to ##-\alpha##. No matter what ##f(t)## is, ##g(\alpha)## does not contain a ##t##.

But even if it did contain other variables, the only thing that matters in the question of whether ##g## is even with respect to ##\alpha## is what happens when you change ##\alpha## to ##-\alpha##.
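The same point can be checked numerically; the ##f## and ##x## below are arbitrary demo choices:

```python
import numpy as np

# g(alpha) = integral of f(t) * cos(alpha*(t - x)) dt contains no t after
# integration, and g(-alpha) = g(alpha) no matter what f is, because only
# the even cosine factor depends on alpha.
def g(alpha, x=0.3):
    t = np.linspace(-5.0, 5.0, 100001)   # f effectively vanishes outside [-5, 5]
    dt = t[1] - t[0]
    f = np.exp(-t ** 2)                  # arbitrary test function
    return np.sum(f * np.cos(alpha * (t - x))) * dt

print(g(1.7), g(-1.7))  # equal: g is an even function of alpha
```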
 
  • Like
Likes PainterGuy
  • #23
Thanks a lot for your help! I really appreciate it.

Could you please help me with a question at the end of next posting?

The following is a note to myself, or for someone else like me who stumbles upon this thread.

The author said, "We note that (16) follows from the fact that the integrand is an even function of a. In
(17) we have simply added zero to the integrand; ..., because the integrand is an odd function of a.
"

Given below is a precise and straightforward answer.

View attachment 1563793727932.png


We can probe it further to understand it better. Let's discuss a particular case, f(t), which resembles the original expression being discussed. We are also going to look at the Riemann sum.

View attachment 1563794062765.png

View attachment 1563794355988.png


Now let's evaluate the same expression analytically.

View attachment 1563794522600.png


Now let's focus on this part where the author said, "In (17) we have simply added zero to the integrand; ..., because the integrand is an odd function of a."

In the following calculation everything seems to cancel out, but those differing "+" and "-" signs won't let the expression cancel completely. Both signs should have been either "+" or "-". I wasn't able to track down the error. The integral was evaluated using Symbolab.

I evaluated the same expression using TI-89 and the expression does cancel out to give "0" as said by the author. The TI-89 calculation is also shown.

View attachment 1563795049383.png


This post continues into the next posting.
 
Last edited:
  • #24
View attachment 1563795207575.png
Question:
By the way, let's say ##F(x) = \int f(x) dx##. I understand that a definite integral is always evaluated between two limits, like this:
View attachment 1563795530561.png


Does it mean anything when an integral is evaluated at only a single limit, like F(b)? I understand that it'd give us a numeric value, but does this numeric value mean anything?
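(A small sketch that illustrates the issue; the function and constants below are arbitrary. An antiderivative ##F## is only defined up to a constant, so a single value ##F(b)## has no intrinsic meaning by itself; only differences ##F(b) - F(a)## are well defined. When a single value ##F(b)## is given a meaning, the constant has implicitly been fixed, e.g. by choosing a lower limit ##c## with ##F(c) = 0##, so that ##F(b) = \int_c^b f(x)\,dx##.)

```python
# For f(x) = 2x, every antiderivative is F(x) = x**2 + C.
# F(b) alone changes with the arbitrary constant C, but F(b) - F(a) does not.
def F(x, C=0.0):
    return x ** 2 + C

a, b = 1.0, 3.0
for C in (0.0, 5.0, -2.0):
    print(F(b, C), F(b, C) - F(a, C))  # F(b) varies; the difference is always 8.0
```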

Thank you for your help and time!
 

FAQ: What does it mean when an integral is evaluated over a single limit?

1. What is a Fourier series representation?

A Fourier series representation is a mathematical tool used to express a periodic function as a sum of trigonometric functions. It is named after French mathematician Joseph Fourier and is commonly used in signal processing, image processing, and other fields of science and engineering.

2. How is a Fourier series calculated?

A Fourier series is calculated from its Fourier coefficients, which are determined by multiplying the function by the corresponding sine or cosine term, integrating over one period, and normalizing by the period. These coefficients are then used to construct the Fourier series, which is an infinite sum of sine and cosine terms.

3. What is the difference between a Fourier series and a Fourier transform?

A Fourier series is used to represent a periodic function, while a Fourier transform is used to represent a non-periodic function. The Fourier transform also produces a continuous frequency spectrum, while the Fourier series produces a discrete frequency spectrum.

4. What are the applications of Fourier series representation?

Fourier series representation has many applications in science and engineering, including signal processing, image processing, data compression, and solving differential equations. It is also used in fields such as physics, chemistry, and economics to analyze periodic phenomena.

5. Are there any limitations to using Fourier series representation?

Yes, there are some limitations to using Fourier series representation. It can only be used for periodic functions, and the function must be integrable over one period. Additionally, the Fourier series may not converge for some functions, and the rate of convergence may be slow for others.
