Math Challenge - January 2020

In summary, this conversation covers various topics such as calculus, functional analysis, and small groups. Questions were asked about proving inequalities, finding homomorphisms, and solving equations. Topics discussed include smooth functions, square integrable derivatives, and the properties of binary operations. Solutions were provided for various problems, including calculating the limit of a sequence and finding the volume and surface area of a solid of revolution.
  • #36
I'm not an expert on functional analysis, but I will try to solve this, since it looks like good practice. Hopefully, I didn't miss the point of the exercise.
Eigenvectors of the given operators will be real functions, for which we need to solve the differential equations:
$$Af = \lambda f \qquad Bf = \lambda f \qquad Cf = \lambda f$$
Here ##\lambda## is an eigenvalue.
All of these are simple equations that can be solved by separation of variables. Take the first one, for example:
$$2x\frac{df}{dx} = \lambda f(x)$$
Separate:
$$2\frac{df}{f(x)} = \lambda\frac{dx}{x}$$
Integrating both sides and doing some short and simple algebra, we find the class of functions that are eigenvectors of the operator ##A##:
$$f(x) = Cx^{\frac{\lambda}{2}}$$
where ##C## is an arbitrary real number (the integration constant).
We similarly solve for eigenvectors of ##B## and ##C##:
$$Bf = \lambda f \Leftrightarrow x^2\frac{df}{dx} = \lambda f(x) \Rightarrow \frac{df}{f(x)} = \lambda \frac{dx}{x^2} \Rightarrow f(x) = Ce^{-\frac{\lambda}{x}}$$
$$Cf = \lambda f \Leftrightarrow -\frac{df}{dx} =\lambda f(x) \Rightarrow \frac{df}{f(x)} = -\lambda dx \Rightarrow f(x) = Ce^{-\lambda x}$$
where the last implication involves integrating both sides and rearranging terms appropriately, and the constants ##C## are arbitrary real numbers.

Now we inspect the multiplicative structure of the given space. It's obvious that it's not closed under composition, since the composition of any two of these operators would involve second derivatives, but it looks like it is closed under commutators. So we check the commutator algebra:
$$[A,B]f = 2x\frac{d}{dx}\left(x^2\frac{df}{dx}\right) - x^2\frac{d}{dx}\left(2x\frac{df}{dx}\right) = 2x^2\frac{df}{dx} = 2Bf$$
$$[B,C]f = x^2\frac{d}{dx}\left(-\frac{df}{dx}\right) +\frac{d}{dx}\left(x^2\frac{df}{dx}\right) = 2x\frac{df}{dx} = Af$$
$$[C,A]f = -\frac{d}{dx}\left(2x\frac{df}{dx}\right) -2x\frac{d}{dx}\left(-\frac{df}{dx}\right) = -2\frac{df}{dx} = 2Cf$$
So we have the following relations:
$$[A,B] = 2B \qquad [B,C] = A \qquad [C,A] = 2C$$
So the space spanned by these three operators is closed under the commutator operation, hence it forms a Lie algebra with the structure constants given above (the only nontrivial ones).
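As a quick sanity check, these commutation relations can also be verified symbolically. Here is a minimal sketch (using sympy, which is my own choice of tool and not part of the exercise):
Code:
# Symbolic check of [A,B] = 2B, [B,C] = A, [C,A] = 2C
# for A = 2x d/dx, B = x^2 d/dx, C = -d/dx acting on a generic f(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

A = lambda g: 2*x*sp.diff(g, x)
B = lambda g: x**2*sp.diff(g, x)
C = lambda g: -sp.diff(g, x)

comm = lambda P, Q, g: sp.expand(P(Q(g)) - Q(P(g)))

print(sp.simplify(comm(A, B, f) - 2*B(f)))  # 0, i.e. [A,B] = 2B
print(sp.simplify(comm(B, C, f) - A(f)))    # 0, i.e. [B,C] = A
print(sp.simplify(comm(C, A, f) - 2*C(f)))  # 0, i.e. [C,A] = 2C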

Was this the whole object of the exercise? If I'm missing something, please point it out so I can add it. Thanks!
 
Last edited:
  • #37
No, that was it, well done. It was a bit of a test of how scary members actually find "algebra" questions :wink:

It could only be added that
$$
\left( \,\operatorname{lin\, span}_\mathbb{R} \{\,A,B,C\,\}\, , \,\left[\,\cdot \, , \,\cdot \, \right]\,\right) \cong \mathfrak{sl}(2,\mathbb{R}) \cong \mathfrak{su}_\mathbb{R}(2,\mathbb{C})
$$
the simple real Lie algebra of type ##A_1##, the smallest semisimple one.
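One concrete way to see the identification is to match the operators with the standard ##2\times 2## basis of ##\mathfrak{sl}(2,\mathbb{R})##. The correspondence ##A\mapsto H##, ##B\mapsto E##, ##C\mapsto F## below is my own illustrative choice; the sketch just checks that it reproduces the same structure constants:
Code:
# Check that the standard sl(2,R) basis satisfies [A,B] = 2B, [B,C] = A, [C,A] = 2C
# under the identification A -> H, B -> E, C -> F.
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])

comm = lambda X, Y: X @ Y - Y @ X
print(np.allclose(comm(H, E), 2*E))   # [A,B] = 2B
print(np.allclose(comm(E, F), H))     # [B,C] = A
print(np.allclose(comm(F, H), 2*F))   # [C,A] = 2C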
 
  • Like
Likes Antarres
  • #38
We shall denote the integral on the left hand side by:
$$I = \int_{-\infty}^{\infty} f\left(x-\frac{b}{x}\right)dx$$
It is reasonable to assume that both integrals in the identity are convergent (that is what I assumed the identity to mean; if they're divergent, I didn't investigate whether they would diverge at the same rate). If that is the case, we can split the integral into two parts, around zero:
$$ I = I_1 + I_2 = \int_{-\infty}^0 f\left(x-\frac{b}{x}\right)dx + \int_0^{\infty} f\left(x-\frac{b}{x}\right)dx$$
We do this because, in order to perform a substitution, we need a domain on which the substitution is well defined. This step deserves a bit more discussion. If only one of the summands were divergent and the other weren't, the integral we started from wouldn't converge, so that can't be the case. If they're both convergent, then we have no problems. Let's look at the case where they both diverge, but the divergences cancel out. Since they both diverge, we could have:
$$I_1 = \lim_{a\rightarrow -\infty}\lim_{p\rightarrow 0^-} \int_a^p f\left(x-\frac{b}{x}\right)dx \qquad I_2 = \lim_{c \rightarrow 0^+}\lim_{d \rightarrow \infty} \int_c^d f\left(x-\frac{b}{x}\right)dx$$
where we used the notation ##0^\pm## to mean approaching the zero from positive or negative side.
The boundaries ##a##, ##p##, ##c##, ##d## can approach infinity and zero at different rates. But this is ambiguous, because we could adjust these rates so as to choose the value to which the resulting integral converges, and in that case the integral we started from wouldn't be properly defined. So we will assume that ##I_1## and ##I_2## are properly defined as well, that is, convergent (if the limits above are taken completely symmetrically, both towards zero and towards infinity, these integrals must be well defined, i.e. convergent, for only the even part of the function remains, whose integrability follows from the definition of the original integral). Now that we have that out of the way, we can perform substitutions on those integrals.

$$I_1 = \int_{-\infty}^{0} f\left(x-\frac{b}{x}\right)dx$$
We substitute ##t = -\frac{b}{x}##. This substitution leaves the argument of the function invariant. We look at the boundaries:
$$x \rightarrow -\infty \Rightarrow t \rightarrow 0^+ \qquad x\rightarrow 0^- \Rightarrow t \rightarrow +\infty$$
We find:
$$I_1 = \int_{0}^{\infty} f\left(x-\frac{b}{x}\right)\frac{b}{x^2}dx$$
Similarly, under the same substitution for ##I_2##, we get:
$$I_2 = \int_{-\infty}^{0} f\left(x-\frac{b}{x}\right)\frac{b}{x^2}dx$$
Now we will add and subtract ##I_2## (in its first form) to ##I_1##.
$$I_1 + I_2 - I_2 = \int_{0}^{\infty} f\left(x-\frac{b}{x}\right)\left(1 + \frac{b}{x^2}\right)dx - \int_0^\infty f\left(x-\frac{b}{x}\right)dx$$
Here in the first term, we will perform the substitution: ##t = x - \frac{b}{x} \rightarrow dt = \left(1 + \frac{b}{x^2}\right)dx##:
$$x \rightarrow 0^+ \Rightarrow t \rightarrow -\infty \qquad x \rightarrow \infty \Rightarrow t \rightarrow \infty$$
so we get:
$$I'_1 = \int_{0}^{\infty} f\left(x-\frac{b}{x}\right)\left(1 + \frac{b}{x^2}\right)dx = \int_{-\infty}^{\infty} f(x)dx$$
We're going to do the same for ##I_2##, adding and subtracting the first form of ##I_1## to it:
$$I_2 + I_1 - I_1 = \int_{-\infty}^0 f\left(x-\frac{b}{x}\right)\left(1 + \frac{b}{x^2}\right)dx - \int_{-\infty}^0 f\left(x-\frac{b}{x}\right)dx$$
Making the same substitution in the first term, we find:
$$t = x - \frac{b}{x} \rightarrow dt = \left(1 + \frac{b}{x^2}\right)dx$$
$$x \rightarrow -\infty \Rightarrow t \rightarrow -\infty \qquad x \rightarrow 0^- \Rightarrow t \rightarrow \infty$$
$$I'_2 = \int_{-\infty}^{0} f\left(x-\frac{b}{x}\right)\left(1 + \frac{b}{x^2}\right)dx = \int_{-\infty}^{\infty} f(x)dx = I'_1$$
Now we will combine what we got so far:
$$I = I_1 + I_2 = I'_1 - I_2 + I'_2 - I_1 = 2I'_1 - I$$
from which we find ##I = I'_1##, which is the desired identity.
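As a quick numerical sanity check of the identity, here is a sketch (the value ##b=2## and the Gaussian test function are my own choices, and ##b>0## is assumed):
Code:
# Numerical check of  ∫ f(x - b/x) dx = ∫ f(x) dx  over the real line, splitting at 0.
import numpy as np
from scipy.integrate import quad

b = 2.0
f = lambda x: np.exp(-x**2)
g = lambda x: f(x - b/x)

lhs = quad(g, -np.inf, 0)[0] + quad(g, 0, np.inf)[0]
rhs = quad(f, -np.inf, np.inf)[0]
print(lhs, rhs)   # both ≈ sqrt(pi) ≈ 1.7724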

P.S. If the argument at the beginning is faulty, please point it out. I got this idea by playing with some examples of this rule where I realized the key might be to divide the interval in order to perform the substitution correctly.
 
Last edited:
  • Like
Likes PeroK
  • #39
A bit harder to read than your previous solution, but it's correct. It would be better written as ##I= \ldots ## and ##2I=\ldots ##, writing the substitutions as ##\stackrel{t=x-b/x}{=}##. Well, it would have saved me some scrolling and the confusion between ##I_j## and ##I'_j\,.##

This formula is another translation invariance for integrals. I should make a list ...
 
  • #40
Oh nice, well I didn't know how to write that LaTeX code, so I wrote it in a longer way. Good to know, and thanks for the remark!
 
  • #41
Antarres said:
Oh nice, well I didn't know how to make that latex code, so I wrote it in a longer way. Good to know, and thanks for the remark!
I use
Code:
\begin{align*}
L &= R_1 \\
&= R_2 \\
\ldots
&= R_n
\end{align*}

and
Code:
\stackrel{(*)}{=}
for the add-ons in equations.
 
  • #42
Umm, I meant the part where you write the substitution above the equality. Align wouldn't compress the notation enough there; just the substitutions, if suppressed, would make it easier to read.
 
  • #43
Antarres said:
Umm, I meant the part where you write the substitution above the equality. Align wouldn't compress the notation enough there, just the substitutions, if suppressed, would make it easier to read.
With \stackrel{}{} (see my edit in the previous post).
 
  • Like
Likes Antarres
  • #44
fresh_42 said:
A bit harder to read than your previous solution, but it's correct. It would be better written as ##I= \ldots ## and ##2I=\ldots ## by writing the substitutions as ##\stackrel{t=x-b/x}{=}##. Well, it would have saved me some scrollings and the confusion of ##I_j## and ##I'_j\,.##
You mean:

Let $$I = \int_{-\infty}^{+\infty} f(x - \frac b x) dx $$
Using the substitution $$t = -\frac{b}{x}$$ we see that:
$$\int_{-\infty}^{0} f(x - \frac b x) dx = \int_0^{+\infty} f(x - \frac b x) \frac{b}{x^2} dx \ \ $$
Hence
$$I = \int_{0}^{+\infty} f(x - \frac b x) dx + \int_{-\infty}^{0} f(x - \frac b x) dx = \int_{0}^{+\infty} f(x - \frac b x)(1 + \frac{b}{x^2}) dx $$
And now using the substitution $$u = x - \frac b x$$ we see that:
$$I = \int_{-\infty}^{+\infty} f(x) dx $$
 
  • Like
Likes Antarres and fresh_42
  • #45
Re prob 15, this is an interpretation perhaps only a mathematician can love. I too have tried to stretch a finite can of paint over a rather large area, but in my house, if the paint I apply to a surface like our house gets thinner and thinner as the wall extends further and further, at some point my wife complains that she can see through it. I.e., "don't try this at home".
 
  • #46
I recommend not mixing before use, so that the paint is more and more opaque the closer to the bottom of the bucket you get.
 
  • #47
I think the only level of opacity that can be achieved uniformly overall is zero, no?
 
  • #48
Oh sure, if you're using the cheap, bounded-opacity paint that your wife complains about. (then again, if the average opacity of the paint was infinite to begin with, I guess mixing does no harm... apparently I didn't think this through)
 
  • #49
Assume first that ##f(0) = f(2\pi) = 0##. For a smooth, ##2\pi## periodic function the Fourier series converges and is: $$f(x) = \sum_{n=1}^{\infty} a_n \sin(nx)$$
With $$\int_0^{2\pi} [f(x)]^2dx = \pi \sum_{n=1}^{\infty} a_n^2$$
Moreover: $$f'(x) = \sum_{n=1}^{\infty} na_n \cos(nx)$$ and
$$\int_0^{2\pi} [f'(x)]^2dx = \pi \sum_{n=1}^{\infty} n^2 a_n^2 \ge \pi \sum_{n=1}^{\infty} a_n^2$$
with equality iff ##a_n = 0## for ##n \ne 1##.

In general, any ##2\pi## periodic function can be translated to be zero at the end points, with the same integral and derivative. The result holds in general, therefore, with equality iff:$$f(x) = a\sin(x + \phi)$$
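A quick numerical check of the resulting inequality ##\int_0^{2\pi} [f'(x)]^2dx \ge \int_0^{2\pi} [f(x)]^2dx## (a sketch; the sine-polynomial test function below is my own choice):
Code:
# Numerical check of  ∫ f'^2 dx ≥ ∫ f^2 dx  over one period for a test function
# with f(0) = f(2π) = 0 (a finite sine series).
import numpy as np

x = np.linspace(0.0, 2*np.pi, 200001)
f  = np.sin(x) + 0.3*np.sin(3*x) - 0.5*np.sin(2*x)
fp = np.cos(x) + 0.9*np.cos(3*x) - 1.0*np.cos(2*x)   # f'

lhs = np.trapz(fp**2, x)   # ≈ π(1 + 9·0.09 + 4·0.25) ≈ 8.83
rhs = np.trapz(f**2,  x)   # ≈ π(1 + 0.09 + 0.25)     ≈ 4.21
print(lhs >= rhs, lhs, rhs)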
 
  • #50
Yep. It can also be done without the assumption, using the full Fourier series ##\sum_{n=1}^{\infty}\left(a_n\cos(nx)+b_n\sin(nx)\right)## and Parseval (Nov. Challenge #10) for ##f## and ##f'##. The equality condition is then ##f(x)=a_1\cos(x)+b_1\sin(x)##.

Edit: FWIW. The inequality is called Wirtinger's inequality.
 
Last edited:
  • Like
Likes PeroK
  • #51
PeroK said:
In general, any ##2\pi## periodic function can be translated to be zero at the end points, with the same integral and derivative. The result holds in general, therefore, with equality iff:

Nitpick: Not sure if I agree with this. Translating ##f(x)\to f(x)-c## does change the integral ##\int_0^{2\pi} |f(x)|^2 dx##, and anyway ##\int_0^{2\pi}f(x) dx## wouldn't be zero anymore. Did you have something else in mind?

Edit: Nevermind, I assume you mean a translation ##f(x)\to f(x-c)##. Your solution is fine :)
 
  • #52
Infrared said:
Nitpick: Not sure if I agree with this. Translating ##f(x)\to f(x)-c## does change the integral ##\int_0^{2\pi} |f(x)|^2 dx##. Did you have something else in mind?
##f(x) \rightarrow f(x -c)##
 
  • #53
From the given conditions on ##f_k(x)##, namely convexity, ##f_k(0)=0## and ##f_k(x)\geq 0##, we have that, for all ##k##, ##f_k(x)## is a non-decreasing function.
Now from the definition of convexity, we have:
$$f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y)$$
for all ##x## and ##y## in the domain of ##f##, and ##\lambda \in [0,1]##.
We prove that the product of two convex, non-decreasing, positive functions is also convex (we take them to be defined on the same domain).
Proof:
Let's take ##f## and ##g## to be positive, non-decreasing, convex functions. Then since they are non-decreasing, we have for any two points in their domain:
$$(f(x)-f(y))(g(x)-g(y)) \geq 0 \Leftrightarrow f(x)g(x) + f(y)g(y) \geq f(x)g(y) + f(y)g(x)$$
Now we apply the definition of convexity to check if their product is also convex, using the above inequality:
$$f(\lambda x + (1-\lambda)y)g(\lambda x + (1-\lambda)y) \leq (\lambda f(x) + (1-\lambda)f(y))(\lambda g(x) + (1-\lambda)g(y)) = \lambda^2f(x)g(x) + \lambda(1-\lambda)(f(x)g(y) + f(y)g(x)) + (1-\lambda)^2 f(y)g(y) \\ \leq \lambda f(x)g(x) + (1-\lambda)f(y)g(y)$$

Now that we have proven that the product of two functions from our set is also convex (and it obviously satisfies all the other conditions of the set ##M##), it is sufficient to prove the inequality for two functions that belong to ##M##. The extension to any finite number of them then follows by induction.
We now construct the 'hat' function by:
$$\hat{f}(x) = 2x\int_0^1 f(t)dt$$
and we observe that: ##\int_0^1 \hat{f}(x)dx = \int_0^1 f(x)dx##.
Then our inequality boils down to proving that for two functions ##f## and ##g## belonging to ##M##, we have:
$$\int_0^1 \hat{f}(x)\hat{g}(x)dx \leq \int_0^1 f(x)g(x)dx$$

Proof:
We observe that, because ##f \in M## is convex, non-decreasing and positive, and ##\hat{f}## is linear and has the same integral over ##[0,1]## as ##f##, we can't have ##f(x) > \hat{f}(x)## on the whole domain, nor can we have ##f(x) < \hat{f}(x)## on the whole domain. That is, neither of the two functions can dominate the other everywhere. There must therefore be a point ##x=a## where they are equal (or, if the sign change happens at a jump discontinuity, where that change occurs), and this point divides the domain into two parts (by the monotonicity of ##f##) such that:
$$f(x) < \hat{f}(x) , x<a \qquad f(x)>\hat{f}(x) , x>a$$
Then, it is easy to derive the following inequality:
$$\int_0^1 [f(x)-\hat{f}(x)]g(x)dx = \int_0^a [f(x)-\hat{f}(x)]g(x)dx + \int_a^1 [f(x)-\hat{f}(x)]g(x)dx \geq g(a)\int_0^a [f(x)-\hat{f}(x)]dx + g(a)\int_a^1 [f(x)-\hat{f}(x)]dx = 0$$
where the inequality follows from the non-decreasing property of ##g## and the inequality we derived above, and the final equality is a consequence of the equality of the integrals of ##f## and ##\hat{f}##.
From this, we conclude:
$$\int_0^1 f(x)g(x)dx \geq \int_0^1 \hat{f}(x)g(x)dx \geq \int_0^1 \hat{f}(x)\hat{g}(x)dx$$
In the last inequality we proceeded analogously to the derivation above, using that ##\hat{f}## is increasing for any ##f \in M##.
Replacing the functions by their hats one at a time in this way, we obtain by induction, for ##n## functions that belong to ##M##:
$$\int_0^1 \prod_{k=1}^n f_k(x)dx \geq \int_0^1 \prod_{k=1}^n \hat{f_k}(x)dx = \frac{2^n}{n+1} \prod_{k=1}^n \int_0^1 f_k(x)dx$$
which is the result we needed.
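A quick numerical check of this final inequality, as a sketch (the three test functions below are my own choices of convex, nonnegative functions vanishing at ##0##, the only conditions used above):
Code:
# Numerical check of  ∫_0^1 ∏ f_k dx ≥ 2^n/(n+1) ∏ ∫_0^1 f_k dx.
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
fs = [x**2, np.expm1(x), x**3 + 0.5*x**2]   # each convex, ≥ 0, and 0 at x = 0

n = len(fs)
lhs = np.trapz(np.prod(fs, axis=0), x)
rhs = 2**n/(n + 1) * np.prod([np.trapz(f, x) for f in fs])
print(lhs >= rhs, lhs, rhs)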
 
Last edited:
  • Like
Likes fresh_42
  • #54
Antarres said:
From the conditions on ##f_k(x)## that are given, which are the convexity, ##f_k(0)=0##, ##f_k(x)\geq 0##, we have that, for all ##k##, ##f_k(x)## are non-decreasing functions
Yes, this is correct. Can you add why the functions in ##M## are not decreasing anywhere?
Now from the definition of convexity, we have:
$$f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y)$$
for all ##x## and ##y## in the domain of ##f##, and ##\lambda \in (0,1)##.
##\lambda \in [0,1]##
We prove that a product of two convex non-decreasing positive functions, is also convex(we take that they are defined on the same domain).
Proof:
Let's take ##f## and ##g## to be positive, non-decreasing, convex functions. Then since they are non-decreasing, we have for any two points in their domain:
$$(f(x)-f(y))(g(x)-g(y)) \geq 0 \Leftrightarrow f(x)g(x) + f(y)g(y) \geq f(x)g(y) + f(y)g(x)$$
Now we apply the definition of convexity to check if their product is also convex, using the above inequality:
$$f(\lambda x + (1-\lambda)y)g(\lambda x + (1-\lambda)y) \leq (\lambda f(x) + (1-\lambda)f(y))(\lambda g(x) + (1-\lambda)g(y)) = \lambda^2f(x)g(x) + \lambda(1-\lambda)(f(x)g(y) + f(y)g(x)) + (1-\lambda)^2 f(y)g(y) \\ \leq \lambda f(x)g(x) + (1-\lambda)f(y)g(y)$$

Now that we have proven that the product of two functions from our set of functions in the problem is also convex(and obviously trivially satisfying all the other conditions of the set ##M##), we have that it is sufficient to prove the inequality for two functions that belong to ##M##. The extension to any finite number of them then follows by induction.
We now construct the 'hat' function by:
$$\hat{f}(x) = 2x\int_0^1 f(x)dx$$
and we observe that: ##\int_0^1 \hat{f}(x)dx = \int_0^1 f(x)dx##.
Then our inequality boils down to proving that for two functions ##f## and ##g## belonging to ##M##, we have:
$$\int_0^1 \hat{f}(x)\hat{g}(x)dx \leq \int_0^1 f(x)g(x)dx$$

Proof:
We observe that, because ##f \in M## is convex and non-decreasing and positive, and ##\hat{f}## is linear and has the same integral over ##[0,1]## as ##f##, we can't have ##f(x) > \hat{f}(x)## on the whole domain, nor can we have ##f(x) < \hat{f}(x)## on the whole domain. That is, neither of the two functions can dominate the other on the whole domain. There must be a point where they are equal, or if this change happens at a jump discontinuity, then there must exist this point, ##x=a##, that divides the domain into two parts(from the monotoneity of ##f##) such that:
$$f(x) < \hat{f}(x) , x<a \qquad f(x)>\hat{f}(x) , x>a$$

Then, it is easy to derive the following inequality:
$$\int_0^1 [f(x)-\hat{f}(x)]g(x)dx = \int_0^a [f(x)-\hat{f}(x)]g(x)dx + \int_a^1 [f(x)-\hat{f}(x)]g(x)dx \geq g(a)\int_0^a [f(x)-\hat{f}(x)]dx + g(a)\int_a^1 [f(x)-\hat{f}(x)]dx = 0$$
Where the inequality follows from non-decreasing property of ##g## and the inequality we derived above, and the final equality is consequence of the equality of integrals of ##f## and ##\hat{f}##.
From this, we conclude:
$$\int_0^1 f(x)g(x)dx \geq \int_0^1 \hat{f}(x)g(x)dx \geq \int_0^1 \hat{f}(x)\hat{g}(x)dx$$
In the last inequality, we proceeded analogous to the derivation above, using that ##\hat{f}## is increasing for any ##f \in M##.
By induction we obtain iteratively for ##n## functions that belong to ##M##:
$$\int_0^1 \prod_{k=1}^n f_k(x)dx \geq \int_0^1 \prod_{k=1}^n \hat{f_k}(x)dx = \frac{2^n}{n+1} \prod_{k=1}^n \int_0^1 f_k(x)dx$$
which is the result we needed.
 
  • Like
Likes Antarres
  • #55
Thanks for the remark, I corrected ##\lambda## in an edit (it was a typo).
As for the proof that the functions in ##M## are non-decreasing, I will add it here.

Let's take ##f : [0,1] \rightarrow \mathbb{R}## such that ##f## is convex, nonnegative and ##f(0) = 0##. Assuming that ##f## is not constantly equal to zero, which is a trivial case, ##f## must be non-decreasing on one part of the interval ##[0,1]## (say from ##x=0## to ##x=m##), since it is nonnegative and starts from the value zero. Assume that somewhere in the domain there is a local maximum (which means that this non-decreasing trend turns into a decreasing one). Then there are points ##x=a<m## and ##x=b>m## in some neighbourhood of ##m## such that ##f(a)<f(m)## and ##f(b)<f(m)## (this follows from the definition of a local maximum).
Then we apply the convexity condition to ##a## and ##b## and we take ##\lambda## such that:
$$m = \lambda a + (1-\lambda)b \Rightarrow \lambda = \frac{b-m}{b-a}$$
Then:
$$f(m) = f(\lambda a + (1-\lambda)b) \leq \lambda f(a) + (1-\lambda)f(b) = \frac{b-m}{b-a}f(a) + \frac{m-a}{b-a}f(b)$$
But then from ##f(m) > f(b)## we have:
$$f(b) < \frac{b-m}{b-a}f(a) + \frac{m-a}{b-a}f(b) \Leftrightarrow f(b)<f(a)$$
and also from ##f(m)>f(a)## we have:
$$f(a) < \frac{b-m}{b-a}f(a) + \frac{m-a}{b-a}f(b) \Leftrightarrow f(a)<f(b)$$
a contradiction.
Therefore, ##f## has no local maxima, and hence it is non-decreasing on the whole domain.
 
  • Like
Likes fresh_42
  • #56
Since no restrictions on the function ##f## are given, except that it is periodic with period ##\pi##, I will assume (reasonably, I hope) that we can expand the function into a convergent Fourier series (except possibly at a few points). This means that ##f## is assumed to be of bounded variation (if I remember correctly). We will expand ##f## over the interval ##[-\frac{\pi}{2}, \frac{\pi}{2}]##.
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos(2nx) + \sum_{n=1}^\infty b_n \sin(2nx)$$
Substituting this series, we can integrate it term by term.
The integral of the sine series is equal to zero, since each term is an odd function integrated over a symmetric interval:
$$\int_{-\infty}^{\infty} \sin(2nx) \frac{\sin(x)}{x}dx = 0$$
The integral of the cosine series is proven to be zero in the following way:
$$\int_{-\infty}^{\infty} \cos(2nx) \frac{\sin(x)}{x}dx = \int_{-\infty}^{\infty} \frac{\sin((2n+1)x) - \sin((2n-1)x)}{x}dx = 0$$
where the last equality is obtained with the simple substitution ##t = (2n \pm 1)x##, using the known value of the sine integral ##\int_{-\infty}^{\infty} \frac{\sin(x)}{x}dx = \pi##.
Therefore, the only term left in the series is the constant term, so we have:
$$a_0 = \frac{2}{\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} f(x)dx$$
$$\int_{-\infty}^{\infty} f(x)\frac{\sin(x)}{x}dx = \frac{a_0\pi}{2} = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} f(x)dx$$
Finally we use that for a periodic function with period ##T## we have ##\int_0^T f(x)dx = \int_a^{a+T} f(x)dx##, so we complete our proof by translating the boundary:
$$\int_{-\infty}^{\infty} f(x)\frac{\sin(x)}{x}dx = \int_0^\pi f(x)dx$$
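As a small check of this identity on a concrete example, here is a sketch (the ##\pi##-periodic test function ##f(x)=\sin^2(x)## is my own choice; it uses ##\sin^2(x)\sin(x) = \tfrac{1}{4}(3\sin(x) - \sin(3x))## to reduce the left-hand side to Dirichlet integrals):
Code:
# Check  ∫ f(x) sin(x)/x dx = ∫_0^π f(x) dx  for f(x) = sin^2(x), using sympy.
import sympy as sp

x = sp.symbols('x')
lhs = sp.Rational(3, 4)*sp.integrate(sp.sin(x)/x,   (x, -sp.oo, sp.oo)) \
    - sp.Rational(1, 4)*sp.integrate(sp.sin(3*x)/x, (x, -sp.oo, sp.oo))
rhs = sp.integrate(sp.sin(x)**2, (x, 0, sp.pi))
print(lhs, rhs)   # both pi/2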

As for the second identity, I suspect it won't work unless we put some restrictions on the function ##f##. Take ##f(x) = \cos(2x)##, which is a function with period ##\pi##.
Then the function in the integrand, ##\cos(2x)\frac{\tan(x)}{x}##, is not bounded at infinity, because we can always get the tangent function to dominate ##\frac{\cos(2x)}{x}## for arbitrarily large ##x##. So our integral doesn't converge.
However, ##\int_0^\pi \cos(2x)dx = 0##, so the identity can't hold for this function.
 
  • #57
Edit: I was unable to edit the post since I forgot about it, but in the cosine-series part there is a factor of one half missing, which is inconsequential to the proof.
 
  • #58
Yes, the second identity is problematic. The first one is called Lobachevski's formula, which can be generalized to even powers of sine. I found the second one at the same place as the first one. However, it needs Fubini and suitable pole handling, which I was hoping to summarize with the additional condition "assuming the integrals exist".

That means: if we only use integrals and series, ignore any conditions on the exchangeability of integrals, series and limits, or on finiteness, and integrate from pole to pole, then the identity is true. So this is a warning about how easily mistakes can be made if such conditions are not checked, just as I did not check the details. I should rather have made a "find the mistake" problem out of it.
 
Last edited:
  • #59
Okay, well, I wasn't sure, since the integral was divergent in a few examples; but if we take the function to be suitably well behaved, as you said, and look for the Cauchy principal value of the integral by integrating from pole to pole, I will add such a proof below, just for the sake of completeness and for the curiosity of anyone who might want to read it in the future. The proof is not extremely rigorous, since we're allowing the function to behave properly, avoiding complications, as was intended in the exercise. Also, the proof is missing a picture of the contour, but one should have no problem drawing it from the directions given.

Regards!

We assume, as before, that we have a convergent Fourier series for ##f##, so that it can be integrated term by term, and we assume that the integrals in the exercise are convergent, that is, that on the left-hand side we have the Cauchy principal value of the integral.
Then:
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos(2nx) + \sum_{n=1}^\infty b_n \sin(2nx)$$
From the convergence of the integral, we conclude that the sine series will give zero integrating term by term, since it is an odd function on a symmetric interval.
For the cosine part, the term we're integrating looks like:
$$\int_{-\infty}^{\infty} \cos(2nx) \frac{\tan(x)}{x}dx $$
Now we will find the principal value using contour integration. Our contour will be an anticlockwise-directed semicircle in the upper half-plane whose diameter lies along the real axis of the complex plane, with the radius taken to infinity (a very standard contour; I don't have any good way to draw it here, so I assume the description is sufficient).
The function we will be integrating along this contour will be:
$$f(z) = e^{2inz}\frac{\sin(z)}{z\cos(z)}$$
This function has simple poles at ##z = k\pi + \tfrac{\pi}{2}##, all along the real axis (##z=0## is only a removable singularity, but we treat it in the same way), so we will avoid all of these points by constructing small semicircles in the upper half-plane around each of them, traversed clockwise. This way, we won't have any poles inside the contour.
So, denoting the small semi-circles by ##C_0## and ##C_k##, the real axis part of the contour by ##\gamma##, the big semi-circle by ##C_R##, and the full contour by ##C##, we have:
$$\oint_C f(z)dz = \oint_{C_0} f(z)dz + \sum_{k=-m+1}^{m} \oint_{C_k}f(z)dz + \oint_{\gamma} f(z)dz + \oint_{C_R} f(z)dz$$
By Cauchy's residue theorem the total integral is zero since we have no poles inside the contour.
For the semicircles we will use Jordan's lemma. For now, we will assume the contour to have finite width ##2R##, with ##2m## poles in between (##k = -(m-1), \dots, m##), and in the end we will extend this contour to infinity.
For small semicircles, we have:
$$\lim_{r_k \rightarrow 0}\oint_{C_k} f(z)dz = -i\pi Res(f,k\pi+\tfrac{\pi}{2})$$
$$\lim_{r \rightarrow 0}\oint_{C_0} f(z)dz = -i\pi Res(f,0)$$
where we denoted the radii of the semicircles around the poles with ##r_k## and ##r##.

We calculate the residues:
$$Res(f,0) = \lim_{z \rightarrow 0} zf(z) = 0$$
$$Res(f,k\pi+\tfrac{\pi}{2}) = \lim_{z \rightarrow k\pi + \tfrac{\pi}{2}} \left(z - k\pi -\frac{\pi}{2}\right)f(z) = e^{i(2n(k\pi+\tfrac{\pi}{2}))}\sin\left(k\pi + \frac{\pi}{2}\right)\frac{2}{(2k+1)\pi} \lim_{z \rightarrow k\pi + \tfrac{\pi}{2}} \frac{z - k\pi - \frac{\pi}{2}}{\cos(z)} = (-1)^{(n+k)}\frac{2}{(2k+1)\pi} (-1)^{(k+1)} = \frac{2(-1)^{(n+1)}}{(2k+1)\pi}$$
Integral along the real axis(which we denoted by ##\gamma##) is:
$$\oint_{\gamma} f(z)dz = \int_{-R}^{R}e^{i(2nx)}\frac{\sin(x)}{x\cos(x)}dx$$
Combining the results above, we have:
$$ 0 = \oint_{C_R}f(z)dz + \int_{-R}^{R}e^{i(2nx)}\frac{\sin(x)}{x\cos(x)}dx +(-1)^n\sum_{k=-m+1}^{m} \frac{2i}{2k+1}$$
The integral we're looking for is the real part of the principal value appearing in the expression above, with the boundaries taken to infinity. When we take the boundaries to infinity, the ##C_R## integral drops to zero: by closing the contour anticlockwise in the upper half-plane we obtain exponential damping of the integrand, so this limit follows from Jordan's lemma. The sum in the last term is purely imaginary, so its real part is zero. We conclude that the real part of our integral is zero:
$$p.v. \int_{-\infty}^{\infty} \cos(2nx)\frac{\sin(x)}{x\cos(x)}dx = 0$$
This means that our cosine series will be integrated to zero term by term. We're left only with the constant term. So we have:
$$\int_{-\infty}^{\infty} f(x)\frac{\tan(x)}{x}dx = \frac{a_0}{2}\int_{-\infty}^\infty \frac{\tan(x)}{x}dx$$
where we're looking for the Cauchy principal value of the integral, as before. We will perform the same type of calculation as with the cosine series, this time working with the function:
$$f(z) = \frac{e^{iz}}{z\cos(z)}$$
The contour and method are completely identical, so we will proceed to calculate the residues:
$$Res(f,0) = 1$$
$$Res(f,k\pi + \frac{\pi}{2}) = \frac{-2i}{(2k+1)\pi}$$
From Cauchy integral theorem, we find:
$$0 = \oint_{C_R}f(z)dz -i\pi -\sum_{k=-m+1}^m \frac{2}{(2k+1)} + \int_{-R}^{R} \frac{e^{ix}}{x\cos(x)}dx$$
Letting ##R## go to infinity, we find that the integral along the big semicircle ##C_R## goes to zero, analogously to the cosine-series case. The sum is purely real, while the integral we're computing is the imaginary part of the principal value in the formula above. So we conclude:
$$p.v. \int_{-\infty}^{\infty} \frac{\tan(x)}{x}dx = \pi$$
Finally, we obtain:
$$p.v. \int_{-\infty}^{\infty} f(x)\frac{\tan(x)}{x}dx = \frac{\pi a_0}{2} = \int_0^\pi f(x)dx$$
So the correct identity is with Cauchy principal value on the left.
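As a small numerical cross-check of the residues used above, here is a sketch (the radius-##0.1## circle and the trapezoidal rule are my own choices):
Code:
# Numerically verify  Res(e^{iz}/(z cos z), π/2) = -2i/π  by integrating around a small circle.
import numpy as np

f = lambda z: np.exp(1j*z)/(z*np.cos(z))
theta = np.linspace(0.0, 2*np.pi, 20001)
z  = np.pi/2 + 0.1*np.exp(1j*theta)     # small circle around the pole at π/2
dz = 0.1j*np.exp(1j*theta)              # dz/dθ
res = np.trapz(f(z)*dz, theta)/(2j*np.pi)
print(res, -2j/np.pi)                   # both ≈ -0.6366i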
 
  • Like
Likes fresh_42
  • #60
Wow! And I only used the power series of ##\operatorname{cot}(x-k\pi)##.
 
  • Like
Likes Antarres
  • #61
The solution to this inequality draws inspiration from this integral:
$$\int \frac{dx}{1+k^2x^2} = \frac{1}{k}\arctan(kx) + C$$
We assume that the series on the right-hand side are convergent. Then we won't have any convergence problems.
We proceed using Cauchy-Schwarz inequality:
$$\left(\sum_{n \in \mathbb{N}}a_n\right)^2 \leq \sum_{n \in \mathbb{N}} a^2_n(1+k^2n^2)\sum_{n \in \mathbb{N}} \frac{1}{1+k^2n^2} $$.
Here ##k > 0## is just a parameter, setting things up for the integral above.
The second sum can be bounded by the integral:
$$\sum_{n \in \mathbb{N}} \frac{1}{1+k^2n^2} < \int_0^\infty \frac{dx}{1+k^2x^2} = \frac{\pi}{2k}$$.
Combining the inequalities, we have:
$$\left(\sum_{n \in \mathbb{N}}a_n\right)^2 < \sum_{n \in \mathbb{N}} a^2_n(1+k^2n^2)\frac{\pi}{2k} = \frac{\pi}{2} \left(\frac{1}{k}\sum_{n \in \mathbb{N}} a^2_n + k \sum_{n \in \mathbb{N}} n^2a^2_n\right)$$
The term on the right is of the form ##\tfrac{x}{k} + yk##, and we need it to equal ##2\sqrt{xy}## in order to prove the inequality. So we assume that it is of this form and look for a ##k## that confirms it, if possible. We find:
$$\frac{x}{k} + yk = 2\sqrt{xy} \Rightarrow \frac{x^2}{k^2} - 2xy + y^2k^2 = 0 \Rightarrow \left(\frac{x}{k} - yk\right)^2 = 0 \Rightarrow k = \sqrt{\frac{x}{y}}$$.
So with choice:
$$k = \sqrt{\frac{\sum_{n \in \mathbb{N}} a^2_n}{\sum_{n \in \mathbb{N}} n^2a^2_n}}$$
we obtain the inequality that we're looking for by squaring both sides.
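A quick numerical check of the resulting inequality ##\left(\sum a_n\right)^4 \le \pi^2 \left(\sum a_n^2\right)\left(\sum n^2 a_n^2\right)##, as a sketch (the sequence ##a_n = 1/n^2## and the truncation are my own choices):
Code:
# Numerical check of the inequality for a sample square-summable sequence.
import numpy as np

N = 100000
n = np.arange(1, N + 1, dtype=float)
a = 1.0/n**2

lhs = a.sum()**4
rhs = np.pi**2 * (a**2).sum() * ((n*a)**2).sum()
print(lhs < rhs, lhs, rhs)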
 
  • Like
Likes Infrared and fresh_42
  • #62
The inequality is called Carlson's inequality. Hardy gave two proofs, one with the integration trick and Schwarz's inequality as above, and another which uses Parseval's inequality (see November 2019 challenge #1).

So if anyone wants to search for the second proof, go ahead!
 
  • #63
etotheipi said:
Suppose the first number is ##n## and the final number is ##m##. We then require that $$\sum_{i=n}^{m} i = \frac{m(m+1)}{2} - \frac{n(n-1)}{2} = 2020$$ By multiplying through by 2 and expanding the brackets, we get $$(m^{2} - n^{2}) + (m+n) = 4040 \implies (m+n)(m-n+1) = 4040$$ Since both ##(m+n)## and ##(m-n+1)## are integers, they must be factor pairs of 4040. Note also that for ##n>0##, ##(m+n) > (m-n+1)##.

The possible pairs ##(m+n, m-n+1)## are then ##(4040,1), (2020, 2), (1010, 4), (808, 5), (505, 8), (404, 10), (202, 20), (101, 40)##.

Furthermore, the sum ##(m+n) + (m-n+1) = 2m+1## is odd, so we also want only the factor pairs which sum to an odd number. We are then left with the pairs ##(4040,1), (808, 5), (505, 8), (101, 40)##, each of which could then be solved simultaneously to find the endpoints.

So we have 4 ways, if we also include the boring one which only contains 2020!

I am going to show my ignorance here and ask: where can I learn more about this type of problem solving? I'm exploring math and wouldn't even know what category to fit this type of problem into, much less how to solve it! Can anyone give me a direction to go in? Thanks in advance.

Sean
 
  • Like
Likes Not anonymous
  • #64

Looks like elementary number theory to me. Or discrete maths.
 
  • Like
Likes Hsopitalist and etotheipi
  • #65
There is basically only the idea: ##(m^{2} - n^{2}) + (m+n) = (m+n)(m-n+1)## and the formula for the sum of consecutive numbers.

It is crucial because it allows the consideration of factors of ##4040## instead of sums. However, those formulas are often found by just playing around with some algebraic expressions from the problem statement.
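For anyone who wants to explore, a brute-force enumeration confirms the count of 4 (a quick sketch, my own illustration; the single number 2020 counts as the one-term sum):
Code:
# Count the ways to write 2020 as a sum of consecutive positive integers.
count = 0
for start in range(1, 2021):
    total, m = 0, start
    while total < 2020:
        total += m
        m += 1
    if total == 2020:
        count += 1
        print(f"{start} + ... + {m - 1}")
print(count)   # 4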
 
  • Like
Likes Hsopitalist and etotheipi
