Math Challenge - April 2020

I'm sorry but it's hard to read your solution. You need to use the formatting tools as @Not anonymous did. Or if you feel like it you can learn LaTeX.
  • #106
Okay, but it still has nothing to do with concavity, only monotonicity. If ##f## is strictly increasing, and ##x<f(x)## for some ##x##, then ##f(x)<f(f(x))## and ##f(f(x))<f(f(f(x)))=x##, which gives a contradiction. Similarly for the other cases.
 
  • #107
Infrared said:
@archaic I didn't read your new argument carefully, but it doesn't look like it uses the continuity of ##f##, and so it cannot be right. A discontinuous counterexample is: Let ##f(x)=x## except when ##x=0,1/2,1##, for which we set ##f(0)=1/2, f(1/2)=1,## and ##f(1)=0##. Let me know if you still want me to read through it.

You had everything you needed in your old argument, just needed to clean some things up.
I added that it is a continuous bijection. This way you can't choose values as you did; you have to choose them to be either strictly increasing or strictly decreasing.
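For reference, that discontinuous counterexample is easy to check directly; here is a minimal sketch (using exactly the three swapped points from the post above):

```python
# Infrared's discontinuous example: the identity except on the 3-cycle 0 -> 1/2 -> 1 -> 0.
cycle = {0.0: 0.5, 0.5: 1.0, 1.0: 0.0}
f = lambda x: cycle.get(x, x)

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
print(all(f(f(f(x))) == x for x in xs))  # True: f∘f∘f is the identity
print(all(f(x) == x for x in xs))        # False: f itself is not
```

This is why the continuity hypothesis cannot be dropped.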
 
  • #108
Okay, you state that ##f## is strictly monotonic, but where do you use this fact in your proof? It still doesn't seem like your argument relies on it.
 
  • #109
Infrared said:
Okay, you state that ##f## is strictly monotonic, but where do you use this fact in your proof? It still doesn't seem like your argument relies on it.
It maintains the order of the values so that you cannot choose discontinuities like you did.
 
  • #110
I understand that, but where in your argument do you use the fact that ##f## is monotonic?
 
  • #111
Infrared said:
You have the right idea here, but the wording is a little off. Your argument that having a maximum or a minimum contradicts the injectivity of ##f## is good, but concavity doesn't have much to do with it. Being "concave down" does not mean that a maximum is achieved on the interior (e.g. the square root function is concave down, but still strictly increasing on its domain). The last detail that you should check for this argument to be airtight is that a continuous function defined on an interval is either monotonic or has (interior) local extrema.

Yes, I think there is no reason to use concavity here. I have modified Claim 2 according to your suggestion.

Claim 2: Since ##f(x)## is a continuous bijection on the domain ##[0,1]##, ##f(x)## is either strictly increasing or strictly decreasing within the interval.
Proof: WLOG, let us consider a small neighbourhood ##[a,b]## around a local extremum ##f(c)## such that ##a \lt c \lt b## (we ensure that each such interval contains only one such extremum). Obviously, ##f'(c)=0##. But then, by the mean value theorem, ##\exists x_1, x_2## such that ##a \lt x_1 \lt c## and ##c \lt x_2 \lt b##, i.e. ##x_2 \gt x_1## but ##f(x_2)=f(x_1)##. This is a contradiction, since ##f(x)## is injective. So, a local extremum cannot lie in the interval ##(a,b)##. Hence, the extrema must be at the end-points of the interval, i.e. either ##f(b) \gt f(a)## (strictly monotonically increasing) or ##f(b) \lt f(a)## (strictly monotonically decreasing). The monotonicity is strict because otherwise ##f(x)## would be constant on some interval, which contradicts the injectivity of ##f(x)##. This result can be extended repeatedly to other adjacent intervals.

It is evident that if ##f(x)## is monotonically increasing (or decreasing) in ##[a,b]##, then in the next interval ##[b,d]## (say) it cannot become monotonically decreasing (or increasing), because that would imply that ##f'(x)## changed sign while we moved from one interval to the next. But that implies that ##f'(b)=0##. However, our choice of intervals was such that each interval contains only one such peak or valley, so an extremum cannot occur at ##x=b##, or else we can again squeeze ##f(b)## within some interval and apply the same reasoning as above to conclude ##f'(b) \ne 0##.

So, ##f(x)## is either strictly monotonically increasing or strictly monotonically decreasing.

Infrared said:
Why are the only three possible cases that ##f(x)=x##, ##f(x)>x##, and ##f(x)<x## (for all ##x##)? Why can't it be mixed?

Yes, of course, it can be mixed. My argument can be restated by just dividing the entire interval into sub-intervals where either ##f(x)>x## or ##f(x)<x##, and applying the same argument over the various sub-intervals covering the entire interval.

I think that solves all the problems. Do I need to post the complete solution again?
 
  • #112
Lament said:
Do I need to post the complete solution again?
This would be very kind of you. It is by now almost impossible to figure out which argument in which post contributes to a possible solution. It is generally best to quote the question and provide a consistent answer, especially as there are quite a few members involved. E.g. someone used claim 3 of your attempt to support his own, whereas claim 2 had a flaw, and it is impossible to figure out whether this affected claim 3, etc.
It's a mess.
 
  • #113
fresh_42 said:
This would be very kind of you. It is by now almost impossible to figure out which argument in which post contributes to a possible solution. It is generally best to quote the question and provide a consistent answer, especially as there are quite a few members involved. E.g. someone used claim 3 of your attempt to support his own, whereas claim 2 had a flaw, and it is impossible to figure out whether this affected claim 3, etc.
It's a mess.
Just a general query: do you people have to search every time for who has answered your question? Because whenever we post a solution we don’t ping you, QQ or IR (we don’t put @ before anybody’s name). I think it’s a very hard job to come here and search for who has solved your question.
 
  • #114
Question 9:

Regarding the monotonicity of ##f##, we are given that $$f \left (
f \left (
f(x) \right ) \right) =x $$ for all ## x \in [0,1]##. Now, either ##f## is an identity function or ##f## undoes the effect of ##f \left ( f(x) \right)##; the latter condition implies that ##f \left ( f(x) \right)## is the inverse of ##f##. Hence, ##f## is invertible.

Any function which is invertible (in an interval) is also monotonic in that interval.
 
  • #115
fresh_42 said:
9. Let ##f:[0,1]\to [0,1]## be a continuous function such that ##f(f(f(x)))=x## for all ##x\in [0,1]##. Show that ##f## must be the identity function ##f(x)=x.## (IR)

The question confines itself to ##f:[0,1]\to [0,1]## such that ##f(f(f(x)))=x## for all ##x\in [0,1] \Rightarrow f(x)=x##.

However, it can be proved in the general case, i.e., when ##f :\mathfrak {R}\rightarrow \mathfrak {R}## is such that ##f(f(f(x)))=x## for all ##x\in \mathfrak {R}##, then ##f(x)=x.##

I prove the general case below :

Claim 1: ##f(x)## is a bijective mapping.
Proof: To prove ##f## is one-one, assume $$f(x)=f(y)$$
$$\Rightarrow f(f(x))=f(f(y))$$
$$\Rightarrow f(f(f(x)))=f(f(f(y)))$$
$$\Rightarrow x=y$$
So, ##f(x)## is injective.

Now, to prove ##f## is onto, we must show that ##\forall y \in \mathfrak {R},## ##\exists x \in \mathfrak {R}## such that ##y=f(x)##.
Evidently, if ##y\in \mathfrak {R} \text { then } y=f(f(f(y)))##, so we may take ##x=f(f(y))\in \mathfrak {R}##.
Thus, we can find ##x## such that $$y=f(x)$$
So, ##f(x)## is surjective.

Thus, ##f(x)## is a continuous bijection as the mapping ##f :\mathfrak {R}\rightarrow \mathfrak {R} ## is continuous.

Claim 2: Since ##f(x)## is a continuous bijection on the domain ##[0,1]##, ##f(x)## is either strictly increasing or strictly decreasing within the interval.

Proof: WLOG, let us consider a small neighbourhood ##[a,b]## around a local extremum ##f(c)## such that ##a \lt c \lt b## (we ensure that each such interval contains only one such extremum). Obviously, ##f'(c)=0##. But then, by the mean value theorem, ##\exists x_1, x_2## such that ##a \lt x_1 \lt c## and ##c \lt x_2 \lt b##, i.e. ##x_2 \gt x_1## but ##f(x_2)=f(x_1)##. This is a contradiction, since ##f(x)## is injective. So, a local extremum cannot lie in the interval ##(a,b)##. Hence, the extrema must be at the end-points of the interval, i.e. either ##f(b) \gt f(a)## (strictly monotonically increasing) or ##f(b) \lt f(a)## (strictly monotonically decreasing). The monotonicity is strict because otherwise ##f(x)## would be constant on some interval, which contradicts the injectivity of ##f(x)##. This result can be extended repeatedly to other adjacent intervals.

It is evident that if ##f(x)## is monotonically increasing (or decreasing) in ##[a,b]##, then in the next interval ##[b,d]## (say) it cannot become monotonically decreasing (or increasing), because that would imply that ##f'(x)## changed sign while we moved from one interval to the next. But that implies that ##f'(b)=0##. However, our choice of intervals was such that each interval contains only one such peak or valley, so an extremum cannot occur at ##x=b##, or else we can again squeeze ##f(b)## within some interval and apply the same reasoning as above to conclude ##f'(b) \ne 0##.

Claim 3: ##f(x)## is strictly increasing.
Proof: Assume on the contrary that ##f(x)## is strictly decreasing. Then, for ##y>x##,
$$f(y)<f(x)$$
$$\Rightarrow f(f(y))>f(f(x))$$
$$\Rightarrow f(f(f(y)))<f(f(f(x))) \Rightarrow y<x,$$
a contradiction.
So, ##f(x)## is strictly increasing.

Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)≠x##, then either ##f(x)>x## or ##f(x)<x## for a certain interval.
As such, the entire interval can be subdivided on the basis of ##f(x)>x## or ##f(x)<x##, and for each such interval, we prove that it leads to a contradiction.

Therefore, if in a certain interval,
\begin{equation}f(x)>x\end{equation}
\begin{equation}\Rightarrow f(f(x))>f(x)\end{equation}
\begin{equation}\Rightarrow f(f(f(x)))>f(f(x))\end{equation}
\begin{equation}\Rightarrow x>f(f(x))\end{equation}
From (1), (2) and (4)
$$x>f(f(x))>f(x)>x$$
which is impossible.

Similarly, in another interval if
\begin{equation}f(x)<x\end{equation}
\begin{equation}\Rightarrow f(f(x))<f(x)\end{equation}
\begin{equation}\Rightarrow f(f(f(x)))<f(f(x))\end{equation}
\begin{equation}\Rightarrow x<f(f(x))\end{equation}
From (5), (6) and (8)
$$x<f(f(x))<f(x)<x$$
which is impossible.
Hence,
$$f(x)=x$$ Q.E.D.
 
  • #116
Adesh said:
Just a general query: do you people have to search every time for who has answered your question?
If a new post is entered, the alert box says there is one. And then we know who posted which question, and at least you guys normally quote the number of the question. The harder part is to edit the "solved by" part, because I link to the solution. Unless someone comes up with a full solution, this will be impossible for revolution no. 9. I need the id number of the post, and I only get it by pressing "reply". But that fills my edit box, and if I wait too long, it will be stored automatically, which is annoying for future posts. Unfortunately, the "give credit to" messages from my colleagues don't provide that url or id either. I normally look for a like that might indicate which post is meant to be the solution. So the edit might take some time, especially as I have to wait until the browser has interpreted the MathJax code, for otherwise I risk that it will be destroyed.
 
  • #117
fresh_42 said:
5. For coprime natural numbers ##n,m## show that
$$
m^{\varphi(n)}+n^{\varphi(m)} \equiv 1 \operatorname{mod} nm
$$
Edited after comments from @fresh_42

The Fermat-Euler theorem states that for every pair of co-prime integers ##a## and ##m##, it holds that
$$a^{\varphi(m)} \equiv 1 (\operatorname{mod} m)$$ where ##{\varphi(m)}## is the Euler totient function.

Thus, since ##n,m## are co-prime, $$m^{\varphi(n)}+n^{\varphi(m)} \equiv 1+0 \equiv 1(\operatorname{mod} n)$$
& $$m^{\varphi(n)}+n^{\varphi(m)} \equiv 1+0 \equiv 1 (\operatorname{mod} m)$$
This means that $$n|(m^{\varphi(n)}+n^{\varphi(m)}-1)$$ and $$m|(m^{\varphi(n)}+n^{\varphi(m)}-1)$$
Hence (since ##a|c## and ##b|c## imply ##\operatorname{lcm}(a,b)|c##, and ##\gcd(a,b)=1 \Rightarrow \operatorname{lcm}(a,b)=ab##, so ##ab|c##), $$mn|(m^{\varphi(n)}+n^{\varphi(m)}-1)$$
or equivalently, $$m^{\varphi(n)}+n^{\varphi(m)} \equiv 1 \operatorname{mod} nm$$.
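As a quick numerical cross-check (not part of the proof), the congruence can be verified for a few coprime pairs; a minimal Python sketch using sympy's totient:

```python
from math import gcd
from sympy import totient  # Euler's phi function

def check(m, n):
    """Verify m^phi(n) + n^phi(m) == 1 (mod m*n) for coprime m, n."""
    assert gcd(m, n) == 1
    total = pow(m, int(totient(n)), m * n) + pow(n, int(totient(m)), m * n)
    return total % (m * n) == 1

print(all(check(m, n) for m, n in [(3, 4), (5, 12), (7, 9), (10, 21), (8, 15)]))  # True
```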
 
  • #118
Lament said:
Fermat-Euler Theorem states that for every pair of co-prime integers ##a## and ##m##, it holds
$$a^{\varphi(m)} \equiv 1 (\operatorname{mod} m)$$ where ##{\varphi(m)}## is the Euler totient function.

Thus, since ##n,m## are co-prime, $$m^{\varphi(n)}+n^{\varphi(m)} \equiv 1 (\operatorname{mod} n)$$
Easier to read: ##m^{\varphi(n)}+n^{\varphi(m)} \equiv 1 + 0 \equiv 1 (\operatorname{mod} n)##
& $$m^{\varphi(n)}+n^{\varphi(m)} \equiv 1 (\operatorname{mod} m)$$
This suggests $$n|(m^{\varphi(n)}+n^{\varphi(m)}-1)$$ and $$m|(m^{\varphi(n)}+n^{\varphi(m)}-1)$$
Hence, (since if ##a|c## and ##b|c## such that ##\gcd(a,b)=1##, then ##ab|c## ) $$mn|(m^{\varphi(n)}+n^{\varphi(m)}-1)$$
Here we use ##a|c## and ##b|c \Longrightarrow \operatorname{lcm}(a,b)|c## and ##\operatorname{lcm}(a,b)=ab## for ##\gcd(a,b)=1##.
or equivalently, $$m^{\varphi(n)}+n^{\varphi(m)} \equiv 1 \operatorname{mod} nm$$.
 
  • #119
Lament said:
The question confines itself to ##f:[0,1]\to [0,1]## such that ##f(f(f(x)))=x## for all ##x\in [0,1] \Rightarrow f(x)=x##.

However, it can be proved for the general case,i.e. when ##f :\mathfrak {R}\rightarrow \mathfrak {R} ## such that ##f(f(f(x)))=x## for all ##x\in \mathfrak {R}##, ##f(x)=x.##

I prove the general case below :

Thus, ##f(x)## is a continuous bijection as the mapping ##f :\mathfrak {R}\rightarrow \mathfrak {R} ## is continuous.

Claim 2: Since ##f(x)## is a continuous bijection on the domain ##[0,1]##, ##f(x)## is either strictly increasing or strictly decreasing within the interval.

Proof: WLOG, let us consider a small neighbourhood ##[a,b]## around a local extremum ##f(c)## such that ##a \lt c \lt b## (we ensure that each such interval contains only one such extremum). Obviously, ##f'(c)=0##. But then, by the mean value theorem, ##\exists x_1, x_2## such that ##a \lt x_1 \lt c## and ##c \lt x_2 \lt b##, i.e. ##x_2 \gt x_1## but ##f(x_2)=f(x_1)##. This is a contradiction, since ##f(x)## is injective.

It is not given that ##f## is differentiable. The function ##f(x)=|x|## is minimized at ##0##, but is not differentiable there. Even for differentiable functions, I think your MVT argument is flawed: I can't tell how you're applying MVT, but it looks like you're claiming that any function whose derivative has a zero cannot be injective. The function ##f(x)=x^3## is a counterexample to that. It is also not clear how you can make each interval contain only one such extremum, since in general functions can change monotonicity infinitely often in a finite interval, like ##f(x)=x\sin(1/x), f(0)=0## near ##x=0##. Your original argument using the IVT was sound, aside from the concavity language.
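To get a feel for how often ##x\sin(1/x)## changes direction near ##0##, here is a minimal numerical sketch (the sampling interval and resolution are arbitrary choices):

```python
import numpy as np

# Sample f(x) = x*sin(1/x) on (0.01, 0.1) and count how often consecutive
# differences change sign, i.e. how often the sampled function switches
# between increasing and decreasing.
x = np.linspace(0.01, 0.1, 200_000)
y = x * np.sin(1.0 / x)
signs = np.sign(np.diff(y))
direction_changes = int(np.count_nonzero(signs[1:] != signs[:-1]))
print(direction_changes)  # roughly 30 switches already on (0.01, 0.1)
```

The count grows without bound as the left endpoint approaches ##0##, which is why no subdivision into intervals with a single extremum each can work here.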
Lament said:
Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)≠x##, then either ##f(x)>x## or ##f(x)<x## for a certain interval.
As such, the entire interval can be subdivided on the basis of ##f(x)>x## or ##f(x)<x##, and for each such interval, we prove that it leads to a contradiction.
...
As before, dividing into intervals adds some confusion. Here the problem is less severe because the set where ##x\neq f(x)## is open, so it must contain an interval if it's nonempty, but it's still easier to avoid it: suppose for contradiction that ##f(x)\neq x## for some ##x##. Then either ##x<f(x)## or ##x>f(x)##, and apply your argument to the point ##x##, not to an entire interval.
Adesh said:
Any function which is invertible (in an interval) is also monotonic in that interval.
Any continuous invertible function, yes. This is the content of @Lament's claim 2.
 
  • #120
Adesh said:
Just a general query: do you people have to search every time for who has answered your question? Because whenever we post a solution we don’t ping you, QQ or IR (we don’t put @ before anybody’s name). I think it’s a very hard job to come here and search for who has solved your question.
Infrared said:
Any continuous invertible function, yes. This is the content of @Lament's claim 2.
I can give an informal proof of why an invertible function would be monotonic.
PROOF: The function being invertible implies it is injective, that is, it passes the horizontal line test for every horizontal line. In other words, our function crosses every horizontal line just once, and this is possible only if it keeps on increasing/decreasing; if it were to change its nature, that is, from increasing to decreasing (or decreasing to increasing), it would surely intersect at least one horizontal line at least twice and hence would not be injective.

Was my proof of “##f## is invertible” right?
 
  • #121
Adesh said:
I can give an informal proof of why an invertible function would be monotonic.
PROOF: The function being invertible implies it is injective, that is, it passes the horizontal line test for every horizontal line. In other words, our function crosses every horizontal line just once, and this is possible only if it keeps on increasing/decreasing; if it were to change its nature, that is, from increasing to decreasing (or decreasing to increasing), it would surely intersect at least one horizontal line at least twice and hence would not be injective.

The claim "an invertible function would be monotonic" is false. A counterexample is ##f:[0,1]\to [0,1]## defined by ##f(x)=x## for ##x\in (0,1)## and ##f(0)=1, f(1)=0##. Continuity is necessary. You don't mention continuity anywhere in your explanation, so I don't think it can be right.

Adesh said:
Was my proof of “##f## is invertible” right?
A bit wordy. I think it's fine to just say "##f## is invertible because ##f\circ f## is an inverse function to ##f##."
 
  • #122
Infrared said:
It is not given that ##f## is differentiable. The function ##f(x)=|x|## is minimized ##0##, but is not differentiable there.
True, I overlooked that. I used a stationary point to define the extremum.
Infrared said:
Even for differentiable functions, I think your MVT argument is flawed- I can't tell how you're applying MVT, but it looks like you're claiming that any function whose derivative has a zero cannot be injective. The function ##f(x)=x^3## is a counterexample to that.
It is supposed to be IVT (a typographical error on my part). What I meant to do was to take an interval around the extremum, if it exists, and then prove that since any horizontal line within the range of values on that interval will intersect the curve at two points at least, by the injectivity of ##f## this is impossible, and the extrema on that interval must only occur at the end-points, making the function either strictly monotonically increasing or decreasing.
I wouldn't consider ##f(x)=x^3## at ##x=0## because it is not an extremum even though it is a stationary point.
Infrared said:
It is also not clear how you can make each interval contain only one such extremum, since in general functions can change monotonicity infinitely often in a finite interval, like ##f(x)=x\sin(1/x), f(0)=0## near ##x=0##.
Well, if we take an extremum, then a small neighbourhood of it will contain only that extremum. Yes, it can be done away with. Even if we consider an arbitrary interval and it contains an extremum, then again with the IVT we can prove that no local extrema are present within the interval. I thought of having an interval with only one such extremum because then we can eventually conclude that ##f## is monotonically increasing within each such interval.
Also, regarding your example ##f(x)=x\sin(1/x), f(0)=0## near ##x=0##: I didn't think about badly behaved functions.

Infrared said:
As before, dividing into intervals adds some confusion. Here the problem is less severe because the set where ##x\neq f(x)## is open, so it must contain an interval if it's non-empty, but it's still easier to avoid it: suppose for contradiction that ##f(x)\neq x## for some ##x##. Then either ##x<f(x)## or ##x>f(x)##, and apply your argument to the point ##x##, not to an entire interval.
I understand that considering the proof point by point is easier, but I do not understand why the interval argument raises confusion. If ##f(x)\neq x## for some ##x##, then in a small neighbourhood around that point the same must hold true (it is strictly monotonic, after all).

Am I still missing something?
 
  • #123
Question 9
We are given that ##f## is a continuous function and satisfies $$ f (f ( f(x) ) ) =x$$ for all ##x \in [0,1]##. Since ##f## undoes the effect of ##f \circ f##, ##f \circ f## is the inverse of ##f##. Hence, ##f## is an invertible function.
Any continuous invertible function is always monotonic (an informal proof is given in post #120).

Let’s assume that ##f(x) \neq x## on some sub-interval, that is, ##f## is not an identity function. Now, in that sub-interval ##f## is either greater or smaller than ##x##, and it is also continuous and monotonic (increasing/decreasing).

Let’s assume ##f## is greater than ##x## in that sub-interval and is monotonically increasing there: $$ x \lt f(x) \\ f(x) \lt f (f(x)) \implies x \lt f ( f(x)) ~~~~~(1)\\ f(f(x)) \lt f ( f( f (x) ) ) \implies f(f(x)) \lt x$$ the last inequality above contradicts inequality (1). Hence, our assumption was wrong.

We can show similar contradictions for the other cases: ##f## is monotonically decreasing, ##f## is less than ##x##, and so on.

Let’s do it for ##f## being less than ##x## (in some sub-interval) and monotonically decreasing. $$ f(x) \lt x \\
f ( f(x)) \gt f(x) \\
f( f ( f (x) ) ) \lt f ( f (x) ) \\
x \lt f (f (x) ) \\
f(x) \gt f ( f ( f (x) ) ) \implies f(x) \gt x $$
The last inequality above contradicts our assumption that ##f(x)## is less than ##x## in the considered sub-interval.

These contradictions show that our assumption that ##f## is not the identity function was false.

Hence, ##f## must be the identity function.
 
  • #124
Lament said:
I wouldn't consider ##f(x)=x^3## at ##x=0## because it is not an extremum even though it is a stationary point.
Yes, this is my point. To me, it looked like your argument was roughly:

"##f## has a local extremum" ##\implies## "##f'(c)=0## for some ##c##" ##\implies## "##f## is not injective"

But ##f(x)=x^3## is a counterexample to the second implication. If this was not the shape of your argument, could you clarify how you were trying to use the fact that ##f'(c)=0##?

Lament said:
Well, if we take an extremum, then a small neighbourhood of it will contain only that extremum. Yes, it can be done away with. Even if we consider an arbitrary interval and it contains an extremum, then again with the IVT we can prove that no local extrema are present within the interval. I thought of having an interval with only one such extremum because then we can eventually conclude that ##f## is monotonically increasing within each such interval.
Also, regarding your example ##f(x)=x\sin(1/x), f(0)=0## near ##x=0##: I didn't think about badly behaved functions.
I think my function ##f(x)=x\sin(1/x)## has local extrema in every small interval ##(0,\varepsilon)##, so you can't always make these intervals. But anyway, there is no need to try to make these intervals.

Lament said:
I understand considering point by point the proof is easier but I do not understand why the interval argument raises confusion. Because if ##f(x)\neq x## for some x, then in a small neighbourhood around the point the same must hold true (it is strictly monotonic after all).

Am I still missing something?
It is okay, but the reasoning shouldn't involve monotonicity, only continuity. The function ##x\mapsto f(x)-x## is continuous, so if it is negative (or positive) at a point, it is negative (or positive) in a neighborhood of the point. In general, it's better to avoid making diversions like this that aren't necessary for the proof.
 
  • #125
fresh_42 said:
9. Let ##f:[0,1]\to [0,1]## be a continuous function such that ##f(f(f(x)))=x## for all ##x\in [0,1]##. Show that ##f## must be the identity function ##f(x)=x.## (IR)
Saying ##f(f(f(x)))=x## means that every ##x\in[0,1]## is the image under ##f## of ##u=f(f(x))\in[0,1]##, and that ##f(x)=f(y)## forces ##x=f(f(f(x)))=f(f(f(y)))=y##. As such, ##f:[0,1]\to[0,1]## is bijective.
Since ##f## is a continuous bijection on ##[0,1]##, it is either strictly decreasing or strictly increasing; the same holds for its inverse (which has the same direction of variation as ##f##).

Proving that ##\forall x\in [0,1],\,f(f(f(x)))=x## implies ##f(x)=x## for all ##x## is the same as proving that if ##f(x)\neq x## for some ##x##, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

Suppose that ##f(x)\neq x##, i.e., there exists a real number in ##[0,1]## whose image under ##f## is different from itself. In the following, ##a## denotes this real number.
All inequalities will be strict because ##f## is a continuous bijection.
Suppose that ##f(x)## is strictly increasing, and let there be ##a\in[0,1]## such that ##f(a)=b\neq a##, then:
If ##b<a##, then ##f(b)=c<f(a)=b##, and thus ##f(c)<f(a)=b##. It follows that ##f(f(f(a)))<b<a##.​
If ##b>a##, then ##f(b)=c>f(a)=b##, and thus ##f(c)>f(a)=b##. It follows that ##f(f(f(a)))>b>a##.​
I have proved that if ##f(x)## is strictly increasing, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

Suppose that ##f(x)## is strictly decreasing, and let there be ##a\in[0,1]## such that ##f(a)=b\neq a##, then:
If ##b<a##, then ##f(b)=c>f(a)=b##.​
If ##b<c<a##, then ##f(b)=c>f(c)>f(a)=b##. It follows that ##f(f(f(a)))>f(a)##​
If ##a<c##, then ##f(a)=b>f(c)##. It follows that ##f(f(f(a)))<f(a)##​
If ##b>a##, then ##f(b)=c<f(a)=b##.​
If ##b>c>a##, then ##f(b)=c<f(c)<f(a)=b##. It follows that ##f(f(f(a)))<f(a)##​
If ##a>c##, then ##f(a)=b<f(c)##. It follows that ##f(f(f(a)))>f(a)##​
I have proved that if ##f(x)## is strictly decreasing, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

I have proved that ##f(x)\neq x\implies\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##!
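For a concrete illustration of the decreasing case, take the strictly decreasing bijection ##f(x)=1-x##: then ##f(f(f(x)))=1-x##, which differs from ##x## everywhere except at ##x=1/2##. A minimal numerical check of this particular example (chosen here for illustration, not taken from the post above):

```python
import numpy as np

# f(x) = 1 - x is a strictly decreasing continuous bijection of [0, 1].
f = lambda x: 1.0 - x

x = np.linspace(0.0, 1.0, 11)
fff = f(f(f(x)))
print(np.allclose(fff, 1.0 - x))  # True: f∘f∘f equals 1 - x again
print(np.allclose(fff, x))        # False: f∘f∘f is not the identity
```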
 
  • #126
archaic said:
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=1+\sum_{\substack{k=1\\ \text{step }2}}^{n-1}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)$$
I am going to prove that the parenthesis is always positive:
$$\begin{align*}
x^k\geq |x|^{k+1}&\Leftrightarrow\frac{x^k}{k!}>\frac{x^k}{(k+1)!}\geq\frac{|x|^{k+1}}{(k+1)!} \\
&\Leftrightarrow\frac{x^k}{k!}>\frac{|x|^{k+1}}{(k+1)!}\\
&\Leftrightarrow\frac{x^k}{k!}>\frac{x^{k+1}}{(k+1)!}>-\frac{x^k}{k!}\\
&\Leftrightarrow-\frac{x^k}{k!}<-\frac{x^{k+1}}{(k+1)!}<\frac{x^k}{k!}\\
&\Leftrightarrow0<\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}>0## for ##x\in\mathbb{R}##. From this I can say that ##p_n(x)## is a bijection, since it is strictly increasing, with no absolute/local maximum or minimum. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
I had someone show me this; it'll prove that for even ##n##, ##p_n(x)=\sum_{k=0}^n\frac{x^k}{k!}>0## for ##x<0##, thus proving 2) and 3).
Since ##e^x## is differentiable infinitely many times, it is differentiable ##n+1## times. Moreover, ##e^x=\lim_{n\to\infty}p_n(x)##; using Taylor's theorem and the Lagrange remainder term, we get:
$$e^x=p_n(x)+E(x)\Leftrightarrow p_n(x)=e^x-E(x)$$
Where ##E(x)## is the remainder, and is given by:
$$E(x)=\frac{e^\zeta}{(n+1)!}x^{n+1},\text{ with }x<\zeta<0$$
Since ##x<0## and ##n## is even, ##-x^{n+1}=|x|^{n+1}##:
$$p_n(x)=e^x-E(x)=e^x+\frac{|x|^{n+1}e^\zeta}{(n+1)!}>0$$
I think that my proof is now complete.
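As a quick numerical cross-check of the conclusion (a sketch for small ##n## only, not part of the proof), one can count the real roots of ##p_n## with numpy:

```python
import numpy as np
from math import factorial

def real_root_count(n, tol=1e-8):
    """Count real roots of p_n(x) = sum_{k=0}^{n} x^k / k!."""
    # numpy.roots expects coefficients ordered from the highest degree down.
    coeffs = [1.0 / factorial(k) for k in range(n, -1, -1)]
    return sum(1 for r in np.roots(coeffs) if abs(r.imag) < tol)

for n in range(1, 11):
    print(n, real_root_count(n))  # expect 0 real roots for even n, 1 for odd n
```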
 
  • #127
fresh_42 said:
I still don't like the inaccuracy of "behaves like" because it doesn't capture lower terms quantitatively. We have ##\dfrac{1}{x^n}<\dfrac{1}{x^{n-1}}##, so why can we neglect those terms of lower orders? In other words: Why is ##g(x)=x^{-n}## suited, if ##f(x)=P(x)^{-1}## is actually bigger than ##g(x)?##

There should be chosen a better function ##g(x)##, which is actually greater than ##x^{-n}## and not only "near", and the estimation ##\int 1/P(x) dx < \int g(x)dx## should be accurate.

Your argument goes: "I have a function (##x^{-n}##), which is possibly smaller than ##1/P(x)##, so it cannot guarantee convergence, but I don't care, since my error is small." However, you did not show that this heuristic is allowed.
Hey fresh! Thank you very much. For some reason, I was set on finding an equivalent at infinity for ##P(x)##, and disregarded that I can also find a function (whose integral over that interval converges) such that ##1/P(x)## is a little o of it, like the one you suggested.
 
  • #128
@QuantumQuest Sir, I'm having some doubts regarding question number 12, please pardon my lack of knowledge.

When you wrote
A table has an interval moveable surface used for slicing bread.
I couldn't understand what "internal" meant; how is it internal? And does the "surface" mean one of those faces of that "cuboidal" machine?
 
  • #129
Infrared said:
Yes, this is my point. To me, it looked like your argument was roughly:

"##f## has a local extremum" ##\implies## "##f'(c)=0## for some ##c##" ##\implies## "##f## is not injective"

But ##f(x)=x^3## is a counterexample to the second implication. If this was not the shape of your argument, could you clarify how you were trying to use the fact that ##f'(c)=0##?

I think my function ##f(x)=x\sin(1/x)## has local extrema in every small interval ##(0,\varepsilon)##, so you can't always make these intervals. But anyway, there is no need to try to make these intervals.

@Infrared as per your suggestions I have reworded my claim 2. Is it airtight now? Or did I miss something?

Claim 2: Since ##f(x)## is a continuous bijection on the domain ##[0,1]##, ##f(x)## is either strictly increasing or strictly decreasing within the interval.

Proof: WLOG, let us consider an interval ##[a,b]##. If it contains a local extremum ##f(c)## with ##a \lt c \lt b##, then by the mean value theorem ##\exists x_1, x_2## such that ##a \lt x_1 \lt c## and ##c \lt x_2 \lt b##, i.e. ##x_2 \gt x_1## but ##f(x_2)=f(x_1)##. This is a contradiction, since ##f(x)## is injective. So, a local extremum cannot lie in the interval ##(a,b)##. Hence, the extrema must be at the end-points of the interval, i.e. either ##f(b) \gt f(a)## (strictly monotonically increasing) or ##f(b) \lt f(a)## (strictly monotonically decreasing). The monotonicity is strict because otherwise ##f(x)## would be constant on some interval, which contradicts the injectivity of ##f(x)##. This result can be extended to the entire domain of ##f##, indicating that no extrema occur in its interior.

Infrared said:
It is okay, but the reasoning shouldn't involve monotonicity, only continuity. The function ##x\mapsto f(x)-x## is continuous, so if it is negative (or positive) at a point, it is negative (or positive) in a neighbourhood of the point. In general, it's better to avoid making diversions like this that aren't necessary for the proof.

Yes, I have changed it to the following:

Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##

Proof: Suppose that ##f(x)≠x## for some ##x \in \mathfrak {R}##; then either ##f(x)>x## or ##f(x)<x## at that point.
 
  • #130
Adesh said:
@QuantumQuest Sir, I'm having some doubts regarding question number 12, please pardon my lack of knowledge.

When you wrote

A table has an internal moveable surface used for slicing bread.

I couldn't understand what "internal" meant; how is it internal? And does the "surface" mean one of those faces of that "cuboidal" machine?

(corrected "interval" you had in your quote)

Internal means inside, in the interior of something. Now, if you read the second phrase right after the first one, it's easy to see that there is an internal surface inside the table that can be pulled outside by using two small handles.

fresh_42 said:
A table has an internal movable surface used for slicing bread. In order to be able to be pulled to the outside, there are two small handles on its front face...

To explain even more simply, the internal surface I mention is like a drawer that can be pulled outside, used to do its job, and pushed inside again using the handles after it has done its job. So, there is no surface of a "cuboidal" machine.

EDIT: Also, keep in mind that Question ##12## is only for people currently at high school.
 
  • #131
QuantumQuest said:
Now, if you read the second phrase right after the first one it's easy to see that there is an internal surface inside table that can be pulled outside by using two small handles.
What is the table in that figure? What does that cuboid represent in the figure?
 
  • #132
fresh_42 said:
10. Let ##f:[0,1]\to\mathbb{R}## be a continuous function, differentiable on ##(0,1)## that satisfies ##f(0)=f(1)=0##. Prove that ##f(x)=f'(x)## for some ##x\in (0,1)##. Note that ##f'## is not assumed to be continuous. (IR)

I am not sure about the correctness and completeness of the "proof" I have attempted. Even if it turns out to be correct, it may have loose ends because my knowledge of calculus and continuity is basic and I haven't had to solve proof-type problems of this type as a part of my curriculum.
The proof should be obvious for the trivial case where ##f## is the constant function ##f(x) = 0 \, \forall x \in [0,1]##. If ##f## is not the constant function, the proof is as follows. Let ##a \in (0, 1]## be the smallest positive number for which ##f(a) = 0 = f(0)##. Obviously, ##a## exists since at least ##a=1## will meet this requirement even if no other value in ##(0,1)## does.
Observation 1: By definition of ##a## and since ##f## is continuous, ##f(x)## must be strictly positive or strictly negative for all ##x \in (0,a)##
Observation 2: By Rolle's theorem, there must exist a value ##b \in (0, a)## for which ##f'(b) = 0##. And by definition of ##a##, ##f(b) \neq 0##

Let us define a function ##g:[0,a]\to\mathbb{R}## as follows: $$
g(x) =
\begin{cases}
-\log (f(x)) & \text{if } f(\frac{a} {2}) > 0 \\
-\log (-f(x)) & \text{if } f(\frac{a} {2}) < 0
\end{cases}
$$

It is easy to see that ##g## too must be a continuous function and differentiable in ##(0, a)##.

Observation 3: ##g'(x) = -\dfrac {f'(x)} {f(x)}##, and since ##f'(b) = 0, f(b) \neq 0##, we must have ##g'(b) = 0##
Observation 4: ##g(x)## tends to ##\infty## as ##x## tends to 0 or ##a## (##x## within the range ##[0,a]##)

(I don't have a proof for the following claim but it just appears to be "intuitively" correct). Since ##g(x)## is unbounded within a finite interval ##[0,a]##, the derivative ##g'(x)## cannot be bounded, so it can take unbounded, arbitrarily extreme negative or arbitrarily extreme positive values as ##x## tends to 0 or ##a## respectively. More specifically, since ##g(x)## goes from 0 to ##\infty## as ##x## goes from ##b## to ##0##, ##g'(x)## should tend to ##-\infty## as ##x## tends to ##0## from the right. So, if we choose an arbitrary bound ##v_0##, there must exist some ##p_0 \in (0, b)## such that ##g'(p_0) \lt v_0##.

Now, we can apply Darboux's theorem of real analysis since ##g## is differentiable. Since ##g'(b) = 0##, if ##g'(p_0) \leq v_0 \lt 0## for some ##p_0 \lt b##, then for any ##v_1 \in (v_0, 0)## there must exist a point ##c \in (p_0, b)## such that ##g'(c) = v_1##. Since we claimed that ##g'## is unbounded on the interval on which it is defined, we can choose ##v_0 = -2##. This implies that there must exist ##v_1 \in (p_0, b)## (##p_0## being the largest ##x \in (0, b)## for which ##g'(x) \leq v_0##) such that ##g'(v_1) = -1## (since ##-2 < -1 < 0##).

$$
g'(v_1) = -1 \Rightarrow -\dfrac {f'(v_1)} {f(v_1)} = -1 \Rightarrow \dfrac {f'(v_1)} {f(v_1)} =1 \\
\Rightarrow f'(v_1) = f(v_1)
$$
Thus there exists a point ##x = v_1 \in (0, 1)## (more precisely ##x = v_1 \in (0, b)##) for which ##f'(x) = f(x)##

(I got to know of Darboux's theorem only when trying to understand non-continuous derivatives after looking at this question - it is a theorem in real analysis, a topic that I have never studied as my math knowledge is mostly limited to high-school curriculum. So don't be surprised if I have interpreted or applied the theorem incorrectly :oops:)
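For a concrete sanity check of the statement being proved (the test function below is my own example, not taken from the argument above), take ##f(x)=x(1-x)##, which satisfies ##f(0)=f(1)=0##; a short numerical root search finds a point where ##f=f'##:

```python
from scipy.optimize import brentq

# Example function satisfying f(0) = f(1) = 0.
f = lambda x: x * (1.0 - x)
df = lambda x: 1.0 - 2.0 * x

# f(x) - f'(x) changes sign on (0, 1), so Brent's method finds a root there.
x0 = brentq(lambda x: f(x) - df(x), 1e-9, 1.0 - 1e-9)
print(x0, f(x0), df(x0))  # x0 ≈ 0.382, where f(x0) = f'(x0)
```

Here the exact solution is ##x_0=(3-\sqrt5)/2##, which indeed lies in ##(0,1)##.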
 
  • #133
@Adesh This is good. To make it complete, you should have a proof of your claim that a continuous bijection ##f:[0,1]\to [0,1]## must be strictly monotonic. You can also cut down a bit on the case-work by noting that ##f## decreasing contradicts ##f\circ f\circ f=\text{Id}.##

@Lament I think it is good now, but there are two details in your claim 2 that could be expanded upon. It's not immediately clear how you are applying IVT (not MVT), and how you are concluding that ##f## is strictly monotonic from the fact that it does not attain local extrema in its interior.

Since there have been a few nearly complete arguments, I'll post a solution. It should look a lot like several of the preceding posts.

First of all, ##f:[0,1]\to [0,1]## is a bijection because ##f\circ f## is inverse to ##f##. In particular, ##f## is injective. We check that a continuous injection ##f:[0,1]\to [0,1]## must be strictly monotonic.

Suppose for contradiction that ##f## is neither increasing nor decreasing. Since ##f## is not decreasing, there exist ##a,b\in [0,1]## such that ##a<b## and ##f(a)<f(b)##. Similarly, there are ##c,d\in [0,1]## such that ##c<d## and ##f(c)>f(d)## (we may take the inequalities to be strict by injectivity).

Suppose that ##a<c## and ##b<d##. Let ##M=\max(f(b),f(c))## and let ##p\in\{b,c\}## be the corresponding input. Let ##y## be a real number in the interval ##(\max(f(a),f(d)),M)##. By the intermediate value theorem, there exist ##x_1\in (a,p)## and ##x_2\in (p,d)## such that ##f(x_1)=y=f(x_2)##, contradicting injectivity. The other cases are analogous.

A very slick alternative way to prove this is to consider the set ##S=\{(x,y)\in [0,1]\times [0,1]: x>y\},## and define ##F:S\to\mathbb{R}## by ##F(x,y)=f(x)-f(y)##. Since ##S## is connected and ##F## is continuous, the image ##F(S)## is an interval in ##\mathbb{R}##. This interval does not contain ##0## by injectivity, so ##F(x,y)## is either always positive or always negative, corresponding to ##f## being increasing or decreasing, respectively.

From here, many of you have given a correct argument: The function ##f\circ f\circ f=\text{Id}## has the same monotonicity as ##f##, so ##f## must be increasing. Suppose for contradiction that ##f## is not the identity, so ##x\neq f(x)## for some ##x##, meaning either ##x<f(x)## or ##x>f(x)##. In the first case, ##x<f(x)<f(f(x))<f(f(f(x)))=x## gives a contradiction, and similarly if ##x>f(x)##.
 
  • #134
@archaic Your solution to problem ##8## looks correct, but there isn't any need to do estimates like this. It really is just induction like I promised!

We check by induction that ##p_n## has no roots when ##n## is even, and exactly one root when ##n## is odd:

The base case ##n=0## is clear. Now suppose that this claim is true for ##p_{n-1}##, and consider the polynomial ##p_n##.

If ##n## is odd, then ##p_n'=p_{n-1}## has no roots, and so is always positive. This means that ##p_n## is a strictly increasing function and so can have at most one root. Also ##\lim_{x\to\pm\infty}p_n(x)=\pm\infty## since ##n## is odd, so by the IVT, the polynomial ##p_n## does in fact have a root.

If instead ##n## is even, the polynomial ##p_n## must achieve a minimum value (since ##\lim_{x\to\pm\infty}p_n(x)=\infty##), say ##p_n(c)##. Then ##p_n'(c)=p_{n-1}(c)=0##, so ##p_n(c)=p_{n-1}(c)+c^n/n!=c^n/n!>0## because ##n## is even (and we can rule out ##c=0## as a special case). So ##p_n## has no real roots.
 
  • #135
Not anonymous said:
Let ##a \in (0, 1]## be the smallest positive number for which ##f(a) = 0 = f(0)##. Obviously, ##a## exists
I don't think I agree with this. Supposing that the zero set of ##f## is ##\{0\}\cup\{1,1/2,1/3,1/4,1/5,\ldots\}##, what is your ##a##?

Not anonymous said:
(I don't have a proof for the following claim but it just appears to be "intuitively" correct).
I think what you're trying to say is: If ##f:(a,b)\to\mathbb{R}## is differentiable, and ##f'## is bounded, then ##f## is bounded. You can show this with the mean value theorem.

The rest of your proof actually looks pretty good!
 
  • #137
Problem 10

If ##f\equiv 0## there is nothing to prove, so assume otherwise. It is then clear that there is an interval ##[a,b]\subseteq[0,1]## such that ##f(a)=f(b)=0,\quad |f|\mid_{(a,b)}>0##.
Consider a function ##F(x)=\ln|f(x)|-x##. We have ##F(x)\to-\infty## as ##x\to b-## or ##x\to a+##.
Thus ##F## attains a maximum at some ##\tilde x\in(a,b)##, and there ##F'(\tilde x)=0##, i.e. ##f'(\tilde x)=f(\tilde x)##.
This solves the problem.
 
  • #138
Problem 1 is a very very standard fact. I would propose to prove the following generalization.
Let ##(X,d_X)## and ##(Y,d_Y)## be metric spaces, with ##Y## complete. Let ##U\subset X## be a subset of ##X##.

Let a function ##f:U\to Y## be uniformly continuous:
For any ##\varepsilon>0## there is ##\delta=\delta(\varepsilon)## such that
$$d_X(x_1,x_2)<\delta\Longrightarrow d_Y(f(x_1),f(x_2))\le\varepsilon.$$
Then there is a unique continuous function ##\tilde f:\overline U\to Y## such that ##\tilde f\mid_U=f##. The function ##\tilde f## is uniformly continuous with the same ##\delta(\varepsilon)##.

(##\overline U## stands for the closure of ##U##)
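As a small illustration of the statement (my own example, not part of the problem), take ##X=Y=\mathbb{R}##, ##U=(0,1]## and ##f(x)=\sin(x)/x##, which is uniformly continuous on ##U##; the value of the extension ##\tilde f## at the closure point ##0## is recovered as the limit of ##f## along any sequence in ##U## tending to ##0##:

```python
import math

# f is uniformly continuous on U = (0, 1]; the extension to the closure [0, 1]
# assigns to 0 the limit of f along any sequence in U approaching 0.
f = lambda x: math.sin(x) / x

for k in (10, 100, 1_000, 10_000):
    print(k, f(1.0 / k))  # values approach 1.0, so the extension takes the value 1 at 0
```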
 
  • #139
Infrared said:
I don't think I agree with this. Supposing that the zero set of ##f## is ##\{0\}\cup\{1,1/2,1/3,1/4,1/5,\ldots\}##, what is your ##a##?

That is a nice counterexample! I was wary of infinity and yet not sure if my proof correctly dealt with infinity in its various forms. I missed the case of ##f## having an infinite number of zeros (except for the constant function ##f(x) = 0 \, \forall x \in [0,1]##). The reason why my attempted proof defined ##a## was to get an interval over which I could define ##g(x)##, specifically an interval whose endpoints would be zeros of ##f(x)## and where ##f## would be either strictly positive or strictly negative within the interval (except at the endpoints). Is it valid to assume that for any differentiable function ##f:[0,1] \rightarrow \mathbb{R}## that is not the constant-value function but has an infinite number of zeros, its zeros will form a countably infinite set? Or that for any such function, there will be some interval ##I \subset [0,1]## containing 2 or more zeros of ##f## with all zeros falling in that interval forming a finite or a countably infinite set (i.e. the set of zeros of ##f## contains a finite or countably infinite subset, with all remaining zeros, if any, lying outside the range defined by this finite or countably infinite subset)? If yes, will the following changes to the proof make it correct? (If my understanding of "countably infinite" is correct, the zero set example that you have provided is countably infinite).

Instead of choosing the range over which ##g## is defined as before, we now choose it to be ##[a_{i}, a_{i+1}] \subseteq [0,1]## where ##a_{i}, a_{i+1}## (##a_{i} \lt a_{i+1}##) are any two adjacent elements in the zero set of ##f## (meaning ##f## has no zeros in ##(a_{i}, a_{i+1})##).

##g:[a_{i}, a_{i+1}] \rightarrow \mathbb{R}## is defined as follows: $$g(x) =
\begin{cases}
-\log(f(x)) & \text{if } f(\frac{a_{i} + a_{i+1}} {2}) > 0 \\
-\log(-f(x)) & \text{if } f(\frac{a_{i} + a_{i+1}} {2}) < 0
\end{cases}
$$

We note that ##g(x)## tends to ##\infty## as ##x## tends to ##a_{i}## (from the right) or ##a_{i+1}## (from the left) (##x## within the range ##[a_{i},a_{i+1}]##). And by Rolle's theorem, there must exist some ##b \in (a_{i},a_{i+1})## such that ##f'(b) = 0##. The rest of the proof is essentially the same as before, with the main changes being the use of ##[a_{i},a_{i+1}]## instead of the earlier used ##[0,a]## and ##(a_{i}, b)## instead of the earlier used ##(0,b)##. The ranges in which ##p_0, v_0, v_1## etc. were defined in the earlier proof will change accordingly.
 
  • #140
wrobel said:
Problem 1 is a very very standard fact.
However, we neither have a Hilbert space nor is ##X## required to be complete in question 1. Hence it is actually close to what you have proposed instead.
 
