Math Challenge - April 2020

I'm sorry, but it's hard to read your solution. You need to use the formatting tools, as @Not anonymous did. Or, if you feel like it, you can learn LaTeX.
  • #71
archaic said:
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=1+\sum_{\substack{k=1\\ k\text{ odd}}}^{n-1}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)$$
I am going to prove that the parenthesis is always positive:
$$\begin{align*}
x^k\geq |x|^{k+1}&\Leftrightarrow\frac{x^k}{k!}>\frac{x^k}{(k+1)!}\geq\frac{|x|^{k+1}}{(k+1)!} \\
&\Leftrightarrow\frac{x^k}{k!}>\frac{|x|^{k+1}}{(k+1)!}\\
&\Leftrightarrow\frac{x^k}{k!}>\frac{x^{k+1}}{(k+1)!}>-\frac{x^k}{k!}\\
&\Leftrightarrow-\frac{x^k}{k!}<-\frac{x^{k+1}}{(k+1)!}<\frac{x^k}{k!}\\
&\Leftrightarrow0<\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}(x)>0## for all ##x\in\mathbb{R}##. From this I can say that ##p_n(x)## is a bijection, since it is strictly increasing, with no absolute/local maximum or minimum. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
I am not exactly sure, but maybe I can use a recursive algorithm. I'll think about it later.
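As a quick numerical sanity check of this conclusion (a sketch, not part of the proof; the helper function and tolerance are illustrative choices), one can count the real roots of ##p_n(x)=\sum_{k=0}^n \frac{x^k}{k!}## with numpy:

```python
import numpy as np
from math import factorial

def real_root_count(n, tol=1e-7):
    # Coefficients of p_n(x) = sum_{k=0}^n x^k/k!, highest degree first (numpy.roots convention).
    coeffs = [1 / factorial(k) for k in range(n, -1, -1)]
    roots = np.roots(coeffs)
    # Count roots whose imaginary part is numerically zero.
    return sum(abs(r.imag) < tol for r in roots)

for n in range(1, 11):
    print(n, real_root_count(n))  # expected: 1 real root for odd n, 0 for even n
```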
 
  • #72
@archaic If ##n## is even, then ##p_n## achieves a minimum at some point ##x_0##. Think about how to use the fact that ##p_n'(x_0)=0##.
 
  • #73
Infrared said:
@archaic If ##n## is even, then ##p_n## achieves a minimum at some point ##x_0##. Think about how to use the fact that ##p_n'(x_0)=0##.
I had thought of that, but I didn't know how to prove the uniqueness of the minimum.
 
  • #74
You don't need uniqueness. If ##x_0## is any point where ##p_n## achieves its minimum, then it's enough to show that ##p_n(x_0)>0##.

But uniqueness does follow from ##p_n'(x_0)=p_{n-1}(x_0)=0## since you know that ##p_{\text{odd}}## only has one real root.
 
  • #75
Infrared said:
But uniqueness does follow from ##p_n'(x_0)=p_{n-1}(x_0)=0## since you know that ##p_{\text{odd}}## only has one real root.
What? I do? Do you mean that I am allowed to say "since we know that ##p_{odd}(x)## has only one real root"?
I probably am misunderstanding you, since an odd polynomial of degree ##n## has up to ##n## real roots.
 
  • #76
By induction you can assume that ##p_{2n-1}## has exactly one root. Then arguing like this shows that ##p_{2n}## has no roots, which by your post shows that ##p_{2n+1}## has exactly one root, etc.
 
  • #77
Infrared said:
By induction you can assume that ##p_{2n-1}## has exactly one root. Then arguing like this shows that ##p_{2n}## has no roots, which by your post shows that ##p_{2n+1}## has exactly one root, etc.
But to argue like this, I should already have found out that ##p_{2n}(x)\geq0##, no (*)? Assuming that ##p_{2n-1}## has just one root only tells me that ##p_{2n}## has just one extremum.
I realized that my second-to-last post made no sense because of (*). Maybe that's why I tried to prove that ##p_{2n}(x)\geq0##.
 
  • #78
Yes, you need to show that if ##x_0## is an extremum for ##p_{2n}##, then ##p_{2n}(x_0)>0##. Think about how to use ##p_{2n}'(x_0)=p_{2n-1}(x_0)=0##.
 
  • #79
Infrared said:
Yes, you need to show that if ##x_0## is an extremum for ##p_{2n}##, then ##p_{2n}(x_0)>0##. Think about how to use ##p_{2n}'(x_0)=p_{2n-1}(x_0)=0##.
I tried supposing that ##p_n(x)## and ##p_{n-2}(x)## have only one root, which lets me say that ##p_{n-1}(x)\geq 0##. But I can't really say much about ##p_{n+1}(x)## (it might be negative in some interval) or ##p_{n-3}(x)##, which would be positive, though possibly wavy.
If I only suppose that ##p_n(x)## has a minimum, then ##p_{n-1}(x)## might also be squiggly.
 
  • #80
Do you want another hint? I think if I say much more, I might give away the solution.
 
  • #81
fresh_42 said:
6. If ##a_1,a_2 > 0## and ##m_1,m_2 ≥ 0## with ##m_1 + m_2 = 1## show that ##a_1^{m_1}a_2^{m_2} < m_1 a_1 + m_2 a_2##. (QQ)

I believe that the question inadvertently missed some conditions. With the specified criteria, it is possible for LHS of the inequality to be equal to the RHS. For example, if ##a_1 = a_2 = a##, we get ##a_1^{m_1}a_2^{m_2} = a^{m_1}a^{m_2} = a^{m_1+m_2} = a##, while the RHS of the question's inequality becomes ##m_1 a + (1-m_1) a = a##, so the inequality becomes an equality. Another example is when ##m_1 = 0##. Here the LHS becomes ##a_1^0 a_2^1 = a_2## and the RHS becomes ##0 \times a_1 + 1 \times a_2 = a_2##, again giving an equality. Therefore, I prove the inequality with the following additional or modified assumptions: ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.

Without loss of generality, let us assume that ##a_1 \lt a_2##. The proof for the opposite case, ##a_1 \gt a_2## will be similar but with the variable ##x## corresponding to ##\frac {a_2}{a_1}## instead of ##\frac {a_1}{a_2}##.

The given inequality is equivalent to ##\dfrac {a_1^{m_1}a_2^{1-m_1}} {m_1 a_1 + (1-m_1) a_2} < 1##,
or equivalently, obtained by dividing both numerator and denominator by ##a_2 = a_2^{m_1} a_2^{1-m_1}##,
$$\dfrac { \left( \dfrac{a_1}{a_2} \right)^{m_1}} {m_1 \left( \dfrac{a_1}{a_2} \right) + (1 - m_1)} < 1
$$

For a fixed value of ##m_1##, the LHS of the above inequality can be written as a function of ##x=\frac{a_1}{a_2}##, so we need to prove that ##f(x) = \dfrac {x ^ {p}} {x p + (1-p)} < 1## for ##x \in (0, 1)## and ##p = m_1 \in (0, 1)##.
Note that since ##a_1 \lt a_2 \text{ and } a_1, a_2> 0, \text{ we must have } x \in (0,1)##.

We now prove that ##f(x)## is a strictly increasing function of ##x## when ##0 \lt x \lt 1##.
$$
f'(x) = \dfrac {px^{p-1}}{x p + (1-p)} - \dfrac {px^p}{(x p + (1-p))^2} = \dfrac {px^{p-1}} {(x p + (1-p))^2} (px + (1-p) - x)= \\
\dfrac {px^{p-1}(1-p) (1-x)} {(x p + (1-p))^2}
$$

It is obvious that ##f'(x) \gt 0## for ##x \in (0, 1), p \in (0, 1)## since, when those conditions are met, both the numerator and denominator of the above expression for ##f'(x)## are positive. We also notice that ##f(1) = 1##. Since ##f(x)## is strictly increasing for ##0 \lt x \lt 1## and reaches a value of 1 when ##x=1##, it follows that ##f(x) \lt 1 \, \forall x \in (0, 1)##. Hence, the question's inequality is proved subject to the additional conditions mentioned earlier.
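A brief numerical spot check of the inequality (a sketch with randomly drawn values, not a proof; the sampling ranges are arbitrary choices):

```python
import random

random.seed(0)
for _ in range(5):
    a1, a2 = random.uniform(0.1, 10), random.uniform(0.1, 10)
    m1 = random.uniform(0.01, 0.99)
    m2 = 1 - m1
    lhs = a1**m1 * a2**m2    # weighted geometric mean
    rhs = m1 * a1 + m2 * a2  # weighted arithmetic mean
    print(f"{lhs:.4f} <= {rhs:.4f}: {lhs <= rhs}")
```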
 
  • #82
@Infrared I think the only possible way to prove that is by contradiction. I'm in need of some discussion (please, no hints); @mathwonk said something but I couldn't decode what he said.
 
  • #83
Not anonymous said:
I believe that the question inadvertently missed some conditions. With the specified criteria, it is possible for LHS of the inequality to be equal to the RHS. For example, if ##a_1 = a_2 = a##, we get ##a_1^{m_1}a_2^{m_2} = a^{m_1}a^{m_2} = a^{m_1+m_2} = a##, while the RHS of the question's inequality becomes ##m_1 a + (1-m_1) a = a##, so the inequality becomes an equality. Another example is when ##m_1 = 0##. Here the LHS becomes ##a_1^0 a_2^1 = a_2## and the RHS becomes ##0 \times a_1 + 1 \times a_2 = a_2##, again giving an equality. Therefore, I prove the inequality with the following additional or modified assumptions: ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.

Well, I was given this exercise as an undergrad some decades ago just as I'm giving it in this challenge thread, i.e., no modifications. While there are of course the cases you mention, the question is looking for the general case where ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.
 
  • #84
As Question ##6## is already solved, I find it advisable to post my own solution as well.

Let's take the function ##f: (0,+\infty) \rightarrow \mathbb{R}## with ##f(x) = \ln{x}##.
We have: ##f'(x) = \frac{1}{x}## and ##f''(x) = -\frac{1}{x^2} \lt 0 \hspace{0.5cm} \forall x \in (0,+\infty)##.

Suppose ##a_1 \lt a_2## ##(^*)##. We apply the Mean Value Theorem over the intervals ##[a_1, \frac{a_1 + ma_2}{1 + m}]## and ##[\frac{a_1 + ma_2}{1 + m}, a_2]##, where ##m \gt 0##. So ##\exists\, \xi_1 \in (a_1, \frac{a_1 + ma_2}{1 + m}),\ \xi_2 \in (\frac{a_1 + ma_2}{1 + m}, a_2)## such that

##\frac{f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2) - f(a_1)}{\frac{a_1 + ma_2}{1 + m} - a_1} = f'(\xi_1), \frac{f(a_2) - f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2)}{a_2 - \frac{a_1 + ma_2}{1 + m}} = f'(\xi_2)##.

We also have that ##f'## is a strictly decreasing function in ##[a_1, a_2]## as ##f'' \lt 0## in ##(0,+\infty)##.
Now, because ##\xi_1 \lt \xi_2## we have ##f'(\xi_1) \gt f'(\xi_2)##, so

##\frac{f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2) - f(a_1)}{\frac{m(a_2 - a_1)}{1 + m}} \gt \frac{f(a_2) - f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2)}{\frac{a_2 - a_1}{1 + m}}##.

Putting ##\frac{1}{1 + m} = m_1##, ##\frac{m}{1 + m} = m_2## so ##m_1, m_2 \in \mathbb{R}_+## and ##m_1 + m_2 = 1## are satisfied, we have

##\frac{\ln{(m_1a_1 + m_2a_2)} - \ln{a_1}}{m_2} \gt \frac{\ln{a_2} - \ln{(m_1a_1 + m_2a_2)}}{m_1}## so
##m_1\ln{(m_1a_1 + m_2a_2)} - m_1\ln{a_1} \gt m_2\ln{a_2} - m_2\ln{(m_1a_1 + m_2a_2)}##,

so ##(m_1 + m_2) \ln{(m_1a_1 + m_2a_2)} \gt m_1\ln{a_1} + m_2\ln{a_2}## hence

##\ln{(m_1a_1 + m_2a_2)} \gt \ln{(a_1^{m_1}a_2^{m_2})}##, so finally

##m_1a_1 + m_2a_2 \gt a_1^{m_1}a_2^{m_2}## Q.E.D.

##(^*)##: Although it could be that ##a_1 = a_2##, this would lead to an equality in the expression that is asked to be proved, but here only the inequality part is asked, so we implicitly assume that ##a_1 \neq a_2##. Also, if ##m_1 = 0## or ##m_2 = 0##, we again end up with an equality, but here only the inequality part is asked, so we implicitly assume that ##m_1, m_2 \neq 0##.
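For readers who want to see the two mean-value slopes concretely, here is a small numerical check of the key inequality between the difference quotients (the particular values of ##a_1, a_2, m## are arbitrary choices):

```python
from math import log

a1, a2, m = 2.0, 5.0, 3.0        # arbitrary choices with a1 < a2 and m > 0
c = (a1 + m * a2) / (1 + m)      # the intermediate point (a1 + m*a2)/(1 + m)
left_slope = (log(c) - log(a1)) / (c - a1)    # chord slope over [a1, c]
right_slope = (log(a2) - log(c)) / (a2 - c)   # chord slope over [c, a2]
print(left_slope, right_slope, left_slope > right_slope)  # expect True
```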
 
  • #85
QuantumQuest said:
Yes, I see what you mean but in order for other people to understand it, I think that it is best to show how to utilize the derivatives and the appropriate theorem in order to reach the point where ##\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)##.
The function ##\ln(x)## is continuous, its derivative ##d \ln (x) / dx = 1/x## is positive for ##x > 0##, and its second derivative ##d^2 \ln (x) / dx^2 = - 1/x^2## is negative for ##x > 0##. So the derivative is monotonically decreasing, the function curves downward, and as a consequence the line segment (chord) between any two points on the graph (for ##x > 0##) lies below or on the graph; see the attached figure. Specifically, for any point ##a_1 < x < a_2##, the height of the chord is less than the height of ##\ln(x)## (while they are equal at ##x=a_1## and ##x=a_2##). Therefore:

##
\ln (m_1 a_1 + (1-m_1) a_2) \geq m_1 \ln (a_1) + (1-m_1) \ln (a_2)
##

for ##0 \leq m_1 \leq 1##.
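To make the chord comparison explicit (a small added step in the same notation): the chord from ##(a_1,\ln a_1)## to ##(a_2,\ln a_2)##, evaluated at the convex combination ##x=m_1 a_1+(1-m_1)a_2##, has height
$$\ln a_1 + \frac{\ln a_2-\ln a_1}{a_2-a_1}\,(x-a_1)=\ln a_1+(1-m_1)(\ln a_2-\ln a_1)=m_1\ln a_1+(1-m_1)\ln a_2,$$
since ##x-a_1=(1-m_1)(a_2-a_1)##. So "the chord lies below the graph" is exactly the displayed inequality.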
 

Attachments

  • convex.jpg
  • #86
Question 9 :

Claim 1: ##f(x)## is one-one, i.e., ##f(x)=f(y)\Rightarrow x=y##.
Proof: $$f(x)=f(y)$$
$$\Rightarrow f(f(x))=f(f(y))$$
$$\Rightarrow f(f(f(x)))=f(f(f(y)))$$
$$\Rightarrow x=y$$
So, ##f(x)## is one-one.

Claim 2: Since ##f(x)## is continuous in the domain ##[0,1]##, ##f(x)## is either strictly increasing or strictly decreasing within the interval.
Proof: Suppose ##f(x)## is a concave downward function. Then there will be a maximum within the interval, viz., ##f(c)## for some value ##c## such that ##a(=0)<c<b(=1)##. As ##f(x)## is continuous in ##[a,c]## and ##[c,b]##, then ##f(c)>f(a)## and ##f(c)>f(b)##. So, by the intermediate value theorem, ##\exists\, x_1, x_2## such that ##x_2>x_1## and ##a<x_1<c,\ c<x_2<b## but ##f(x_2)>f(x_1)##, which is a contradiction since ##f(x)## is one-one.

It can be similarly proved that ##f(x)## cannot be a concave upward function.

Claim 3: ##f(x)## is strictly increasing.
Proof: If ##f(x)## is strictly decreasing, then for
$$y>x \Rightarrow f(y)<f(x)$$
$$\Rightarrow f(f(y))>f(f(x))$$
$$\Rightarrow f(f(f(y)))<f(f(f(x))) \Rightarrow y<x$$
a contradiction.
So, ##f(x)## is strictly increasing.

Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)\ne x##,
Then either ##f(x)>x## or ##f(x)<x##.

If
\begin{equation} f(x)>x \end{equation}
\begin{equation} \Rightarrow f(f(x))>f(x) \end{equation}
\begin{equation} \Rightarrow f(f(f(x)))>f(f(x)) \end{equation}
\begin{equation} \Rightarrow x>f(f(x)) \end{equation}
From (1), (2) and (4)
\begin{equation} x>f(f(x))>f(x)>x \end{equation}
which is impossible.

Similarly,
If
\begin{equation} f(x)<x \end{equation}
\begin{equation} \Rightarrow f(f(x))<f(x) \end{equation}
\begin{equation} \Rightarrow f(f(f(x)))<f(f(x)) \end{equation}
\begin{equation} \Rightarrow x<f(f(x)) \end{equation}
From (6), (7) and (9)
\begin{equation} x<f(f(x))<f(x)<x \end{equation}
which is impossible.
Hence,
$$f(x)=x$$

As the proof is general, the result is also true for ##f:\mathbb{R} \rightarrow \mathbb{R}##.
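As an illustration of Claims 3 and 4 (not part of the proof; the candidate functions are chosen here just for concreteness), a strictly decreasing candidate such as ##f(x)=1-x## and a strictly increasing non-identity candidate such as ##f(x)=x^2## both fail ##f(f(f(x)))=x##:

```python
candidates = {
    "1 - x (decreasing)": lambda x: 1 - x,
    "x**2 (increasing, not identity)": lambda x: x**2,
}
x = 0.3
for name, f in candidates.items():
    value = f(f(f(x)))
    print(name, value, "equals x?", abs(value - x) < 1e-12)  # both print False
```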
 
  • #87
It is given that $$f \left( f \left( f(x) \right) \right) = x$$ for all ##x \in [0,1]##. That “for all ##x \in [0,1]##” part is very important.

Let’s assume that ##f(x) \neq x## (that is, ##f## is not the identity function). Then ##f(x)## is either greater than or less than ##x## in some sub-interval of ##[0,1]##. Suppose ##f(x) \gt x## there, and let’s assume that ##f## is an increasing function (there is no possibility for it to be a constant function; it will either increase or decrease in some sub-interval, that same sub-interval in which it is greater/less than ##x##). Then
$$f \left( f(x) \right) \gt f(x) \gt x \implies f \left( f(x) \right) \gt x,$$
$$f \left( f \left( f(x) \right) \right) \gt f\left( f(x) \right) \implies x \gt f \left( f(x) \right).$$
That is a contradiction. Now, I said that ##f(x)## is greater than ##x## only in some sub-interval, but that’s enough, because ##f \left( f \left( f(x) \right) \right) = x## holds for all ##x \in [0,1]##, and we have reached a contradiction for at least some ##x##, therefore it’s enough.

Similarly, we can reach a contradiction if we assume ##f(x) \lt x## in some sub-interval, or if we assume ##f## to be decreasing; if it’s decreasing, the inequality signs would flip.
 
  • #88
Lament said:
Claim 2: Since ##f(x)## is continuous in the domain ##[0,1]##, ##f(x)## is either strictly increasing or strictly decreasing within the interval.
Proof: Suppose ##f(x)## is a concave downward function. Then there will be a maximum within the interval, viz., ##f(c)## for some value ##c## such that ##a(=0)<c<b(=1)##. As ##f(x)## is continuous in ##[a,c]## and ##[c,b]##, then ##f(c)>f(a)## and ##f(c)>f(b)##. So, by the intermediate value theorem, ##\exists\, x_1, x_2## such that ##x_2>x_1## and ##a<x_1<c,\ c<x_2<b## but ##f(x_2)>f(x_1)##, which is a contradiction since ##f(x)## is one-one.

It can be similarly proved that ##f(x)## cannot be a concave upward function.
You have the right idea here, but the wording is a little off. Your argument that having a maximum or a minimum contradicts the injectivity of ##f## is good, but concavity doesn't have much to do with it. Being "concave down" does not mean that a maximum is achieved on the interior (e.g. the square root function is concave down, but still strictly increasing on its domain). The last detail that you should check for this argument to be airtight is that a continuous function defined on an interval is either monotonic or has (interior) local extrema.

Lament said:
Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)\ne x##,
Then either ##f(x)>x## or ##f(x)<x##.

Why are the only three possible cases that ##f(x)=x, f(x)>x##, and ##f(x)<x## (for all ##x##)? Why can't it be mixed?
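To illustrate why the "mixed" case has to be ruled out rather than dismissed (an example chosen here for illustration; it does not satisfy ##f(f(f(x)))=x##), a continuous, strictly increasing ##f:[0,1]\to[0,1]## can perfectly well satisfy ##f(x)>x## on part of the interval and ##f(x)<x## on another part:

```python
from math import sin, pi

f = lambda x: x + 0.1 * sin(2 * pi * x)  # continuous and strictly increasing on [0, 1]
for x in (0.25, 0.75):
    print(x, f(x), "f(x) > x" if f(x) > x else "f(x) < x")
# prints f(x) > x at 0.25 and f(x) < x at 0.75
```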
 
  • #89
@Adesh Please fix your LaTeX spacing; currently part of your solution extends very far to the right and is hard to read.

Anyway, you suppose monotonicity and that ##x<f(x)## in some sub-interval ##J##. This doesn't imply that ##f(x)<f(f(x))## though, because ##f(x)## might not be in ##J##.

Adesh said:
Now, I said that ##f(x)## is greater than ##x## only in some sub-interval, but that’s enough, because ##f \left( f \left( f(x) \right) \right) = x## holds for all ##x \in [0,1]##, and we have reached a contradiction for at least some ##x##, therefore it’s enough.

I don't understand this part. Why is it enough for ##f(f(f(x)))## to be equal to ##x## for some ##x##, as opposed to all ##x##?

Edit: Sorry, my last sentence should read "I don't understand this part. Why is it enough for ##f(x)## to be equal to ##x## for some ##x##, as opposed to all ##x##?"
 
  • #90
Infrared said:
I don't understand this part. Why is it enough for ##f(f(f(x)))## to be equal to ##x## for some ##x##, as opposed to all ##x##?
Actually, I meant something else. I wanted to say that I proved the contradiction for some interval ##J##; in the other cases I would have to prove it for all ##x##, but proving it for even one single ##x## should suffice, because in the question the condition holds for all ##x##.
 
  • #91
I still don't understand. What if ##x## is not in such an interval ##J##?

And can you address my question about why ##f(f(x))>f(x)##?
 
  • #92
Infrared said:
And can you address my question about why ##f(f(x))>f(x)##?
Okay, you mean that ##f## may not be defined on ##f(x)## because we assumed ##f## to be increasing only on the sub-interval ##J##?
 
  • #93
Because you're only assuming that ##f## is increasing on the interval ##J##. If ##f(x)## is outside of the interval ##J##, then ##x<f(x)## does not imply ##f(x)<f(f(x))##.

It's not hard to fix this, though. You should say why ##f## must be strictly monotonic on the whole interval, not just on some sub-interval.
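For illustration, a toy construction (chosen here for concreteness; it does not satisfy the problem's hypotheses) shows how the implication can fail once ##f(x)## leaves ##J##:

```python
def f(t):
    # Increasing on J = [0, 0.3] with f(t) > t there, but small outside J.
    return t + 0.2 if t <= 0.3 else 0.05

x = 0.2                 # x is in J and x < f(x) = 0.4
print(x < f(x))         # True
print(f(x) < f(f(x)))   # False: f(x) = 0.4 lies outside J, and f(0.4) = 0.05
```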
 
  • #94
So ##f## is monotonically increasing because of injectivity. Say ##f(x)## intersects the function ##x## at a finite number of points which you use to define sub-intervals. Then ##f(x)## will alternate between convex down and convex up as you pass from one interval to the next. In a convex down interval, ##[a,b]##, you arrive at the contradiction that ##c \equiv f(f(f(c))) < c## for any ##c## such that ##a < c < b##. And in a convex up interval, ##[p,q]##, you arrive at the contradiction that ##r \equiv f(f(f(r))) > r## such that ##p < r < q##.
 
  • #95
julian said:
So ##f## is monotonically increasing because of injectivity.
Or decreasing

julian said:
Say ##f(x)## intersects the function ##x## at a finite number of points which you use to define sub-intervals.

There don't have to be only finitely many intersections.

julian said:
Then ##f(x)## will alternate between convex down and convex up as you pass from one interval to the next.
Why can't there be mixed concavity on each interval? The points where ##f(x)## changes concavity won't be the same points where ##f(x)=x## in general.

julian said:
In a convex down interval, ##[a,b]##, you arrive at the contradiction that ##c \equiv f(f(f(c))) < c## for any ##c## such that ##a<c<b##. And in a convex up interval,
I don't see how you are using concavity for this.

@archaic has an almost complete solution.
 
  • #96
Infrared said:
It's not hard to fix this, though. You should say why ##f## must be strictly monotonic on the whole interval, not just on some sub-interval.
If I prove the monotonicity, then how would I ensure that ##f## will be monotone on ##f(x)##? I'm thinking about it; I hope I will shortly come up with something.
 
  • #97
I'm not sure what you mean by "monotone on ##f(x)##", but @Lament gave an argument for why ##f## is (strictly) monotonic on all of ##[0,1]##, with some confusion about concavity. See posts 86 and 88.

You can assume that the range of ##f## lies inside of ##[0,1]## since this must be the case for ##f(f(f(x)))## to be defined.
 
  • #98
Infrared said:
I'm not sure what you mean by "monotone on ##f(x)##", but @Lament gave an argument for why ##f## is (strictly) monotonic on all of ##[0,1]##, with some confusion about concavity. See posts 86 and 88.
Even if ##f## is monotone, how would I go about proving that ##f \circ f## is also monotone?
 
  • #99
If ##f## is increasing, then ##f\circ f## is increasing. If ##f## is decreasing, then ##f\circ f## is increasing.
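Spelled out for the decreasing case (a small added check in the same notation): if ##f## is strictly decreasing and ##x<y##, then
$$x<y \;\Rightarrow\; f(x)>f(y) \;\Rightarrow\; f\big(f(x)\big)<f\big(f(y)\big),$$
so ##f\circ f## is strictly increasing. In the increasing case both applications of ##f## preserve the inequality.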
 
  • #100
Infrared said:
If ##f## is increasing, then ##f\circ f## is increasing. If ##f## is decreasing, then ##f\circ f## is increasing.
I will prove that part (the implication that ##x \lt f(x) \implies f(x) \lt f(f(x))##).
 
  • #101
Infrared said:
Or decreasing
There don't have to be only finitely many intersections.

Why can't there be mixed concavity on each interval? The points where ##f(x)## changes concavity won't be the same points where ##f(x)=x## in general.

I don't see how you are using concavity for this.

@archaic has an almost complete solution.

Increasing because of claim 3 above.

Convex up implies ##r \equiv f(f(f(r))) > r## for ##p< r < q## from attached figure. Seems the only way to not have a contradiction is if ##f(x) = x##.
 

Attachments

  • convexup.jpg
  • #102
fresh_42 said:
9. Let ##f:[0,1]\to[0,1]## be a continuous function such that ##f(f(f(x)))=x## for all ##x\in[0,1]##. Show that ##f## must be the identity function ##f(x)=x##. (IR)
Saying ##f(f(f(x)))=x## is like saying ##f(u\in[0,1])=x##. As such, ##f:[0,1]\to[0,1]## is bijective, and ##f(f(x))=f^{-1}(x)##. Repeating this step twice more, we get that:$$f^{-1}(f^{-1}(f^{-1}(x)))=x\Leftrightarrow f(f(f(x)))=f^{-1}(f^{-1}(f^{-1}(x)))$$
Since ##f## is a continuous bijection on ##[0,1]##, then it is either strictly decreasing or strictly increasing, the same for its inverse (which has the same direction of variation as ##f##).

Suppose that ##f## is not the identity function, then there exists ##a\in[0,1]## such that ##f(a)\neq f^{-1}(a)##. Let ##f(a)=b## and ##f^{-1}(a)=c##, then:$$f(f(f(a)))=f^{-1}(f^{-1}(f^{-1}(a)))\Leftrightarrow f(f(b))=f^{-1}(f^{-1}(c))$$
Since ##f(x)## and its inverse are bijections, then ##f(b)=d\neq f(a)=b## and ##f^{-1}(c)=e\neq f^{-1}(a)=c##.
We get:$$f(f(b))=f^{-1}(f^{-1}(c))\Leftrightarrow f(d)=f^{-1}(e)$$
Let us go back to our original equation and choose ##x=a##.
$$f(f(f(a)))=f(f(b))=f(d)=f^{-1}(e)\neq a=f^{-1}(c)$$
That ##f^{-1}(e)\neq a## is because the inverse of ##f## is a continuous bijection and ##a=f^{-1}(c)##.
Therefore, ##f(x)## must be the identity function.
 
  • #103
julian said:
Increasing because of claim 3 above.

Convex up implies ##r \equiv f(f(f(r))) > r## for ##p< r < q## from attached figure. Seems the only way to not have a contradiction is if ##f(x) = x##.

What is "claim 3"?

I'm still not sure how you're using convexity. Just ##f## being increasing is enough to say that ##x<f(x)## implies ##f(x)<f(f(x))## etc.
 
  • #104
@archaic I didn't read your new argument carefully, but it doesn't look like it uses the continuity of ##f##, and so it cannot be right. A discontinuous counterexample is: Let ##f(x)=x## except when ##x=0,1/2,1##, for which we set ##f(0)=1/2, f(1/2)=1,## and ##f(1)=0##. Let me know if you still want me to read through it.

You had everything you needed in your old argument, just needed to clean some things up.
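A quick check that this discontinuous ##f## indeed satisfies ##f(f(f(x)))=x## on ##[0,1]## (a sketch that samples a few points):

```python
special = {0.0: 0.5, 0.5: 1.0, 1.0: 0.0}

def f(x):
    # Identity except on the 3-cycle 0 -> 1/2 -> 1 -> 0.
    return special.get(x, x)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, f(f(f(x))) == x)  # True for every sample
```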
 
  • #105
Infrared said:
What is "claim 3"?

I'm still not sure how you're using convexity. Just ##f## being increasing is enough to say that ##x<f(x)## implies ##f(x)<f(f(x))## etc.

@Lament's post #86, claim 3.

I think a combination of convex up in ##[p,q]## + no local maximum anywhere in the interior of ##[0,1]## is enough to say ##x<f(x)## implies ##f(x)<f(f(x))## etc.
 
