Find ##f(x)## such that ##f(f(x))=\log_ax##

In summary, the conversation was about extending the definition of superlogarithms and finding a function that satisfies the equation ##f(f(x)) = \log_ax##. The speaker proposed using functions such as ##f(x) = \frac{x}{a^{1/2}}##, ##g(x) = \frac{x}{a^{1/3}}##, etc., to calculate the number of times these functions can be applied without getting a number smaller than 1. This can then be used to define ##\operatorname{slog}_ax## as ##k_1 + \frac{k_2}{2} + \frac{k_3}{3} + \dots##, where ##k_i## represents the number of times the ##i##-th such function can be applied before the result drops below 1.
  • #36
Kumar8434 said:
Substituting, f(x) = x and hence
You mean ##f(x) = y##?
 
  • #37
Stavros Kiri said:
You mean ##f(x) = y##?
Does it make a difference?
 
  • #38
Kumar8434 said:
Does it make a difference?
Well, f(x) = x is not always true. Are you taking an example?
All we know is that f(f(x)) = x.
 
  • #39
Stavros Kiri said:
Well, f(x) = x is not always true. Are you taking an example?
All we know is that f(f(x)) = x.
Ok, so I substitute ##f(x)=y## and hence ##x=f^{-1}(y)=f(y)## which gives me:
$$f''(f(y))=-\frac{f''(y)}{(f'(y))^3}$$
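For the record, that relation drops straight out of differentiating ##f'(f(x))\cdot f'(x)=1## once more (and then relabelling ##x## as ##y##):
$$\frac{d}{dx}\left[f'(f(x))\,f'(x)\right]=f''(f(x))\left(f'(x)\right)^2+f'(f(x))\,f''(x)=0\;\Longrightarrow\;f''(f(x))=-\frac{f''(x)}{\left(f'(x)\right)^3},$$
using ##f'(f(x))=1/f'(x)## in the last step.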
But there are still functional arguments. I can't figure out any way to get rid of expressions like ##f''(f(y))##. If I get rid of them by some substitution, then maybe I would get a solvable differential equation.
 
  • #40
Instead, for easier algebra, I used ##(f\circ g)' = (f'\circ g)\cdot g'##. I got the same result: ##f'(f(x))\cdot f'(x) = 1##.
That's the one we've got to solve, by appropriate change of variable. We want to keep it 1st order.
 
  • #41
Stephen Tashi said:
I'd say it is not difficult to solve such nonlinear equations numerically.

It is also possible to solve such equations symbolically, although the theory behind that is a fairly "deep" subject (e.g. Gröbner bases or Wu's method of elimination).
I don't know of any tool that versatile in C++ or any other programming system.

The term "the C++ library" can have various meanings. There are special mathematical libraries written for C++ such as "Symbolic C++" ( https://en.wikipedia.org/wiki/SymbolicC++ ). These libraries are not "standard" components of C++, but many are free software and simple to incorporate in C++ programs.

C++ has some "built-in" functions. There is also a library for C++ called "The Standard Template Library" which is, as the name suggests, a "standard" part of C++.
So, symbolic or numerical, is there a way to write such a program or not? If equation-solving tools don't already exist in the library, then I don't think anyone can design a computer program to solve complicated non-linear equations like these.
 
  • #42
Stavros Kiri said:
Instead, for easier algebra, I used ##(f\circ g)' = (f'\circ g)\cdot g'##. I got the same result: ##f'(f(x))\cdot f'(x) = 1##.
Yeah, I told you that I always arrive at that whatever I do. First I differentiated both sides of ##f(f(x))=x## which gave me this equation, then I differentiated both sides of ##f(x)=f^{-1}(x)##, again the same equation. Then, I did all that and again arrived at the same thing.
 
  • #43
Kumar8434 said:
Ok, so I substitute ##f(x)=y## and hence ##x=f^{-1}(y)=f(y)## which gives me:
$$f''(f(y))=-\frac{f''(y)}{(f'(y))^3}$$
But there are still functional arguments. I can't figure out any way to get rid of expressions like ##f''(f(y))##. If I get rid of them by some substitution, then maybe I would get a solvable differential equation.
I don't think going to a 2nd-order differential equation is a good idea here.
Kumar8434 said:
Yeah, I told you that I always arrive at that whatever I do. First I differentiated both sides of ##f(f(x))=x## which gave me this equation, then I differentiated both sides of ##f(x)=f^{-1}(x)##, again the same equation. Then, I did all that and again arrived at the same thing.
Good. And I'll quote my edited post above:
Stavros Kiri said:
Instead, for easier algebra, I used ##(f\circ g)' = (f'\circ g)\cdot g'##. I got the same result: ##f'(f(x))\cdot f'(x) = 1##.
That's the one we've got to solve, by appropriate change of variable. We want to keep it 1st order.
 
  • #44
I checked that ##x##, ##c-x## and ##a/x## do indeed satisfy this equation. Are these the only ones? That's what we are actually trying to find out or prove, by attempting to actually solve that equation, despite the functional arguments! That's the real challenge of 'functional relations problems' ...
 
  • #45
Stavros Kiri said:
I checked that ##x##, ##c-x## and ##a/x## do indeed satisfy this equation. Are these the only ones? That's what we are actually trying to find out or prove, by attempting to actually solve that equation, despite the functional arguments! That's the real challenge of 'functional relations problems' ...
I think there are infinitely many functions satisfying ##f(f(x))=x##: https://en.wikipedia.org/wiki/Involution_(mathematics)
 
  • #46
Kumar8434 said:
I think there are infinitely many functions satisfying ##f(f(x))=x##: https://en.wikipedia.org/wiki/Involution_(mathematics)
The arbitrariness of the constants makes them infinite anyway. There they basically talk about self-inverse functions. In complex functions, QM and Hilbert spaces this is common, e.g. with operators (e.g. unitary and self-conjugate operators ...). But here we are limited to real functions. However ... wow, that wiki reference you dug up is good! I have to admit I hadn't seen it. My favourite one there is
##f(x) = \ln\dfrac{e^x+1}{e^x-1}##, for ##x>0##,
which, I have to admit, defeats our limited claim, despite the arbitrary constants. We have to find a more general solution.

P.S. Here is another example: ##f(x) = \frac{x+1}{x-1}##
[ ##f^{-1}(y) = \frac{y+1}{y-1} = f(y)## ]
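A quick check that the logarithmic example above really is its own inverse (for ##x>0##, so that ##e^x>1##): exponentiating,
$$e^{f(x)}=\frac{e^x+1}{e^x-1}\;\Longrightarrow\;e^{f(f(x))}=\frac{\frac{e^x+1}{e^x-1}+1}{\frac{e^x+1}{e^x-1}-1}=\frac{2e^x}{2}=e^x,$$
so ##f(f(x))=x##.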
 
Last edited:
  • #47
Another problem (just thought of, easier but good for practice): e.g. find all real ##f## such that ##f(f(x)) = f(x)##. That's easier. a) If ##f## is invertible, then ##f^{-1}[f(f(x))] = f^{-1}(f(x)) =## the identity function, i.e. ##1(x)=x##, by definition of inversion. The left-hand side easily becomes ##f(x)##, which thus equals ##1(x)##. And here, this time, there are no arbitrary constants and no other functions either; that is now a proven fact.
However, b) if ##f## is not invertible, then e.g. ##f(x) = \mathrm{const.}## is obviously also a solution.

Then there is your problem with the log (really interesting!), but I haven't worked on that yet.
However, I suspect (or my feeling is) that there are only numerical solutions. But one needs to prove that.

But of course not all functional relations problems have an explicit closed-form solution, or even a solution at all, for that matter. There are all sorts of categories! ...

I'll give you my two favourites another time, or we can put them in another thread.
 
Last edited:
  • #48
A)
Stavros Kiri said:
We have to find a more general solution.
We started as:
Stavros Kiri said:
"Find all real continuous functions that f(f(x)) = x".
In a similar manner as above (#47), we have: a) Assume ##f## is invertible. Then
##f^{-1}[f(f(x))] = f^{-1}(x)##,
which easily yields ##f(x) = f^{-1}(x)##.
That's the involution* condition, as pointed out in
https://en.wikipedia.org/wiki/Involution_(mathematics)
The large class of real functions satisfying that condition can at least be depicted or represented graphically as "all the invertible functions whose x-y graph is symmetric about the diagonal line ##y=x##".
So it's a huge class of functions ... (as we can easily construct a whole bunch of them [e.g. pick a continuous shape, make it symmetric about ##y=x## ... and you name it ...] ...).

* a function that is its own inverse

b) Assume ##f## is not invertible. E.g. the constant function is obviously not a solution here, but I did not pursue this case any further.

B) Now about your original log problem
Kumar8434 said:
I was thinking about extending the definition of superlogarithms. I think maybe that problem can be solved if we find a function ##f## such that ##f\circ f(x)=\log_ax##. Is there some way to find such a function? Maybe the Taylor series could be of some help. Or is there some method to find a function such that ##f(f(x))## is at least approximately equal to ##\log x##?
Try applying the same technique and differentiate both sides to get a differential equation (then perhaps look for an appropriate change of variable) ... but I doubt it can be solved explicitly. I think perhaps the best way to go is your numerical methods ...
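For concreteness, differentiating ##f(f(x))=\log_ax## once, in the same way as for the involution case, gives
$$f'(f(x))\cdot f'(x)=\frac{1}{x\ln a},$$
which again carries the functional argument ##f(x)## inside ##f'##, so the same obstruction as before appears.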
[+/or cf.
https://www.physicsforums.com/threads/find-f-x-such-that-f-f-x-log_ax.903097/#post-5697290 ]
 
Last edited:
  • #49
Stavros Kiri said:
A)

We started as:

In a similar manner as above (#47), we have: a) Assume ##f## is invertible. Then
##f^{-1}[f(f(x))] = f^{-1}(x)##,
which easily yields ##f(x) = f^{-1}(x)##.
That's the involution* condition, as pointed out in
https://en.wikipedia.org/wiki/Involution_(mathematics)
The large class of real functions satisfying that condition can at least be depicted or represented graphically as "all the invertible functions whose x-y graph is symmetric about the diagonal line ##y=x##".
So it's a huge class of functions ... (as we can easily construct a whole bunch of them [e.g. pick a continuous shape, make it symmetric about ##y=x## ... and you name it ...] ...).

* a function that is its own inverse

b) Assume ##f## is not invertible. E.g. the constant function is obviously not a solution here, but I did not pursue this case any further.

B) Now about your original log problem

Try applying the same technique and differentiate both sides to get a differential equation (then perhaps look for an appropriate change of variable) ... but I doubt it can be solved explicitly. I think perhaps the best way to go is your numerical methods ...
[+/or cf.
https://www.physicsforums.com/threads/find-f-x-such-that-f-f-x-log_ax.903097/#post-5697290 ]
I don't think this problem can be solved explicitly either. If it could be, someone would have done it by now. Approximate functions are our only option.
 
  • #50
Kumar8434 said:
I don't think this problem can be solved explicitly either. If it could be, someone would have done it by now. Approximate functions are our only option.
I tend to agree.
 
  • #51
Stephen Tashi said:
I'd say it is not difficult to solve such nonlinear equations numerically.

It is also possible to solve such equations symbolically, although the theory behind that is a fairly "deep" subject (e.g. Gröbner bases or Wu's method of elimination).
I don't know of any tool that versatile in C++ or any other programming system.

The term "the C++ library" can have various meanings. There are special mathematical libraries written for C++ such as "Symbolic C++" ( https://en.wikipedia.org/wiki/SymbolicC++ ). These libraries are not "standard" components of C++, but many are free software and simple to incorporate in C++ programs.

C++ has some "built-in" functions. There is also a library for C++ called "The Standard Template Library" which is, as the name suggests, a "standard" part of C++.
The project just got simpler. Now forget the polynomials. Could you get approximate functions, such that ##f(f(x))=\log_{a^a}x##, by the method suggested by jostpuur, and see if ##f(a)## converges towards 1? Similarly, get a function ##g(x)## such that ##g(g(g(x)))=\log_{a^{a^a}}x## and see if ##g(a)## converges towards 1. I don't think that should be much of a problem.
jostpuur said:
If you know some programming language, you could write an iteration that seeks to minimize the function

[tex]
\mathcal{F}(f) = \int\limits_{\epsilon}^R \big(f\big(f(x)\big) - \ln(x)\big)^2dx
[/tex]

numerically, approximating the function [itex]f[/itex] as some array. The result might tell something.
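For what it's worth, a minimal sketch of what such a minimization could look like, assuming Python with NumPy and SciPy; the interval ##[\epsilon, R]##, the grid size, the linear interpolation used to evaluate ##f(f(x))##, the starting guess and the least-squares solver are all illustrative choices, not something prescribed by the functional itself:

[code]
# A rough numerical sketch of minimizing the functional above (illustrative only).
# Assumptions: Python 3 with NumPy and SciPy; f is represented by its values on a
# uniform grid over [eps, R] and evaluated in between by linear interpolation.
import numpy as np
from scipy.optimize import least_squares

eps, R, n = 0.5, 10.0, 80          # integration range and grid size (arbitrary choices)
xs = np.linspace(eps, R, n)
dx = xs[1] - xs[0]

def residuals(fvals):
    # fvals[i] is f(xs[i]); f(f(x)) is obtained by interpolating the same array,
    # with values of f(x) outside [eps, R] crudely clipped back onto the grid.
    outer = np.interp(np.clip(fvals, eps, R), xs, fvals)
    return (outer - np.log(xs)) * np.sqrt(dx)   # sum of squares ~ the integral F(f)

f0 = 0.5 * (xs + np.log(xs))       # arbitrary starting guess for the array
res = least_squares(residuals, f0)
print("discretized F(f) at the minimum found:", np.sum(res.fun ** 2))
[/code]

The clipping is the crudest possible way to handle values of ##f(x)## that fall outside the grid; how to treat those points is exactly the kind of issue a serious attempt would have to think about.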
 
  • #52
Kumar8434 said:
No. I don't know any programming language.
Kumar8434 said:
I'm in my final year of high school. I've just learned some of the advanced concepts from the internet. That's all.

Everybody (almost) must know how to write programs. If you are only at high school, well then it is understandable that you haven't learned programming yet, but you should start a programming hobby as soon as possible.

Kumar8434 said:
I don't know if it can be done by programming. But could you check one thing for me, if it is possible with programming and if it's not too much work?
Kumar8434 said:
Thanks. That would be great. Please reply if you get something.
Kumar8434 said:
I don't think that should be much of a problem.

You have a severe attitude problem if you are hoping that other people would start writing some programs to study your problem. I know fully well how to write a program that would minimize the function [itex]\mathcal{F}(f)[/itex] I mentioned earlier, but I'm not going to spend my time on it now, because getting some numerical results and graphs about this [itex]f[/itex] isn't very important to me.

Usually the relationship between the young and energetic, and the old and wise, works so that on their own the young and energetic don't know what to do with their enthusiasm, and then the old guys advise them on how to use their energy in a purposeful way. You should appreciate the fact that I even mentioned the existence of the function [itex]\mathcal{F}(f)[/itex], because on your own you would never have realized that that kind of function could be written down to be minimized.
 
  • #53
jostpuur said:
Everybody (almost) must know how to write programs. If you are only at high school, well then it is understandable that you haven't learned programming yet, but you should start a programming hobby as soon as possible.

You have a severe attitude problem if you are hoping that other people would start writing some programs to study your problem. I know fully well how to write a program that would minimize the function [itex]\mathcal{F}(f)[/itex] I mentioned earlier, but I'm not going to spend my time on it now, because getting some numerical results and graphs about this [itex]f[/itex] isn't very important to me.

Usually the relationship between the young and energetic, and the old and wise, works so that on their own the young and energetic don't know what to do with their enthusiasm, and then the old guys advise them on how to use their energy in a purposeful way. You should appreciate the fact that I even mentioned the existence of the function [itex]\mathcal{F}(f)[/itex], because on your own you would never have realized that that kind of function could be written down to be minimized.
He only asked for people's help here. There is nothing wrong with that. That's why PF is here. Whoever wants to help does. We are all volunteers here.
And the function that you mention (actually a functional) is not a secret (he would have found it sooner or later ... by studying); it's just standard variational calculus.
 
  • #54
Kumar8434 said:
Could you get approximate functions, such that ##f(f(x))=\log_{a^a}x##, by the method suggested by jostpuur, and see if ##f(a)## converges towards 1?
Jostpuur suggested a goal, not a specific method. Before charging off to write programs, I prefer to think about the problem a little longer.

One possibility is that there is no continuous function ##f(x)## such that ##f(f(x))## approximates ##\log_a(x)## well everywhere in the set of values ##(0,\infty)##. For example, perhaps when we create a function that does the approximation well on the set of values ##(0, 2a)##, the error gets worse and worse for larger values of ##x##.

Here is an amusing way to think about the problem. Suppose I try to find f(x) by creating a table of values. I pick the value x_0 arbitrarily and also pick the value of f(x_0) arbitrarily. I know that I want f(f(x_0)) = log(x_0), so the table looks like:

x, f(x), f(f(x))
----------------
1) x_0, f(x_0), log(x_0)

Then for a second row, I use the value x_1 = f(x_0). I already know that f(f(x_0)) = log(x_0). I also know that I want f(f(x_1)) = log(x_1), so all three entries of the second row are determined:

x, f(x), f(f(x))
----------------
1) x_0, f(x_0), log(x_0)
2) x_1 = f(x_0), log(x_0), log(x_1) = log(f(x_0))

Using x_2 = f(f(x_0)) = log(x_0) to form a third row, I get:

x, f(x), f(f(x))
----------------
1) x_0, f(x_0), log(x_0)
2) x_1 = f(x_0), log(x_0), log(x_1)
3) x_2 = log(x_0), log(x_1), log(x_2)

If I continue to follow this process, the only arbitrary choices I made are the values of x_0 and f(x_0). The rest of the table is determined by those two values. As an approximation method, this doesn't give us much control over which x values appear in the table.
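As an illustration, a few lines of Python carry out exactly this table-building process (the starting values for x_0 and f(x_0) below are arbitrary, and log is taken to be the natural log; both are just illustrative choices):

[code]
# Sketch of the table-building idea: only x_0 and f(x_0) are chosen freely;
# every later row is forced by the requirement f(f(x)) = log(x).
import math

x, fx = 3.0, 1.7               # arbitrary choices for x_0 and f(x_0)
rows = []
for _ in range(8):
    if x <= 0:                 # the forced values may leave the domain of the real log
        break
    ffx = math.log(x)          # forced: f(f(x)) must equal log(x)
    rows.append((x, fx, ffx))
    x, fx = fx, ffx            # next row: x_{n+1} = f(x_n) and f(x_{n+1}) = log(x_n)

for row in rows:
    print("x = %9.5f   f(x) = %9.5f   f(f(x)) = %9.5f" % row)
[/code]

Each printed row reproduces one row of the table above, which also makes the lack of control over the x values plainly visible.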
 
  • #55
jostpuur said:
Everybody (almost) must know how to write programs. If you are only at high school, well then it is understandable that you haven't learned programming yet, but you should start a programming hobby as soon as possible.

You have a severe attitude problem if you are hoping that other people would start writing some programs to study your problem. I know fully well how to write a program that would minimize the function [itex]\mathcal{F}(f)[/itex] I mentioned earlier, but I'm not going to spend my time on it now, because getting some numerical results and graphs about this [itex]f[/itex] isn't very important to me.

Usually the relationship between the young and energetic, and the old and wise, works so that on their own the young and energetic don't know what to do with their enthusiasm, and then the old guys advise them on how to use their energy in a purposeful way. You should appreciate the fact that I even mentioned the existence of the function [itex]\mathcal{F}(f)[/itex], because on your own you would never have realized that that kind of function could be written down to be minimized.
I fully appreciate your effort. I don't know advanced programming concepts yet. I haven't even started arrays. So I thought that, instead of working on my skills for months and then checking my work, I'd ask Stephen Tashi to quickly check it, only if it wasn't too much work. As he later mentioned that the task was difficult, I didn't push him any further.
 
  • #56
Stephen Tashi said:
Jostpuur suggested a goal, not a specific method. Before charging off to write programs, I prefer to think about the problem a little longer.

One possibility is that there is no continuous function ##f(x)## such that ##f(f(x))## approximates ##\log_a(x)## well everywhere in the set of values ##(0,\infty)##. For example, perhaps when we create a function that does the approximation well on the set of values ##(0, 2a)##, the error gets worse and worse for larger values of ##x##.

Here is an amusing way to think about the problem. Suppose I try to find f(x) by creating a table of values. I pick the value x_0 arbitrarily and also pick the value of f(x_0) arbitrarily. I know that I want f(f(x_0)) = log(x_0), so the table looks like:

x, f(x), f(f(x))
----------------
1) x_0, f(x_0), log(x_0)

Then for a second row, I use the value x_1 = f(x_0). I already know that f(f(x_0)) = log(x_0). I also know that I want f(f(x_1)) = log(x_1), so all three entries of the second row are determined:

x, f(x), f(f(x))
----------------
1) x_0, f(x_0), log(x_0)
2) x_1 = f(x_0), log(x_0), log(x_1) = log(f(x_0))

Using x_2 = f(f(x_0)) = log(x_0) to form a third row, I get:

x, f(x), f(f(x))
----------------
1) x_0, f(x_0), log(x_0)
2) x_1 = f(x_0), log(x_0), log(x_1)
3) x_2 = log(x_0), log(x_1), log(x_2)

If I continue to follow this process, the only arbitrary choices I made are the values of x_0 and f(x_0). The rest of the table is determined by those two values. As an approximation method, this doesn't give us much control over which x values appear in the table.
I think we can get exact functions ##f(x)## such that ##f(f(x))=\log x## using limits. If we assume ##f(x)## to be of degree ##n##, then ##f(f(x))## will be of degree ##n^2##. Now, we ignore the terms containing powers of ##x## greater than ##n##, and then we solve ##n+1## equations in ##n+1## variables to get the coefficients. Since we're working with a symbolic ##n##, I think the coefficients will be functions of ##n##. So, the exact coefficients can be obtained by applying ##\lim_{n\rightarrow \infty}## to those functions. But, of course, I'm not saying that solving those seriously complicated non-linear equations in terms of ##n## is practically possible.
 
  • #57
Stavros Kiri said:
I tend to agree.
Could there be a family of functions, one for each point, such that ##f(f(x))=\log_ax##?
For example, by using a first-degree polynomial, I got this function, which, when applied twice to ##c##, gives you ##a^c##:
$$x\sqrt{a^c\log_ea}+\frac{a^c(1-c\log_ea)}{\sqrt{a^c\log_ea}+1}$$
The problem is that it itself depends upon ##c##. So, I guess there are different functions for different points.
Similarly, this function, when applied ##n## times to ##c##, gives you exactly ##a^c##:
$$x(a^c\log_ea)^{\frac{1}{n}}+\frac{a^c(1-c\log_ea)}{\sum_{k=0}^{n-1}(a^c\log_ea)^{\frac{k}{n}}}$$
But the problem is that these functions themselves depend upon ##c##.
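As a sanity check, a short Python sketch can apply this linear map ##n## times to ##c## and compare the result with ##a^c## (the values of ##a##, ##c## and ##n## below are arbitrary illustrative choices):

[code]
# Numerical check that n applications of f(x) = m*x + b, with m and b taken
# from the formula above, send c to a**c.
import math

a, c, n = 2.0, 1.3, 7
m = (a**c * math.log(a)) ** (1.0 / n)                           # (a^c * ln a)^(1/n)
b = a**c * (1 - c * math.log(a)) / sum(m**k for k in range(n))  # the constant term

x = c
for _ in range(n):
    x = m * x + b

print(x, a**c)   # the two values should agree up to rounding error
[/code]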
 
  • #59
SlowThinker said:
Kumar, you should try to read the document
http://tetration.org/IF.pdf
again. If you get past the unfortunate notation, it seems to answer your questions.
It looks as if the author has some good knowledge of the subject; in particular, it seems obvious that the Taylor expansion should be performed at the function's fixed point(s) (0.318132 + 1.337236i for log(x)).
I tried to read it again, but it's too hard to read something when you don't understand more than half of the terms. He said that it involves college-level maths. So I've saved it and I'll read it again when I get to that level. I know that, in the paper, he proves that functional roots exist and are unique. But how it's proved is far beyond my knowledge.
 
  • #60
A random thought:

Sometimes "changing coordinates" or "changing variables" is useful. What would such a "changing" be in the problem at hand?

In group theory, there is the notion of the "inner automorphisms" of a group. Any group can be regarded as a set of functions. Applying the idea of an "inner automorphism", let ##g(x)## be a given 1-to-1 function with inverse function ##g^{-1}(x)##. Define the transformation ##T(f(x)) = g(f(g^{-1}(x)))##. So ##T## is a transformation that changes a function ##f## into another function.

The basic daydream goes like this. Suppose we can find a function ##g## such that ##T## transforms ##\ln(x)## into some function ##h(x)## that is easy to express as ##h(x) = s(s(x))##. Then we could use the inverse transformation ##T^{-1}(s(x)) = g^{-1}(s(g(x))) = r(x)## to express ##\ln(x)## as ##\ln(x) = r(r(x))##.
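For what it's worth, the daydream checks out formally: writing ##h(x) = g(\ln(g^{-1}(x))) = s(s(x))## and ##r(x) = g^{-1}(s(g(x)))##, we get
$$r(r(x)) = g^{-1}\big(s(g(g^{-1}(s(g(x)))))\big) = g^{-1}\big(s(s(g(x)))\big) = g^{-1}\big(h(g(x))\big) = g^{-1}\big(g(\ln x)\big) = \ln x,$$
so the whole difficulty is shifted onto finding a ##g## for which ##T(\ln)## has an obvious functional square root ##s##.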

I don't know if this really helps solve the problem in the thread title, but at least it is an excuse to review "inner automorphisms".
 
