Can Lagrange multipliers be used to find a function?

In summary, the problem asks which smooth curve on ##[-1,1]##, vanishing at the endpoints and enclosing unit area, has the lowest maximal absolute slope. The thread concludes that the infimum of the maximal slope is ##1##, approached by smooth approximations of the triangle function ##1-|x|##; since the set of feasible functions is not compact, the infimum is not attained and there is no smooth extremal curve.
  • #1
jk22
Problem statement: Let ##f\in C^\infty([-1,1])## with ##f(1)=f(-1)=0## and ##\int_{-1}^1f(x)\,dx=1##.

Which curve has the lowest (maximal) absolute slope?

Attempt:

Trying to minimize ##f'(x)-\lambda f''(x)## with Lagrange multipliers, but in order to find ##f##, not ##x##?

I got ##\frac{f^{(3)}(x)}{f''(x)}=\left(\log f''(x)\right)'=1/\lambda##

##\Rightarrow f''(x)=Ae^{x/\lambda}\Rightarrow f(x)=\lambda^2Ae^{x/\lambda}+Bx+C##

Inserting the boundary and integral conditions gives ##A,B,C## as functions of ##\lambda##.

Then it remains to minimize with respect to ##\lambda##.
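A quick numerical sketch of this last step (assuming the ansatz above holds; `coeffs` and `max_abs_slope` are illustrative helper names, and numpy/scipy are assumed available):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coeffs(lam):
    """Solve the linear system for A, B, C at a given lambda, from
    f(x) = lam^2 * A * e^(x/lam) + B*x + C with
    f(1) = f(-1) = 0 and integral_{-1}^{1} f dx = 1."""
    ep, em = np.exp(1 / lam), np.exp(-1 / lam)
    M = np.array([
        [lam**2 * ep,         1.0, 1.0],  # f(1)  = 0
        [lam**2 * em,        -1.0, 1.0],  # f(-1) = 0
        [lam**3 * (ep - em),  0.0, 2.0],  # integral over [-1,1] = 1
    ])
    return np.linalg.solve(M, [0.0, 0.0, 1.0])

def max_abs_slope(lam):
    """Max of |f'(x)| = |lam*A*e^(x/lam) + B| on a fine grid over [-1, 1]."""
    A, B, _ = coeffs(lam)
    x = np.linspace(-1, 1, 2001)
    return np.max(np.abs(lam * A * np.exp(x / lam) + B))

res = minimize_scalar(max_abs_slope, bounds=(0.05, 5.0), method='bounded')
print(res.x, res.fun)  # optimal lambda and the resulting maximal slope
```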

Is this approach valid? I don't think so, since the result isn't symmetric.
 
  • #2
Lowest maximal absolute slope, or lowest maximal slope?
 
  • #3
fresh_42 said:
Lowest maximal absolute slope, or lowest maximal slope?
Lowest maximal absolute slope (edited)

If it were the lowest maximal slope, then the solution could be asymmetric; that corresponds to the equation above. Thanks, now it is clear.
 
  • #4
Looks like it is ##1##: an infimum that is not attained, only approached by smooth approximations to the symmetric sawtooth (triangle) wave. I get a lower bound of ##0.25## (the integral condition forces ##\max f\ge \tfrac12##, and the mean value theorem then gives a slope of at least ##\tfrac14##), but that doesn't help. If I made no mistake, then ##f(x)=\dfrac{\pi}{4}\cos\left(\dfrac{\pi}{2}x\right)## results in a maximal slope of ##\frac{\pi^2}{8}\approx 1.234.##
We could try to prove that all solutions are symmetric about the ##y##-axis and that they do not have a sign change.
 
  • #5
Yes, it is obtained by substituting ##x\mapsto |x|## in the function above, taking ##B=-\lambda A## to obtain smoothness at ##0##.

The second derivative never vanishes, so the maximal slope is at the boundary?

However, I obtain a weird expression for ##A##, namely ##\frac{e^{1/\lambda}-1}{2(e^{1/\lambda}-1)(\lambda^3+\lambda^2)-3\lambda^2}##, giving a slope with an extremum of ##0.355##.

However, this must be wrong, since the triangle solution has slope ##1## and should be the minimum, although it is not analytic.
 
  • #6
jk22 said:
Yes, it is obtained by substituting ##x\mapsto |x|## in the function above, taking ##B=-\lambda A## to obtain smoothness at ##0##.

The second derivative never vanishes, so the maximal slope is at the boundary?
I think the maximal slope is attained everywhere except at ##0##. Choose ##f(x)=1-|x|## for all ##x\in [-1,1]\backslash (-\varepsilon ,\varepsilon )## and add something smooth on ##[-\varepsilon ,\varepsilon ]## with ##f(0)=1,f'(0)=0## and ##f(-\varepsilon )=f(\varepsilon )=1-\delta ## and ##f'(-\varepsilon )=1\, , \,f'(\varepsilon )=-1.##
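A minimal numerical sketch of this construction, assuming a ##C^1## parabolic cap in place of the smooth piece (the names and grid are illustrative; numpy assumed):

```python
import numpy as np

def smoothed_triangle(eps):
    """Triangle 1-|x| with the cusp replaced on [-eps, eps] by the
    parabola 1 - x^2/(2*eps) - eps/2, which matches value and slope
    at x = +/- eps (a C^1 stand-in for the smooth cap above)."""
    x = np.linspace(-1, 1, 200001)
    f = 1 - np.abs(x)
    inner = np.abs(x) <= eps
    f[inner] = 1 - x[inner]**2 / (2 * eps) - eps / 2
    return x, f

for eps in [0.5, 0.1, 0.01]:
    x, f = smoothed_triangle(eps)
    slope = np.max(np.abs(np.gradient(f, x)))
    area = np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))  # trapezoid rule
    print(f"eps={eps}: max |f'| ~ {slope:.4f}, area ~ {area:.4f}")
```

The cap costs an area deficit of ##\varepsilon^2/3##, so the integral condition is violated only by ##O(\varepsilon^2)##; compensating that deficit is exactly what the next posts discuss.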
 
  • #7
Then the smooth part must contain a slope bigger than ##1## to balance the area.
 
  • #8
jk22 said:
Then the smooth part must contain a slope bigger than ##1## to balance the area.
Yes. Or the straight part can be a bit steeper than ##1##. But with ##\varepsilon, \delta \to 0## we push the slope to ##1##. Hence ##1## is an infimum which cannot be attained.
 
  • #9
The question is how to find the minimum within the set of analytic functions. But since that set is not closed, the case you describe can happen.

However, shouldn't the function be varied, and not the coordinate ##x##, in the equation with Lagrange multipliers?
 
  • #10
I think we can find a family ##\mathcal{F}\, : \,[0,1)\longrightarrow C^\infty ([-1,1])## of functions with the required conditions such that ##\mathcal{F}(0)=\dfrac{\pi}{4}\cos\left(\dfrac{\pi}{2}x\right)## and ##\lim_{t \to 1}\mathcal{F}(t)=1-|x|##. But that this is an optimal solution would have to be proven. I don't think there is a unique solution: e.g. we can compensate the missing area in my model from post #6 with the left leg, the right leg, or both, and probably even with the cusp by setting the maximum to ##f(0)=1+\eta .##
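One concrete candidate for such a family, as a sketch (not necessarily the family meant above; its starting member is ##\cos^2(\pi x/2)## rather than ##\frac{\pi}{4}\cos\left(\frac{\pi}{2}x\right)##): take Fejér (Cesàro) means of the Fourier series of ##1-|x|##, corrected by a ##\cos(\pi x)## term so that both constraints hold exactly. Assuming numpy:

```python
import numpy as np

def h(N, x):
    """Fejer (Cesaro) mean of the Fourier series of the triangle 1-|x|
    on [-1,1], plus a cos(pi x) correction so that h(+-1) = 0 and
    integral = 1 hold exactly for every N.  Each h(N, .) is a smooth
    trig polynomial; the Fejer weights avoid the Gibbs overshoot, so
    the maximal slope tends to 1 as N grows."""
    f = 0.5 * np.ones_like(x)
    for k in range(1, N + 1, 2):                      # only odd harmonics occur
        f += (1 - k / (N + 1)) * 4 / (k * np.pi)**2 * np.cos(k * np.pi * x)
    boundary = 0.5 + sum((1 - k / (N + 1)) * 4 / (k * np.pi)**2 * np.cos(k * np.pi)
                         for k in range(1, N + 1, 2))  # uncorrected value at x = +-1
    return f + boundary * np.cos(np.pi * x)            # cos(+-pi) = -1 cancels it

x = np.linspace(-1, 1, 100001)
for N in [1, 9, 99, 999]:
    print(N, np.max(np.abs(np.gradient(h(N, x), x))))  # decreases toward 1
```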
 
  • #11
The cosine seems a good candidate; maybe superposing it with the line and joining at ##0## could lead to a better slope?

However, there should be a method to find the extremal curve? I wanted to do it that way, but I don't know if it is mathematically correct:
let the functional with Lagrange multipliers be
##J[f;\lambda_i]=f'(x)-\lambda_1f''(x)-\lambda_2\left(\int_{-1}^1f(s)\,ds-1\right)##

Then find the extremum of ##J## by varying ##f## and taking partial derivatives with respect to the ##\lambda_i##?

The problem is that the variation ##\frac{d}{d\epsilon}J[f(x)+\epsilon\eta(x)]\big|_{\epsilon=0}## makes ##f## disappear, and we get an equation for ##\eta## instead; those are affine functions. Does this mean that the functional should be extremal with respect to the ##\lambda_i## for all such ##\eta##?
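The observation that ##f## drops out can be checked symbolically; a quick sketch, assuming sympy:

```python
import sympy as sp

x, s, eps = sp.symbols('x s epsilon')
lam1, lam2 = sp.symbols('lambda_1 lambda_2')
f, eta = sp.Function('f'), sp.Function('eta')

g = f(x) + eps * eta(x)                         # the varied function
J = (sp.diff(g, x) - lam1 * sp.diff(g, x, 2)
     - lam2 * (sp.Integral(g.subs(x, s), (s, -1, 1)) - 1))

dJ = sp.diff(J, eps).subs(eps, 0)               # first variation at f
print(dJ)  # eta'(x) - lambda_1*eta''(x) - lambda_2*Integral(eta(s), (s, -1, 1))
```

Since ##J## is affine in ##f##, its first variation cannot contain ##f##; as written, the functional has no interior stationary point with respect to ##f##.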
 
  • #12
jk22 said:
The cosine seems a good candidate; maybe superposing it with the line and joining at ##0## could lead to a better slope?

However, there should be a method to find the extremal curve? I wanted to do it that way, but I don't know if it is mathematically correct:
let the functional with Lagrange multipliers be
##J[f;\lambda_i]=f'(x)-\lambda_1f''(x)-\lambda_2\left(\int_{-1}^1f(s)\,ds-1\right)##

Then find the extremum of ##J## by varying ##f## and taking partial derivatives with respect to the ##\lambda_i##?

The problem is that the variation ##\frac{d}{d\epsilon}J[f(x)+\epsilon\eta(x)]\big|_{\epsilon=0}## makes ##f## disappear, and we get an equation for ##\eta## instead; those are affine functions. Does this mean that the functional should be extremal with respect to the ##\lambda_i## for all such ##\eta##?
The problem I see is that we do not have a compact set of feasible functions. The solution I suggested lies on the boundary and isn't feasible. If we add the boundary, we get ##C^0## functions, e.g. ##1-|x|.##

If we want to optimize the problem on the open set of feasible functions, then where should we search? Which sequence of functions should we take? All we can hope for is to find the / a limit point ##1-|x|##, but no algorithm will ever stop, so we have no method to identify this limit point by algorithmic means.

It is probably easier to prove that ##1-|x|## is the / a solution, and any sequence in ##C^\infty## which converges to it will do.
 
  • #13
What about seeing it as an optimization problem with an infinite series: ##f(x)=\sum_{n=1}^\infty a_nx^n##?

Then one should look at the points ##x_0## where ##f''(x_0)=0##. I thought of viewing this as the infinite vector ##(x_0^{n-2})## being orthogonal to ##(n(n-1)a_n)##; thus for the zeros ##x_0## we could take any vector orthogonalized to the second one with the Gram-Schmidt process.

Then we would minimize ##f'(x_0)## with respect to all the ##a_n##.

Could this give a recurrence relation for the coefficients ##a_n##?
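A brute-force variant of this idea truncates the series and minimizes the maximal slope numerically; a sketch (assumes numpy/scipy, and the maximum is taken on a grid rather than at the zeros of ##f''##):

```python
import numpy as np
from scipy.optimize import minimize

deg = 10                           # truncate f(x) = sum_{n=1}^{deg} a_n x^n
ns = np.arange(1, deg + 1)
xs = np.linspace(-1, 1, 801)
powers = np.array([xs**(n - 1) for n in ns])    # row n holds x^(n-1)

def max_abs_slope(a):
    """Maximal |f'(x)| on the grid, with f'(x) = sum_n n*a_n*x^(n-1)."""
    return np.max(np.abs((ns * a) @ powers))

cons = [
    {'type': 'eq', 'fun': lambda a: a.sum()},           # f(1)  = 0
    {'type': 'eq', 'fun': lambda a: a @ (-1.0)**ns},    # f(-1) = 0
    {'type': 'eq',                                      # integral = 1
     'fun': lambda a: a @ ((1 - (-1.0)**(ns + 1)) / (ns + 1)) - 1},
]

a0 = np.zeros(deg)
a0[1], a0[3] = 15/4, -15/4         # feasible start: f = (15/4)(x^2 - x^4)
res = minimize(max_abs_slope, a0, constraints=cons, method='SLSQP')
print(res.fun)                     # creeps down toward 1 as deg grows
```

(The nonsmooth max objective makes SLSQP fragile; a cleaner treatment would minimize an auxiliary bound ##t## with ##|f'(x_i)|\le t##, which is a linear program in the ##a_n##.)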
 
  • #14
jk22 said:
What about seeing it as an optimization problem with an infinite series: ##f(x)=\sum_{n=1}^\infty a_nx^n##?

Then one should look at the points ##x_0## where ##f''(x_0)=0##. I thought of viewing this as the infinite vector ##(x_0^{n-2})## being orthogonal to ##(n(n-1)a_n)##; thus for the zeros ##x_0## we could take any vector orthogonalized to the second one with the Gram-Schmidt process.

Then we would minimize ##f'(x_0)## with respect to all the ##a_n##.

Could this give a recurrence relation for the coefficients ##a_n##?
I see it as an optimization problem, but its solution is on the boundary outside the set of feasible functions. So there are many ways to define sequences that converge to the solution.
E.g. https://en.wikipedia.org/wiki/Triangle_wave

It is a bit like asking which point in ##\{(x,y)\in\mathbb{R}^2\,|\,x^2+y^2<1\}## is closest to ##(1,0).##
 

FAQ: Can Lagrange multipliers be used to find a function?

How do Lagrange multipliers work?

Lagrange multipliers are used to find the maximum or minimum value of a function subject to a set of constraints. They work by introducing a new variable, called the Lagrange multiplier, which allows the constraints to be incorporated into the function to be optimized.
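A minimal worked example, as a sketch assuming sympy (the objective and constraint are arbitrary illustrations): maximize ##f(x,y)=xy## subject to ##x+y=1##.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x * y                   # objective
g = x + y - 1               # constraint g = 0
L = f - lam * g             # the Lagrangian
stationary = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(stationary, [x, y, lam], dict=True))
# [{x: 1/2, y: 1/2, lambda: 1/2}] -- the constrained maximum of x*y
```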

Can Lagrange multipliers be used for both single and multiple variable functions?

Yes, Lagrange multipliers work for functions of one variable as well as of several variables. Note, however, that the number of multipliers is set by the number of constraints, not by the number of variables: each constraint contributes one scalar multiplier, and several constraints give a vector of multipliers.

When should Lagrange multipliers be used?

Lagrange multipliers should be used when optimizing a function subject to a set of constraints. This can include both equality and inequality constraints (the latter lead to the Karush-Kuhn-Tucker conditions).

Are there any limitations to using Lagrange multipliers?

One limitation of using Lagrange multipliers is that the function and constraints must be differentiable. Additionally, the solution found using Lagrange multipliers may not always be the global maximum or minimum, but rather a local one.

Can Lagrange multipliers be used to find a function with multiple constraints?

Yes, Lagrange multipliers can be used to find a function with multiple constraints. The process is the same as with a single constraint, but the number of Lagrange multipliers used will match the number of constraints.
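A sketch with two constraints, again assuming sympy (the objective and constraints are arbitrary illustrations): minimize ##x^2+y^2+z^2## subject to ##x+y+z=1## and ##x=y##; note the one multiplier per constraint.

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda_1 lambda_2')
f = x**2 + y**2 + z**2            # objective
g1 = x + y + z - 1                # first constraint
g2 = x - y                        # second constraint
L = f - l1 * g1 - l2 * g2         # one multiplier per constraint
eqs = [sp.diff(L, v) for v in (x, y, z, l1, l2)]
print(sp.solve(eqs, [x, y, z, l1, l2], dict=True))
# [{x: 1/3, y: 1/3, z: 1/3, lambda_1: 2/3, lambda_2: 0}]
```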
