How can I optimize a function on a non-linear set with unknown points?

In summary, the conversation is about finding a numerical algorithm for computing optimal controls in quantum mechanics. The problem is to minimize the 2-norm over the zero set of a function. This set is believed to be countable and totally disconnected, and finite within any finite radius of the origin. The goal is to minimize the norm without knowing where the other points of the set lie. Constrained discrete simulated annealing and level set optimization are mentioned as candidate algorithms, but the poster is open to other suggestions. The function itself is continuous, believed to be smooth, and highly oscillatory with sharp peaks around extrema. The derivative may or may not be bounded; the poster notes that proving boundedness would be difficult.
  • #1
Kreizhn
Hey all,

I'm doing some research on computing optimal controls in quantum mechanics, and need a numerical algorithm that I can try to adapt to my problem. I'm hoping that if I describe the problem, someone out there can point me in a good direction.

Consider a function [itex] f: \mathbb R^n \to \mathbb R [/itex] and let [itex] S=f^{-1}(0) [/itex]. I want to find
[tex] \min_{\vec x \in S} \| \vec x \|_2 [/tex]
While we do not yet have any theoretical results proving this, we empirically believe that S is countable and hence totally disconnected. Furthermore, within any finite radius of the origin, we believe that S is actually finite (so this could be viewed as a combinatorial optimization problem).

Now I can find a single point in S, but the goal is to minimize the norm without knowing where the other points in S lie. I've been looking into constrained discrete simulated annealing and superficially at level set optimization, but I'm not sure if there's a better way. If anyone knows of an algorithm that is designed to handle this (or something similar) it would be much appreciated.
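For what it's worth, one common way to pose this as an unconstrained annealing problem is to fold the constraint f(x) = 0 into a penalty term and anneal on E(x) = ||x|| + λ|f(x)|. A minimal sketch, where the f here is just a stand-in oscillatory function whose zero set is the integer lattice (countable and totally disconnected, like the assumed S), not the actual control map:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Stand-in oscillatory function: its zero set is the integer lattice,
    # which is countable and totally disconnected, mimicking the assumed S.
    return np.abs(np.sin(np.pi * x)).sum()

def annealed_min_norm(x0, f, n_steps=20000, t0=1.0, penalty=50.0):
    """Simulated annealing on E(x) = ||x||_2 + penalty * |f(x)|.

    The penalty term pulls iterates toward the zero set S = f^{-1}(0),
    while the norm term drives them toward the origin.
    """
    energy = lambda y: np.linalg.norm(y) + penalty * abs(f(y))
    x = np.asarray(x0, dtype=float)
    e = energy(x)
    best_x, best_e = x.copy(), e
    for k in range(n_steps):
        t = t0 * (1.0 - k / n_steps) + 1e-6      # linear cooling schedule
        cand = x + rng.normal(scale=0.1, size=x.shape)
        e_cand = energy(cand)
        # Metropolis rule: always accept downhill, sometimes accept uphill
        if e_cand < e or rng.random() < np.exp((e - e_cand) / t):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e
```

The penalty weight and cooling schedule are guesses and would need tuning against the real f; the sketch only shows the structure of the approach.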
 
  • #2
The qualitative properties of f matter a lot; there's pretty much nothing you can do when f is a completely arbitrary function.

e.g. Is f smooth? Piecewise smooth? Does it have a bounded derivative?
 
  • #3
Sorry, I should have included that. The problem is that the function doesn't depend only on x: it depends on many other things that are also allowed to vary (even the dimension of the domain). For our purposes we can hold those other things constant, but they tend to change the qualitative properties a lot, which makes it hard to stipulate general characteristics.

In general the function is only continuous, and S actually consists of the points where the gradient of f is not defined. Hence the derivative is not bounded.

However, we do not think f ever actually attains the value 0, but rather only comes arbitrarily close to it in infimum. The derivative is still not bounded, but the function in this case is smooth. This is fine since, numerically, we can only operate to within machine precision, and in fact we are satisfied with approximations far less accurate than that. That is, for [itex] \epsilon>0 [/itex] sufficiently small we can take [itex] S = f^{-1}(\epsilon) [/itex]. This set would have non-zero measure by continuity, but we can then quotient out connected components to make this a discrete problem again.

I hope that helps a bit. In general, assume that

f is bounded above and below (by zero)
f is smooth
[STRIKE]f does not have bounded derivative.[/STRIKE]
f is highly oscillatory (making local extrema populous)
f is "sharply peaked" around extrema

Edit: Actually, the derivative might be bounded, but this would be a very hard result to show. I'll have to think about it a bit more.
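Given those assumptions (highly oscillatory, sharply peaked, one known point available) and the ε-relaxation above, one possible structure is a multi-start local search over the relaxed set S_ε = {x : f(x) ≤ ε}: restart from perturbations of the best known feasible point and accept only moves that stay feasible while shrinking the norm. A sketch under those assumptions, again with a placeholder f rather than the real control map:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Placeholder for the control map; its eps-sublevel set plays the
    # role of the relaxed zero set S_eps = {x : f(x) <= eps}.
    return np.abs(np.sin(np.pi * x)).sum()

def closer_feasible_point(x_known, f, eps=0.1, n_restarts=100, n_local=300):
    """Multi-start local search for a point of S_eps with smaller 2-norm.

    Restarts from random perturbations of the best point found so far,
    then makes greedy local moves that are accepted only when they stay
    inside S_eps and strictly decrease the norm.
    """
    best = np.asarray(x_known, dtype=float)
    assert f(best) <= eps, "x_known must lie in the relaxed set"
    for _ in range(n_restarts):
        x = best + rng.normal(scale=1.0, size=best.shape)  # jump elsewhere
        for _ in range(n_local):
            cand = x + rng.normal(scale=0.05, size=x.shape)
            if f(cand) <= eps and np.linalg.norm(cand) < np.linalg.norm(x):
                x = cand
        if f(x) <= eps and np.linalg.norm(x) < np.linalg.norm(best):
            best = x
    return best
```

By construction the result is never worse than the known point, which matters when S_ε is a union of tiny sharply-peaked neighborhoods and most restarts find nothing.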
 

FAQ: How can I optimize a function on a non-linear set with unknown points?

What is minimization on level sets?

Minimization on level sets is a mathematical optimization technique that involves finding the minimum value of a function on a given level set. A level set is a set of points in the domain of the function where the function has a constant value. Minimization on level sets is useful in many scientific fields, such as engineering, physics, and economics.

How is minimization on level sets different from other optimization techniques?

Minimization on level sets differs from other optimization techniques, such as gradient descent or Newton's method, in that it does not require the function to be differentiable. This makes it a more versatile method that can be applied to a wider range of functions.

What are the main steps involved in minimization on level sets?

The main steps in minimization on level sets are: 1) selecting a suitable level set to minimize on, 2) defining the objective function to be minimized, 3) finding the minimum value of the objective function on the level set, and 4) using this minimum value to optimize the function.
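The four steps above can be sketched on a toy instance: minimizing g(x, y) = x² + y² on the level set xy = 1 (both the function and the level value here are illustrative choices, not from the thread):

```python
import numpy as np

# Step 1: choose the level set f(x, y) = x*y = 1 (one branch, x > 0).
# Step 2: the objective to minimize on it is g(x, y) = x^2 + y^2.
xs = np.linspace(0.1, 5.0, 100001)   # parameterize the branch by x
ys = 1.0 / xs                        # stay exactly on the level set
g = xs**2 + ys**2

# Step 3: locate the minimum of g restricted to the level set.
i = np.argmin(g)
x_opt, y_opt, g_min = xs[i], ys[i], g[i]

# Step 4: report the constrained optimum (analytically x = y = 1, g = 2).
```

Here the level set could be parameterized explicitly; in higher dimensions step 3 typically requires a constrained or penalty-based solver instead of a grid scan.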

What are the advantages of using minimization on level sets?

One advantage of minimization on level sets is its ability to handle non-differentiable functions. It also tends to be more robust to noisy data than gradient-based techniques. Additionally, it is a versatile method that can be applied to a wide range of functions and problems.

How is minimization on level sets applied in real-world problems?

Minimization on level sets has many real-world applications, such as in image and signal processing, machine learning, and data analysis. It can be used to optimize the parameters of a model, classify data, and remove noise from signals. It is also commonly used in engineering design and financial modeling.
