Solving Optimization Problems: Avoiding Local Minima

In summary, the thread discusses minimizing a function using Trust-Region Newton and Quasi-Newton methods, which can converge to different local minima depending on the initial guess. The original poster asks how to avoid this and mentions possibly trying a random-walk method. The reply suggests simulated annealing, a method designed to escape local minima, and references a book section describing it.
  • #1
ggyyree
I am working on a problem of optimizing a function. I used the Trust-Region Newton and Quasi-Newton methods; however, with different initial guesses I sometimes converge to local minima. May I ask how to escape the trap of the local minima, please?

I may try the Random Walk method, but it does not seem like a good one. Any other ideas, please reply! Thanks a lot!
 
  • #2
Simulated annealing is a method designed to overcome being trapped in local minima.
Section 10.9 of this book describes the method:
http://www.fizyka.umk.pl/nrbook/bookcpdf.html
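A minimal sketch of the idea (not the book's code): simulated annealing occasionally accepts *uphill* moves with the Metropolis probability exp(-Δf/T), which lets the search climb out of a local basin, and the temperature T is lowered gradually so the walk eventually settles. The cooling schedule, step size, and the test function below are illustrative choices, not prescribed by the thread.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize a 1-D function f by simulated annealing (illustrative sketch)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(iters):
        x_new = x + rng.uniform(-step, step)
        f_new = f(x_new)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-(f_new - fx) / t), which shrinks as t cools.
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_fx

# Invented test function: a quartic with a local minimum near x ~ 1.13
# and a global minimum near x ~ -1.30. A pure descent method started
# at x = 2 would stop in the local basin.
f = lambda x: x**4 - 3 * x**2 + x
x_min, f_min = simulated_annealing(f, x0=2.0)
```

The key design choice is the acceptance rule: at high temperature the walk explores almost freely, and as the temperature drops it behaves more and more like a greedy descent.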
 

FAQ: Solving Optimization Problems: Avoiding Local Minima

What are local minima in optimization problems?

Local minima are points where a function attains its lowest value within some neighborhood, but not necessarily over its whole domain. In optimization problems, the goal is to find the global minimum (the lowest value of the entire function), but sometimes the algorithm gets stuck at a local minimum instead.
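As a concrete (invented) illustration, the quartic f(x) = x⁴ − 3x² + x has two valleys: a local minimum near x ≈ 1.13 and a deeper, global minimum near x ≈ −1.30. A descent method started at x = 2 will typically stop in the shallow valley:

```python
# Hypothetical example function with one local and one global minimum.
f = lambda x: x**4 - 3 * x**2 + x

# Approximate stationary points (roots of f'(x) = 4x^3 - 6x + 1):
local_min_x = 1.13    # shallow valley
global_min_x = -1.30  # deep valley

# The local minimum's value is noticeably higher than the global one's.
print(f(local_min_x), f(global_min_x))
```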

How do local minima affect the results of an optimization problem?

If a local minimum is mistaken for the global minimum, the result of the optimization is suboptimal, which translates into inefficiency in whatever system is being optimized.

What causes an algorithm to get stuck at a local minimum?

An algorithm can get stuck at a local minimum because of the shape of the function being optimized. If the function has several peaks and valleys, a descent method simply follows the downhill direction from its starting point and settles into the first valley it reaches, which need not be the global minimum.

How can we avoid local minima in optimization problems?

To avoid local minima, various techniques can be used, such as introducing randomness into the algorithm, restarting the search from several different starting points, or using global optimization methods (such as simulated annealing) that are less likely to get stuck at local minima.
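The restarting idea can be sketched as follows: run an ordinary local minimizer from several random starting points and keep the best result. The simple gradient-descent inner loop, the parameter values, and the test function are all illustrative assumptions, not methods prescribed by the thread.

```python
import random

def gradient_descent(f, grad, x0, lr=0.01, iters=2000):
    """Plain gradient descent: converges to whichever local
    minimum's basin of attraction contains x0."""
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def multistart_minimize(f, grad, lo, hi, n_starts=20, seed=0):
    """Run the local minimizer from random starts in [lo, hi]
    and keep the best result -- cheap insurance against local minima."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = gradient_descent(f, grad, rng.uniform(lo, hi))
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Invented two-basin test function: local minimum near x ~ 1.13,
# global minimum near x ~ -1.30.
f = lambda x: x**4 - 3 * x**2 + x
grad = lambda x: 4 * x**3 - 6 * x + 1
x_min, f_min = multistart_minimize(f, grad, lo=-2.0, hi=2.0)
```

With enough random starts, the chance that every start lands in the wrong basin becomes vanishingly small, at the cost of running the local solver many times.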

Are there any downsides to avoiding local minima in optimization problems?

While avoiding local minima can yield a better solution, it also tends to increase the time and computational resources required to solve the problem. Moreover, guaranteeing that the global minimum has been found is not always possible, especially in highly complex optimization problems.
