Optimization on non-compact (multivariable)

In summary, the thread discusses optimization on non-compact sets: how to decide whether a stationary point of a function is a global maximum. Combining an epsilon-delta statement about the function's behavior at infinity with a restriction of the domain to a compact region is suggested as a useful tool. A one-variable counterexample shows that a stationary point of a nonnegative function vanishing at infinity need not be a global maximum, but a theorem is presented: if the function is of class $\mathcal{C}^2$, tends to $0$ at infinity, has a single critical point, and has a negative second derivative there, then that critical point is a global maximum.
  • #1
SweatingBear
Been a while since I stopped by here...

There's one thing about optimization on non-compact sets that's been bugging me for quite a while and I'd love to hear how you perceive things.

Say we are optimizing a differentiable (and hence continuous) function $f:\mathbb{R}^2 \to \mathbb{R}$ over all of $\mathbb{R}^2$. Suppose $f(x,y) \geqslant 0$ for all $(x,y) \in \mathbb{R}^2$, and say that at $(1,0)$ we have a stationary point with $f(1,0) = 4$. Furthermore, assume that $f(x,y) \to 0$ as $r = \sqrt{x^2 + y^2} \to \infty$.

So we suspect $4$ is the global maximum of the function, but how can we be certain? By definition, the limit above means

$$\forall \epsilon > 0, \ \exists \delta > 0 : r > \delta \implies |f(x,y)| < \epsilon $$

In particular, for $\epsilon = 4$ there exists some $\delta_0$ such that

$$r > \delta_0 \implies |f(x,y)| < 4 $$

Assume now that $r_0 > \delta_0$ and let us study our function on the closed disk $M = \{ (x,y) : x^2 + y^2 \leqslant r_0^2\}$. Since $M$ is compact and $f$ is continuous on it, the extreme value theorem guarantees that $f$ attains a maximum on $M$. Note that $(1,0)$ lies in the interior of $M$: since $f(1,0) = 4$, the implication above forces $\delta_0 \geqslant 1$, so $r_0 > 1$. The epsilon-delta statement tells us that on the boundary of $M$ and outside of it, $f(x,y) < 4$. Hence the maximum over $M$, which is at least $f(1,0) = 4$, must be attained at an interior point of $M$.
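The compact-disk argument above can be sanity-checked numerically. The function below is a hypothetical example chosen to meet the stated hypotheses ($f \geqslant 0$, a stationary point at $(1,0)$ with $f(1,0) = 4$, and $f \to 0$ as $r \to \infty$); it is not from the thread, only an illustration.

```python
import numpy as np

# Hypothetical example satisfying the hypotheses:
# f >= 0, stationary point at (1, 0) with f(1, 0) = 4, f -> 0 as r -> infinity.
def f(x, y):
    return 4.0 * np.exp(-((x - 1.0)**2 + y**2))

# Pick r0 large enough that |f| < 4 on and outside the circle r = r0.
r0 = 10.0
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
boundary_vals = f(r0 * np.cos(theta), r0 * np.sin(theta))

# On a grid covering the closed disk, the maximum is attained in the interior.
xs = np.linspace(-r0, r0, 801)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= r0**2
grid_max = f(X[inside], Y[inside]).max()

print(boundary_vals.max() < 4.0)   # boundary values stay well below 4
print(np.isclose(grid_max, 4.0))   # maximum over the disk is (about) 4
```

The boundary check mirrors the epsilon-delta statement with $\epsilon = 4$, and the grid maximum sits at the interior stationary point.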

But how do we take the leap from here and argue that $f(x,y) = 4$ is necessarily a global maximum? I have trouble fully understanding why it is necessary to restrict the domain to a compact region with some specific, fixed $r_0$. How is that useful? It does not seem to provide any further information; does the epsilon argument not suffice by itself?
 
  • #2
It is not true in general.

Keep things simpler by working in $\mathbb{R}$. Find an example of a differentiable $f:\mathbb{R}\to \mathbb{R}$ where $0$ is a critical point of $f$, where $f\geq 0$, and where $f(x)\to 0$ as $|x|\to \infty$, but $0$ is not a maximum point. Perhaps you can visualize this function?
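For readers who want to check their answer after attempting the exercise: one possible candidate (not necessarily the intended one) is $f(x) = x^2 e^{-x^2}$, which is differentiable, nonnegative, tends to $0$ at $\pm\infty$, and has a critical point at $0$ that is a minimum rather than a maximum. A quick numerical check:

```python
import numpy as np

# Candidate counterexample: f(x) = x^2 * exp(-x^2).
def f(x):
    return x**2 * np.exp(-(x**2))

# f'(x) = 2x(1 - x^2) exp(-x^2), so x = 0 is a critical point.
h = 1e-6
deriv_at_0 = (f(h) - f(-h)) / (2 * h)   # central difference estimate of f'(0)

print(abs(deriv_at_0) < 1e-9)   # 0 is a critical point ...
print(f(1.0) > f(0.0))          # ... but not a maximum: f(1) = 1/e > 0 = f(0)
```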

The following however is true.

Theorem: Let $f:\mathbb{R}\to \mathbb{R}$ be such that:
(i) $f$ is of class $\mathcal{C}^2$
(ii) $f(x)\to 0$ as $|x|\to \infty$
(iii) $0$ is the only critical point
(iv) $f''(0) < 0$
Then $0$ is a global maximum point for $f$.

Proof: By (iii) and (iv), $0$ is a local maximum for $f$. Let $a = f(0)$. Assume for the time being that $a > 0$.

By (ii) there is an $r>0$ such that $|f(x)| < a$ whenever $|x|\geq r$. Define $K$ to be the interval $[-r,r]$. On the compact set $K$, the function $f$ attains a maximum. The maximum cannot be at the endpoints, since $f(\pm r) < a = f(0)$. An interior maximum must occur at a critical point, and by (iii) the only critical point in $(-r,r)$ is $0$; thus we conclude that the maximum of $f$ over $K$ is attained at $0$. Since $f(x) < a$ for all $|x|\geq r$, it follows that $f(0) \geq f(x)$ both for $x \in K$ and for $x \notin K$, so $0$ is a global maximum point for $f$.
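As a quick numerical illustration of the theorem, take the assumed example $f(x) = e^{-x^2}$ (not from the thread), which satisfies (i)–(iv) with $a = f(0) = 1$:

```python
import numpy as np

# f(x) = exp(-x^2): C^2, f -> 0 at +-infinity, 0 is the only critical point,
# and f''(0) = -2 < 0, so the theorem predicts a global maximum at 0.
def f(x):
    return np.exp(-(x**2))

# Sample a wide interval; outside it, f is far below a = 1 anyway.
xs = np.linspace(-50.0, 50.0, 200001)
vals = f(xs)

argmax = xs[np.argmax(vals)]
print(np.isclose(argmax, 0.0))      # maximizer is (numerically) at 0
print(np.isclose(vals.max(), 1.0))  # maximum value is f(0) = 1
```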

I am too lazy to think about $a\leq 0$. Maybe you can continue from there?
 

FAQ: Optimization on non-compact (multivariable)

What is optimization on non-compact (multivariable)?

Optimization on non-compact (multivariable) refers to the process of finding the maximum or minimum value of a function of several variables whose domain is not a compact set, i.e., not closed and bounded.

How is optimization on non-compact (multivariable) different from optimization on compact sets?

In optimization on compact sets, the domain is closed and bounded, so the extreme value theorem guarantees that a continuous function attains its maximum and minimum; existence is automatic. On a non-compact domain, no such guarantee holds, so extra arguments (for example, controlling the function's behavior at infinity) are needed before one can even speak of an optimal value.

What are some common techniques used for optimization on non-compact (multivariable)?

Some common techniques used for optimization on non-compact (multivariable) problems include gradient descent, Newton's method, and convex optimization. Iterative methods of this kind update all variables at each step, following gradient or curvature information, until they converge to a stationary point.
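As a minimal sketch of one such iterative method: gradient ascent (gradient descent with the sign flipped, since we seek a maximum) on a hypothetical smooth function with a single maximum at $(1,0)$. The function and all parameters below are illustrative assumptions, not from the thread.

```python
import numpy as np

# Hypothetical objective with a unique maximum at (1, 0):
# f(x, y) = 4 * exp(-((x - 1)^2 + y^2)).
def f(p):
    x, y = p
    return 4.0 * np.exp(-((x - 1.0)**2 + y**2))

def grad_f(p):
    # Analytic gradient of f, used for the ascent direction.
    x, y = p
    g = f(p)
    return np.array([-2.0 * (x - 1.0) * g, -2.0 * y * g])

p = np.array([3.0, -2.0])    # arbitrary starting point
lr = 0.05                    # step size (learning rate)
for _ in range(5000):
    p = p + lr * grad_f(p)   # step uphill toward the maximum

print(np.allclose(p, [1.0, 0.0], atol=1e-4))  # converged to the stationary point
```

The step size and iteration count are ad hoc; in practice they would be tuned or replaced by a line search.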

Can optimization on non-compact (multivariable) be applied to real-world problems?

Yes, optimization on non-compact (multivariable) can be applied to real-world problems in various fields such as economics, engineering, and physics. It is commonly used in data analysis, machine learning, and financial modeling.

What are the challenges of solving optimization problems on non-compact (multivariable)?

The main challenge of solving optimization problems on non-compact (multivariable) domains is that an optimal point may not exist at all, and even when it does, proving existence requires additional work. The unbounded nature of the domain can also make finding the optimal solution computationally expensive and time-consuming.
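A small illustration of non-existence (a hypothetical example): $f(x) = e^{-x}$ on the non-compact domain $[0, \infty)$ is bounded below by $0$, but the infimum $0$ is never attained, so "minimize $f$" has no solution. Sampling at ever-larger $x$ gets arbitrarily close to $0$ without reaching it:

```python
import math

# f(x) = exp(-x) on [0, infinity): inf f = 0, but no x attains it.
vals = [math.exp(-x) for x in range(0, 701, 100)]

print(all(v > 0 for v in vals))                     # always strictly positive
print(all(a > b for a, b in zip(vals, vals[1:])))   # strictly decreasing toward 0
```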
