Numerical Optimization (norm minimization)

In summary, the optimization problem is to find the point x in the half space H that has the smallest Euclidean norm. It can be solved with the Lagrange multiplier method by taking the objective function f(x) = ||x||^2 and the constraint function c(x) = a^T x + α ≥ 0. Setting the gradient of L(x, λ) to 0 and solving for x gives x = -α a / ||a||^2. This solution makes sense: it lies on the boundary plane of H and is parallel to the normal vector a.
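As a quick numerical sanity check of that closed form (a sketch not taken from the thread, assuming NumPy; the values of a and α below are made up, with α < 0 so that the constraint is actually active):

[code]
import numpy as np

# Made-up example data: the derivation assumes the constraint is active,
# i.e. the origin is NOT in H, which requires alpha < 0.
a = np.array([2.0, -1.0, 3.0])
alpha = -4.0

# Closed-form candidate from the thread: x* = -alpha * a / ||a||^2
x_star = -alpha * a / np.dot(a, a)

# Feasibility check: a^T x* + alpha should be (numerically) zero,
# i.e. x* lies on the boundary plane of H.
print("constraint value:", a @ x_star + alpha)

# Compare against random feasible points: none should have a smaller norm.
rng = np.random.default_rng(0)
for _ in range(10000):
    x = rng.normal(size=3) * 5
    if a @ x + alpha >= 0:  # x is in the half space H
        assert np.linalg.norm(x) >= np.linalg.norm(x_star) - 1e-9
print("norm of x*:", np.linalg.norm(x_star))
[/code]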
  • #1
sbashrawi

Homework Statement



Consider the half space defined by H = {x ∈ ℝ^n | a^T x + α ≥ 0}, where a ∈ ℝ^n
and α ∈ ℝ are given. Formulate and solve the optimization problem for finding the point
x in H that has the smallest Euclidean norm.

Homework Equations





The Attempt at a Solution


I need help with this problem. I think the problem can be written as

min ||x|| subject to a^T x + α ≥ 0

Am I right?
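One way to sanity check this formulation is to solve it numerically. Here is a minimal sketch assuming SciPy, with made-up values for a and α (and minimizing ||x||^2 rather than ||x||, which has the same minimizer and is smooth at the origin):

[code]
import numpy as np
from scipy.optimize import minimize

# Made-up example data; alpha < 0 so the origin is infeasible and the
# constraint actually matters.
a = np.array([1.0, 2.0, 2.0])
alpha = -3.0

objective = lambda x: x @ x                                    # ||x||^2
constraint = {"type": "ineq", "fun": lambda x: a @ x + alpha}  # a^T x + alpha >= 0

res = minimize(objective, x0=np.ones(3), constraints=[constraint])
print("numerical minimizer:", res.x)
print("closed form        :", -alpha * a / (a @ a))
[/code]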
 
  • #2
Consider the set [itex]\{ x \in \mathbb{R}^n : \|x\|^2 = c \}[/itex] for some constant c. Geometrically, what does it represent?

Now consider the half space, whose boundary is a plane. How does the plane intersect the above set, in particular for the minimum value of c? This should lead to a simple solution; one concrete version of the argument is sketched below.
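To make the hint concrete (a sketch of the geometric argument, assuming α < 0 so that the origin is not already in H): the set above is a sphere of radius √c, and the smallest c for which it meets H is reached where the sphere is tangent to the boundary plane, at the origin's distance from that plane:

[tex]
\sqrt{c_{\min}} = \operatorname{dist}\big(0,\ \{x : a^T x + \alpha = 0\}\big) = \frac{|\alpha|}{\|a\|},
\qquad
x^{*} = -\frac{\alpha}{\|a\|^{2}}\, a .
[/tex]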
 
  • #3
hint: think tangents & normals
 
  • #4
Here is my work:

f(x) = ||x||^2
subject to c(x) = a^T x + α ≥ 0

so L(x, λ) = f(x) - λ c(x)
grad L(x, λ) = 2x - λ grad c(x) = 0
grad c(x) = a
so 2x - λ a = 0
this gives x = (λ/2) a
and, since the constraint is active at the solution,
c(x) = 0
gives: λ = -2α / ||a||^2

implies

x = -α a / ||a||^2
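The same steps can be checked symbolically (a sketch assuming SymPy, written out for n = 3; it reproduces λ = -2α/||a||^2 and x = -α a/||a||^2):

[code]
import sympy as sp

# A 3-dimensional instance of the general derivation.
a1, a2, a3, al, lam = sp.symbols('a1 a2 a3 alpha lambda', real=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
a = sp.Matrix([a1, a2, a3])
x = sp.Matrix([x1, x2, x3])

f = x.dot(x)           # objective ||x||^2
c = a.dot(x) + al      # constraint a^T x + alpha (active: = 0)
L = f - lam * c        # Lagrangian

# Stationarity (grad_x L = 0) together with the active constraint c = 0.
eqs = [sp.diff(L, v) for v in (x1, x2, x3)] + [c]
sol = sp.solve(eqs, [x1, x2, x3, lam], dict=True)[0]

print(sp.simplify(sol[lam]))   # expect -2*alpha/(a1**2 + a2**2 + a3**2)
print(sp.simplify(sol[x1]))    # expect -alpha*a1/(a1**2 + a2**2 + a3**2)
[/code]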
 
  • #5
Lagrange multipliers work here, though it is a bit hard to read which quantities are vectors.

The answer makes sense to me: the boundary plane has a as its normal, and the solution lies on the boundary plane and is parallel to a.
 

FAQ: Numerical Optimization (norm minimization)

What is numerical optimization and why is it important in science?

Numerical optimization refers to a family of mathematical methods for finding the best solution to a problem by minimizing (or maximizing) an objective function, often subject to constraints. It is important in science because it allows complex problems in fields such as engineering, economics, and data analysis to be solved efficiently and accurately.

What is norm minimization and how is it related to numerical optimization?

Norm minimization is a specific type of numerical optimization in which the objective function is a norm, a mathematical measure of the size or magnitude of a vector. It arises whenever the quantity to be made as small as possible is the size of a vector, for example the smallest-norm point of a feasible set (as in the thread above) or the minimum-norm solution of an underdetermined linear system; a small example is sketched below.
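As one concrete illustration (a sketch with NumPy and made-up data, not part of the original FAQ): the minimum-Euclidean-norm solution of an underdetermined linear system A x = b, which np.linalg.lstsq returns directly:

[code]
import numpy as np

# Made-up underdetermined system: 2 equations, 4 unknowns, so infinitely
# many solutions exist; we want the one with the smallest Euclidean norm.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0, 2.0]])
b = np.array([1.0, 2.0])

# For underdetermined systems, lstsq returns the minimum-norm solution.
x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)

print("residual Ax - b:", A @ x_min_norm - b)  # ~0: x solves the system
print("||x||:", np.linalg.norm(x_min_norm))
[/code]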

What are the common techniques used for numerical optimization?

There are various techniques used for numerical optimization, such as gradient descent, Newton's method, and conjugate gradient method. These techniques differ in their approach and can be used for different types of optimization problems.
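As a minimal illustration of the first of these techniques (a Python sketch on a simple quadratic, purely for illustration; the target vector and step size are made up):

[code]
import numpy as np

# Minimize f(x) = ||x - target||^2 with plain gradient descent.
target = np.array([1.0, -2.0, 0.5])   # made-up example minimizer
grad = lambda x: 2.0 * (x - target)   # gradient of f

x = np.zeros(3)                       # starting point
step = 0.1                            # fixed step size
for _ in range(200):
    x = x - step * grad(x)            # descent step

print("approximate minimizer:", x)    # should be close to target
[/code]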

How do you determine the best optimization technique for a given problem?

The choice of optimization technique depends on the specific problem at hand, such as the type of objective function and the constraints. It is important to consider the characteristics of the problem and the capabilities of each technique before selecting the most suitable one.

Can numerical optimization be used for non-linear problems?

Yes, numerical optimization can be used for both linear and non-linear problems. In fact, it is often used for non-linear problems as they are more complex and cannot be solved analytically. The choice of optimization technique may vary for non-linear problems as compared to linear problems.
