Nonlinear programming problem, mathematical programming

In summary, for the problem of minimizing $-f(x)$ on $[a,b]$ with $f$ concave: if the minimum is at the left-hand endpoint, the derivative of the objective $-f$ must be positive there, and if the minimum is at the right-hand endpoint, that derivative must be negative there.
  • #1
malamenm
Hello,

I need to

minimize $\{-f(x) \mid a \le x \le b\}$,

where $f(x)$ is a concave and twice differentiable function. In addition, $a$ and $b$ are positive constants such that $a < b$. Assume that $\delta f(x)$ exists on the given interval $[a, b]$. Show that

if the optimal solution is at $x^* = a$, then $\delta f(a) < 0$ must hold, and
if the optimal solution is at $x^* = b$, then $\delta f(b) > 0$ must hold.
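To make the claim concrete, here is a toy instance I made up (not part of the assignment): take $f(x) = -x^2$ on $[a,b] = [1,2]$, which is concave and twice differentiable. Then

$$\min_{1 \le x \le 2} -f(x) = \min_{1 \le x \le 2} x^2$$

has its optimal solution at $x^* = 1 = a$, and indeed $\delta f(a) = f'(1) = -2 < 0$, matching the first claim.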

Any help is much appreciated
 
  • #2
I have a number of questions here:

1. Does $f:\mathbb{R} \to \mathbb{R}$? That is, is $f$ a function from the reals to the reals?

2. What do you mean by $\delta f$? Do you mean the derivative $df/dx$?

It sounds to me like your question is asking you to prove that if the minimum of the objective $-f$ is at the left-hand endpoint, then the derivative of $-f$ must be positive there, and if the minimum is at the right-hand endpoint, then the derivative of $-f$ must be negative there. Is this what you're asked to do?
 
  • #3
Hi, yes, I mean the derivative $df/dx$. I am assuming that it is a real-to-real function, but the only information provided is that it is a nonlinear programming problem.

You are completely right: I am trying to prove that if the minimum is at the left-hand endpoint, then the derivative must be positive, and if the minimum is at the right-hand endpoint, then the derivative must be negative, but I don't really know if my methodology is right. Here is what I have so far:
The problem can be re-formulated:

π‘šπ‘–π‘›π‘–π‘šπ‘§π‘’ βˆ’π‘“(π‘₯) π‘Ž ≀π‘₯ ≀𝑏 ≑ π‘šπ‘Žπ‘₯π‘–π‘šπ‘–π‘§π‘’ 𝑓(π‘₯) π‘Ž ≀π‘₯ ≀𝑏

$f(x)$ is concave and twice differentiable, which implies it is also continuous on $[a,b]$.

Proof by contradiction. If the (unique) optimal solution is at $x^* = a$, then $f(x^*) > f(x)$ for all $x \in (a, b]$. Now instead assume $\nabla f(a) \ge 0$. This means there exists $\delta > 0$ such that, within some neighbourhood of $a$, $f(a + \delta) \ge f(a) = f(x^*)$ (as $f$ is continuous on $[a,b]$). But this directly contradicts $f(x^*) > f(x)$ for all $x \in (a, b]$. Therefore $\nabla f(a) < 0$ must hold.

Similarly, if the (unique) optimal solution is at $x^* = b$, then $f(x^*) > f(x)$ for all $x \in [a, b)$. Now instead assume $\nabla f(b) \le 0$. This means there exists $\delta > 0$ such that, within some neighbourhood of $b$, $f(b - \delta) \ge f(b) = f(x^*)$ (as $f$ is continuous on $[a,b]$). But this directly contradicts $f(x^*) > f(x)$ for all $x \in [a, b)$. Therefore $\nabla f(b) > 0$ must hold.
Could you please advise if that is close to the actual proof and if it makes sense,

Thank you
 
  • #4
You might be making this more complicated than it needs to be. Take the first case. You know that $-f(x)$ is convex; since $f$ is twice differentiable, that is equivalent to $-f''(x)\ge 0$ everywhere. Then, if a point inside the interval were the minimum, it would have to be a critical point, by Fermat's Theorem, and hence $-f'(x)=0$ there. Since the minimum occurs at the left-hand endpoint instead, you've just shown that $-f'(x)\ne 0$ in the interior of the interval. That is, the derivative can't change sign on the interval. So, either the derivative of $-f$ starts out negative and is nondecreasing (convex function), or it starts out positive and is nondecreasing. The former would imply that the minimum was at the RH endpoint. The latter gives you the result you want.

This argument is not very rigorous, but I think it's getting at what needs to happen. The other case is exactly analogous.
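In symbols (my own condensed version of the argument above, for the first case):

$$(-f)'' \ge 0 \;\Rightarrow\; (-f)' \text{ is nondecreasing}, \qquad (-f)' \ne 0 \text{ on } (a,b) \;\Rightarrow\; (-f)' \text{ keeps one sign},$$
$$x^* = a \;\Rightarrow\; (-f)'(a) > 0 \;\Rightarrow\; \delta f(a) = f'(a) < 0.$$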
 
  • #5
Oh wow, thank you, that is much simpler and more straightforward than the approach I tried :)

kind regards
 

FAQ: Nonlinear programming problem, mathematical programming

What is a nonlinear programming problem?

A nonlinear programming problem is a mathematical optimization problem where the objective function and/or constraints contain nonlinear terms. This means that the relationship between the variables and the objective function or constraints is not linear.
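For instance (an illustrative example, not from the thread), the following is a nonlinear program because both the objective and the first constraint are nonlinear in $x_1, x_2$:

$$\min_{x_1, x_2} \; x_1^2 + x_2^2 \quad \text{subject to} \quad x_1 x_2 \ge 1, \; x_1 \ge 0.$$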

How is a nonlinear programming problem different from a linear programming problem?

In a linear programming problem, the objective function and constraints are all linear, meaning each is a weighted sum of the variables plus a constant. In a nonlinear programming problem, at least one of these relationships is nonlinear, so its graph is curved rather than flat.

What are some common applications of nonlinear programming?

Nonlinear programming is used in a variety of fields, including engineering, economics, finance, and computer science. Some common applications include optimizing production processes, scheduling tasks, and portfolio optimization.

How is a nonlinear programming problem solved?

There are several methods for solving nonlinear programming problems, including gradient-based methods and heuristic algorithms. These methods involve iteratively adjusting the values of the variables to find the optimal solution that minimizes or maximizes the objective function while satisfying the constraints.
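As a sketch (assuming SciPy is available; this reuses the toy problem from the thread rather than any method discussed above), a gradient-based solver with bound constraints:

```python
from scipy.optimize import minimize

# Minimize -f(x) = x**2 on [1, 2] (the concave-f example with f(x) = -x**2);
# L-BFGS-B is a gradient-based method that handles simple bounds.
result = minimize(lambda x: x[0] ** 2, x0=[1.5], bounds=[(1.0, 2.0)],
                  method="L-BFGS-B")

print(result.x)  # ~[1.0]: the minimum sits at the left endpoint a = 1
```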

What are some challenges associated with solving nonlinear programming problems?

Nonlinear programming problems can be more difficult to solve than linear programming problems due to the complexity of the objective function and constraints. Additionally, finding the global optimal solution can be challenging, as the problem may have multiple local optima. This requires careful selection of the initial solution and the use of advanced optimization techniques.
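A common workaround for multiple local optima (a minimal multistart sketch, assuming SciPy; not a specific library feature) is to restart a local solver from several initial points and keep the best result:

```python
import numpy as np
from scipy.optimize import minimize

# A nonconvex objective with several local minima on [-3, 3].
obj = lambda x: np.sin(3 * x[0]) + 0.1 * x[0] ** 2

best = None
for x0 in np.linspace(-3, 3, 7):            # multistart: 7 initial guesses
    res = minimize(obj, x0=[x0], bounds=[(-3.0, 3.0)], method="L-BFGS-B")
    if best is None or res.fun < best.fun:  # keep the best local optimum
        best = res

print(best.x, best.fun)
```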
