How do we use dynamic programming?

In summary, we can use Lagrange multipliers to solve non-linear programming problems by defining a function $\Lambda(\mathbf x; \lambda, \mu)$ and solving the system of equations obtained by setting the partial derivatives of $\Lambda$ with respect to each variable to zero. For the given problem, we define $\Lambda$ as the objective function plus $\lambda$ times the binding constraint, and then solve a system of three equations in three unknowns.
  • #1
evinda
Hello! (Wave)

Could you explain to me how we can use dynamic programming in order to solve a non-linear programming problem?

What do we do for example if we are given the following problem?

$$\max (y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) \\ y_1+y_2 \leq 5.4 \\ y_1, y_2 \geq 0$$
 
  • #2
If this is not possible, how else could we solve the problem using a method that is related to optimization?
 
  • #3
evinda said:
Hello! (Wave)

Could you explain to me how we can use dynamic programming in order to solve a non-linear programming problem?

What do we do for example if we are given the following problem?

$$\max (y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) \\ y_1+y_2 \leq 5.4 \\ y_1, y_2 \geq 0$$

evinda said:
If this is not possible, how else could we solve the problem using a method that is related to optimization?

Hey evinda! (Smile)

We can't do it with linear programming (the Simplex method), since it's not linear.
But there are many non-linear methods (a small numerical sketch follows this list), like:
- Graphing it, which is doable since there are only 2 dimensions,
- GRG non-linear (generalized reduced gradient), which is what Excel's Solver uses
- Lagrange multipliers
- Powell's conjugate direction method
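For reference, here is a minimal numerical sketch of one such non-linear method in Python, using SciPy's SLSQP routine (my own choice of solver, not one named above); it maximizes by minimizing the negated objective:

```python
from scipy.optimize import minimize

# Objective: maximize f(y1, y2), so we minimize -f.
def neg_f(y):
    y1, y2 = y
    return -(y1**3 - 11*y1**2 + 40*y1 + y2**3 - 8*y2**2 + 21*y2)

# Inequality constraint y1 + y2 <= 5.4, written as 5.4 - y1 - y2 >= 0.
constraints = [{'type': 'ineq', 'fun': lambda y: 5.4 - y[0] - y[1]}]
bounds = [(0, None), (0, None)]   # y1 >= 0, y2 >= 0

# SLSQP is a local method, so the starting point matters.
result = minimize(neg_f, x0=[1.0, 1.0], method='SLSQP',
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)      # expect roughly (3.2, 2.2) and 66.256
```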
 
  • #4
I like Serena said:
Hey evinda! (Smile)

We can't do it with linear programming (the Simplex method), since it's not linear.
But there are many non-linear methods like:
- GRG non-linear (generalized reduced gradient), which is what Excel's Solver uses
- Lagrange multipliers
- Powell's conjugate direction method

We have done Lagrange multipliers in class... (Yes)
But how can we apply them in this case? (Thinking)
 
  • #5
evinda said:
We have done Lagrange multipliers in class... (Yes)
But how can we apply them in this case? (Thinking)

There are 4 cases:
  1. The objective function takes its maximum strictly inside the feasible region.
  2. The maximum is where $y_1+y_2=5.4$.
  3. The maximum is where $y_1=0$.
  4. The maximum is where $y_2=0$.

If we assume that $y_1+y_2\le 5.4$ is a binding constraint (case 2), then we have:
$$\Lambda =(y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) + \lambda(y_1+y_2 - 5.4)$$
Solve for each of the derivatives being zero... (Thinking)
 
  • #6
I like Serena said:
There are 4 cases:
  1. The objective function takes its maximum strictly inside the feasible region.
  2. The maximum is where $y_1+y_2=5.4$.
  3. The maximum is where $y_1=0$.
  4. The maximum is where $y_2=0$.

So do we have to check all of the cases to solve the problem? (Thinking)

I like Serena said:
If we assume that $y_1+y_2\le 5.4$ is a binding constraint (case 2), then we have:
$$\Lambda =(y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) + \lambda(y_1+y_2 - 5.4)$$
Solve for each of the derivatives being zero... (Thinking)

So now we check Case 1? Could you explain to me why we define $\Lambda$ like that?

You mean that we solve $\frac{\partial{\Lambda}}{\partial{y_1}}(y)=0$ and $\frac{\partial{\Lambda}}{\partial{y_2}}(y)=0$ ? (Thinking)
 
  • #7
evinda said:
So do we have to check all of the cases to solve the problem? (Thinking)

From a graph, we can immediately see which case applies.
Otherwise, yes, we would have to check all cases.
evinda said:
So now we check Case 1? Could you explain to me why we define $\Lambda$ like that?

You mean that we solve $\frac{\partial{\Lambda}}{\partial{y_1}}(y)=0$ and $\frac{\partial{\Lambda}}{\partial{y_2}}(y)=0$ ? (Thinking)

My example was to check case 2.

That's just how I've learned Lagrange multipliers. How would you do it? (Wondering)

For case 1, that's indeed what we need to solve (where $\Lambda$ is without the $\lambda$ part).
For case 2 we need additionally that $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)=0$. (Nerd)
 
  • #8
I like Serena said:
From a graph, we can immediately see which case applies.
Otherwise, yes, we would have to check all cases.

How can we make the graph? For $y_1=0$, for example, the function becomes $y_2^3-8y_2^2+21 y_2$... (Sweating)

I like Serena said:
For case 1, that's indeed what we need to solve (where $\Lambda$ is without the $\lambda$ part).
For case 2 we need additionally that $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)=0$. (Nerd)

What does $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)$ represent? Do we consider $\lambda$ to be a constant? (Thinking)
 
  • #9
evinda said:
How can we make the graph? For $y_1=0$, for example, the function becomes $y_2^3-8y_2^2+21 y_2$... (Sweating)

Take a look at this contour plot on Wolfram.
Which clue can we get from it? (Wondering)
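In case the linked plot doesn't load, here is a small matplotlib sketch (my own, assuming NumPy and Matplotlib are available) that reproduces such a contour plot together with the constraint line:

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate the objective on a grid covering the feasible region.
g1, g2 = np.meshgrid(np.linspace(0, 5.5, 300), np.linspace(0, 5.5, 300))
f = g1**3 - 11*g1**2 + 40*g1 + g2**3 - 8*g2**2 + 21*g2

cs = plt.contour(g1, g2, f, levels=30)
plt.clabel(cs, inline=True, fontsize=7)
plt.plot([0, 5.4], [5.4, 0], 'k--')   # constraint boundary y1 + y2 = 5.4
plt.xlabel('$y_1$')
plt.ylabel('$y_2$')
plt.show()
```

The contours suggest the objective keeps increasing toward the constraint line inside the feasible region, so the maximum should sit on $y_1+y_2=5.4$, i.e. case 2.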

evinda said:
What does $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)$ represent? Do we consider $\lambda$ to be a constant? (Thinking)

No, $\lambda$ is an additional unknown that we solve for.
If we have a problem of the form $\max f(\mathbf x)$ where $g(\mathbf x)=0$ and $h(\mathbf x)=0$, we can apply the Lagrange multipliers method.
That means that we first define:
$$\Lambda(\mathbf x; \lambda, \mu) = f(\mathbf x) + \lambda g(\mathbf x) + \mu h(\mathbf x)$$
and then solve:
$$\frac{\partial \Lambda}{\partial x_1} = 0 \\
\vdots \\
\frac{\partial \Lambda}{\partial x_n} = 0 \\
\frac{\partial \Lambda}{\partial \lambda} = 0 \\
\frac{\partial \Lambda}{\partial \mu} = 0
$$
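As a sanity check of this recipe on a toy instance (my own example, not from the thread): maximize $f(x,y)=x+y$ subject to $g(x,y)=x^2+y^2-1=0$. A short SymPy sketch:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x + y                    # objective
g = x**2 + y**2 - 1          # equality constraint g = 0
Lambda_ = f + lam * g        # the Lagrangian

# Stationarity: every partial derivative of the Lagrangian vanishes.
eqs = [sp.diff(Lambda_, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam)))
# Two candidates: x = y = -sqrt(2)/2 and x = y = sqrt(2)/2;
# the maximum is x = y = sqrt(2)/2.
```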
 
  • #10
I like Serena said:
No, $\lambda$ is an additional unknown that we solve for.
If we have a problem of the form $\max f(\mathbf x)$ where $g(\mathbf x)=0$ and $h(\mathbf x)=0$, we can apply the Lagrange multipliers method.
That means that we first define:
$$\Lambda(\mathbf x; \lambda, \mu) = f(\mathbf x) + \lambda g(\mathbf x) + \mu h(\mathbf x)$$
and then solve:
$$\frac{\partial \Lambda}{\partial x_1} = 0 \\
\vdots \\
\frac{\partial \Lambda}{\partial x_n} = 0 \\
\frac{\partial \Lambda}{\partial \lambda} = 0 \\
\frac{\partial \Lambda}{\partial \mu} = 0
$$

In this case do we pick $h(x) \equiv 0$ ?

We have $\frac{\partial{\Lambda}}{\partial{y_1}}=3y_1^2-22y_1+40+ \lambda \\ \frac{\partial{\Lambda}}{\partial{y_2}}=3y_2^2-16y_2+21+ \lambda $

Right?
 
  • #11
evinda said:
In this case do we pick $h(x) \equiv 0$ ?

We could pick that, or we could simply leave the whole $h(\mathbf x)$ and $\mu$ part out. (Nerd)

evinda said:
We have $\frac{\partial{\Lambda}}{\partial{y_1}}=3y_1^2-22y_1+40+ \lambda \\ \frac{\partial{\Lambda}}{\partial{y_2}}=3y_2^2-16y_2+21+ \lambda $

Right?
And additionally $\frac{\partial \Lambda}{\partial \lambda} = y_1+y_2-5.4$.
Hey! That is simply the constraint! (Happy)
 
  • #12
I like Serena said:
And additionally $\frac{\partial \Lambda}{\partial \lambda} = y_1+y_2-5.4$.
Hey! That is simply the constraint! (Happy)

Oh yes... And what do we get from that? (Thinking)
 
  • #13
evinda said:
Oh yes... And what do we get from that? (Thinking)

That we need to solve the system:
$$\begin{cases}3y_1^2-22y_1+40+\lambda &= 0 \\
3y_2^2-16y_2+21+\lambda &= 0\\
y_1+y_2-5.4 &= 0
\end{cases}$$
That is, 3 equations with 3 unknowns.
Btw, we're not really interested in $\lambda$ itself, so it makes sense to eliminate $\lambda$ first. (Mmm)
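A quick SymPy check of this system (a sketch; the variable names are mine):

```python
import sympy as sp

y1, y2, lam = sp.symbols('y1 y2 lambda', real=True)
eqs = [
    3*y1**2 - 22*y1 + 40 + lam,    # dLambda/dy1 = 0
    3*y2**2 - 16*y2 + 21 + lam,    # dLambda/dy2 = 0
    y1 + y2 - sp.Rational(27, 5),  # dLambda/dlambda = 0, i.e. y1 + y2 = 5.4
]
print(sp.solve(eqs, (y1, y2, lam)))
# [(16/5, 11/5, -8/25)], i.e. y1 = 3.2, y2 = 2.2
```

Note how subtracting the first two equations eliminates $\lambda$, and after substituting $y_2 = 5.4 - y_1$ the quadratic terms cancel, leaving a linear equation in $y_1$.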
 
  • #14
I like Serena said:
That we need to solve the system:
$$\begin{cases}3y_1^2-22y_1+40+\lambda &= 0 \\
3y_2^2-16y_2+21+\lambda &= 0\\
y_1+y_2-5.4 &= 0
\end{cases}$$
That is, 3 equations with 3 unknowns.
Btw, we're not really interested in $\lambda$ itself, so it makes sense to eliminate $\lambda$ first. (Mmm)

Then we get $y_1=3.2$ and $y_2=2.2$. Or am I wrong? (Thinking)
 
  • #15
evinda said:
Then we get $y_1=3.2$ and $y_2=2.2$. Or am I wrong? (Thinking)

That is correct. (Nod)
 
  • #16
I like Serena said:
That is correct. (Nod)

Then the function is equal to $66.256$, right?

So now do we check the other cases?
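To check the other cases, one can compare the candidate values from all four cases numerically. The enumeration below is my own; the stationary points come from setting each one-variable derivative to zero:

```python
def f(y1, y2):
    return y1**3 - 11*y1**2 + 40*y1 + y2**3 - 8*y2**2 + 21*y2

candidates = [
    (10/3, 7/3),   # case 1: interior stationary point (infeasible: sum > 5.4)
    (3.2, 2.2),    # case 2: binding constraint (the Lagrange solution)
    (0, 7/3),      # case 3: y1 = 0, stationary in y2
    (10/3, 0),     # case 4: y2 = 0, stationary in y1
    (0, 5.4),      # corners of the feasible region
    (5.4, 0),
    (0, 0),
]
for y1, y2 in candidates:
    feasible = y1 + y2 <= 5.4 + 1e-9
    print(f"f({y1:.2f}, {y2:.2f}) = {f(y1, y2):7.3f}   feasible: {feasible}")
# The largest feasible value is f(3.2, 2.2) = 66.256, so case 2 wins.
```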
 

FAQ: How do we use dynamic programming?

1. What is dynamic programming?

Dynamic programming is a problem-solving technique that involves breaking a complex problem into smaller subproblems and solving them recursively. It is commonly used in computer science and mathematics to efficiently solve problems that involve overlapping subproblems.

2. How do we determine if a problem can be solved using dynamic programming?

A problem can be solved using dynamic programming if it can be broken down into smaller subproblems, and the solution to the larger problem can be built from the solutions to those subproblems. Additionally, the problem should have overlapping subproblems, meaning that the same subproblems are encountered multiple times during the solution process.

3. What are the steps involved in using dynamic programming?

The steps involved in using dynamic programming are as follows (a short example is sketched after the list):

  • Identify and define the problem
  • Break the problem down into smaller subproblems
  • Find the recursive relation between the subproblems
  • Solve the subproblems recursively
  • Store the solutions to the subproblems in a data structure
  • Use the stored solutions to solve the larger problem
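For instance, here is a minimal sketch of these steps in Python (the classic memoized Fibonacci, chosen purely as an illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # stores each subproblem's solution
def fib(n: int) -> int:
    """n-th Fibonacci number via top-down dynamic programming."""
    if n < 2:                     # base cases: fib(0) = 0, fib(1) = 1
        return n
    # Recursive relation between overlapping subproblems; thanks to the
    # cache, each fib(k) is computed only once.
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, in linear rather than exponential time
```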

4. What are the advantages of using dynamic programming?

The advantages of using dynamic programming include:

  • Efficiency: Dynamic programming can significantly reduce the time and space complexity of solving a problem.
  • Optimality: Provided the problem has optimal substructure, the solution obtained using dynamic programming is guaranteed optimal.
  • Reuse of solutions: By storing the solutions to subproblems, dynamic programming avoids recomputing them, leading to faster solution times.
  • Flexibility: Dynamic programming can be applied to a wide range of problems, making it a versatile problem-solving technique.

5. What are some common applications of dynamic programming?

Dynamic programming has various applications in computer science and mathematics, including:

  • Finding the shortest path in a graph
  • Optimizing resource allocation in project management
  • Sequence alignment in bioinformatics
  • Knapsack problem in operations research
  • Maximizing profits in stock trading
  • Optimizing code in software development
