Karush-Kuhn-Tucker Gradient Conditions

In summary, the individual is having trouble applying the KKT gradient conditions to a specific equation and is looking for help in identifying any mistakes they may have made. They have already searched for resources and videos, but have not found a solution. They may need to consult with a professor or wait until they have more experience with KKT.
  • #1
mertcan

[Attached screenshot (upload_2018-9-24_19-21-29.png): an excerpt from page 120 of the book, showing the GREEN and RED boxes discussed below.]

Hi, the attached photo is cropped from page 120 of the book Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications (Topics in Chemical Engineering).

My question is: the GREEN box says we have to apply the KKT gradient conditions with respect to the a_i, and as a result we obtain the set in the RED box, where the sum of the u_i equals 1. BUT when I take the gradient conditions with respect to the a_i, I find that each individual u_i must be 1. For instance, say the Lagrangian function is L = Σ_i a_i + Σ_i u_i [ g_i(·) - a_i ], and apply the KKT gradient condition with respect to a_1. The derivative of Σ_i a_i with respect to a_1 is just 1, and the derivative of Σ_i u_i [ g_i(·) - a_i ] with respect to a_1 is just -u_1; their sum must equal 0, so each individual u_i must be 1 instead of the sum of the u_i equaling 1??
Could you help me with that?
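One possible resolution, offered as an editorial sketch under an assumption the excerpt does not confirm: the result Σ_i u_i = 1 follows naturally if the book's reformulation uses a single scalar variable a rather than one a_i per constraint, i.e.

minimize a subject to g_i(x) ≤ a for all i.

The Lagrangian is then

L(x, a, u) = a + Σ_i u_i [ g_i(x) - a ],

and the gradient condition with respect to the single variable a reads

∂L/∂a = 1 - Σ_i u_i = 0, i.e. Σ_i u_i = 1.

With separate variables a_i and objective Σ_i a_i, the computation above is correct: each u_i would individually have to equal 1. The two results correspond to two different formulations.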
 

  • #2
This seems to be a very specialized area of math, and we don't always have experts who can answer your questions.

In any event, I did a Google search and found these references that may help you find your answer:

https://www.google.com/search?newwi...0.0..0.96.96.1...0...1j2..gws-wiz.OwByeZ0gc_4

There are numerous videos too:

https://www.google.com/search?q=Kar...k9ndAhWi6IMKHU66BckQ_AUIDygC&biw=1391&bih=911

In particular, APMonitor seems to have a series of videos on this.
  • #3
jedishrfu said:
This seems to be a very specialized area of math, and we don't always have experts who can answer your questions.
Thanks for your reply, but I would like to point out that I have already done the necessary calculations and shown them in post #1; I just cannot reach the same result as the book. I always find that each individual u_i equals 1 instead of the sum of the u_i equaling 1. Could you help me with that?
 
  • #4
Sorry, I can't help here; my expertise is limited. I take it the videos didn't help?
 
  • #5
jedishrfu said:
Sorry, I can't help here; my expertise is limited. I take it the videos didn't help?
The videos contain what I have already learned, but I appreciate it. I just would like to see whether I am making a mistake somewhere.
 
  • #6
Hi again. I would like to say that I know the KKT (Karush-Kuhn-Tucker) conditions and have tried to take the derivative of the equation mentioned in my previous posts, but I think I am making a mistake while taking the gradients. I find that each individual u_i must be 1, but as you can see in the screenshot, the sum of the u_i must be 1. Could you help me spot my mistake?
 
  • #7
I know this has to be frustrating, but the problem here is that we don't always have someone versed in your problem area to help. I tried by posting the videos, and I've checked around, but no one immediately comes to mind. You may need to talk with any profs you know who are versed in this area.

You could also try some other online sites but you may experience the same problem with not enough subject experts.

Lastly, you may just have to accept the result for now and come back to it later when you've got more experience with KKT.
 

FAQ: Karush-Kuhn-Tucker Gradient Conditions

What are the Karush-Kuhn-Tucker (KKT) gradient conditions?

The Karush-Kuhn-Tucker (KKT) gradient conditions are a set of first-order necessary conditions for a point to be a local optimum of a nonlinear optimization problem with inequality and equality constraints. They are essential in solving constrained optimization problems.
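Written out for the standard form, minimize f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, the KKT conditions at a candidate point x* with multipliers u_i and v_j are:

∇f(x*) + Σ_i u_i ∇g_i(x*) + Σ_j v_j ∇h_j(x*) = 0   (stationarity)
g_i(x*) ≤ 0 and h_j(x*) = 0                         (primal feasibility)
u_i ≥ 0                                             (dual feasibility)
u_i g_i(x*) = 0 for all i                           (complementary slackness)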

What is the significance of the KKT gradient conditions?

The KKT gradient conditions provide a set of necessary conditions for a point to be a local minimum or maximum in constrained optimization problems. They help in determining the optimal solution for a given problem and in understanding the behavior of the objective function near the optimal point.

How are the KKT gradient conditions derived?

The KKT gradient conditions are derived by considering the Lagrangian function, which combines the objective function and the constraints. Stationarity comes from setting the partial derivatives of the Lagrangian with respect to the decision variables to zero; the constraints themselves, the sign conditions on the multipliers, and complementary slackness complete the system.
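A minimal worked example (illustrative, not from the thread): minimize f(x) = x² subject to x ≥ 1, i.e. g(x) = 1 - x ≤ 0. The Lagrangian is L(x, u) = x² + u(1 - x). Stationarity gives ∂L/∂x = 2x - u = 0, and complementary slackness gives u(1 - x) = 0. Taking the constraint active, x = 1 and u = 2 ≥ 0, so x* = 1 satisfies the KKT conditions and is indeed the constrained minimizer.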

What are the assumptions made in the KKT gradient conditions?

The KKT gradient conditions assume that the objective function and the constraints are continuously differentiable, and that a constraint qualification (such as Slater's condition for convex problems, or linear independence of the active constraint gradients) holds at the point in question. Convexity is not required for the conditions to be necessary, but when the problem is convex, the KKT conditions become sufficient for global optimality as well.
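To see why a constraint qualification matters, consider a standard textbook example (not from the thread): minimize f(x) = x subject to g(x) = x² ≤ 0. The only feasible point is x* = 0, so it is the minimizer, yet stationarity would require 1 + u · 2x = 1 + 0 = 0, which no multiplier u can satisfy. The conditions fail here because ∇g(0) = 0, so the usual constraint qualifications do not hold.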

Can the KKT gradient conditions be used for all types of optimization problems?

No. The KKT gradient conditions in this form apply only to problems with differentiable objective and constraint functions, and a constraint qualification is needed for them to be valid necessary conditions. For problems with non-differentiable functions, generalizations based on subgradients are used instead; convexity is not required for necessity, but it is what makes the conditions sufficient.
