OhMyMarkov
Hello everyone!
This topic has been confusing me:
Suppose we would like to minimize a convex function subject to a given constraint. Mathematically, we would solve:
$\arg\min_x f(x)$ such that $c(x) \in S$, where $c(x)$ is the constraint function and $S$ is the set of allowable values that $c(x)$ can take (I don't know if this is correct mathematical notation...).
The dual of this problem would be obtained from the Lagrangian, that is: $\arg\min_x f(x) + \lambda c(x)$ where $\lambda \ge 0$. But they say that the two problems are not equivalent, in the sense that there can be what is called a duality gap between the two results, though this gap is zero for linear programs and second-order cone programs. What I want to ask is: how are the two problems different, and why do they yield different results? We've been using Lagrange multipliers for a while, and it is shocking that the result they produce may not be correct. Any help is appreciated!
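To make the question concrete, here is a small numerical sketch of the two problems on a toy convex example (my own choice, not from any particular reference): minimize $f(x) = x^2$ subject to $c(x) = 1 - x \le 0$. The primal minimizes $f$ over the feasible set; the dual maximizes $g(\lambda) = \min_x f(x) + \lambda c(x)$ over $\lambda \ge 0$. For this convex problem the two values coincide (zero duality gap); for a nonconvex $f$ the dual value can be strictly smaller.

```python
import numpy as np

# Toy problem: minimize f(x) = x^2 subject to c(x) = 1 - x <= 0 (i.e. x >= 1).
# The primal optimum is x* = 1 with f(x*) = 1.
xs = np.linspace(-3, 3, 6001)   # crude grid over x
f = xs**2
c = 1 - xs

# Primal value: minimize f over the feasible set {x : c(x) <= 0}
primal = f[c <= 0].min()

# Dual function g(lam) = min_x [ f(x) + lam * c(x) ], maximized over lam >= 0
lams = np.linspace(0, 5, 501)
g = np.array([(f + lam * c).min() for lam in lams])
dual = g.max()

# Duality gap = primal - dual; it is (approximately) zero here
# because the problem is convex, so strong duality holds.
gap = primal - dual
print(primal, dual, gap)
```

Analytically, $g(\lambda) = \lambda - \lambda^2/4$ here, maximized at $\lambda = 2$ with $g(2) = 1$, matching the primal value, which is what the grid search above recovers.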