Convex Optimization Problems

  • Definition
  • Optimality
  • Maximization of convex functions

Definition
The optimization problem in standard form:

    \[\min_x f_0(x): \quad \begin{array}{ll} f_i(x) \leq 0, & i=1, \ldots, m, \\ h_i(x) = 0, & i=1, \ldots, p, \end{array}\]

is called a convex optimization problem if:
– the objective function f_0 is convex;
– the functions defining the inequality constraints, f_i, i=1, \ldots, m are convex;
– the functions defining the equality constraints, h_i, i=1, \ldots, p, are affine.
Note that, in the convex optimization model, equality constraints are allowed only when they are affine.
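As a concrete illustration, here is a minimal numerical sketch of a convex problem in standard form, written with the cvxpy modeling library; the data A, b, C, d are made up, and constructed so that the problem is feasible.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    m, p, n = 5, 2, 3
    x0 = rng.normal(size=n)                          # used only to make the data feasible
    A = rng.normal(size=(m, n)); b = A @ x0 + 1.0    # inequality data
    C = rng.normal(size=(p, n)); d = C @ x0          # equality data

    x = cp.Variable(n)
    objective = cp.Minimize(cp.sum_squares(x))   # convex objective f_0
    constraints = [A @ x <= b,                   # convex inequalities f_i(x) <= 0
                   C @ x == d]                   # affine equalities h_i(x) = 0
    problem = cp.Problem(objective, constraints)
    problem.solve()
    print("optimal value:", problem.value)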
Optimality
Local and global optima
Recall the definitions of global and local optima.
Theorem: Global vs. local optima in convex optimization
For convex problems, any locally optimal point is globally optimal. In addition, the optimal set is convex.
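A brief sketch of the standard argument: suppose x is locally optimal, but some feasible y satisfies f_0(y) < f_0(x). For \theta \in (0,1], the point z = \theta y + (1-\theta) x is feasible, since the feasible set is convex, and

    \[f_0(z) \leq \theta f_0(y) + (1-\theta) f_0(x) < f_0(x).\]

Taking \theta small makes z arbitrarily close to x, which contradicts local optimality.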
Optimality condition
Consider the optimization problem above, which we write as

    \[\min _{x \in \mathbf{X}} f_0(x),\]

where \mathbf{X} is the feasible set.
When f_0 is differentiable, convexity implies that for every x, y \in \operatorname{dom} f_0,

    \[f_0(y) \geq f_0(x)+\nabla f_0(x)^T(y-x) .\]

Then x is optimal if and only if

    \[x \in \mathbf{X} \text { and } \forall y \in \mathbf{X}: \nabla f_0(x)^T(y-x) \geq 0 .\]

Indeed, combined with the inequality above, this condition implies f_0(y) \geq f_0(x) for every feasible y. If \nabla f_0(x) \neq 0, then it defines a supporting hyperplane to the feasible set at x.
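To make this condition concrete, here is a small numerical sketch with made-up data: we project a point z onto the box [0,1]^n, i.e., minimize f_0(x) = \frac{1}{2}\|x - z\|_2^2 over the box, whose minimizer is the componentwise clipping of z, and check the condition \nabla f_0(x)^T(y-x) \geq 0 on sampled feasible points y.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    z = 2.0 * rng.normal(size=n)        # point to project

    # f_0(x) = 0.5 * ||x - z||^2 over X = [0, 1]^n;
    # the minimizer is the componentwise projection (clip) of z.
    x_star = np.clip(z, 0.0, 1.0)
    grad = x_star - z                   # gradient of f_0 at x_star

    # Optimality condition: grad^T (y - x_star) >= 0 for every y in X.
    for _ in range(1000):
        y = rng.uniform(0.0, 1.0, size=n)   # random feasible point
        assert grad @ (y - x_star) >= -1e-12
    print("condition holds on all sampled feasible points")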
When the problem is unconstrained, we obtain the optimality condition:

    \[\nabla f_0(x)=0 .\]

Note that these conditions may have no solution, since the problem may not have a minimizer. This can happen, for example, when the optimal value is only attained in the limit; or, in constrained problems, when the feasible set is empty.
Examples:
– Minimization of a convex quadratic function (see the sketch after this list).
– Optimality condition for a convex, unconstrained problem.
– Optimality condition for a convex, linearly constrained problem.
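As a sketch of the first example, with made-up data: for f_0(x) = \frac{1}{2} x^T Q x + q^T x with Q \succ 0, the condition \nabla f_0(x) = Qx + q = 0 gives the unique global minimizer x^\star = -Q^{-1} q, which a linear solve recovers.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    M = rng.normal(size=(n, n))
    Q = M @ M.T + n * np.eye(n)     # random positive definite Q
    q = rng.normal(size=n)

    # Unconstrained optimality: grad f_0(x) = Q x + q = 0.
    x_star = np.linalg.solve(Q, -q)

    grad = Q @ x_star + q
    print("gradient norm at x_star:", np.linalg.norm(grad))  # ~0 up to roundoff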
The optimality conditions given above might be hard to solve. We will return to this issue later.

Maximization of convex functions
Sometimes we would like to maximize a convex function over a set \mathbf{S}. Such problems are usually hard to solve.
For example, maximizing the distance from a given point (say, 0) to a point in a polyhedron described as \{x: A x \leq b\} is hard in general.
One important property of convex functions is that their maximum over any set is the same as the maximum over the convex hull of that set. That is, for any set \mathbf{S} \subseteq \mathbf{R}^n and any convex function f: \mathbf{R}^n \rightarrow \mathbf{R}, we have

    \[\max_{x \in \mathbf{S}} f(x) = \max_{x \in \mathbf{Co}\,\mathbf{S}} f(x).\]

In the example mentioned above, where we seek to maximize the Euclidean norm of a point in a polyhedron, if we know the vertices of the polyhedron, that is, if we can express the polyhedron as

    \[\mathbf{P}=\mathbf{C o}\left\{x^{(1)}, \ldots, x^{(K)}\right\},\]

then the optimal distance is simply \max _{1 \leq i \leq K}\left\|x^{(i)}\right\|_2.
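A minimal sketch of this vertex-based computation, with a made-up vertex list:

    import numpy as np

    # Hypothetical vertices x^(1), ..., x^(K) of P = Co{x^(1), ..., x^(K)}.
    vertices = np.array([
        [1.0,  0.0],
        [0.0,  2.0],
        [-1.5, -0.5],
        [0.5,  0.5],
    ])

    # The maximum of the convex function ||x||_2 over P is attained at
    # a vertex, so it suffices to scan the vertex list.
    norms = np.linalg.norm(vertices, axis=1)
    i_best = int(np.argmax(norms))
    print("optimal distance:", norms[i_best], "at vertex", i_best)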
Unfortunately, in practice, finding the vertices (given the original representation of \mathbf{P} as an intersection of half-spaces) is hard, as the polytope might have an exponential number of vertices.
