Nomenclature

  • Feasible set
  • What is a solution? Optimal value, optimal set
  • Local vs. global optima

Consider the optimization problem

(P)           \quad \min _x f_0(x): f_i(x) \leq 0, i=1, \ldots, m.

Feasible set

The feasible set of problem (P) is defined as

\mathbf{X}:=\left\{x \in \mathbf{R}^n: f_i(x) \leq 0, i=1, \ldots, m\right\} .

A point x is said to be feasible for problem (P) if it belongs to the feasible set X, that is, it satisfies the constraints.

Example: In the toy optimization problem, the feasible set is the ‘‘box’’ in \mathbf{R}^2 described by -1 \leq x_1 \leq 2, \ 0 \leq x_2 \leq 3.
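As a concrete illustration, here is a minimal Python sketch (my own addition, not part of the text; the function name is_feasible and the use of NumPy are assumptions) that checks membership of a point in this box by rewriting each bound in the standard form f_i(x) \leq 0.

    import numpy as np

    def is_feasible(x, tol=0.0):
        # Membership test for the box -1 <= x1 <= 2, 0 <= x2 <= 3,
        # with each bound rewritten in the standard form f_i(x) <= 0.
        x1, x2 = x
        constraints = np.array([
            -1.0 - x1,   # -1 <= x1  becomes  -1 - x1 <= 0
            x1 - 2.0,    #  x1 <= 2  becomes   x1 - 2 <= 0
            -x2,         #  0 <= x2  becomes      -x2 <= 0
            x2 - 3.0,    #  x2 <= 3  becomes   x2 - 3 <= 0
        ])
        return bool(np.all(constraints <= tol))

    print(is_feasible((0.5, 1.0)))   # True: inside the box
    print(is_feasible((3.0, 1.0)))   # False: violates x1 <= 2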

The feasible set may be empty if the constraints cannot be satisfied simultaneously. In this case, the problem is said to be infeasible.

What is a solution?

In an optimization problem, we are usually interested in computing the optimal value of the objective function and, often, a minimizer: a vector that achieves that value, if one exists.

Feasibility problems

Sometimes an objective function is not provided. This means that we are just interested in finding a feasible point, or determining that the problem is infeasible. By convention, we set f_0 to be a constant in that case, to reflect the fact that we are indifferent to the choice of a point x as long as it is feasible.
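As a sketch of how such a problem can be solved in practice (my addition, assuming SciPy is available), the box constraints above can be passed to a linear-programming solver with a zero objective; only the returned status and point matter.

    from scipy.optimize import linprog

    # Feasibility problem: minimize the constant objective 0 subject to
    # the box constraints -1 <= x1 <= 2, 0 <= x2 <= 3.
    res = linprog(c=[0.0, 0.0], bounds=[(-1, 2), (0, 3)])
    print(res.status)   # 0: a feasible point was found (2 would signal infeasibility)
    print(res.x)        # one feasible point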

Optimal value

The optimal value of the problem is the value of the objective at optimum, and we denote it by p^* :

p^*:=\min _x f_0(x): f_i(x) \leq 0, \quad i=1, \ldots, m .

Example: In the toy optimization problem, the optimal value is p^*=-10.2667.
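To show how an optimal value and a minimizer are obtained numerically, here is a minimal sketch assuming SciPy is available; the quadratic objective below is a placeholder of my own (the toy problem's objective is defined elsewhere in the book), so the printed value is not -10.2667.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder objective (not the book's toy objective), minimized over the box.
    f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    res = minimize(f0, x0=np.zeros(2), bounds=[(-1, 2), (0, 3)])
    print(res.fun)   # numerical estimate of the optimal value p*
    print(res.x)     # a minimizer achieving (approximately) that value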

Optimal set

The optimal set (or, set of solutions) of the problem (P) is defined as the set of feasible points for which the objective function achieves the optimal value:

\mathbf{X}^{\text {opt }}:=\left\{x \in \mathbf{R}^n: f_0(x)=p^*, f_i(x) \leq 0, i=1, \ldots, m\right\} .

We take the convention that the optimal set is empty if the problem is not feasible.

A standard way to express the optimal set uses the argmin notation:

\mathbf{X}^{\text {opt }}=\arg \min _{x \in \mathbf{X}} f_0(x) .

A point x is said to be optimal if it belongs to the optimal set. Optimal points may not exist, in which case the optimal set is empty. This can happen because the problem is infeasible, or because the optimal value is only attained in the limit.

Example: The problem

\min _x e^{-x}

has no optimal points, as the optimal value p^*=0 is only attained in the limit x \rightarrow +\infty.
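A quick numerical illustration (my own, using NumPy) of why no finite x attains this value:

    import numpy as np

    # f0(x) = exp(-x) is strictly positive for every finite x,
    # yet it approaches p* = 0 as x grows.
    for x in [1.0, 10.0, 100.0]:
        print(x, np.exp(-x))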

If the optimal set is not empty, we say that the problem is attained.

Suboptimality

The \epsilon-suboptimal set is defined as

\mathbf{X}_\epsilon:=\left\{x \in \mathbf{R}^n: f_i(x) \leq 0, i=1, \ldots, m, \ f_0(x) \leq p^*+\epsilon\right\} .

(With our notation, \mathbf{X}_0=\mathbf{X}^{\text {opt }}.) Any point in the \epsilon-suboptimal set is termed \epsilon-suboptimal.

This set allows us to characterize points that are close to optimal (when \epsilon is small). In practice, algorithms are usually only able to compute suboptimal solutions, and never reach true optimality.
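A worked one-dimensional example (mine, not from the text): for the unconstrained problem \min_x x^2 we have p^*=0, and the \epsilon-suboptimal set is the interval |x| \leq \sqrt{\epsilon}. A short check in Python:

    # Illustrative 1-D problem: minimize f0(x) = x**2, so p* = 0 and the
    # epsilon-suboptimal set is {x : x**2 <= epsilon}, i.e. |x| <= sqrt(epsilon).
    eps = 1e-2
    p_star = 0.0
    f0 = lambda x: x ** 2

    for x in [0.05, 0.11]:
        print(x, f0(x) <= p_star + eps)   # True for |x| <= 0.1, False otherwise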

Example: Nomenclature of the two-dimensional toy problem.

Local vs. global optimal points

A point z is locally optimal if there is a value R>0 such that z is optimal for the problem

\min _x f_0(x): f_i(x) \leq 0, \quad i=1, \ldots, m, \quad \|z-x\|_2 \leq R .

In other words, a local minimizer z minimizes f_0, but only over nearby points in the feasible set. Hence the value of the objective at a local minimizer is not necessarily the optimal value of the problem.

The term globally optimal (or, optimal for short) is used to distinguish points in the optimal set from local minima.

Example: a function with local minima.

Local optima may be described as the curse of general optimization, as most algorithms tend to be trapped in local minima if they exist.
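To illustrate, here is a sketch (my example, assuming SciPy is available) with a one-dimensional function that has two local minima; a local method started on the wrong side of the intervening local maximum converges to the non-global one.

    import numpy as np
    from scipy.optimize import minimize

    # f0 has a global minimum near x = -1.30 (f0 ~ -3.51) and a
    # non-global local minimum near x = 1.13 (f0 ~ -1.07).
    f0 = lambda x: x[0] ** 4 - 3 * x[0] ** 2 + x[0]

    for x0 in (-2.0, 2.0):
        res = minimize(f0, x0=np.array([x0]))
        print(f"start {x0:+.1f} -> x = {res.x[0]:+.3f}, f0(x) = {res.fun:.3f}")
    # With default settings, the run started at -2.0 reaches the global minimum,
    # while the run started at +2.0 gets trapped in the local minimum.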
