01-01-2013, 10:33 AM
Nonlinear Constrained Optimization: Methods and Software
Abstract
We survey the foundations of nonlinearly constrained optimization methods, emphasizing
general methods and highlighting their key components, namely, the local model and
global convergence mechanism. We then categorize current software packages for solving
constrained nonlinear optimization problems. The packages include interior-point methods,
sequential linear/quadratic programming methods, and augmented Lagrangian methods. For
every package we highlight the main methodological components and provide a brief summary
of interfaces and availability. We also comment on termination conditions of nonlinear
solvers and provide a list of online optimization tools.
Convergence Test and Termination Conditions
We begin by describing the convergence test, a component common to all NCO algorithms.
The convergence test also motivates many of the local models described next.
The convergence analysis of NCO algorithms typically guarantees convergence only to KKT points.
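As a concrete illustration of such a convergence test, the sketch below evaluates the residuals of the first-order (KKT) conditions for an equality-constrained problem; a solver terminates when both residuals fall below a tolerance. The problem, function names, and sign convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kkt_residuals(grad_f, jac_c, c_val, lam):
    """Residuals of the KKT conditions for min f(x) s.t. c(x) = 0:
    stationarity ||grad f(x) + jac c(x)^T lam|| and feasibility ||c(x)||.
    (Sign convention for the multipliers is an assumption.)"""
    stationarity = grad_f + jac_c.T @ lam
    feasibility = c_val
    return (np.linalg.norm(stationarity, np.inf),
            np.linalg.norm(feasibility, np.inf))

# Toy problem (not from the paper): min x1^2 + x2^2  s.t.  x1 + x2 = 1,
# with solution x = (0.5, 0.5) and multiplier lam = -1.
x = np.array([0.5, 0.5])
lam = np.array([-1.0])
grad_f = 2.0 * x
jac_c = np.array([[1.0, 1.0]])
c_val = np.array([x[0] + x[1] - 1.0])
stat, feas = kkt_residuals(grad_f, jac_c, c_val, lam)
```

At the exact KKT point both residuals vanish; in practice a solver would compare them against tolerances such as 1e-6.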
Local Model: Improving a Solution Estimate
One key difference among nonlinear optimization methods is how the local model is constructed.
The goal of the local model is to provide a step that improves on the current iterate. We distinguish
three broad classes of local models: sequential linear models, sequential quadratic models, and
interior-point models. Models that are based on the augmented Lagrangian method are more
suitably described in the context of globalization strategies in Section 4.
Interior-Point Methods
Interior-point methods (IPMs) are an alternative approach to active-set methods. Interior-point
methods are a class of perturbed Newton methods that postpone the decision of which constraints
are active until the end of the iterative process. The most successful IPMs are primal-dual IPMs,
which can be viewed as Newton’s method applied to the perturbed first-order conditions of (1.1)
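To make the "perturbed Newton" view concrete, the sketch below applies one primal-dual Newton step to the perturbed first-order conditions of a one-variable bound-constrained toy problem, min (x-2)^2 s.t. x >= 0, namely f'(x) - z = 0 and x*z = mu with x, z > 0. The toy problem, step rule, and the 0.995 fraction-to-the-boundary factor are illustrative assumptions, not the system (3.5) of the paper.

```python
import numpy as np

def ipm_step(x, z, mu):
    """One primal-dual Newton step on the perturbed conditions
    f'(x) - z = 0 and x*z = mu for min (x-2)^2 s.t. x >= 0."""
    grad = 2.0 * (x - 2.0)                       # f'(x)
    hess = 2.0                                   # f''(x)
    F = np.array([grad - z, x * z - mu])         # perturbed KKT residual
    J = np.array([[hess, -1.0], [z, x]])         # primal-dual Jacobian
    dx, dz = np.linalg.solve(J, -F)
    # Fraction-to-the-boundary rule: keep x and z strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (z, dz)):
        if dv < 0:
            alpha = min(alpha, -0.995 * v / dv)
    return x + alpha * dx, z + alpha * dz

x, z, mu = 1.0, 1.0, 1e-8
for _ in range(30):
    x, z = ipm_step(x, z, mu)
```

The iterates stay strictly interior while x approaches the solution x = 2; a practical IPM would additionally drive mu to zero along a barrier-parameter schedule.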
Globalization Strategy: Convergence from Remote Starting Points
The local improvement models of the preceding section guarantee convergence only in a small
neighborhood of a regular solution. Globalization strategies are concerned with ensuring convergence
from remote starting points to stationary points (and should not be confused with global
optimization). To ensure convergence from remote starting points, we must monitor the progress
of the local method. Monitoring is easily done in unconstrained optimization, where we can measure
progress by comparing objective values. In constrained optimization, however, we must
take the constraint violation into account. Three broad classes of strategies exist: augmented Lagrangian
methods, penalty and merit-function methods, and filter and funnel methods.
Penalty and Merit Function Methods
Penalty and merit functions combine the objective function and a measure of the constraint violation
into a single function whose local minimizers correspond to local minimizers of the original
problem (1.1). Convergence from remote starting points can then be ensured by forcing descent of
the penalty or merit function, using one of the mechanisms of the next section.
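A minimal sketch of this idea is the exact l1 penalty function, phi(x; nu) = f(x) + nu * ||c(x)||_1: for a penalty parameter nu larger than the magnitude of the multipliers, its unconstrained local minimizers coincide with those of the constrained problem. The toy problem and the choice nu = 2 are illustrative assumptions.

```python
import numpy as np

def l1_merit(f, c, x, nu):
    """Exact l1 penalty/merit function phi(x; nu) = f(x) + nu*||c(x)||_1,
    combining the objective and the constraint violation in one function."""
    return f(x) + nu * np.sum(np.abs(c(x)))

# Toy problem (illustrative): min x1^2 + x2^2  s.t.  x1 + x2 = 1,
# with constrained minimizer x* = (0.5, 0.5) and multiplier -1.
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x_star = np.array([0.5, 0.5])
nu = 2.0                       # nu > |multiplier| = 1, so the penalty is exact
phi_far = l1_merit(f, c, np.array([0.0, 0.0]), nu)   # infeasible point
phi_opt = l1_merit(f, c, x_star, nu)                 # constrained minimizer
```

Forcing descent of phi therefore pulls the iterates toward feasibility and optimality simultaneously.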
Filter and Funnel Methods
Filter and funnel methods provide an alternative to penalty methods that does not rely on the use
of a penalty parameter. Both methods use step acceptance strategies that are closer to the original
problem, by separating the constraints and the objective function.
Filter methods: Filter methods keep a record of the constraint violation, h_l := ‖c(x_l)‖, and objective
function value, f_l := f(x_l), for some previous iterates x_l, l ∈ F_k (Fletcher and Leyffer, 2002).
A new point is acceptable if it improves either the objective function or the constraint violation
compared to all previous iterates.
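This acceptance rule can be sketched as follows; the envelope parameter gamma and the specific sufficient-decrease form are common choices in the filter literature and are assumptions here, not quoted from the paper.

```python
def acceptable(h_new, f_new, filter_pts, gamma=1e-5):
    """Filter test: a trial point (h_new, f_new) is acceptable if, for
    every stored pair (h_l, f_l), it improves either the constraint
    violation h or the objective f; gamma adds a small envelope so the
    improvement is not merely marginal."""
    return all(h_new <= (1.0 - gamma) * h_l or f_new <= f_l - gamma * h_l
               for (h_l, f_l) in filter_pts)

filter_pts = [(1.0, 5.0), (0.5, 7.0)]       # hypothetical filter entries
ok = acceptable(0.2, 10.0, filter_pts)      # reduces h vs. both entries
bad = acceptable(1.2, 5.5, filter_pts)      # dominated by (1.0, 5.0)
```

A point dominated by any filter entry is rejected, which replaces the single scalar comparison of a merit function with a two-dimensional dominance test.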
Maratos Effect and Loss of Fast Convergence
One can construct simple examples showing that arbitrarily close to an isolated strict local minimizer,
the Newton step will be rejected by the exact penalty function (Maratos, 1978), resulting
in slow convergence. This phenomenon is known as the Maratos effect. It can be mitigated by
computing a second-order correction step, which is a Newton step that uses the same linear system
with an updated right-hand side (Fletcher, 1987; Nocedal and Wright, 1999). An alternative
method to avoid the Maratos effect is the use of nonmonotone techniques that require descent
over only the last M iterates, where M > 1 is a constant.
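The nonmonotone idea admits a very small sketch: compare the trial merit value against the worst of the last M accepted values rather than the most recent one, so a full Newton step that slightly increases the merit function (as in the Maratos effect) can still be accepted. The history values below are hypothetical.

```python
def nonmonotone_accept(phi_new, history, M=5):
    """Nonmonotone acceptance test: require descent only relative to the
    worst merit value among the last M accepted iterates."""
    return phi_new < max(history[-M:])

# Hypothetical merit-value history: a monotone test (3.5 < 3.0) would
# reject this trial step, but the nonmonotone test accepts it.
history = [10.0, 4.0, 3.0]
accepted = nonmonotone_accept(3.5, history)
```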
Line-Search Methods
Line-search methods enforce convergence with a backtracking line search along the direction s.
For interior-point methods, the search direction, s = (Δx, Δy, Δz), is obtained by solving the
primal-dual system (3.5). For SQP methods, the search direction is the solution of the QP (3.1),
s = d. It is important to ensure that the model produces a descent direction, e.g., ∇Φ(x_k)ᵀ s < 0 for a
merit or penalty function Φ(x); otherwise, the line search may not terminate. A popular line search
is the Armijo search (Nocedal and Wright, 1999), described in Algorithm 3 for a merit function
Φ(x). The algorithm can be shown to converge to a stationary point, detect unboundedness, or
converge to a point where there are no directions of descent.
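A backtracking Armijo search in the spirit of Algorithm 3 can be sketched as follows; the parameter values (sigma = 1e-4, backtracking factor 0.5) and the toy merit function are illustrative assumptions, not taken from the paper.

```python
def armijo(phi, dphi0, x, s, alpha0=1.0, sigma=1e-4, tau=0.5, max_iter=50):
    """Backtracking Armijo line search: shrink alpha until
    phi(x + alpha*s) <= phi(x) + sigma*alpha*dphi0, where dphi0 < 0 is
    the directional derivative of the merit function along s."""
    phi0 = phi(x)
    alpha = alpha0
    for _ in range(max_iter):
        if phi(x + alpha * s) <= phi0 + sigma * alpha * dphi0:
            return alpha
        alpha *= tau                  # backtrack
    raise RuntimeError("no acceptable step: s may not be a descent direction")

# Toy merit function phi(x) = x^2 with descent direction s = -x at x = 1.
phi = lambda x: x * x
x, s = 1.0, -1.0
alpha = armijo(phi, dphi0=2.0 * x * s, x=x, s=s)
```

The sufficient-decrease condition requires dphi0 < 0 on entry, which is exactly why the text insists that the local model produce a descent direction before the search begins.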