
(568b) Interior Point Regularization Strategies for Mathematical Programs with Equilibrium Constraints

Authors 

Wan, W. - Presenter, Carnegie Mellon University
Biegler, L. - Presenter, Carnegie Mellon University

Mathematical Programs with Equilibrium Constraints (MPECs) are known to be difficult optimization problems because they violate constraint qualifications (CQs) such as the Mangasarian-Fromovitz constraint qualification (MFCQ). Without a CQ, the optimum may not be a KKT point, so MPECs are challenging for most NLP solvers, which are based on the KKT conditions. Nevertheless, MPECs are widely used to represent certain classes of disjunctions in a model. In process optimization, complementarities allow models to capture the disappearance of phases, flow reversal, safety valve operation, and other discrete events.
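For concreteness, a generic MPEC can be sketched as follows; here f and h stand in for the process model, and the pairing of the variables y and z is illustrative:

```latex
\min_{x,\,y,\,z}\; f(x,y,z)
\quad \text{s.t.} \quad h(x,y,z) = 0, \qquad
0 \le y \;\perp\; z \ge 0
```

The complementarity condition 0 ≤ y ⊥ z ≥ 0 requires y_i ≥ 0, z_i ≥ 0, and y_i z_i = 0 for each pair (for example, a phase flow and the corresponding equilibrium slack), so at least one variable in each pair must vanish. It is this structure that causes MFCQ to fail at feasible points of the problem.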

To solve MPECs with NLP solvers, one approach is to introduce a parameter ε that relaxes the complementarity constraints so that MFCQ holds at feasible points. By solving a series of relaxed NLP problems, the optimal solution is approached as ε goes to zero. Alternatively, the complementarity constraints can be moved into the objective function as a penalty term. However, the relaxation approach slows down the solution process, and the penalty approach has the difficulty of choosing a suitable penalty parameter.
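In terms of the generic form above, the two reformulations can be sketched as follows, with ε the relaxation parameter and ρ the penalty parameter (the exact forms used in the work may differ):

```latex
\text{Reg}(\varepsilon):\quad
\min\; f(x,y,z)\;\;\text{s.t.}\;\; h(x,y,z)=0,\;\; y,z \ge 0,\;\; y_i z_i \le \varepsilon
```

```latex
\text{PF}(\rho):\quad
\min\; f(x,y,z) + \rho\, y^{\mathsf T} z\;\;\text{s.t.}\;\; h(x,y,z)=0,\;\; y,z \ge 0
```

Reg(ε) must be solved repeatedly as ε is driven to zero, while PF(ρ) requires a sufficiently large ρ that is hard to estimate in advance: too small a value distorts the solution, too large a value makes the NLP ill-conditioned.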

As the problem size and degrees of freedom of these relaxed NLPs increase, active set methods become less efficient because of the combinatorial complexity of identifying the active constraints. Interior point methods, on the other hand, add a logarithmic barrier function to the objective for the inequality constraints, resulting in cheaper iterations based on Newton steps. IPOPT, one of the most popular interior point NLP solvers, is widely used because of its high performance. However, interior point methods require certain regularity conditions for the solution path to exist. Unfortunately, MPECs do not satisfy these conditions, and even the relaxed NLPs may be difficult to solve with interior point methods.
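For reference, an interior point method replaces the bound constraints (written here simply as x ≥ 0 after adding slacks) by a log-barrier term and solves a sequence of barrier subproblems:

```latex
\min_{x}\; f(x) \;-\; \mu \sum_{i} \ln x_i
\quad \text{s.t.} \quad c(x) = 0,
\qquad \mu \downarrow 0
```

Each subproblem is handled with primal-dual Newton steps, i.e. one sparse linear solve per iteration, which is what keeps the iterations cheap. The regularity conditions mentioned above (for instance, linear independence of the active constraint gradients) are what guarantee that these Newton systems, and the central path itself, are well defined.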

In this presentation, we describe a new regularization algorithm for IPOPT. The algorithm uses concepts from active set methods and identifies an independent subset of active constraints. Within the interior point framework, this is achieved by adding big-M terms to dependent rows, which essentially eliminates the dependent constraints. The result is more accurate Newton steps and faster convergence to a solution.
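As a rough illustration of this idea (not the actual IPOPT implementation), the sketch below detects linearly dependent rows of the constraint Jacobian and places a large entry on the corresponding diagonal of the augmented KKT matrix, so that the Newton step effectively ignores those rows. The function name, tolerances, and the choice of a rank-revealing QR are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import qr

def regularized_kkt(W, A, big_M=1e8, tol=1e-8):
    """Illustrative sketch: augmented KKT matrix in which dependent
    constraint rows receive a big-M diagonal term, so the Newton step
    effectively drops them.

    W : (n, n) Hessian of the Lagrangian plus barrier terms
    A : (m, n) constraint Jacobian
    """
    m, n = A.shape

    # Rank-revealing QR of A^T: column pivoting orders the rows of A so
    # that an independent subset comes first.
    _, R, piv = qr(A.T, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * max(diag.max(initial=0.0), 1.0)))
    dependent = piv[rank:]  # rows judged linearly dependent

    # Standard augmented system [W, A^T; A, 0], except that dependent
    # rows receive a -big_M entry in the (2,2) block.  Their multiplier
    # steps are then driven toward zero, which "eliminates" those rows.
    D = np.zeros((m, m))
    D[dependent, dependent] = big_M
    K = np.block([[W, A.T],
                  [A, -D]])
    return K, dependent
```

With no dependent rows this reduces to the usual augmented system; the big-M entries play a role similar to an inertia-correcting regularization, but applied only where the Jacobian is rank deficient.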

When IPOPT cannot obtain sufficient improvement in the line search, it switches to a restoration phase that reduces infeasibility, but this often leads to slow convergence. Here we introduce a new penalty restoration phase, as an alternative to the restoration phase in the current version of IPOPT, which adds the original objective function as a penalty term. Favorable convergence properties are shown for this restoration phase, and the algorithm performs well on degenerate or infeasible problem sets. The proposed algorithm is tested on 167 MPEC problems, and the improvements over the current version of IPOPT are demonstrated.
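One way to write such a penalty restoration problem is sketched below, with η an illustrative penalty weight (the proximity term used in the standard restoration phase is omitted here, and the exact form used in the work may differ):

```latex
\min_{x}\; \big\| c(x) \big\|_1 \;+\; \eta\, f(x)
```

Keeping a suitably weighted copy of the original objective inside restoration biases the iterates toward points that remain useful for the outer algorithm, rather than simply toward the nearest feasible point.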