(194f) An Interior-Point l1-Exact Penalty Strategy for Degenerate Nonlinear Programming Problems within IPOPT

Authors 

Thierry, D. - Presenter, Carnegie Mellon University
Biegler, L., Carnegie Mellon University
The solution of Nonlinear Programming (NLP) problems remains a continuously relevant topic for many applications, particularly the design and control of chemical processes, whose basic models typically involve nonlinearities. Moreover, algorithms for global and mixed-integer nonlinear optimization are tacitly subject to the properties of their NLP subproblems. It is therefore critical to have a robust strategy for solving NLPs.

In this context, the interior-point optimization algorithm IPOPT has become a popular choice for the solution of NLP problems. However, as the processing power of modern computers allows the solution of increasingly large problems, their complexity often leads to ill-posed formulations. One particularly difficult instance of such issues arises when the locally linearized active constraints fail to uphold key assumptions for the convergence of the algorithm. This corresponds to a violation of the Linear Independence Constraint Qualification (LICQ), which leads to a kind of degenerate behavior that ultimately hinders the performance of the solution algorithm.
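As a simple illustration of this kind of degeneracy (a textbook example, not taken from this work), consider the complementarity-type constraints

x_1 \ge 0, \quad x_2 \ge 0, \quad x_1 x_2 \le 0.

At the feasible point (0, 0) all three constraints are active, and their gradients (1, 0), (0, 1), and (0, 0) are linearly dependent, so LICQ fails; in fact, every feasible point of a problem containing these constraints violates LICQ.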

Though interior-point solvers like IPOPT have built-in strategies that attempt to overcome this kind of local behavior, some case studies reveal that these strategies do not guarantee successful runs. Moreover, correcting degenerate behavior might not be straightforward, or even possible, for certain kinds of problems.

In this work we present an implementation of an alternative algorithmic strategy, denoted as the l1-exact penalty formulation. For this, the l1-norm of the entire set of equality constraints of the NLP problem is moved into the objective function and penalized. This results in a bound-constrained problem with a penalty parameter to be set. Penalty-slack variables are then added to replace the nonsmooth l1-penalty function and form a new set of equality constraints, finally yielding an equivalent smooth NLP problem. The key feature of this formulation is that it is well posed, in the sense that for a fixed penalty parameter the Mangasarian-Fromovitz Constraint Qualification (MFCQ) is always satisfied, and steps towards the solution can always be taken. Moreover, if a strategy for updating the penalty parameter is used and the parameter remains bounded, stationary points of the l1-penalty problem may correspond to local solutions of the original NLP.
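For concreteness, a standard form of this reformulation (a sketch in generic notation, not necessarily the exact formulation implemented here) is the following. The original NLP

\min_x f(x) \quad \text{s.t.} \quad c(x) = 0, \; x_L \le x \le x_U

is first replaced by the bound-constrained l1-penalty problem with a fixed penalty parameter \rho > 0,

\min_x f(x) + \rho \, \|c(x)\|_1 \quad \text{s.t.} \quad x_L \le x \le x_U,

and the nonsmooth penalty term is then rewritten with nonnegative slack variables p and n satisfying c(x) = p - n:

\min_{x, p, n} f(x) + \rho \sum_i (p_i + n_i) \quad \text{s.t.} \quad c(x) - p + n = 0, \; p \ge 0, \; n \ge 0, \; x_L \le x \le x_U.

Intuitively, the slacks p and n can always absorb any value of c(x), so the equality constraints of the smooth reformulation never become degenerate; this is the mechanism behind the MFCQ guarantee mentioned above.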

We demonstrate the performance and limitations of this strategy implemented within IPOPT, compared against the solver's regular execution, by attempting to solve degenerate problems such as Mathematical Programs with Complementarity Constraints (MPCCs) and a degenerate version of the CUTEr test set. Finally, nonlinear blending problems, which typically have dependent linearized constraints whenever flows are set to zero, have also been considered with our strategy.