(183b) Global Optimization of Fixed-Point Iterations | AIChE


Authors 

Barton, P. I. - Presenter, Massachusetts Institute of Technology

Optimization formulations are applied to solve a wide variety of process systems engineering problems, from feasibility to profitability studies.  The optimization problems encountered in process systems applications are almost always nonlinear and nonconvex, with process model equations expressed explicitly as equality constraints alongside performance specifications, which are often written as inequality constraints.  Guaranteeing the location of global optima of such problems can be expensive or impossible with currently available optimization algorithms, given the worst-case exponential running time of all known methods.

Steady-state process models are typically written in the form h(z) = 0, with h: ℝ^nz → ℝ^nx, and are often underdetermined (nz > nx).  We can partition the process variables z into dependent variables x ∈ ℝ^nx and independent variables p ∈ ℝ^np, such that z = (x, p).  The process model can then be expressed as h(x, p) = 0, with h: ℝ^nx × ℝ^np → ℝ^nx.  We can reduce the dimensionality of the optimization problem by solving h(x, p) = 0 for the dependent variables, x(p), which are implicit functions of the independent variables p.  The complexity of the model equations most often prevents us from solving h(x, p) = 0 for an explicit x(p), so fixed-point iterations, such as Newton's method, must be employed to approximate the implicit function.  Optimization formulations in turn become more complicated because the functions to be optimized can only be evaluated through a fixed-point iterative algorithm.  On the other hand, this procedure essentially eliminates x from the problem, allowing us to focus on a reduced-space optimization problem.
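As a concrete illustration (not drawn from this work), the sketch below evaluates an implicit function x(p) by Newton's method for a hypothetical scalar residual h(x, p) = x - p·cos(x); the function names, starting point, and tolerances are illustrative assumptions.

```python
import math

def h(x, p):
    # Hypothetical steady-state residual: h(x, p) = x - p*cos(x)
    return x - p * math.cos(x)

def h_x(x, p):
    # Partial derivative of h with respect to the dependent variable x
    return 1.0 + p * math.sin(x)

def solve_implicit(p, x0=0.0, tol=1e-12, max_iter=100):
    """Approximate x(p) satisfying h(x(p), p) = 0 via Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = h(x, p) / h_x(x, p)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

x_star = solve_implicit(0.5)
print(x_star, h(x_star, 0.5))  # residual is ~0 at the computed x(p)
```

For each fixed p, the loop converges to the dependent-variable value x(p); an optimizer in the reduced space only ever sees p and the iteratively computed x(p).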

In [2], the authors present an algorithm for finding global optima of problems with embedded algorithms having a fixed number of iterations known a priori.  Their examples focus on approximating the solution of differential equations, such as the one-dimensional heat equation.  The method makes use of McCormick [1] convex and concave relaxations of factorable functions, and it drastically reduces the dimension of the problems, making the location of global optima much more tractable.  In this paper, we extend these ideas to algorithms without a fixed number of iterations known a priori, such as fixed-point iterations, which arise when solving steady-state process models.
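To illustrate the basic building block behind McCormick relaxations [1], the snippet below computes the standard convex underestimator and concave overestimator of a bilinear term w = x·y over a box. This is the textbook construction, not code from [2].

```python
def mccormick_bilinear(x, y, xL, xU, yL, yU):
    """McCormick convex underestimator (cv) and concave overestimator (cc)
    of w = x*y over the box [xL, xU] x [yL, yU]."""
    cv = max(xL * y + x * yL - xL * yL,
             xU * y + x * yU - xU * yU)
    cc = min(xU * y + x * yL - xU * yL,
             xL * y + x * yU - xL * yU)
    return cv, cc

# Sanity check at an interior point: cv <= x*y <= cc
cv, cc = mccormick_bilinear(0.3, -0.7, 0.0, 1.0, -1.0, 1.0)
print(cv <= 0.3 * (-0.7) <= cc)  # True
```

McCormick's composition theorem and factorable decomposition extend such elementary relaxations to arbitrary factorable functions built from sums, products, and univariate compositions.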

We have developed an algorithm for determining global optima of nonconvex functions dependent on implicit functions defined through fixed-point iterations.  We employ parametric interval Newton methods to calculate rigorous bounds on the implicit function x(p), defined through a fixed-point iteration, over the independent variable set P.  The rigorous bounds are then used to calculate concave overestimators and convex underestimators of the implicit function over the set P, using McCormick's composition theorem and factorable decomposition [1], along with subgradients of these relaxations [2].  The upper and lower bounds, as well as the McCormick relaxations, are refined within a branch-and-bound framework, branching on P.  The interval refinement properties of parametric interval Newton methods ensure that tighter rigorous bounds on the implicit function are calculated for each partition of P, so that tighter under- and overestimators are obtained at each step in the branch-and-bound tree.  The algorithm terminates with a guaranteed ε-optimal global solution.
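The contraction idea behind parametric interval Newton methods can be sketched for a scalar stand-in residual h(x, p) = x² - p (so x(p) = √p), using hand-rolled interval arithmetic. This toy omits outward rounding and the multidimensional machinery of the real method; all names and the 20-iteration budget are illustrative assumptions.

```python
def i_sub(a, b):
    # Interval subtraction: a - b
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    # Interval multiplication: take min/max over endpoint products
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def i_div(a, b):
    # Interval division; requires 0 not in b
    assert b[0] > 0 or b[1] < 0
    return i_mul(a, (1.0 / b[1], 1.0 / b[0]))

def intersect(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

def interval_newton(X, P, iters=20):
    """Contract an enclosure X of all roots x(p) of h(x, p) = x**2 - p,
    for every p in the parameter interval P, via the parametric
    interval Newton operator N(X, P) = m - h(m, P) / h_x(X)."""
    for _ in range(iters):
        m = 0.5 * (X[0] + X[1])
        hm = i_sub((m * m, m * m), P)   # interval extension of h(m, P)
        dh = i_mul((2.0, 2.0), X)       # h_x(x) = 2x evaluated over X
        N = i_sub((m, m), i_div(hm, dh))
        X = intersect(X, N)             # roots for every p in P survive
    return X

X = interval_newton((0.5, 2.0), (1.0, 2.0))
print(X)  # an enclosure of [1.0, sqrt(2)], tighter than the initial [0.5, 2.0]
```

Because the intersection step preserves every root x(p) for p in P, branching on P and re-contracting yields strictly tighter enclosures on each child node, which is the refinement property the branch-and-bound framework exploits.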

References

[1]       Garth P. McCormick, Computability of global solutions to factorable nonconvex programs: Part I - Convex underestimating problems, Mathematical Programming, 10 (1976), pp. 147-175.

[2]       Alexander Mitsos, Benoit Chachuat, and Paul I. Barton, McCormick-based relaxations of algorithms, SIAM Journal on Optimization, in press (2009).