
(89a) Deterministic Global Optimization: What Can We Compute and What Can We Guarantee?



There are several deterministic global optimization methods that can guarantee convergence to all solutions under certain conditions. Interval methods, the alphaBB methodology, and some integral path methods provide convergence guarantees provided certain hypotheses are satisfied (e.g., smoothness of the objective function and/or constraints; compactness, connectedness, and convexity of the constraint set; and other mathematical properties). However, many applications in science and engineering are NP-hard. For problems with a small number of solutions, convergence guarantees are often realizable, even though the associated computational cost can be demanding and the physical interpretation of the solutions can be challenging. For example, the Statistical Associating Fluid Theory (SAFT) equation of state has a relatively small number of compressibility roots, usually fewer than 10. However, it is not finding all solutions of the SAFT equation that is challenging; this can be guaranteed in a number of ways (a minimal root-enclosure sketch is given below). What is challenging is deciding with certainty which compressibility root corresponds to the vapor phase and which to the liquid phase. Similar remarks regarding physical relevance often apply to parameter estimation problems in which multiple sets of optimal parameters exist.

Consider a second class of problems. Macroscopic phase stability and phase equilibrium problems involving non-ideal mixtures also generally have a small number of solutions: one global minimum and possibly several local minima and saddle points. Here, however, the choice of different models for each phase results in non-differentiability, and large numbers of components increase the computational burden for many deterministic global optimization methods that are based on tangent plane analysis. In particular, it is often difficult to ensure that all stationary points of the tangent plane distance function are found for problems with large numbers of components (an illustrative tangent plane distance sketch is also given below).

Dynamic global optimization problems, whose objective functions or constraints involve differential equations, can place very high demands on computer resources and truly challenge the foundational assumptions of many deterministic global optimization methods. For example, the reliability of building-block tools such as successive quadratic programming and other infeasible path methodologies on which some global optimization methods are built, and the availability of second partial derivative information, can represent weak links in the global optimization of dynamic models. In addition, the existence of multiple optima in nonlinear programming sub-problems can undermine the theoretical and practical underpinnings of some decomposition techniques, often making it necessary to impose unrealistic assumptions such as convexity in order to establish guarantees.

Finally, computing all solutions or exploring all eigendirections is often undesirable and impractical for global optimization problems with a very large number of solutions. For example, in conformational molecular modeling the number of local minima and saddle points can be extremely large (i.e., on the order of millions or billions), and the calculations can involve multiple length and time scales if the potential energy landscape is rough. Moreover, in problems like protein folding it is often desirable to know the conformations at strong local minima, because these misfolded protein structures can often be linked to certain diseases.
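To make the root-enclosure point concrete, the following is a minimal sketch of interval bisection applied to a simple cubic stand-in for an equation of state in the compressibility Z. The coefficients, search interval, and tolerance are illustrative assumptions, not a SAFT model; the point is that every real root on the interval can be enclosed with certainty, while labeling the enclosures as vapor or liquid remains a separate question.

```python
# A minimal sketch (illustrative coefficients, not a SAFT model): enclose all real
# roots of f(Z) = sum_k coeffs[k] * Z**k on an interval by bisection, using a
# natural interval extension of f as the exclusion test.

def ipow(lo, hi, n):
    """Interval bound on x**n for x in [lo, hi]."""
    vals = [lo**n, hi**n]
    if lo < 0.0 < hi and n % 2 == 0:
        vals.append(0.0)
    return min(vals), max(vals)

def f_interval(zlo, zhi, coeffs):
    """Natural interval extension of f over [zlo, zhi]."""
    flo = fhi = coeffs[0]
    for k, c in enumerate(coeffs[1:], start=1):
        plo, phi = ipow(zlo, zhi, k)
        lo, hi = c * plo, c * phi
        if lo > hi:
            lo, hi = hi, lo
        flo, fhi = flo + lo, fhi + hi
    return flo, fhi

def enclose_roots(zlo, zhi, coeffs, tol=1e-8):
    """Return small sub-intervals of [zlo, zhi] whose union contains every root of f."""
    flo, fhi = f_interval(zlo, zhi, coeffs)
    if flo > 0.0 or fhi < 0.0:             # zero is excluded: no root in this box
        return []
    if zhi - zlo < tol:                     # box is small enough: keep it as an enclosure
        return [(zlo, zhi)]
    mid = 0.5 * (zlo + zhi)
    return (enclose_roots(zlo, mid, coeffs, tol)
            + enclose_roots(mid, zhi, coeffs, tol))

# Placeholder cubic with roots at Z = 0.1, 0.2, 0.3:
coeffs = [-0.006, 0.11, -0.6, 1.0]          # f(Z) = Z^3 - 0.6 Z^2 + 0.11 Z - 0.006
boxes = enclose_roots(1e-4, 1.0, coeffs)
roots = sorted(set(round(0.5 * (a + b), 4) for a, b in boxes))
print("root enclosures near Z =", roots)
# The smallest and largest roots are only *candidates* for liquid-like and vapor-like
# compressibilities; certifying that labeling is the hard part referred to above.
```

Stronger exclusion tests (interval Newton contractions, convex relaxations of the alphaBB type) sharpen the same basic scheme, but none of them resolve the physical labeling question by themselves.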
These issues (i.e., physical relevance, constraint set compactness, non-differentiability, the number of solutions, and varying computational goals), as well as others that arise in global optimization, present serious challenges in terms of what can be guaranteed and what can actually be computed.
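As a second illustration, the sketch below evaluates the tangent plane distance for a binary mixture described by a one-parameter Margules activity model and locates its stationary points by a dense scan. The model, parameter value, and feed composition are assumptions chosen purely for illustration; in one dimension the scan trivially finds both minima, but this brute-force approach offers no guarantee as the number of components grows, which is precisely the gap that deterministic tangent plane methods aim to close.

```python
# A minimal sketch (assumed one-parameter Margules model, not the mixture models
# discussed in the talk): tangent plane distance (TPD) for a binary mixture, with
# stationary points located by a dense grid scan.
import numpy as np

A = 2.5                      # assumed Margules parameter (large enough to split two liquid phases)
z1 = 0.5                     # assumed feed mole fraction of component 1

def ln_gamma(x1):
    """ln(gamma_1), ln(gamma_2) for the one-parameter Margules model."""
    return A * (1.0 - x1) ** 2, A * x1 ** 2

def tpd(x1):
    """Gibbs tangent plane distance of a trial composition x1 relative to the feed z1."""
    x2, z2 = 1.0 - x1, 1.0 - z1
    g1x, g2x = ln_gamma(x1)
    g1z, g2z = ln_gamma(z1)
    return (x1 * (np.log(x1) + g1x - np.log(z1) - g1z)
            + x2 * (np.log(x2) + g2x - np.log(z2) - g2z))

# Dense scan for interior local minima of the TPD surface (easy in 1-D; in many
# dimensions this brute force is exactly what cannot be afforded or certified).
xs = np.linspace(1e-4, 1.0 - 1e-4, 20001)
d = np.array([tpd(x) for x in xs])
is_min = (d[1:-1] < d[:-2]) & (d[1:-1] < d[2:])
print("TPD local minima near x1 =", np.round(xs[1:-1][is_min], 4))
print("feed unstable (negative TPD found)" if d.min() < 0.0 else "no negative TPD on this grid")
```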

In this work, the performance of a variety of deterministic global optimization methods on several important classes of problems in science and engineering is surveyed. The deterministic global optimization methods considered include interval methods, branch and bound methods such as branch and reduce (BARON) and alphaBB, Mixed Integer Nonlinear Programming (MINLP), integral path methods, and others. Challenging numerical examples, ranging from dynamic optimization problems to phase stability and equilibrium calculations to difficult molecular conformation problems, and whose computational goals vary significantly, are presented. Conditions under which guarantees can be provided will be discussed and compared with what can actually be computed. The importance of algorithmic implementation, the availability of computational resources, physical relevance, the presence of multiple length and time scales, and meaningful numerical comparisons will also be discussed. Many geometric illustrations and numerical results will be presented.