(101e) Improved Convex Relaxations for Global Dynamic Optimization | AIChE

Authors 

Cao, H. - Presenter, McMaster University
Song, Y., McMaster University
Khan, K., McMaster University

Chemical engineering applications such as safety verification and model validation require minimizing dynamic process models to global optimality. Deterministic global optimization methods typically require global bounding information for the dynamic process model to locate and verify the global optimum. These bounds are typically obtained by generating convex relaxations and minimizing them locally. However, this process may be computationally expensive for dynamic systems, which in turn limits the performance of global dynamic optimization methods. Tighter relaxations intuitively translate into tighter lower bounds, which in turn translate into fewer iterations of the overarching global optimization method and ultimately shorter computational times. To this end, we describe a new class of convex relaxations for parametric ordinary differential equations (ODEs), which are guaranteed to be tighter than state-of-the-art approaches. Moreover, unlike existing approaches, subgradient and gradient information may be readily evaluated and supplied to local optimization methods for efficient bounding.
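As a minimal, hypothetical illustration of this bounding idea (an example problem, not one from this work): a valid lower bound on the nonconvex bilinear function f(x, y) = x·y over a box can be obtained by minimizing its McCormick convex envelope. Since that envelope is the pointwise maximum of two affine underestimators, its exact minimum is found here with a small linear program.

```python
# Hypothetical example (not the authors' method): lower-bounding the
# nonconvex function f(x, y) = x*y over a box by minimizing its
# McCormick convex envelope, written in epigraph form as a small LP.
from scipy.optimize import linprog

xL, xU = -1.0, 2.0  # box for x
yL, yU = -1.0, 3.0  # box for y

# Epigraph form: minimize t subject to t >= each affine underestimator,
#   xL*y + yL*x - xL*yL <= t   and   xU*y + yU*x - xU*yU <= t
c = [0.0, 0.0, 1.0]                      # decision variables (x, y, t)
A_ub = [[yL, xL, -1.0], [yU, xU, -1.0]]  # underestimators moved below t
b_ub = [xL * yL, xU * yU]
bounds = [(xL, xU), (yL, yU), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
lower_bound = res.fun  # a guaranteed lower bound on min of x*y over the box
print(lower_bound)     # -3.0 for this box, attained at (x, y) = (-1, 3)
```

Because the relaxation is convex, any local solver certifies its global minimum, which is exactly the property that makes local minimization of relaxations a valid lower-bounding step in deterministic global optimization.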

As the key feature of the new relaxations, compared to the state-of-the-art method of Scott and Barton [1], convex optimization problems are embedded into the right-hand side (RHS) of an auxiliary ODE system that generates the relaxations, intuitively squeezing them closer to the original dynamic process model. This approach has several significant advantages. First, it inherently constructs tighter convex relaxations than the state-of-the-art method [1]. Second, unlike existing approaches, it is compatible with any valid convex relaxations of the original system's RHS, including αBB relaxations [2], generalized McCormick relaxations [3], or relaxations obtained from physical intuition. In particular, when αBB relaxations of the RHS are employed in the new approach, the resulting relaxations have been empirically observed in many cases to be significantly tighter than established αBB relaxations of ODEs.
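To give a flavor of auxiliary-ODE bounding in general (this is a simplified differential-inequality sketch with a made-up example system, not the construction proposed in this work), the RHS of the auxiliary system below optimizes the original RHS f(x, p) = p − x² over the current state enclosure at each time; the abstract's method analogously embeds convex optimization problems in the RHS, producing convex relaxations in p rather than constant interval bounds.

```python
# Hypothetical sketch (not the authors' construction): interval state
# bounds for the parametric ODE  x' = p - x**2,  p in [pL, pU],  x(0) = 1,
# generated by an auxiliary ODE whose RHS minimizes/maximizes the
# original RHS over the current enclosure [xl, xu], in the spirit of
# differential inequalities.
import numpy as np
from scipy.integrate import solve_ivp

pL, pU = 0.0, 1.0  # parameter interval (assumed for this example)

def interval_rhs(t, z):
    xl, xu = z
    # exact min/max of x**2 for x in [xl, xu]
    sq_min = 0.0 if xl <= 0.0 <= xu else min(xl**2, xu**2)
    sq_max = max(xl**2, xu**2)
    # lower bound uses the smallest RHS value, upper bound the largest
    return [pL - sq_max, pU - sq_min]

bounds_sol = solve_ivp(interval_rhs, (0.0, 1.0), [1.0, 1.0], rtol=1e-8)
xl_T, xu_T = bounds_sol.y[:, -1]  # enclosure of x(1, p) over all p

# Sanity check: trajectories for sampled p stay inside [xl_T, xu_T]
for p in np.linspace(pL, pU, 5):
    traj = solve_ivp(lambda t, x: [p - x[0]**2], (0.0, 1.0), [1.0],
                     rtol=1e-8)
    assert xl_T - 1e-6 <= traj.y[0, -1] <= xu_T + 1e-6
```

Interval bounds like these are valid but typically loose; replacing the boxed min/max in the RHS with convex relaxations, as the abstract describes, is what yields relaxations that are convex in the parameter and can be minimized for lower bounds.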

Moreover, unlike the existing state-of-the-art relaxation approach of Scott and Barton [1], the new relaxations are described by standard ODE systems without discontinuities, and can be further modified to have differentiable solutions. To achieve this, a “safe-landing” mechanism is embedded into the auxiliary ODE system describing the relaxations. Compared with the approach of [1], this enables evaluating gradients of the relaxations, which benefits local optimization methods and thereby yields more efficient lower bounding. The safe-landing mechanism further tightens the relaxations as well.

A proof-of-concept implementation in Julia is discussed, and several examples are presented for illustration.

References

[1] Scott, Joseph K., and Paul I. Barton. “Improved relaxations for the parametric solutions of ODEs using differential inequalities.” Journal of Global Optimization 57.1 (2013): 143-176.

[2] Adjiman, Claire S., et al. “A global optimization method, αBB, for general twice-differentiable constrained NLPs—I. Theoretical advances.” Computers & Chemical Engineering 22.9 (1998): 1137-1158.

[3] Scott, Joseph K., Matthew D. Stuber, and Paul I. Barton. “Generalized McCormick relaxations.” Journal of Global Optimization 51.4 (2011): 569-606.