(483c) Constrained Robust Bayesian Optimization of Expensive Black-Box Functions Under Uncertainty | AIChE


Authors 

Sorourifar, F., The Ohio State University
Paulson, J., The Ohio State University
Uncertainty is inevitably present in real-world problems due to noisy and incomplete datasets and highly variable environmental “disturbances” (such as product demand, raw material prices, and many others). Therefore, so-called “optimal” solutions found by solving an underlying nominal optimization problem, which neglects these many sources of uncertainty, can often be suboptimal or, even worse, infeasible [1]. Sensitivity-based methods are commonly used to study the impact of uncertainties on specific designs; however, although these methods can compare robustness amongst a finite set of designs, they cannot systematically improve robustness directly. Robust optimization [2], which adopts a minimax approach and defines the robust optimal design as the one with the best worst-case performance subject to worst-case constraint satisfaction, is one strategy for formulating optimization problems in the presence of uncertainty. Although robust optimization theory has developed significantly over the past two decades, a substantial gap remains between currently available methods and the problems one encounters in the real world. In particular, most robust optimization methods are restricted to convex problems [3] or non-convex problems with particular structure [4]. In addition to having challenging nonlinearities, many relevant design problems involve experiments or high-fidelity computer-based simulations, such that the relationship between the design variables and the objective/constraints is not available as explicitly defined functions. In such cases, one often resorts to so-called derivative-free optimization (DFO) methods, which can be applied to any queryable black-box function. However, there has been limited work on DFO methods for robust optimization, with even less work on constrained robust optimization.
A likely reason for this limited work is that problems in this class involve expensive functions (e.g., an embedded partial differential equation solver or a molecular dynamics simulation), which prohibits the use of the many DFO methods that require a large number of function evaluations.
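The minimax formulation discussed above can be written compactly (notation ours, not the paper's: x denotes the design variables over a design space X, w the uncertain parameters ranging over an uncertainty set W, and g_j the constraint functions):

```latex
\min_{x \in \mathcal{X}} \; \max_{w \in \mathcal{W}} \; f(x, w)
\quad \text{subject to} \quad
g_j(x, w) \le 0 \quad \forall w \in \mathcal{W}, \;\; j = 1, \dots, m
```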

Bayesian optimization (BO) has become a popular surrogate-based DFO method in recent years due to its data efficiency. By constructing a Gaussian process (GP) model of the objective and constraint functions, BO sequentially decides where next to sample the expensive functions in a way that trades off exploration (where the uncertainty in the prediction is large) against exploitation (where the mean function is predicted to have good performance). Although BO has been successfully applied to many different nominal problems, extending BO to the robust optimization setting has proved difficult because the minimax problem requires two competing optimization stages. The MiMaReK algorithm [5] is one of the earliest attempts to develop a robust BO strategy; it uses a two-level expected improvement acquisition function that requires a growing number of samples at each iteration. Thus, MiMaReK has limited applicability for very expensive functions. Adversarially robust BO (ARBO) [6][7] is a more recent approach that overcomes this limitation by simultaneously modelling the effect of the design variables and the uncertainties on the objective function. ARBO requires only a single expensive function evaluation at every iteration, which can potentially result in a drastic reduction in the number of function evaluations needed to achieve convergence. Furthermore, ARBO provides a bound on the number of iterations needed to achieve convergence by exploiting recent theory based on the lower and upper confidence bounds of the GP. Although powerful, ARBO does not directly handle worst-case constraints, which are important in a wide variety of safety-critical applications in engineering and beyond.
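As a rough illustration of the confidence-bound-based selection rule described above, the following is a schematic sketch in Python (not the authors' implementation): the design point is chosen to minimize the worst-case lower confidence bound, and the uncertainty is then chosen to maximize the upper confidence bound at that design. The toy function f, the grids, and the confidence parameter beta are all our own assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "expensive" objective f(x, w): design variable x, uncertain parameter w.
def f(x, w):
    return np.sin(3 * x) + 0.5 * np.cos(2 * w) * x

rng = np.random.default_rng(0)

# A few initial joint samples over (x, w) in [0, 1]^2.
XW = rng.uniform(0.0, 1.0, size=(8, 2))
y = np.array([f(x, w) for x, w in XW])

# GP surrogate over the joint (design, uncertainty) space.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6).fit(XW, y)

# Candidate grids for the design and uncertainty spaces.
xs = np.linspace(0.0, 1.0, 51)
ws = np.linspace(0.0, 1.0, 51)
grid = np.array([[x, w] for x in xs for w in ws])
mu, sd = gp.predict(grid, return_std=True)
beta = 2.0  # confidence-bound width (assumed)
lcb = (mu - beta * sd).reshape(len(xs), len(ws))
ucb = (mu + beta * sd).reshape(len(xs), len(ws))

# Design step: pick the x with the best (smallest) worst-case LCB over w.
i_x = int(np.argmin(lcb.max(axis=1)))
# Adversary step: pick the w with the largest UCB at that design.
i_w = int(np.argmax(ucb[i_x]))
x_next, w_next = xs[i_x], ws[i_w]
print(x_next, w_next)  # the single (x, w) pair to evaluate next
```

Only one new evaluation of f at (x_next, w_next) is requested per iteration, which is the feature that makes this family of methods attractive for very expensive functions.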

In this work, we propose a constrained extension of ARBO, which we refer to as constrained ARBO (CARBO), that is applicable to constrained robust optimization problems whose objective and/or constraint functions are defined in terms of expensive black-box functions. CARBO builds on recent work from our group on exploiting exact penalty functions to develop convergent constrained BO algorithms [8]. To prove convergence in the robust constrained optimization context, we introduce the notion of robust penalty-based regret, which measures the difference in quality between our recommended point and the (unknown) globally optimal robust solution. We prove that the cumulative robust penalty-based regret grows sub-linearly in the total number of iterations, which implies there exists a (finite) iteration at which our recommended point is arbitrarily close to the constrained robust global solution. We demonstrate CARBO’s performance on multiple test problems, including a high-fidelity simulation of a complex bioreactor [9]. We also discuss the importance of the final recommendation procedure, including a new heuristic strategy for improving the quality of the estimated worst-case performance and constraint satisfaction.
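To make the exact-penalty idea concrete, here is a toy numerical sketch (our notation and toy functions, not the paper's algorithm): the objective and constraint are combined into a penalized function F(x, w) = f(x, w) + rho * max(0, g(x, w)), and a candidate design's robust penalty-based regret is its worst-case penalized value minus the best achievable worst-case value.

```python
import numpy as np

# Toy objective and constraint on a grid over design x and uncertainty w.
def f(x, w):
    return (x - 0.3) ** 2 + 0.2 * np.sin(5 * w)

def g(x, w):  # the constraint is satisfied when g <= 0
    return 0.5 - x - 0.3 * w

rho = 10.0  # penalty weight (assumed large enough for the penalty to bite)

xs = np.linspace(0.0, 1.0, 201)
ws = np.linspace(0.0, 1.0, 201)
X, W = np.meshgrid(xs, ws, indexing="ij")

# Exact penalty function F(x, w) = f + rho * max(0, g).
F = f(X, W) + rho * np.maximum(0.0, g(X, W))

# Robust (worst-case) penalized value of each design, and the robust optimum.
worst_case = F.max(axis=1)          # max over w for each x
F_star = worst_case.min()           # min over x of the worst case
x_star = xs[int(np.argmin(worst_case))]

# Robust penalty-based regret of an arbitrary candidate design.
x_cand = 0.2
regret = worst_case[int(np.argmin(np.abs(xs - x_cand)))] - F_star
print(x_star, regret)
```

On this toy problem the candidate x = 0.2 violates the constraint at the worst-case w, so its penalized worst-case value, and hence its regret, is strictly positive; sub-linear growth of the cumulative regret then means the per-iteration regret of the recommended points must eventually become arbitrarily small.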

References

[1] D. Bertsimas, O. Nohadani, and K. M. Teo, “Robust optimization for unconstrained simulation-based problems,” Operations Research, vol. 58, no. 1, pp. 161–178, 2010.

[2] A. Ben-Tal and A. Nemirovski, “Robust optimization – methodology and applications,” Mathematical Programming, vol. 92, no. 3, pp. 453–480, 2002.

[3] S. Žaković and C. Pantelides, “An interior point algorithm for computing saddle points of constrained continuous minimax,” Annals of Operations Research, vol. 99, pp. 59–77, 2000.

[4] M. Diehl, H. G. Bock, and E. Kostina, “An approximation technique for robust nonlinear optimization,” Mathematical Programming, Series B, vol. 107, pp. 213–230, 2006.

[5] J. Marzat, E. Walter, and H. Piet-Lahanier, “A new expected improvement algorithm for continuous minimax optimization,” Journal of Global Optimization, vol. 64, no. 4, pp. 785–802, 2016.

[6] I. Bogunovic, J. Scarlett, S. Jegelka, and V. Cevher, “Adversarially robust optimization with Gaussian processes,” in Advances in Neural Information Processing Systems, 2018.

[7] J. A. Paulson, G. Makrygiorgos, and A. Mesbah, “Adversarially robust Bayesian optimization for efficient auto-tuning of generic control structures under uncertainty,” AIChE Journal, e17591, 2021.

[8] C. Lu and J. A. Paulson, “No-regret Bayesian optimization with unknown equality and inequality constraints using exact penalty functions,” IFAC Symposium on Dynamics and Control of Process Systems, including Biosystems, 2022 (accepted).

[9] J. Chen, J. Daniell, D. Griffin, X. Li, and M. A. Henson, “Experimental testing of a spatiotemporal metabolic model for carbon monoxide fermentation with Clostridium autoethanogenum,” Biochemical Engineering Journal, vol. 129, pp. 64–73, 2018.