Robust Data-Driven Design of Generic Control Structures with Probabilistic Guarantees Using Gaussian Process Emulators
- Type: Conference Presentation
- Conference Type: AIChE Annual Meeting
- Presentation Date: November 8, 2021
- Duration: 19 minutes
- Skill Level: Intermediate
- PDHs: 0.50
In recent years, there has been growing interest in applying data-driven optimization methods to address the inherent challenges of robust controller design, especially in cases where the controller structure is fairly complex. The Bayesian optimization (BO) framework, which emerged as a powerful method for automating hyperparameter optimization in machine and deep learning models, has been applied to the tuning of model predictive control (MPC) and other controller architectures. Although promising results have been observed in practice, the majority of these works lack formal guarantees on the quality of the solution. Such guarantees are especially important in safety-critical systems, in which the closed-loop system must satisfy constraints and performance requirements even when operated under a significant amount of uncertainty. In previous work, we developed a method that provides probabilistic guarantees on the performance of a BO-designed controller, which involves two key steps: first, a set of potentially "good" candidate designs is identified using BO; then, non-convex scenario optimization is used to derive a distribution-independent bound on the probability of constraint violation. One limitation of this method is that it assumes the availability of a high-fidelity simulator for controller tuning; constructing such a simulator may be difficult in practice.
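The scenario-based verification step above can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical toy example (a scalar uncertain plant with a proportional controller, not the benchmark used in the talk): for a candidate design, we simulate many i.i.d. uncertainty scenarios and count the fraction that violate a closed-loop constraint.

```python
import numpy as np

def violation_probability(gain, n_scenarios=500, horizon=50, seed=0):
    """Monte Carlo estimate of the closed-loop constraint violation probability.

    Toy scalar plant x+ = a*x + 0.5*u + w with uncertain parameter a and
    small process noise w; proportional controller u = -gain*x; the
    closed-loop constraint is |x| <= 2 over the whole horizon.
    """
    rng = np.random.default_rng(seed)
    violations = 0
    for _ in range(n_scenarios):
        a = rng.uniform(0.8, 1.2)  # i.i.d. uncertainty scenario
        x = 1.0
        violated = False
        for _ in range(horizon):
            u = -gain * x
            x = a * x + 0.5 * u + 0.05 * rng.standard_normal()
            if abs(x) > 2.0:
                violated = True
                break
        violations += violated
    return violations / n_scenarios

# compare a stabilizing candidate design against the open-loop system
p_good = violation_probability(1.0)
p_bad = violation_probability(0.0)  # gain 0: unstable scenarios violate
```

The same empirical-frequency estimate is what a distribution-independent scenario bound certifies: the guarantee holds regardless of the distribution from which the scenarios are drawn.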
In this work, we develop a fully data-driven robust controller tuning approach that is applicable to any finite number of general closed-loop performance measures. To overcome the need for a high-fidelity simulator, we first construct a Gaussian process (GP) emulator of the system; the GP is a probabilistic model for the state transition function that can be applied recursively over time to derive a closed-loop state distribution. It is important to note that the GP does not represent a single model of the dynamics but rather a distribution over function space. The complexity of the dynamic GP emulator, however, implies that the worst-case objective and constraint violations appearing in the robust controller design problem cannot be computed exactly. Instead of relying on conservative worst-case approximations, we develop probabilistic estimates from a collection of independent and identically distributed (i.i.d.) samples. An important question then arises: how many randomly drawn i.i.d. samples are needed to provide strong probabilistic guarantees on the accuracy of the worst-case estimates? We address this question by deriving a bound on the number of samples needed to ensure that the worst-case performance estimates jointly achieve a specified accuracy level with high probability; the bound is an extension of prior work and is completely independent of the number of uncertainties and their underlying probability distribution. Since the derived bound does not depend on the dimension of the uncertainties, it can even be applied to GPs; however, this requires sampling an infinite-dimensional stochastic process, which cannot be done exactly a priori. Since we are interested in finite-time performance measures, we can instead use a recursive sampling procedure that iteratively updates the posterior distribution of the GP at every step of the dynamic simulation.
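To give a feel for such distribution-free sample bounds, here is a sketch of the classical worst-case estimation bound of Tempo, Bai, and Dabbene: with N i.i.d. scenarios, the empirical maximum is exceeded with probability at most ε, with confidence at least 1 − δ, whenever N ≥ ln(1/δ)/ln(1/(1 − ε)). The multi-measure case below is a simple union-bound extension shown only for illustration; the joint bound derived in this work may differ.

```python
import math

def min_samples(eps, delta, m=1):
    """Smallest N of i.i.d. scenarios such that, with confidence >= 1 - delta,
    each of the m empirical worst-case estimates is exceeded with probability
    at most eps (Tempo-Bai-Dabbene bound; m > 1 handled by a union bound,
    an illustrative assumption rather than the bound derived in the talk).
    Note the result is independent of the uncertainty dimension/distribution.
    """
    return math.ceil(math.log(m / delta) / math.log(1.0 / (1.0 - eps)))

# e.g. eps = 5% accuracy level, delta = 1e-6 confidence parameter
n_single = min_samples(0.05, 1e-6)
n_joint = min_samples(0.05, 1e-6, m=5)  # five performance measures jointly
```

Because N depends only on (ε, δ, m), the same scenario count applies whether the uncertainty is a low-dimensional parameter vector or a function-space object like a GP sample path.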
By incorporating our novel probabilistic bound into the BO framework, we ensure that the desired probabilistic properties are satisfied at every tested controller design, which alleviates the need for the post-verification step required in our previous work. We apply the proposed method to an illustrative benchmark problem to demonstrate its relevant properties and advantages.
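The recursive GP sampling procedure used to generate each closed-loop scenario can be sketched as follows. This is a minimal illustration for a scalar autonomous map x+ = f(x) (the actual method handles controlled, multivariate systems): at each step the one-step transition is drawn from the GP posterior conditioned on the training data plus all previously sampled transitions, so the full trajectory is one consistent draw from the GP's distribution over functions.

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays A and B."""
    d = A[:, None] - B[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def sample_trajectory(X, Y, x0, horizon, rng, noise=1e-6):
    """Draw one closed-loop trajectory from a GP emulator of x+ = f(x).

    The conditioning set starts at the training data (X, Y) and grows with
    every sampled transition, i.e. the GP posterior is updated at each step
    of the dynamic simulation.
    """
    Xc, Yc = X.copy(), Y.copy()
    x, traj = x0, [x0]
    for _ in range(horizon):
        K = rbf(Xc, Xc) + noise * np.eye(len(Xc))
        k = rbf(np.array([x]), Xc)[0]
        mu = k @ np.linalg.solve(K, Yc)            # posterior mean at x
        var = max(1.0 - k @ np.linalg.solve(K, k), 0.0)  # posterior variance
        x_next = mu + np.sqrt(var) * rng.standard_normal()
        Xc = np.append(Xc, x)                      # condition on this sample
        Yc = np.append(Yc, x_next)
        x = x_next
        traj.append(x)
    return np.array(traj)

# training data from a hypothetical stable transition map f(x) = 0.8 x
rng = np.random.default_rng(1)
X = np.linspace(-2, 2, 15)
Y = 0.8 * X
traj = sample_trajectory(X, Y, x0=1.0, horizon=10, rng=rng)
```

Drawing N such trajectories, with N chosen from the sample bound, yields the worst-case performance estimates evaluated at every controller design tested by BO.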
V. D. Blondel and J. N. Tsitsiklis, "Survey of computational complexity results in systems and control," Automatica, vol. 36, no. 9, pp. 1249–1274, Sep. 2000, doi: 10.1016/S0005-1098(00)00050-9.
J. A. Paulson and A. Mesbah, "Shaping the Closed-Loop Behavior of Nonlinear Systems Under Probabilistic Uncertainty Using Arbitrary Polynomial Chaos," in Proceedings of the IEEE Conference on Decision and Control, 2019, doi: 10.1109/CDC.2018.8619328.
J. A. Paulson, E. A. Buehler, and A. Mesbah, "Arbitrary Polynomial Chaos for Uncertainty Propagation of Correlated Random Variables in Dynamic Systems," IFAC-PapersOnLine, vol. 50, no. 1, 2017, doi: 10.1016/j.ifacol.2017.08.954.
B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas, "Taking the human out of the loop: A review of Bayesian optimization," Proceedings of the IEEE, vol. 104, no. 1, pp. 148–175, Jan. 2016, doi: 10.1109/JPROC.2015.2494218.
D. Piga, M. Forgione, S. Formentin, and A. Bemporad, "Performance-oriented model learning for data-driven MPC design," IEEE Control Syst. Lett., vol. 3, no. 3, pp. 577–582, Jul. 2019, doi: 10.1109/LCSYS.2019.2913347.
Q. Lu, R. Kumar, and V. M. Zavala, "MPC Controller Tuning using Bayesian Optimization Techniques," arXiv, Sep. 2020, Accessed: Mar. 29, 2021. [Online]. Available: http://arxiv.org/abs/2009.14175.
F. Sorourifar, G. Makrygirgos, A. Mesbah, and J. A. Paulson, "A Data-Driven Automatic Tuning Method for MPC under Uncertainty using Constrained Bayesian Optimization," arXiv, Nov. 2020, Accessed: Mar. 29, 2021. [Online]. Available: http://arxiv.org/abs/2011.11841.
C. König, M. Khosravi, M. Maier, R. S. Smith, A. Rupenyan, and J. Lygeros, "Safety-Aware Cascade Controller Tuning Using Constrained Bayesian Optimization," Oct. 2020, Accessed: Mar. 29, 2021. [Online]. Available: http://arxiv.org/abs/2010.15211.
M. Fiducioso, S. Curi, B. Schumacher, M. Gwerder, and A. Krause, "Safe Contextual Bayesian Optimization for Sustainable Room Temperature PID Control Tuning," IJCAI Int. Jt. Conf. Artif. Intell., vol. 2019-August, pp. 5850–5856, Jun. 2019, Accessed: Mar. 29, 2021. [Online]. Available: http://arxiv.org/abs/1906.12086.
M. Neumann-Brosig, A. Marco, D. Schwarzmann, and S. Trimpe, "Data-efficient autotuning with Bayesian optimization: An industrial control study," IEEE Trans. Control Syst. Technol., vol. 28, no. 3, pp. 730–740, May 2020, doi: 10.1109/TCST.2018.2886159.
J. A. Paulson and A. Mesbah, "Data-Driven Scenario Optimization for Automated Controller Tuning with Probabilistic Performance Guarantees," IEEE Control Syst. Lett., vol. 5, no. 4, pp. 1477–1482, Oct. 2021, doi: 10.1109/LCSYS.2020.3040599.
M. C. Campi, S. Garatti, and F. A. Ramponi, "A General Scenario Theory for Nonconvex Optimization and Decision Making," IEEE Trans. Automat. Contr., vol. 63, no. 12, pp. 4067–4078, Dec. 2018, doi: 10.1109/TAC.2018.2808446.
C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press, 2006.
R. Tempo, E. W. Bai, and F. Dabbene, "Probabilistic robustness analysis: Explicit bounds for the minimum number of samples," Syst. Control Lett., vol. 30, no. 5, pp. 237–242, Jun. 1997, doi: 10.1016/S0167-6911(97)00005-4.
J. Umlauft, T. Beckers, and S. Hirche, "Scenario-based Optimal Control for Gaussian Process State Space Models," in 2018 European Control Conference (ECC), 2018, pp. 1386–1392, doi: 10.23919/ECC.2018.8550458.
- AIChE Member Credits: 0.5
- AIChE Graduate Student Members: Free
- AIChE Undergraduate Student Members: Free