(17f) Learning-Based MPC with State-Dependent Uncertainty Using Adaptive Scenario Trees | AIChE

Authors 

Bonzanini, A. D. - Presenter, University of California - Berkeley
Paulson, J., The Ohio State University
Mesbah, A., University of California, Berkeley
The complex and integrated dynamics of emerging systems pose unique challenges for designing high-performance controllers that can be deployed in safety-critical applications. In practice, complex system dynamics are typically modeled as a known nominal function plus an additive term that captures the initially unknown dynamics. Because this unknown term must be bounded conservatively, the desired safety objective, typically formulated as robust constraint satisfaction, is often achieved at the expense of closed-loop performance. To reduce this conservativeness, methods that incorporate feedback into the predictions have been developed; these directly model the control inputs as recourse variables, and are thus less conservative than open-loop uncertainty propagation strategies [1], [2]. Yet, most of these methods rely on worst-case uncertainty bounds that are conservatively estimated offline from limited data.

Recent work in robust and stochastic model predictive control (MPC) has demonstrated that replacing fixed uncertainty bounds with a state- and input-dependent uncertainty model, typically learned from data, can improve closed-loop performance [3], [4]. This is mainly because such a model can represent how the magnitude of the uncertainty varies across the state space. In particular, Gaussian process (GP) regression has been employed to learn such models, since it intrinsically provides a measure of the modeling error in the form of confidence intervals, thereby reducing conservativeness while maintaining safety guarantees up to a quantifiable probability [4], [5], [6]. However, GP models produce a nonlinear and non-convex uncertainty description that is difficult to incorporate into currently available robust MPC methods.
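The core idea can be illustrated with a minimal sketch (not the authors' implementation): a GP is fit to sampled residuals between the true dynamics and a known model, and its posterior standard deviation yields a state-dependent confidence interval on the unknown term. The system, kernel choice, and all numbers below are hypothetical; the sketch uses scikit-learn's GaussianProcessRegressor.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical 1-D system: true dynamics = known model + unknown residual g(x).
def known_model(x, u):
    return 0.9 * x + 0.5 * u

def true_residual(x):
    return 0.2 * np.sin(3.0 * x)  # unknown to the controller

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(40, 1))                 # sampled states
y = true_residual(X[:, 0]) + 0.01 * rng.standard_normal(40)

# Fit a GP to the residual data; kernel hyperparameters are illustrative
# initial guesses (they are refined during fitting).
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
gp.fit(X, y)

# The GP posterior gives a state-dependent uncertainty description:
# mean +/- 2*std is a ~95% confidence interval on the unknown dynamics,
# which is wide where data are scarce and tight where data are dense.
x_query = np.array([[0.3], [1.5]])
mean, std = gp.predict(x_query, return_std=True)
for xq, m, s in zip(x_query[:, 0], mean, std):
    print(f"x = {xq:+.2f}: residual in [{m - 2*s:+.3f}, {m + 2*s:+.3f}]")
```

These per-state intervals are what a scenario-based controller can branch on, in place of a single global worst-case bound.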

In this work, we present a learning scenario-based MPC (L-sMPC) strategy that models feedback in the predictions via a scenario tree that is adapted online according to a learned GP model, thereby explicitly accounting for the state- and input-dependence of the uncertainty. Although the L-sMPC strategy improves closed-loop performance, it cannot guarantee robust constraint satisfaction by design. Previous work has shown that the notion of robust control invariant (RCI) sets can be used to provide an online safety certificate. Thus, to provide a non-conservative real-time safety certificate, we propose to project the optimal inputs computed by the L-sMPC strategy onto a set that ensures the state evolves within the RCI set. The construction of RCI sets in this state- and input-dependent setting, however, is challenging due to inherent non-convexity. We therefore also present a novel algorithm for constructing RCI sets in the presence of a GP uncertainty model. We then show how this safety certificate can be combined with the scenario-tree MPC approach to achieve improved performance, recursive feasibility, and robust constraint satisfaction on an illustrative chemical reactor case study.
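The projection step can be sketched in its simplest form. For a hypothetical scalar linear system with a box RCI set and a state-dependent residual bound (e.g., a GP mean +/- 2*std), the projection of the MPC input reduces to clipping: the model, the set, and all numbers below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Illustrative safety filter: project the learning-based MPC input onto the
# set of inputs that keep the next state inside a box RCI set S = [-c, c],
# robustly to a state-dependent residual w(x) in [w_lo(x), w_hi(x)].
A, B = 0.9, 0.5          # known linear model: x+ = A*x + B*u + w
c = 1.0                  # assumed box RCI set [-c, c]

def safe_input(x, u_mpc, w_lo, w_hi):
    """Minimally perturb u_mpc so that x+ stays in [-c, c] for every
    admissible residual w in [w_lo, w_hi] (B > 0 assumed)."""
    u_min = (-c - w_lo - A * x) / B  # keeps x+ >= -c under worst case w = w_lo
    u_max = ( c - w_hi - A * x) / B  # keeps x+ <=  c under worst case w = w_hi
    return float(np.clip(u_mpc, u_min, u_max))

# Example: near the set boundary, an aggressive MPC input is clipped so the
# worst-case successor state still lies inside the RCI set.
x = 0.8
u = safe_input(x, u_mpc=1.5, w_lo=-0.1, w_hi=0.1)
print(u)
```

In the general nonlinear, non-convex setting of the paper this projection is an optimization problem rather than a clip, but the role is the same: it acts as a certificate layered on top of the performance-oriented L-sMPC input.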

[1] D. Bernardini and A. Bemporad, "Scenario-based model predictive control of stochastic constrained linear systems," in Proceedings of the IEEE Conference on Decision and Control held jointly with the Chinese Control Conference, Shanghai, China, 2009.

[2] S. Lucia, T. Finkler and S. Engell, "Multi-stage nonlinear model predictive control applied to a semi-batch polymerization reactor under uncertainty," Journal of Process Control, pp. 1306-1319, 2013.

[3] R. Soloperto, M. A. Müller, S. Trimpe and F. Allgöwer, "Learning-based robust model predictive control with state-dependent uncertainty," IFAC-PapersOnLine, pp. 442-447, 2018.

[4] A. D. Bonzanini, D. B. Graves and A. Mesbah, "Learning-based stochastic model predictive control for reference tracking under state-dependent uncertainty: An application to cold atmospheric plasmas," IEEE Transactions on Control Systems Technology, 2020.

[5] A. Aswani, H. Gonzalez, S. Sastry and C. Tomlin, "Provably safe and robust learning-based model predictive control," Automatica, pp. 1216-1226, 2013.

[6] A. D. Bonzanini, J. A. Paulson and A. Mesbah, "Safe Learning-based Model Predictive Control under State-dependent Uncertainty using Scenario Trees," in Proceedings of the IEEE Conference on Decision and Control, Jeju Island, Republic of Korea, 2020.