(564b) Sample-Free Stochastic Nonlinear Model Predictive Control

Authors: 
Paulson, J., University of California - Berkeley
Bavdekar, V., University of California - Berkeley
Mesbah, A., University of California - Berkeley

Model predictive control (MPC) has been successfully applied for high-performance control of a wide range of complex systems owing to its ability to handle multivariable dynamics, input and state constraints, and multiple (possibly conflicting) objectives [1]. However, the accuracy of the model predictions directly impacts the behavior of MPC: uncertainties (e.g., unknown parameters, incorrect model structure, or exogenous disturbances) can degrade the performance of the overall closed-loop system. Although the receding-horizon implementation of MPC provides some degree of robustness to uncertainty, this relatively small robustness margin may be inadequate in many safety-critical and high-precision control applications.

Extensive work has been done to develop so-called robust MPC (RMPC) methods, which typically design control inputs with respect to the worst-case values of the uncertainty [2,3]. In many engineering systems, however, uncertainties are stochastic in nature. When probabilistic descriptions of uncertainties, i.e., probability density functions (pdfs), are available, they can be explicitly incorporated into an MPC formulation with the aim of achieving robustness to uncertainties in a probabilistic sense. This notion has led to stochastic MPC (SMPC), which solves an optimal control problem in which the cost function and constraints are defined in terms of the state pdfs [4]. Thus, SMPC can systematically trade off performance and constraint satisfaction in terms of the uncertainty distribution, which is becoming increasingly important as the pursuit of high performance pushes operation toward system constraints. In addition, defining the control objectives in terms of the state distributions is essential for asymmetric cost functions, in which the tails of the pdfs are critical for effective control.

This work addresses the SMPC problem for nonlinear systems (i.e., SNMPC) with uncertain parameters and stochastic exogenous disturbances for the case of incomplete state information (commonly referred to as “output feedback”). Uncertainty propagation poses a key challenge to SNMPC since, for nonlinear systems, no closed-form expressions exist for the time evolution of the state pdf as a function of the control inputs. Sample-based methods for uncertainty propagation (e.g., Markov chain Monte Carlo, sequential Monte Carlo, or the scenario approach) have been utilized in the context of stochastic optimal control of nonlinear systems [5,6]. However, the computational complexity of most sample-based approaches can be prohibitive for the real-time implementation of SNMPC since many samples must be drawn to obtain accurate representations of the state pdfs (i.e., the number of required samples generally scales exponentially with the number of uncertainties). To avoid these computational challenges, this work presents a sample-free approach to SNMPC that relies on closed-form expressions for the (approximate) evolution of the moments of the state pdfs as explicit functions of the control inputs. In addition to its attractive computational features from a nonlinear optimization standpoint, a “sample-free” moment-based approach to SNMPC is further motivated by the fact that sample-based SNMPC cannot guarantee chance constraint satisfaction and state distribution shaping for the closed-loop system [7]. In this work, generalized polynomial chaos (gPC) [8] is used as the foundation for deriving explicit expressions for the time evolution of the moments of the state distribution as a function of the control inputs that are amenable to fast, online computations. A moment-based surrogate for the intractable (nonconvex) chance constraints is proposed in terms of the Mahalanobis distance.
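As an illustrative sketch (not the authors' implementation), the two ingredients above, gPC-based moment propagation and a moment-based surrogate for chance constraints, can be prototyped for a single uncertainty ξ ~ N(0,1) using probabilists' Hermite polynomials. All function names, the lognormal test map, and the distribution-free Cantelli-type back-off factor are assumptions made for illustration:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval


def gpc_coeffs(f, order):
    """Project a nonlinear map f(xi), xi ~ N(0,1), onto probabilists'
    Hermite polynomials He_i via Gauss-Hermite collocation."""
    nodes, weights = hermegauss(order + 1)
    weights = weights / weights.sum()  # normalize to a probability measure
    coeffs = np.zeros(order + 1)
    for i in range(order + 1):
        basis = np.zeros(order + 1)
        basis[i] = 1.0
        phi_i = hermeval(nodes, basis)  # He_i evaluated at the nodes
        norm_sq = math.factorial(i)     # <He_i^2> = i! under N(0,1)
        coeffs[i] = np.sum(weights * f(nodes) * phi_i) / norm_sq
    return coeffs


def gpc_moments(coeffs):
    """Mean and variance follow in closed form from the gPC coefficients."""
    mean = coeffs[0]
    var = sum(c**2 * math.factorial(i) for i, c in enumerate(coeffs) if i > 0)
    return mean, var


def chance_surrogate(mean_g, var_g, eps):
    """Moment-based surrogate for Pr(g <= 0) >= 1 - eps: a Mahalanobis-
    distance-style back-off mean_g + kappa * sigma_g <= 0, using the
    distribution-free Cantelli factor kappa = sqrt((1 - eps) / eps)."""
    kappa = math.sqrt((1.0 - eps) / eps)
    return mean_g + kappa * math.sqrt(var_g) <= 0.0


# Example: propagate x = exp(0.1 * xi) (a lognormal state) through a
# 4th-order expansion and recover its first two moments in closed form.
mean_x, var_x = gpc_moments(gpc_coeffs(lambda xi: np.exp(0.1 * xi), 4))
```

For a vector of uncertainties the same construction uses tensorized multivariate polynomials; the moment expressions remain closed-form in the coefficients, whereas the number of Monte Carlo samples needed for comparable accuracy would grow rapidly with the number of uncertainties.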
A moment-matching formulation of the Bayesian estimation problem is presented for joint state and parameter estimation in order to account for incomplete state knowledge (i.e., output feedback implementation of SNMPC). The parameters are adapted (jointly with the states) at each time instant, based on newly obtained measurements, to reduce model uncertainty. The closed-loop performance of the proposed sample-free SNMPC approach is demonstrated on a continuous stirred-tank reactor benchmark problem [9] and compared to that of scenario-based and certainty-equivalence NMPC.
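One generic way to realize a moment-matching Bayesian update without random sampling is a deterministic sigma-point (unscented-transform-style) correction on the augmented state-parameter vector. The sketch below illustrates that idea under those assumptions; the function names are hypothetical and this is not the authors' exact formulation:

```python
import numpy as np


def sigma_points(mean, cov, kappa=0.0):
    """Deterministic points matching the mean and covariance of (x, theta)."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)  # columns span the uncertainty
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w


def moment_matching_update(z_mean, z_cov, y, h, R):
    """Bayesian measurement update approximated by matching first two
    moments: the joint (z, y) statistics give a Kalman-style correction
    of the augmented state-parameter vector z = (x, theta)."""
    pts, w = sigma_points(z_mean, z_cov)
    Y = np.array([h(p) for p in pts])       # predicted measurements
    y_mean = w @ Y
    dY = Y - y_mean
    dZ = pts - z_mean
    Pyy = (w[:, None] * dY).T @ dY + R      # innovation covariance
    Pzy = (w[:, None] * dZ).T @ dY          # cross-covariance
    K = Pzy @ np.linalg.inv(Pyy)            # moment-matched gain
    z_post = z_mean + K @ (y - y_mean)
    P_post = z_cov - K @ Pyy @ K.T
    return z_post, P_post
```

For a linear measurement model this moment match reduces exactly to the Kalman update; for nonlinear h it matches the first two moments of the predicted measurement, mirroring the moment-based treatment of uncertainty in the controller.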

References

[1] M. Morari, J. H. Lee. Model predictive control: Past, present and future, Computers & Chemical Engineering, 23, 667–682, 1999.

[2] A. Bemporad, M. Morari. Robust model predictive control: A survey, In: Robustness in Identification and Control, Springer, London, pp. 207–226, 1999.

[3] W. Langson, I. Chryssochoos, S. V. Rakovic, D. Q. Mayne. Robust model predictive control using tubes, Automatica, 40, 125–133, 2004.

[4] A. Mesbah. Stochastic model predictive control: An overview and perspectives for future research, IEEE Control Systems Magazine, 36, 30–44, 2016.

[5] A. Lecchini-Visintini, W. Glover, J. Lygeros, J. M. Maciejowski. Monte Carlo optimization for conflict resolution in air traffic control, IEEE Transactions on Intelligent Transportation Systems, 7, 470–482, 2006.

[6] N. Kantas, J. M. Maciejowski, A. Lecchini-Visintini. Sequential Monte Carlo for model predictive control, In: Nonlinear Model Predictive Control, Springer, Berlin Heidelberg, pp. 263–273, 2009.

[7] J. A. Paulson, E. Harinath, L. C. Foguth, R. D. Braatz. Nonlinear model predictive control of systems with probabilistic time-invariant uncertainties, In: Proceedings of the 5th IFAC Conference on Nonlinear Model Predictive Control, pp. 16–25, 2015.

[8] D. Xiu, G. E. Karniadakis. The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM Journal on Scientific Computing, 24, 619–644, 2002.

[9] K. U. Klatt, S. Engell. Gain-scheduling trajectory control of a continuous stirred tank reactor, Computers & Chemical Engineering, 22, 491–502, 1998.