(723b) Lyapunov-Based Model Predictive Control of Stochastic Nonlinear Processes | AIChE

Authors 

Mahmood, M. - Presenter, McMaster University
Mhaskar, P. - Presenter, McMaster University


Incorporating the complexity of the system dynamics in the control design is central to achieving improved performance and closed-loop stability. Such complexities include highly nonlinear behavior, uncertainty (typically in the form of additive disturbances and/or uncertain model parameters), and input constraints. Neglecting them in the control design can lead to performance degradation or even closed-loop instability. Owing to its constraint-handling ability and its use of an explicit system model, nonlinear model predictive control (NMPC) has been widely utilized to design robust, constrained, optimization-based controllers. In most NMPC approaches, the manipulated input trajectory is computed at each sampling time via dynamic optimization, where a cost function is minimized subject to a nonlinear dynamic model and input/state constraints. Several research studies on NMPC have focused on issues such as feasibility, stability, constraint satisfaction, and uncertainty [1, 2], including Lyapunov-based NMPC (LMPC) designs [3] that provide, a priori (i.e., before controller implementation or testing for feasibility), an explicit characterization of the initial conditions from which stability and feasibility of the closed-loop system are guaranteed in the presence of constraints and bounded uncertainty.

All the aforementioned work on NMPC, however, is dominated by the use of deterministic system models, with very few results exploiting the stochastic nature of the process in the control design. In existing results, stochastic noise is handled via inherently 'worst-case' robust NMPC schemes (e.g., [4]), where the uncertainty term is assumed to be bounded; such formulations, however, are typically very expensive numerically and can thus impede on-line implementation. Moreover, assuming bounded disturbances without using statistical information about them can lead to conservative control laws and thus degrade system performance. A natural alternative is to account for stochastic, unbounded system noise in the controller design.

This alternative has recently been addressed under the framework of stochastic MPC, where the disturbances are modeled as random variables and the expected value of a cost function is minimized [5, 6, 7]. Although stochastic MPC circumvents the challenge of determining an a priori bound on the noise, as well as the conservatism of a worst-case framework, it raises several other challenging issues. Namely, the optimization problem is generally a stochastic program, which carries a significant computational burden. For example, the cost function requires the explicit calculation of a conditional expectation and/or probability associated with multi-dimensional random variables; for general nonlinear systems this is a non-trivial task and often requires resorting to probability density approximation techniques. Another conceptual challenge in extending MPC to stochastic systems is the notion of stability itself. Developments in probabilistic robust control have shown that, instead of requiring stability guarantees under worst-case realizations of the uncertainty, control system performance under stochastic uncertainty can be improved by admitting a well-defined risk of instability: the closed-loop trajectory reaches a desired target region only with an associated probability. Yet such developments have not been applied to MPC design. Moreover, Lyapunov techniques for stability analysis and control design of stochastic nonlinear systems do exist, albeit not as mature as their deterministic counterparts. The key hurdle in applying Lyapunov techniques to stochastic systems is the presence of an additional Hessian term in the stochastic derivative of the Lyapunov function. Nevertheless, there exist stabilizing (in a suitable stochastic sense) control laws based on stochastic Lyapunov techniques that provide explicitly defined regions of attraction (in a probabilistic sense) for the closed-loop system [8, 9]. In fact, such results parallel their deterministic counterparts and have recently been used to derive regions of attraction with well-defined risk measures [10].
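To make the additional Hessian term concrete, consider an Itô process dx = f(x,u) dt + g(x) dw with a Lyapunov function V(x). The stochastic analogue of the time derivative of V is the infinitesimal generator (a standard construction, sketched here for a generic drift f and diffusion g):

```latex
\mathcal{L}V(x) \;=\; \frac{\partial V}{\partial x}(x)\, f(x,u)
  \;+\; \frac{1}{2}\,\operatorname{tr}\!\left\{ g(x)^{\top}
      \frac{\partial^{2} V}{\partial x^{2}}(x)\, g(x) \right\}
```

The trace term, involving the Hessian of V, vanishes in the deterministic case; stochastic stability conditions therefore require negativity of \(\mathcal{L}V\) rather than of \(\dot{V}\).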

Deterministic Lyapunov-based control designs have recently been united with predictive control schemes to provide an explicit characterization of the states from which closed-loop stability is guaranteed in the presence of constraints and bounded uncertainty [11]. The analogous unification has yet to be addressed in the context of stochastic, unbounded uncertainty; the MPC approach therefore stands to gain from the theoretical developments in stochastic Lyapunov-based bounded control. Incorporating explicit optimality considerations through the MPC framework, together with explicit characterizations of states carrying well-defined risk-of-instability measures derived via stochastic Lyapunov techniques, thus becomes a meaningful goal.

Motivated by these considerations, in this work we propose a stochastic Lyapunov-based model predictive controller (SLMPC) for nonlinear systems with unbounded disturbances (modeled as Itô noise) and subject to input constraints. In particular, we utilize information on the distribution of the uncertain variables (instead of the traditionally used worst-case bounds) to develop model predictive controllers that yield less conservative (albeit probabilistic, with well-characterized probabilities) stability region estimates while handling uncertainty. The key idea is to use stochastic Lyapunov-based feedback controllers, with well-characterized regions of stabilization in probability, to design constraints in the SLMPC. The SLMPC scheme inherits the stability and robustness properties of the Lyapunov-based feedback controller while also incorporating optimality considerations. The design provides an a priori lower bound on the probability of achieving stability for initial conditions within an explicitly characterized set.
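As a minimal illustration of the general idea of embedding a stochastic Lyapunov constraint in an MPC optimization, the sketch below uses a hypothetical scalar Itô system dx = (a x + b u) dt + sigma dw, a quadratic Lyapunov function V(x) = x^2, and a contraction requirement LV(x, u0) <= -c V(x) on the first input move. The system, horizon, and all numerical values are illustrative assumptions, not the formulation of this paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar system dx = (a*x + b*u) dt + sigma dw (illustrative only)
a, b, sigma = 0.5, 1.0, 0.2
dt, N = 0.1, 10          # sampling period and prediction horizon
u_max = 2.0              # input constraint |u| <= u_max
c = 0.5                  # required contraction rate in the Lyapunov constraint

def V(x):                # quadratic Lyapunov function V(x) = x^2
    return x**2

def LV(x, u):            # infinitesimal generator: V'(x) f(x,u) + 0.5 V''(x) sigma^2
    return 2*x*(a*x + b*u) + sigma**2

def predict(x0, u_seq):  # nominal (noise-free) Euler prediction of the state path
    xs = [x0]
    for u in u_seq:
        xs.append(xs[-1] + dt*(a*xs[-1] + b*u))
    return np.array(xs)

def slmpc(x0):
    # cost: state tracking plus a small input penalty over the horizon
    cost = lambda u: float(np.sum(predict(x0, u)[1:]**2) + 0.1*np.sum(u**2))
    # Lyapunov constraint on the first move: LV(x0, u[0]) <= -c * V(x0)
    cons = [{"type": "ineq", "fun": lambda u: -LV(x0, u[0]) - c*V(x0)}]
    res = minimize(cost, np.zeros(N), bounds=[(-u_max, u_max)]*N,
                   constraints=cons, method="SLSQP")
    return res.x[0]      # apply only the first input (receding horizon)

u0 = slmpc(1.0)
print(u0)
```

Because the constraint is inherited from a stabilizing Lyapunov-based controller, any feasible SLMPC solution enforces the same decrease condition on V while the cost function adds the optimality consideration; here, at x0 = 1 the constraint forces u0 below roughly -0.77.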

First, a general class of Lyapunov-based feedback controllers is introduced, and key properties pertaining to the sample-and-hold implementation of such controllers are derived. The SLMPC is then formulated and shown to inherit all the stability properties of the Lyapunov-based feedback controller while simultaneously incorporating optimality considerations that improve closed-loop performance. Finally, the theoretical results are demonstrated on a chemical reactor example.
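A sample-and-hold implementation and its probabilistic stability measure can likewise be illustrated by Monte Carlo simulation: in the sketch below, a hypothetical saturated feedback is recomputed only at sampling instants and held constant between them, the Itô dynamics are integrated by Euler–Maruyama, and the probability of ending in a target set is estimated empirically. The feedback gain, target set, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical scalar Ito system: dx = (a*x + b*u) dt + sigma dw
a, b, sigma = 0.5, 1.0, 0.2
k = lambda x: -2.0*x          # illustrative stabilizing state feedback u = k(x)
u_max = 2.0                   # input constraint |u| <= u_max

Delta, h = 0.1, 0.01          # sampling period (hold) and integration step
T, trials = 5.0, 500          # simulation horizon and Monte Carlo sample size
target = 0.1                  # target set {x : |x| <= target}

def run(x0):
    x, u = x0, 0.0
    steps_per_hold = int(Delta / h)
    for i in range(int(T / h)):
        if i % steps_per_hold == 0:            # sample-and-hold: update the input
            u = np.clip(k(x), -u_max, u_max)   # only at sampling instants
        # Euler-Maruyama step for the Ito dynamics
        x += h*(a*x + b*u) + sigma*np.sqrt(h)*rng.standard_normal()
    return abs(x) <= target

p_hat = np.mean([run(1.0) for _ in range(trials)])
print(p_hat)
```

The empirical frequency p_hat plays the role of the probabilistic stability measure: rather than certifying convergence for every noise realization, the closed loop reaches the target set with a well-defined probability.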

References

[1]     H. Michalska and D. Q. Mayne. Robust receding horizon control of constrained nonlinear systems. IEEE Transactions on Automatic Control, 38(11):1623–1633, 1993.

[2]     D. Q. Mayne. Nonlinear model predictive control: An assessment. In Proceedings of the 5th International Conference on Chemical Process Control, pages 217–231, Tahoe City, CA, 1997.

[3]     M. Mahmood and P. Mhaskar. Enhanced stability regions for model predictive control of nonlinear process systems. AIChE J., 54:1487–1498, 2008.

[4]     D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789–814, 2000.

[5]     P. Hokayem, E. Cinquemani, D. Chatterjee, J. Lygeros, and F. Ramponi. Stochastic receding horizon control with output feedback and bounded control inputs. In Proceedings of the IEEE Conference on Decision and Control, pages 6095–6100, 2010.

[6]     M. Cannon, B. Kouvaritakis, and X. Wu. Probabilistic constrained MPC for multiplicative and additive stochastic uncertainty. IEEE Transactions on Automatic Control, 54(7):1626–1632, 2009.

[7]     D. van Hessem and O. Bosgra. Stochastic closed-loop model predictive control of continuous nonlinear chemical processes. Journal of Process Control, 16(3):225–241, 2006.

[8]     H. Deng, M. Krstic, and R. J. Williams. Stabilization of stochastic nonlinear systems driven by noise of unknown covariance. IEEE Transactions on Automatic Control, 46(8):1237–1253, 2001.

[9]     R. Chabour and M. Oumoun. On a universal formula for the stabilization of control stochastic nonlinear systems. Stochastic Analysis and Applications, 17:359–368, 1999.

[10]   S. Battilotti and A. De Santis. Stabilization in probability of nonlinear stochastic systems with guaranteed region of attraction and target set. IEEE Transactions on Automatic Control, 48:1585–1599, 2003.

[11]   P. Mhaskar, N. El-Farra, and P. Christofides. Techniques for uniting Lyapunov-based and model predictive control. In R. Findeisen, F. Allgöwer, and L. Biegler, editors, Assessment and Future Directions of Nonlinear Model Predictive Control, volume 358 of Lecture Notes in Control and Information Sciences, pages 77–91. Springer, Berlin/Heidelberg, 2007.