(206e) Stochastic Model Predictive Control for Output Tracking with Bounded Control Inputs | AIChE

Authors 

Santoro, B. F. - Presenter, University of São Paulo
Odloak, D., University of São Paulo



In the last 40 years, model predictive control (MPC) has evolved from an industrial heuristic into a mature, well-founded theory. From a practical standpoint, its use is now widespread in industry sectors such as oil refining and petrochemicals. Concurrently, from a theoretical point of view, many important stability questions have been addressed and solved (Mayne et al., 2000), and the conditions for stability of nominal controllers are now well understood.
Since there is always some degree of plant-model mismatch, these nominal stability results may not suffice in practice. The earliest attempts to take uncertainty into account and improve the robustness of MPC date back to the 1990s, usually assuming that the parameters describing the plant belong to a bounded set. This hypothesis allows worst-case analyses, which have been used to guarantee stability and feasibility, but the procedure may be overly conservative in terms of performance (Cannon et al., 2012). More recently, there has been a trend to incorporate statistical knowledge about disturbances into MPC, reducing conservativeness without risking process safety.
Most of the literature on stochastic MPC focuses on the regulator problem. For example, Hokayem et al. (2012) presented an approach that achieves bounded state variance in the presence of additive unbounded Gaussian noise with constrained inputs; a state contraction condition is the key ingredient of the guarantee. Cannon et al. (2012) developed a stochastic control technique for systems subject to bounded noise and probabilistic constraints on inputs and states; a dual-mode paradigm was used, and the stability guarantee was derived from the formulation of stochastic tubes. Both approaches were intended for output feedback, so the controllers were coupled with a state estimation layer.
Couchman et al. (2006) addressed the output tracking problem for systems with deterministic state evolution, considering that the output is corrupted by noise; this hypothesis makes it possible to use standard techniques in the stability proof. A methodology to handle output tracking in disturbed nonlinear systems was introduced by Van Hessem and Bosgra (2006): it uses a feed-forward trajectory for constraint pushing while simultaneously minimizing the back-off. Lucia et al. (2013) also tackled uncertain nonlinear MPC and proposed a scenario-based approach. At each future sampling time, new information becomes available as the random variables associated with the system uncertainty are realized, so future control actions are calculated to counteract these disturbances. The authors claim that their framework provides a stability guarantee when the uncertainty is discrete-valued, but do not present the result in their work.
The main objective of this work is the synthesis of a closed-loop stable MPC for time-varying set-point tracking that uses statistical information about the noise distribution to enhance performance. The system is assumed to be linear and subject to additive bounded noise with a known distribution, and the state is fully measured. The next paragraphs introduce the proposed methodology for objective function calculation, input parameterization and constraint handling.
Pannocchia and Rawlings (2003) achieve offset-free tracking by extending the state with integrating disturbances, which must be estimated from the output. It is known (Maeder et al., 2009) that another way to include integral action is to use a velocity system description, i.e. Δu replaces u as the input. The latter technique is adopted in this work.
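As a concrete toy illustration of the velocity-form idea, the sketch below augments a generic linear model so that Δu becomes the input. The particular realization chosen here (stacking the previous input into the state) is one common option, assumed for illustration, and not necessarily the exact realization used in the paper.

```python
import numpy as np

def velocity_form(A, B, C):
    """Augment x+ = A x + B u, y = C x so the input becomes the
    control move du_k = u_k - u_{k-1}. Augmented state: [x; u_{k-1}]."""
    n, m = B.shape
    A_aug = np.block([[A, B],
                      [np.zeros((m, n)), np.eye(m)]])
    B_aug = np.vstack([B, np.eye(m)])
    C_aug = np.hstack([C, np.zeros((C.shape[0], m))])
    return A_aug, B_aug, C_aug

# toy 2-state, 1-input system (illustrative numbers only)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Aa, Ba, Ca = velocity_form(A, B, C)
```

Simulating the augmented model with a sequence of moves Δu reproduces the original model driven by the accumulated input u, which is exactly what gives the formulation its integral action.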
There are three main differences in the proposed objective function when compared to standard MPC techniques. First, the stage cost is the distance between predicted outputs and an artificial set-point variable. Second, an offset penalty is added to avoid deviations between this artificial variable and the real set-point, as done in Ferramosca et al. (2010). Finally, the total cost is the expected value of the sum of stage costs, which may be expressed as a function of the decision variables in the same spirit as Hokayem et al. (2012).
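In assumed notation (with $y_k$ the predicted outputs, $y_a$ the artificial set-point variable, $y_{sp}$ the real set-point, $Q$ and $T$ weighting matrices, and $N$ the prediction horizon), the objective just described might be written as the following sketch; the paper's exact symbols and weights may differ:

```latex
J \;=\; \mathbb{E}\!\left[\sum_{k=0}^{N-1} \left\| y_k - y_a \right\|_Q^2\right]
\;+\; \left\| y_a - y_{sp} \right\|_T^2
```

The first term is the expected sum of stage costs against the artificial set-point, and the second is the offset penalty that keeps $y_a$ close to the true set-point $y_{sp}$.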
Due to the presence of disturbances, open-loop calculation of future control moves may render the problem infeasible in the presence of state constraints (Scokaert and Mayne, 1998). Open-loop control can also be excessively conservative when searching for inputs that stabilize the system for all possible disturbance realizations (Mayne et al., 2000). It is therefore advantageous to consider causal feedback policies, but optimizing over this whole class might lead to intractable problems. A possible approach to finding a suboptimal solution is to parameterize the input as an affine function of the state. As shown in Goulart et al. (2006), this strategy is equivalent to using a parameterization in terms of the process disturbance; moreover, the resulting optimization problem is convex and hence tractable. In this work, we modify this parameterization to express the control move Δu as an affine function of the noise.
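A minimal sketch of why this parameterization keeps the problem tractable: with Δu affine in past disturbances, the state trajectory is affine in the noise realization, so expectations and bounds become convex in the policy parameters. The function name, the policy parameters (M, v) and the toy data below are all illustrative assumptions, not the paper's notation.

```python
import numpy as np

def rollout(A, B, x0, v, M, W):
    """Simulate du_k = v[k] + sum_{j<k} M[k][j] @ w_j on the system
    x_{k+1} = A x_k + B du_k + w_k for one noise realization W
    (column k of W is w_k). Returns the final state."""
    x = x0.copy()
    for k in range(len(v)):
        du = v[k].copy()
        for j in range(k):          # causal: only past disturbances
            du = du + M[k][j] @ W[:, [j]]
        x = A @ x + B @ du + W[:, [k]]
    return x

# toy data (illustrative only)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
x0 = np.array([[1.0], [0.0]])
N = 4
v = [np.array([[0.05]]) for _ in range(N)]
M = [[0.1 * np.ones((1, 2)) for _ in range(k)] for k in range(N)]
```

Because the map from the noise to the final state is affine for fixed (M, v), the response to a sum of noise realizations is the sum of the individual responses (after subtracting the nominal trajectory) — the property that makes the resulting optimization convex.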
Our approach takes two types of constraints into account. Hard constraints are enforced on both the inputs (u) and the input changes (Δu); under the bounded-noise assumption, these bounds can be translated into constraints on the affine policy parameters. The state, on the other hand, is subject only to probabilistic (soft) constraints. Probabilistic state constraints open a window for performance improvement, because robust model predictive control is usually based on worst-case considerations. The approach of Korda et al. (2011) takes advantage of this observation to synthesize a strongly feasible MPC with probabilistic state constraints: at each sampling time, the first calculated control move is such that the state at the next instant belongs to a "stochastic robust controlled invariant set". This set refines the traditional robust invariant set, because besides being invariant with respect to all possible disturbances, it also guarantees a bound on the probability of satisfying the state constraints along the future evolution. We incorporate this technique into our optimization problem, defining probabilistic constraints on some of the states.
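The invariant-set construction in the paper is analytical, but the notion of a probabilistic state constraint can be illustrated empirically: sample bounded noise paths of the closed loop and count how often a state bound is exceeded. The sketch below is only that crude stand-in; the closed-loop matrix, bounds and sample sizes are all assumed.

```python
import numpy as np

def violation_rate(Acl, limit, N, n_paths, noise_scale, seed=0):
    """Fraction of sampled closed-loop paths on which |x_1| ever
    exceeds `limit` -- an empirical stand-in for checking a chance
    constraint Pr(|x_1| <= limit) >= 1 - eps on the system
    x_{k+1} = Acl x_k + w_k, with w_k uniform and bounded, x_0 = 0."""
    rng = np.random.default_rng(seed)
    n = Acl.shape[0]
    violations = 0
    for _ in range(n_paths):
        x = np.zeros((n, 1))
        for _ in range(N):
            x = Acl @ x + rng.uniform(-noise_scale, noise_scale, size=(n, 1))
            if abs(x[0, 0]) > limit:
                violations += 1
                break
    return violations / n_paths

Acl = 0.5 * np.eye(2)  # contractive closed loop (assumed)
loose = violation_rate(Acl, limit=1.0, N=10, n_paths=200, noise_scale=0.1)
tight = violation_rate(Acl, limit=0.05, N=10, n_paths=200, noise_scale=0.1)
```

With a generous limit the bounded noise can never push the state out (here |x_1| is bounded by 0.1/(1 − 0.5) = 0.2), while a tight limit is frequently violated — the gap is exactly the performance margin that probabilistic constraints trade against worst-case robustness.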
All the features proposed in this work (the expected value of the quadratic distance between outputs and set-point, the affine input parameterization, and the first-step set invariance that handles the probabilistic state constraints) are translated into deterministic convex constraints, leading to a tractable optimization problem. A case study based on an adapted version of the ethanol-water distillation column presented in Ogunnaike et al. (1983) was used to assess the controller's performance. A Monte Carlo approach was implemented to compare it with an offset-free finite-horizon MPC and a saturated linear-quadratic-Gaussian (LQG) controller: 1000 simulations were conducted, in which, at every iteration, the same noise realization was injected into the system controlled by each approach. The mean output values show that the proposed controller tracks different set-points without offset; moreover, it reduces the output variance by about 20% for some variables. Finally, recursive feasibility and stability are guaranteed by the invariant terminal set condition.
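The simulation protocol described above — each controller driven by identical noise realizations so that variances are directly comparable — can be sketched as follows. Plain state-feedback gains stand in for the actual MPC and LQG formulations, and the model and numbers are assumed for illustration.

```python
import numpy as np

def compare_controllers(gains, A, B, C, x0, N, n_runs, noise_scale, seed=0):
    """Monte Carlo comparison: every controller sees the same noise
    realizations (the generator is reset per controller), and the
    variance of the final output is recorded. Controllers here are
    simple laws u = -K x, stand-ins for the paper's controllers."""
    variances = {}
    for name, K in gains.items():
        rng = np.random.default_rng(seed)  # identical noise for each controller
        y_final = []
        for _ in range(n_runs):
            x = x0.copy()
            for _ in range(N):
                w = rng.uniform(-noise_scale, noise_scale, size=x.shape)
                x = A @ x + B @ (-K @ x) + w
            y_final.append((C @ x).item())
        variances[name] = float(np.var(y_final))
    return variances

# toy model and two stabilizing gains (assumed)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
x0 = np.array([[1.0], [0.0]])
var = compare_controllers({"mild": np.array([[1.0, 2.0]]),
                           "tight": np.array([[3.0, 5.0]])},
                          A, B, C, x0, N=50, n_runs=200, noise_scale=0.02)
```

Resetting the generator with the same seed for each controller is the key design choice: it makes the comparison paired, so variance differences reflect the controllers rather than the luck of the noise draw.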

References

Cannon, M., Cheng, Q., Kouvaritakis, B., & Raković, S. V. (2012). Stochastic tube MPC with state estimation. Automatica, 48(3), 536–541.
Couchman, P. D., Cannon, M., & Kouvaritakis, B. (2006). Stochastic MPC with inequality stability constraints. Automatica, 42(12), 2169–2174.
Ferramosca, A., Limon, D., González, A. H., Odloak, D., & Camacho, E. F. (2010). MPC for tracking zone regions. Journal of Process Control, 20(4), 506–516.
Goulart, P. J., Kerrigan, E. C., & Maciejowski, J. M. (2006). Optimization over state feedback policies for robust control with constraints. Automatica, 42(4), 523–533.
Hokayem, P., Cinquemani, E., Chatterjee, D., Ramponi, F., & Lygeros, J. (2012). Stochastic receding horizon control with output feedback and bounded controls. Automatica, 48(1), 77–88.
Korda, M., Gondhalekar, R., Cigler, J., & Oldewurtel, F. (2011). Strongly feasible stochastic model predictive control. IEEE Conference on Decision and Control and European Control Conference, 1245–1251.
Lucia, S., Finkler, T., & Engell, S. (2013). Multi-stage nonlinear model predictive control applied to a semi-batch polymerization reactor under uncertainty. Journal of Process Control, 23(9), 1306–1319.
Maeder, U., Borrelli, F., & Morari, M. (2009). Linear offset-free model predictive control. Automatica, 45(10), 2214–2222.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., & Scokaert, P. O. M. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36, 789–814.
Ogunnaike, B. A., Lemaire, J. P., Morari, M., & Ray, W. H. (1983). Advanced multivariable control of a pilot-plant distillation column. AIChE Journal, 29(4), 632–640.
Pannocchia, G., & Rawlings, J. B. (2003). Disturbance models for offset-free model-predictive control. AIChE Journal, 49(2), 426–437.
Scokaert, P. O. M., & Mayne, D. Q. (1998). Min-max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control, 43(8), 1136–1142.
Van Hessem, D., & Bosgra, O. (2006). Stochastic closed-loop model predictive control of continuous nonlinear chemical processes. Journal of Process Control, 16(3), 225–241.
