(587e) Discreetly Discrete: Quietly Adding Integer-Valued Actuators to Model Predictive Control

Authors:
Risbeck, M., University of Wisconsin-Madison
Rawlings, J. B., University of Wisconsin-Madison

In large-scale industrial processes, there are often both continuous ("how much") and discrete ("which one") control decisions that must be made. For the continuous decisions, a successful paradigm has been model predictive control (MPC), which chooses process inputs at regular intervals by solving an optimization problem considering the evolution of the system over some finite horizon. Owing to the historical difficulty of nonlinear and mixed continuous/discrete optimization, MPC theory and implementation almost always require that all control inputs take values in some convex set, as (in the case of linear models) the resulting convex optimization can be readily solved to (global) optimality. Because this framework precludes discrete optimization variables, such discrete decisions must be removed from the process control layer and are instead made using heuristics or other computational methods on a slower timescale, which can yield suboptimal control performance. The end result is an odd composite system that requires the controller to react to the discrete decisions instead of proactively making them. More desirable would be a single optimizing controller that has at its disposal all of the discrete and continuous actuators of the system.
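To make the receding-horizon idea above concrete, here is a minimal sketch (not from the abstract; all numbers and names are illustrative) of MPC for a scalar linear system with a single on/off actuator. Because the horizon is short, the discrete decisions are handled by brute-force enumeration rather than a mixed-integer solver:

```python
# Illustrative sketch: receding-horizon MPC with a discrete (on/off) actuator.
# Scalar linear model x+ = a*x + b*u with u in {0, 1}, horizon N, quadratic
# stage cost. A real implementation would pose this as a mixed-integer program;
# here we simply enumerate all 2^N binary input sequences.
import itertools

def mpc_step(x0, a=0.9, b=-0.5, N=3, Q=1.0, R=0.1):
    """Return the first input of the best N-step binary input sequence."""
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product((0, 1), repeat=N):
        x, cost = x0, 0.0
        for u in seq:
            cost += Q * x**2 + R * u**2   # stage cost
            x = a * x + b * u             # model prediction
        cost += Q * x**2                  # terminal cost
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

# Closed loop: apply only the first input, then re-solve at the next sample.
x = 2.0
trajectory = [x]
for _ in range(10):
    u = mpc_step(x)
    x = 0.9 * x + (-0.5) * u
    trajectory.append(x)
```

The closed-loop pattern is the essence of MPC: the optimization is re-solved at every sample, and only the first element of the plan is ever applied.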

With advances in computing hardware and optimization methods, it is now possible to quickly solve mixed continuous/discrete optimization problems provided that the formulation is sufficiently tight. For example, commercial solvers such as Gurobi or CPLEX can be used for "convex" mixed-integer quadratic programming (MIQP) problems, which consist of linear constraints and a convex quadratic objective function. This class of problems includes standard linear systems, as well as linear switched systems and piecewise-affine systems, both of which can be modeled using MIQPs. For nonlinear systems, spatial branch-and-bound algorithms can be applied; while such instances are often challenging, they are not significantly more difficult than global optimization for nonconvex continuous-variable problems, which is a requirement of many nonlinear MPC schemes proposed in the literature. As powerful optimization software becomes increasingly available, optimization-based control methods such as MPC become increasingly applicable.
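The structure a solver exploits in such problems can be seen in a toy example (illustrative only; the numbers and function names are our own). For a one-step decision on a switched linear system, fixing the discrete mode leaves a convex QP whose continuous minimizer is available in closed form; a real MIQP solver such as Gurobi or CPLEX interleaves continuous solves like this with branching on the integer variables:

```python
# Toy mixed continuous/discrete decision for a switched system
# x+ = a[mode]*x + b*u: the mode is the discrete "which one" choice,
# u is the continuous "how much" choice. For each fixed mode, minimizing
# (a*x + b*u)**2 + r*u**2 over u is an unconstrained convex QP with
# closed-form minimizer u* = -a*x*b / (b**2 + r).

def solve_one_step(x, a_modes=(1.2, 0.5), b=1.0, r=0.1):
    """Enumerate discrete modes, solve each continuous QP, return the best."""
    best = None
    for mode, a in enumerate(a_modes):
        u = -a * x * b / (b**2 + r)           # continuous minimizer, mode fixed
        cost = (a * x + b * u)**2 + r * u**2  # convex quadratic objective
        if best is None or cost < best[0]:
            best = (cost, mode, u)
    return best

cost, mode, u = solve_one_step(2.0)
```

Enumerating two modes is trivial; the point is that each node of a branch-and-bound tree is exactly this kind of cheap convex subproblem, which is why tight convex-MIQP formulations solve quickly in practice.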

While there have been significant improvements to computational capabilities, control theory for this class of problems has not kept pace, and there is a noticeable gap in the literature for MPC applied to general nonlinear systems with discrete actuators. However, it turns out that when assumptions are carefully chosen, MPC theory for systems with only continuous actuators can be extended without major modification to cover discrete-valued inputs. In this presentation, we discuss the extension of nominal stability theory to systems with both continuous and discrete actuators. We show that the key requirement is that the admissible input set not be assumed to contain the origin in its interior. Then, we discuss some of the computational difficulties of discrete actuators and show how they can be partially alleviated by suboptimal MPC, which relaxes the necessity of finding the global solution to each optimization problem. We conclude with a brief discussion of inherent robustness properties. While practical issues still remain, we feel that these results serve to break down some of the barriers to industrial implementation of MPC with discrete actuators and allow practitioners to leverage advances in optimization methods to improve control capabilities.
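The suboptimal-MPC idea mentioned above can be sketched as follows (a hedged illustration, not the authors' algorithm): warm-start each problem from the previous solution shifted by one step, and accept any feasible input sequence whose cost is no worse than the warm start's. Here the improvement step is a single greedy pass over the binary inputs; in practice one would instead hand a MIP solver the warm start together with a time limit:

```python
# Sketch of suboptimal MPC for a scalar system x+ = a*x + b*u with binary u.
# Global optimality is not required: any feasible sequence no worse than the
# shifted warm start is accepted. All parameter values are illustrative.

def cost_of(seq, x0, a=0.9, b=-0.5, Q=1.0, R=0.1):
    """Quadratic cost of an input sequence from initial state x0."""
    x, J = x0, 0.0
    for u in seq:
        J += Q * x**2 + R * u**2
        x = a * x + b * u
    return J + Q * x**2

def suboptimal_step(x0, prev_seq):
    warm = prev_seq[1:] + (0,)             # shift previous plan, pad with 0
    best_seq, best_J = warm, cost_of(warm, x0)
    for i in range(len(warm)):             # one greedy improvement pass
        trial = list(best_seq)
        trial[i] = 1 - trial[i]            # flip one binary input
        trial = tuple(trial)
        J = cost_of(trial, x0)
        if J < best_J:
            best_seq, best_J = trial, J
    return best_seq                        # feasible, no worse than warm start
```

Because the returned sequence never costs more than the shifted warm start, a decreasing-cost argument can be applied even though no subproblem is solved to global optimality, which is what makes this approach attractive when the mixed-integer problems are expensive.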