
(667d) Unified Treatment of Scheduling and Control: Same Story, Different Dynamics?

Risbeck, M., University of Wisconsin-Madison
Maravelias, C. T., University of Wisconsin-Madison
Rawlings, J. B., University of Wisconsin-Madison
Traditionally, scheduling and control are viewed as two related but very different endeavors. For scheduling, the main decisions are typically discrete yes/no choices; the models capture only important discrete events but span many units; and the objective is generally economic in some sense (e.g., minimizing cost or earliness). For control, the decisions are almost always continuous; the models describe detailed temporal dynamics of the system but are local in scope; and the objective function is artificially designed to drive the system to a predetermined setpoint. Despite these differences, both problems can be addressed by formulating a mathematical optimization problem and solving it repeatedly as new information is received. For control systems, re-optimization is necessary to correct for unmeasured disturbances or to respond to setpoint changes, and the same general considerations trigger rescheduling. Although the timescales may be quite different (in terms of both the optimization horizon and how frequently optimization is performed), this similarity raises the question of whether the two disciplines can be unified under a single mathematical treatment. In this presentation, we advance the idea that certain classes of scheduling and control problems are indeed two variants of the same overall problem, differing only in their system dynamics and decision spaces, and thus can be analyzed using a common set of tools.
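To make the shared "optimize, apply, re-optimize" structure concrete, the following sketch (a hypothetical toy problem, not an example from the presentation) applies a receding-horizon loop to a scalar system with discrete-valued inputs, mirroring scheduling's yes/no decisions: measure the state, enumerate input sequences over a finite horizon, apply only the first decision, and repeat.

```python
from itertools import product

def rolling_horizon(x0, steps=8, N=3, inputs=(-1.0, 0.0, 1.0)):
    """Receding-horizon loop: at each step, enumerate all input
    sequences over the horizon, apply only the first decision of the
    best one, then re-optimize from the resulting state."""
    def cost(x, seq):
        # Stage cost x^2 + 0.1 u^2 plus a terminal penalty, for the
        # toy dynamics x+ = x + u.
        total = 0.0
        for u in seq:
            total += x * x + 0.1 * u * u
            x = x + u
        return total + x * x

    traj, x = [x0], x0
    for _ in range(steps):
        best = min(product(inputs, repeat=N), key=lambda s: cost(x, s))
        x = x + best[0]  # apply only the first decision, discard the rest
        traj.append(x)
    return traj

print(rolling_horizon(3.0))  # state is driven to zero step by step
```

Brute-force enumeration stands in for the mixed-integer or nonlinear solvers used in practice; the point is only the loop structure, which is identical whether the decisions are discrete (scheduling-like) or continuous (control-like).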

To start, we revisit previous work that shows how a common scheduling model can be written in state-space form. This abstraction allows the model to be viewed as a dynamic system rather than as a set of combinatorial constraints. Next, we discuss results showing that, under suitable assumptions, the presence of discrete-valued control inputs does not affect the stability properties of model predictive control (MPC). Combining these ideas, we show that economically optimizing scheduling and control problems can both be viewed as cases of dynamic real-time optimization or economic MPC, which has important ramifications for closed-loop implementation. For example, without end constraints, dynamic economic optimization often exhibits the so-called "turnpike" property, whereby the system follows an optimal (or near-optimal) trajectory throughout most of the horizon but sharply deviates near the end in pursuit of lower cost, without regard for what the system will do after the prediction horizon. This behavior has two main effects: first, a given schedule remains useful for only part of its prediction horizon; second, it becomes very difficult to prove anything about closed-loop performance. However, by using a recursively feasible reference trajectory (e.g., a periodic solution) as an end constraint, it is possible to show that, in the nominal case, the closed-loop cost is asymptotically no worse than that of the reference trajectory. This and other types of terminal constraints can also shrink the necessary prediction horizon, which allows optimization to be performed more rapidly, i.e., in real time rather than only when some specific upset triggers a reschedule. Because closed-loop decisions are taken from the beginning of the prediction horizon rather than the end, they remain nearly optimal, leading to low closed-loop cost.
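The turnpike end-effect and the role of an end constraint can be illustrated with a deliberately small example (entirely hypothetical, not a model from the presentation): a one-product inventory problem with discrete production levels. Without an end constraint, the open-loop optimum lets inventory drain to zero by the end of the horizon; pinning the terminal state back to the initial (periodic) inventory removes that end-effect at a higher, honest cost.

```python
from itertools import product

def plan(x0=2, N=4, demand=1, terminal=None):
    """Enumerate production sequences for a toy inventory model
    x+ = x + u - demand with x >= 0 and stage cost u + 0.2*x
    (production plus holding). `terminal`, if given, pins the
    end-of-horizon inventory (e.g., to the periodic value x0)."""
    best_cost, best_traj = float("inf"), None
    for seq in product((0, 1, 2), repeat=N):
        x, cost, traj, feasible = x0, 0.0, [x0], True
        for u in seq:
            x = x + u - demand
            if x < 0:  # stockout: sequence is infeasible
                feasible = False
                break
            cost += u + 0.2 * x
            traj.append(x)
        if feasible and (terminal is None or x == terminal):
            if cost < best_cost:
                best_cost, best_traj = cost, traj
    return best_cost, best_traj

print(plan())            # free end: inventory drains to zero
print(plan(terminal=2))  # periodic end constraint: inventory restored
```

The unconstrained plan ends at zero inventory purely because nothing beyond the horizon is valued, which is exactly why such a schedule stays useful for only part of its horizon; the terminal-constrained plan ends where it started and can therefore be repeated (recursive feasibility).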

While it is important to understand the nominal properties of the closed-loop system, real systems always deviate from their models due to external disturbances, modeling errors, etc. Thus, we also examine what robustness properties hold for economic scheduling and control problems when the realized system behavior does not match the model's predictions. With a tracking objective, it can be shown that MPC is inherently robust (i.e., cannot be destabilized by small disturbances) even when both discrete and continuous actuators are present. In economic contexts, however, these guarantees may no longer apply, and furthermore, scheduling environments are rarely subject only to vanishingly small disturbances. Thus, using several small example systems, we examine whether nominal online optimization performs well when subject to disturbances despite having fewer theoretical guarantees. We then conclude with some directions for future research. The overall goal is to use insights from scheduling and control to advance a unified perspective that increases the applicability and improves the performance of closed-loop online dynamic optimization techniques.
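One way to probe this question numerically (a hypothetical toy experiment, not one of the example systems from the presentation) is to run a nominal, disturbance-ignorant MPC with discrete inputs in closed loop against a plant that adds an unmodeled disturbance at every step, and check that feedback via re-optimization keeps the state bounded.

```python
from itertools import product

def nominal_mpc(x, N=3, inputs=(-1.0, 0.0, 1.0)):
    """One step of nominal MPC for the model x+ = x + u: enumerate
    discrete input sequences, minimize a tracking cost, and return
    only the first input of the best sequence."""
    def cost(x0, seq):
        total, z = 0.0, x0
        for u in seq:
            total += z * z + 0.1 * u * u
            z = z + u
        return total + z * z
    return min(product(inputs, repeat=N), key=lambda s: cost(x, s))[0]

# Closed loop against a plant the model does not know about:
# the true dynamics are x+ = x + u + w for a persistent disturbance w.
disturbance = [0.3, -0.2, 0.3, 0.3, -0.1, 0.2, 0.3, -0.3]
x, worst = 2.0, 0.0
for w in disturbance:
    u = nominal_mpc(x)
    x = x + u + w  # plant differs from the model by w
    worst = max(worst, abs(x))
print(worst)  # stays bounded, on the order of the disturbance size
```

No formal robustness guarantee is invoked here; the simulation simply shows the qualitative behavior the presentation investigates, namely whether nominal re-optimization absorbs persistent model-plant mismatch in practice.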

Works Cited

  1. Subramanian, K., Maravelias, C.T., Rawlings, J.B., 2012. A state-space model for chemical production scheduling. Comput. Chem. Eng. 47, 97-110.

  2. Rawlings, J.B., Risbeck, M.J., 2017. Model predictive control with discrete actuators: Theory and application. Automatica 78, 258-265.

  3. Gupta, D., Maravelias, C.T., 2016. On deterministic scheduling: Major considerations, paradoxes, and remedies. Comput. Chem. Eng. 94, 312-330.

  4. Angeli, D., Amrit, R., Rawlings, J.B., 2012. On average performance and stability of economic model predictive control. IEEE Trans. Autom. Control 57, 1615-1626.