
(64a) A Framework for Integrating Model Predictive Controllers to Control Large-Scale Systems

Authors 

Venkat, A. N. - Presenter, University of Wisconsin-Madison
Rawlings, J. B., University of Wisconsin-Madison
Wright, S. J., University of Wisconsin-Madison


Control of large, networked systems is typically accomplished by applying local modeling and control techniques to the smaller, more manageable subsystems. In a chemical plant, for instance, raw materials are transformed into high value-added products through a network of interacting unit operations. Model predictive control (MPC) has been widely implemented across the chemical industry, exploiting the rich theoretical developments in the area [9, 3, 6]. It is well known, however, that a decentralized control approach can cause unacceptable closed-loop behavior when the subsystems are tightly coupled. Centralized MPC of large-scale systems, on the other hand, is viewed by most practitioners as unrealistic and, in many cases, undesirable. With decentralized MPCs already operating in several plants, operators do not wish to invest in the complete control system redesign that centralized MPC would require. The opportunity for cross-integration within the MPC framework, and the potential requirements and benefits of such technology, have been discussed in [4, 5]. Representative distributed MPC formulations in the literature are either suboptimal strategies or have unproven nominal properties [1, 2, 10]. In this work, the problem of distributed control of networked systems through the integration of the subsystems' MPCs is addressed. A modeling framework that quantifies the interactions among the subsystems is employed. A cooperation-based distributed MPC algorithm with guaranteed performance properties was described in previous work [8]. All iterates (intermediate state and input trajectories) generated by this algorithm are feasible, and they converge monotonically to the optimal, centralized MPC solution. The distributed controller defined by any intermediate iterate can be shown to stabilize the closed-loop system.
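
To make the cooperation-based iteration concrete, the following is a minimal numerical sketch, not the authors' implementation: two subsystems share a coupled linear model, each minimizes the plant-wide finite-horizon quadratic objective over its own input trajectory with the other trajectory held at the previous iterate, and the next iterate is a convex combination of the two solutions. The model matrices, horizon, and weights are illustrative assumptions, and input constraints (which the actual formulation handles) are omitted for brevity.

```python
import numpy as np

# Coupled plant x+ = A x + B1 u1 + B2 u2 (illustrative data, not from the paper)
A  = np.array([[0.8, 0.1], [0.2, 0.7]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])
Q, R1, R2 = np.eye(2), np.eye(1), np.eye(1)
N  = 10                       # prediction horizon
x0 = np.array([1.0, -1.0])    # current (estimated) plant state

def prediction_matrices(B):
    """Stack the prediction X = Phi x0 + Gamma U over the horizon."""
    n, m = A.shape[0], B.shape[1]
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for r in range(N):
        for c in range(r + 1):
            Gamma[r*n:(r+1)*n, c*m:(c+1)*m] = np.linalg.matrix_power(A, r - c) @ B
    return Phi, Gamma

Phi, G1 = prediction_matrices(B1)
_,   G2 = prediction_matrices(B2)
Qb  = np.kron(np.eye(N), Q)           # stacked stage weights
R1b = np.kron(np.eye(N), R1)
R2b = np.kron(np.eye(N), R2)

def plant_cost(U1, U2):
    X = Phi @ x0 + G1 @ U1 + G2 @ U2
    return X @ Qb @ X + U1 @ R1b @ U1 + U2 @ R2b @ U2

def best_response(Gi, Rib, Gj, Uj):
    """Minimize the plant-wide cost over subsystem i's inputs, Uj held fixed."""
    rhs = Phi @ x0 + Gj @ Uj
    H = Gi.T @ Qb @ Gi + Rib
    return np.linalg.solve(H, -Gi.T @ Qb @ rhs)

U1, U2 = np.zeros(N), np.zeros(N)     # any feasible starting trajectories
w1 = w2 = 0.5                         # convex-combination weights, w1 + w2 = 1
for p in range(20):                   # cooperation iterations (may stop early)
    U1_star = best_response(G1, R1b, G2, U2)
    U2_star = best_response(G2, R2b, G1, U1)
    U1 = w1 * U1_star + (1 - w1) * U1     # every iterate stays feasible and the
    U2 = w2 * U2_star + (1 - w2) * U2     # plant-wide cost is nonincreasing
    print(p, plant_cost(U1, U2))
# Only u1(0) and u2(0) are injected; the horizon recedes at the next sample.
```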

In a practical MPC implementation, the states of each subsystem are estimated rather than measured. Closed-loop stability of the output-feedback distributed MPC controller (terminated at any intermediate iterate) requires that all local subsystem-based state estimators be stable. For unconstrained distributed estimation, one possible choice is to use subsystem-based Kalman filters to estimate the subsystem states from local measurements. We expand upon our earlier results and examine the role of distributed state estimation and disturbance modeling within the framework of distributed MPC. Specifically, we answer the following questions: Under what conditions do stable distributed state estimators exist? Are they optimal? What impact does the choice of the distributed estimation framework have on closed-loop stability? Which choices of disturbance model guarantee offset-free performance, and can the disturbance models employed in the decentralized MPC framework be used in the distributed MPC scheme? To incorporate physical constraints in the distributed estimation framework, a distributed moving horizon estimation (MHE) strategy is formulated. Stability arguments for the distributed MHE formulation are derived from the arrival cost approximation ideas described in [7] for constrained, centralized estimation.
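
As an illustration of the simplest option mentioned above, the sketch below implements a subsystem-based Kalman filter in which the interaction from a neighbouring subsystem enters the prediction step as a known exogenous signal (the neighbour's communicated estimate). This is only one possible choice and is not claimed to be optimal; whether such local filters are stable, and how they affect closed-loop behavior, are precisely the questions examined here. All matrices in the example are illustrative assumptions.

```python
import numpy as np

class SubsystemKF:
    """Local Kalman filter for one subsystem; the neighbour's communicated
    state estimate is treated as a known input in the prediction step."""
    def __init__(self, Aii, Aij, Bi, Ci, Qw, Rv, x0, P0):
        self.Aii, self.Aij, self.Bi, self.Ci = Aii, Aij, Bi, Ci
        self.Qw, self.Rv = Qw, Rv        # process / measurement noise covariances
        self.x, self.P = x0.copy(), P0.copy()

    def predict(self, ui, xj_hat):
        # Interaction term Aij @ xj_hat is taken as known (an approximation).
        self.x = self.Aii @ self.x + self.Aij @ xj_hat + self.Bi @ ui
        self.P = self.Aii @ self.P @ self.Aii.T + self.Qw

    def correct(self, yi):
        # Standard measurement update with the local measurement yi only.
        S = self.Ci @ self.P @ self.Ci.T + self.Rv
        K = self.P @ self.Ci.T @ np.linalg.inv(S)
        self.x = self.x + K @ (yi - self.Ci @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.Ci) @ self.P

# Example: a scalar subsystem weakly coupled to a scalar neighbour.
kf1 = SubsystemKF(Aii=np.array([[0.9]]), Aij=np.array([[0.2]]),
                  Bi=np.array([[1.0]]), Ci=np.array([[1.0]]),
                  Qw=0.01 * np.eye(1), Rv=0.1 * np.eye(1),
                  x0=np.zeros(1), P0=np.eye(1))
kf1.predict(ui=np.array([0.0]), xj_hat=np.array([0.5]))   # time update
kf1.correct(yi=np.array([0.4]))                           # local measurement update
```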

In [8], the steady-state target calculation was carried out in a centralized manner and the availability of the optimal steady-state subsystem input and state vectors was assumed (^1). As an alternative to centralized steady-state target computation, a cooperation-based iterative algorithm for distributed steady-state target calculation is proposed. In this framework, the steady-state input and state targets are computed at the subsystem level, with steady-state input and state information exchanged among the subsystems' MPCs. All intermediate iterates are feasible steady states, and the algorithm approaches the optimal steady-state target monotonically with iteration number. These two properties allow the distributed target calculation algorithm to be terminated at any intermediate iterate without compromising controller stability.
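
A minimal sketch of such a cooperation-based target calculation for two subsystems is given below (using cvxpy for the local QPs); it is an illustration under assumed model data, setpoints, and input bounds, not the paper's formulation verbatim. Each subsystem optimizes the plant-wide steady-state objective over its own steady-state input subject to the steady-state and input constraints, with the other input fixed at the previous iterate, and the iterates are combined convexly.

```python
import numpy as np
import cvxpy as cp

# Coupled plant at steady state: x_s = A x_s + B1 u1_s + B2 u2_s (assumed data)
A  = np.array([[0.8, 0.1], [0.2, 0.7]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])
C  = np.eye(2)
ysp   = np.array([0.5, -0.3])   # plant-wide output setpoint (illustrative)
u_max = 1.0                     # local input bound |u_i| <= u_max

def local_target(Bi, Bj, uj_fixed):
    """Subsystem i's QP over (x_s, u_is) with the other input held fixed."""
    xs, ui = cp.Variable(2), cp.Variable(1)
    cons = [xs == A @ xs + Bi @ ui + Bj @ uj_fixed,   # feasible steady state
            cp.abs(ui) <= u_max]
    cost = cp.sum_squares(C @ xs - ysp) + 1e-4 * cp.sum_squares(ui)
    cp.Problem(cp.Minimize(cost), cons).solve()
    return ui.value

u1s, u2s = np.zeros(1), np.zeros(1)
w = 0.5                          # convex-combination weight (w + (1 - w) = 1)
for p in range(10):              # may be stopped at any intermediate iterate
    u1_star = local_target(B1, B2, u2s)
    u2_star = local_target(B2, B1, u1s)
    u1s = w * u1_star + (1 - w) * u1s
    u2s = w * u2_star + (1 - w) * u2s
# Because the steady-state and input constraints are convex, every combined
# iterate (u1s, u2s), with its corresponding x_s, is a feasible steady state,
# and the plant-wide target cost is nonincreasing toward the centralized
# optimum.
```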

The distributed MPC controller, therefore, consists of three main components:

1. Distributed regulator.

2. Distributed state estimator with disturbance model.

3. Distributed target calculation.

Because the distributed MPC algorithm can be terminated at any intermediate iterate without affecting feasibility or closed-loop stability, the practitioner can stop the algorithm at the end of each sampling interval, even if convergence has not been attained. Such a distributed control philosophy also presents an opportunity to enhance control performance in cases in which different parts of an interconnected system are owned by different organizations.

We present examples from chemical engineering and other engineering fields to illustrate the effectiveness of the proposed distributed MPC approach. In each example, the performance of the distributed MPC framework is evaluated against existing MPC formulations. In many cases, we observe that the cooperation-based distributed MPC formulation, terminated after just one iterate, achieves a significant improvement in closed-loop performance compared to decentralized MPC.

(^1): The Composite Linear Program (CLP), an industrial application from Aspentech Ltd., also solves a large, centralized steady-state target problem and passes the relevant steady-state target vectors to each subsystem's DMC controller. The authors would like to thank Dr. Rahul Bindlish, Dow Chemicals, and Dr. Tom Badgwell, Aspentech Ltd., for this information.

References:

[1] J. Antwerp and R. Braatz. Model predictive control of large scale processes. J. Proc. Control, 10:1-8, 2000.

[2] E. Camponogara, D. Jia, B. H. Krogh, and S. Talukdar. Distributed model predictive control. IEEE Ctl. Sys. Mag., pages 44-52, February 2002.

[3] J. J. Downs. Linking control strategy design and model predictive control. In J. B. Rawlings, B. A. Ogunnaike, and J. W. Eaton, editors, Chemical Process Control VI: Sixth International Conference on Chemical Process Control, pages 352-363, Tucson, Arizona, January 2001. AIChE Symposium Series, Volume 98, Number 326.

[4] R. Kulhavy, J. Lu, and T. Samad. Emerging technologies for enterprise optimization in the process industries. In J. B. Rawlings, B. A. Ogunnaike, and J. W. Eaton, editors, Chemical Process Control VI: Sixth International Conference on Chemical Process Control, pages 352-363, Tucson, Arizona, January 2001. AIChE Symposium Series, Volume 98, Number 326.

[5] J. Lu. Challenging control problems and emerging technologies in enterprise optimization. Control Eng. Prac., 11(8):847-858, August 2003.

[6] S. J. Qin and T. A. Badgwell. A survey of industrial model predictive control technology. Control Eng. Prac., 11(7):733-764, 2003.

[7] C. V. Rao, J. B. Rawlings, and J. H. Lee. Constrained linear state estimation - a moving horizon approach. Automatica, 37(10):1619-1628, 2001.

[8] A. N. Venkat, J. B. Rawlings, and S. J. Wright. Stability and optimality of distributed model predictive control. Submitted to the CDC-ECC Joint Conference, Seville, Spain, 2005.

[9] R. E. Young, R. D. Bartusiak, and R. W. Fontaine. Evolution of an industrial nonlinear model predictive controller. In J. B. Rawlings, B. A. Ogunnaike, and J. W. Eaton, editors, Chemical Process Control VI: Sixth International Conference on Chemical Process Control, pages 342-351, Tucson, Arizona, January 2001. AIChE Symposium Series, Volume 98, Number 326.

[10] G. Zhu and M. Henson. Model predictive control of interconnected linear and nonlinear processes. Ind. Eng. Chem. Res., 41:801-816, 2002.
