(62d) Three Emerging Technologies That Will Soon Disrupt Manufacturing and Process Control

Authors 

Badgwell, T. - Presenter, Collaborative Systems Integration
Bartusiak, R. D., Collaborative Systems Integration

We are currently in the midst of a fourth industrial revolution (Industry 4.0 [1]), involving the large-scale automation of traditional manufacturing and industrial practices, made possible by recent developments in mathematical algorithms, computer hardware, and internet connectivity (Industrial Internet of Things (IIoT) [2]). While most of this work can be considered evolutionary in nature, in this presentation we highlight three emerging technologies that appear to be truly disruptive; that is, they are likely to have such a large impact that they will change the way theoreticians and practitioners view manufacturing and process control. These technologies are Economic Model Predictive Control (EMPC), Open Process Automation (OPA), and Deep Reinforcement Learning (DRL).

Economic MPC (EMPC) is a relatively new technology that combines economic optimization with Model Predictive Control (MPC) [3], two functions that have traditionally been implemented separately. While the underlying theory was worked out several years ago [4], EMPC applications have only recently begun to appear. Professor Jim Rawlings and co-workers, for example, presented a successful EMPC implementation for the Stanford University campus heating and cooling system [5]. Recent theoretical work has shown that scheduling problems, which are usually approached as static optimizations, can be treated as a special case of closed-loop EMPC [6]. This unification of closed-loop scheduling and control has shed new light on problems such as rescheduling in the face of disturbances, and it gives academics and practitioners a completely new framework for viewing and analyzing scheduling problems.
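
To make the idea concrete, here is a minimal receding-horizon sketch in the spirit of the HVAC application in [5], with all data assumed: the plant is a hypothetical thermal-storage balance, and the stage cost is the operating cost itself (a time-varying energy price times purchased energy) rather than a penalty on deviation from a setpoint, which is what distinguishes EMPC from conventional tracking MPC. The cvxpy formulation, prices, demand profile, and bounds are all illustrative.

```python
import cvxpy as cp
import numpy as np

# Assumed data: hourly energy prices and heating demand over the horizon.
N = 24                                                   # prediction horizon (h)
price = 20 + 10 * np.sin(np.linspace(0, 2 * np.pi, N))  # $/MWh (hypothetical)
demand = 5 + 2 * np.cos(np.linspace(0, 2 * np.pi, N))   # MWh per hour (hypothetical)

x = cp.Variable(N + 1)        # thermal storage inventory (MWh)
u = cp.Variable(N)            # purchased heating energy per hour (MWh)

constraints = [x[0] == 10.0]  # measured storage level at the current time
for k in range(N):
    # Energy balance: storage gains what is purchased, loses what is consumed.
    constraints.append(x[k + 1] == x[k] + u[k] - demand[k])
constraints += [x >= 0, x <= 20, u >= 0, u <= 12]

# Economic stage cost: minimize operating cost directly, with no setpoint.
cost = cp.sum(cp.multiply(price, u))
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()

print(f"optimal cost over horizon: ${problem.value:.2f}")
print(f"first move to apply: {u.value[0]:.2f} MWh")
```

In closed loop only the first move would be applied before re-measuring the state and re-solving at the next sample; it is this feedback that separates EMPC from a one-shot static economic optimization.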

The industrial automation marketplace, comprising Distributed Control System (DCS), Programmable Logic Controller (PLC), and Supervisory Control and Data Acquisition (SCADA) technology offerings, will soon experience a historic, game-changing disruption with the emergence of Open Process Automation (OPA) technology. Manufacturers, whose innovations have been constrained for decades by the limitations of closed, proprietary systems, will soon see the benefits of open, interoperable, resilient, secure-by-design automation systems, made possible by the consensus-based Open Process Automation Standard (O-PAS) developed by the Open Process Automation Forum (OPAF) [7]. Once O-PAS-certified automation systems become widespread, vendors will see the market for their products and services expand significantly as the visions of Industry 4.0 and the IIoT are realized. Academics and technology developers will find more opportunities to test their solutions as deployment becomes easier. Dr. Don Bartusiak, co-director of OPAF, summarizes progress to date in a paper presented at the IFAC World Congress 2020 [7].
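
A practical consequence of this openness is that components from different vendors expose data through shared standards; in particular, O-PAS builds on OPC UA for connectivity. The fragment below is a hypothetical sketch, not O-PAS reference code: it uses the open-source python-opcua package to read one process value from a controller's OPC UA server, with the endpoint address and node identifier assumed for illustration.

```python
# Hypothetical sketch; requires the open-source package: pip install opcua
from opcua import Client

# Endpoint and node id are assumed; an O-PAS-aligned controller would
# publish its tags through a standards-based OPC UA server like this.
client = Client("opc.tcp://192.168.0.10:4840")
client.connect()
try:
    pv = client.get_node("ns=2;s=FIC101.PV")  # hypothetical flow-loop tag
    print("FIC101.PV =", pv.get_value())
finally:
    client.disconnect()
```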

Reinforcement Learning (RL) is a Machine Learning (ML) technology in which a computer agent learns, through trial and error, the best way to accomplish a particular task [8]. Deep Learning (DL) is a technology in which neural networks with many layers are used to model complex relationships [9]. When DL is used to parameterize the policies and value functions of an RL agent, the resulting Deep Reinforcement Learning (DRL) technology is especially powerful. In 2016, for example, a DRL agent named AlphaGo soundly defeated Go world champion Lee Sedol [10]. Applications of this technology to manufacturing and process control systems are currently under study [11]. DRL is unlikely to replace currently successful control algorithms such as PID and MPC; rather, it will take over some of the mundane tasks that humans perform to manage automation and control systems. For example, it appears that a DRL agent can learn to tune PID loops effectively [12]. Other possibilities include advising operators during transient and upset conditions, mitigating disturbances such as weather events, and detecting and mitigating unsafe operations [11].
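
To illustrate the PID-tuning idea, the toy sketch below frames tuning as an episodic reward-maximization problem: each episode simulates an assumed first-order plant under PI control and returns the negative integrated absolute error as the reward an agent would learn from. A simple random search stands in for the deep-network policy of a real DRL agent, and the plant model, gain ranges, and reward are all hypothetical; this is not the method of [12].

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_reward(kp, ki, n_steps=200, dt=0.1):
    """Simulate an assumed first-order plant (tau*dy/dt = -y + k*u) under
    PI control and return the negative integrated absolute error."""
    tau, k, setpoint = 2.0, 1.5, 1.0
    y, integ, reward = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ        # PI control law
        y += dt * (-y + k * u) / tau     # Euler step of the plant
        reward -= abs(err) * dt
    return reward

# Stand-in "agent": random search over the gains. A DRL agent would instead
# update a neural-network policy from the same scalar reward signal.
best_gains, best_reward = None, -np.inf
for _ in range(500):
    kp, ki = rng.uniform(0.1, 10.0, size=2)
    r = episode_reward(kp, ki)
    if r > best_reward:
        best_gains, best_reward = (kp, ki), r

print("best gains (kp, ki):", best_gains, "reward:", best_reward)
```

The interface is the point: any agent that proposes gains and consumes a scalar reward, a DRL policy included, drops into the same loop without changing the plant-side code.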

References

[1] Y Liao, F Deschamps, EdFR Loures, LFP Ramos, “Past, present, and future of Industry 4.0 - a systematic literature review and research agenda proposal”, Intl J Production Research, 55 (12), 3609-3629, (2017).

[2] H Boyes, B Hallaq, J Cunningham, T Watson, “The industrial internet of things (IIoT): An analysis framework”, Computers in Industry, 101, 1–12, (2018).

[3] SJ Qin, TA Badgwell, “A survey of industrial model predictive control technology”, Control Engineering Practice 11 (7), 733-764, (2003).

[4] JB Rawlings, D Angeli, CN Bates, “Fundamentals of economic model predictive control”, Proc 51st IEEE Conference on Decision and Control, 3851-3861, (2012).

[5] JB Rawlings, NR Patel, MJ Risbeck, CT Maravelias, MJ Wenzel, RD Turney, “Economic MPC and real-time decision making with application to large-scale HVAC energy systems”, Computers & Chemical Engineering, 114 (6), 89-98, (2018).

[6] MJ Risbeck, CT Maravelias, JB Rawlings, “Unification of Closed-Loop Scheduling and Control: State-space Formulations, Terminal Constraints, and Nominal Theoretical Properties”, Computers & Chemical Engineering, 129 (10), (2019).

[7] RD Bartusiak, S Bitar, DL DeBari, BG Houk, D Stevens, B Fitzpatrick, P Sloan, “Open Process Automation: A Standards-Based, Open, Secure, Interoperable Process Control Architecture”, Proc IFAC 2020 World Congress, (2020).

[8] RS Sutton, AG Barto, “Reinforcement Learning: An Introduction”, 2nd ed., The MIT Press, (2018).

[9] I Goodfellow, Y Bengio, A Courville, “Deep Learning”, The MIT Press, (2016).

[10] D Silver, A Huang, CJ Maddison, A Guez, L Sifre, G van den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, et al., “Mastering the game of Go with deep neural networks and tree search”, Nature, 529, 484–489, (2016).

[11] J Shin, TA Badgwell, KH Liu, JH Lee, “Reinforcement Learning – Overview of recent progress and implications for process control”, Computers & Chemical Engineering, 127, 282-294, (2019).

[12] TA Badgwell, KH Liu, NA Subrahmanya, WD Liu, MH Kovalski, “Adaptive PID Controller Tuning via Deep Reinforcement Learning”, U.S. patent 1095073, granted February 9, 2021.