Three Emerging Technologies That Will Soon Disrupt Manufacturing and Process Control
- Type: Conference Presentation
- Conference Type: AIChE Annual Meeting
- Presentation Date: November 8, 2021
- Duration: 25 minutes
- Skill Level: Intermediate
- PDHs: 0.50
Economic MPC (EMPC) is a relatively new technology that combines economic optimization with Model Predictive Control (MPC), two functions that are traditionally implemented separately. While the theory was worked out several years ago, EMPC applications have only recently begun to appear. Professor Jim Rawlings and co-workers, for example, presented a successful EMPC implementation for the Stanford University campus heating and cooling system. Recent theoretical work has shown that scheduling problems, which are usually approached as static optimization, can be considered a special case of closed-loop EMPC. This unification of closed-loop scheduling and control has shed new light on problems such as rescheduling in the face of disturbances, and provides academics and practitioners with a completely new framework for viewing and analyzing scheduling problems.
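The core idea above can be sketched in a few lines of code: instead of the quadratic setpoint-tracking objective of conventional MPC, the receding-horizon problem directly minimizes an economic stage cost. The toy model, prices, and demand target below are hypothetical illustrations, not taken from the cited Stanford application.

```python
# Minimal sketch of economic MPC (EMPC) on a hypothetical scalar system.
import numpy as np
from scipy.optimize import minimize

A, B = 0.9, 0.5   # toy linear dynamics: x+ = A*x + B*u
N = 6             # prediction horizon
price = np.array([1.0, 1.0, 3.0, 3.0, 1.0, 1.0])  # time-varying energy price

def economic_cost(u, x0):
    """Stage cost is an economic objective (price * input energy) plus a
    demand-shortfall penalty, rather than a quadratic tracking cost."""
    x, cost = x0, 0.0
    for k in range(N):
        cost += price[k] * u[k]**2 + (x - 1.0)**2  # economics + demand
        x = A * x + B * u[k]                       # predicted evolution
    return cost

def empc_step(x0):
    """Solve the finite-horizon economic problem; apply the first input."""
    res = minimize(economic_cost, np.zeros(N), args=(x0,),
                   bounds=[(0.0, 2.0)] * N)
    return res.x[0]

# Closed loop: re-solve at every sample (receding horizon), so the
# economic optimization and the feedback control are a single layer.
x = 0.0
for _ in range(5):
    u0 = empc_step(x)
    x = A * x + B * u0
```

The receding-horizon loop is what distinguishes this from a one-shot static optimization: the economic problem is re-solved at every sample using the measured state, which is also the sense in which closed-loop scheduling becomes a special case of EMPC.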
The industrial automation marketplace, comprising Distributed Control System (DCS), Programmable Logic Controller (PLC), and Supervisory Control and Data Acquisition (SCADA) technology offerings, will soon experience a historic, game-changing disruption with the emergence of Open Process Automation (OPA) technology. Manufacturers, whose innovations have been constrained for decades by the limitations of closed, proprietary systems, will soon experience the benefits of open, interoperable, resilient, secure-by-design automation systems, made possible by the development of the consensus-based Open Process Automation Standard (O-PAS) by the Open Process Automation Forum (OPAF). Once O-PAS certified automation systems become widespread, vendors will see the market for their products and services expand significantly as the visions of Industry 4.0 (I4.0) and the Industrial Internet of Things (IIoT) are realized. Academics and technology developers will see more opportunities to test their solutions as it becomes easier to deploy them. Dr. Don Bartusiak, co-director of OPAF, summarizes the Forum's progress to date in a paper presented recently at the IFAC World Congress 2020.
Reinforcement Learning (RL) is a Machine Learning (ML) technology in which a computer agent learns, through trial and error, the best way to accomplish a particular task. Deep Learning (DL) is a technology in which neural networks with a large number of layers are used to model relationships. When DL is used to parametrize the policies and value functions of an RL agent, the resulting Deep Reinforcement Learning (DRL) technology is especially powerful. In 2017, for example, a DRL agent named AlphaGo soundly defeated the reigning world champion Go player. Applications of this technology to manufacturing and process control systems are currently under study. It is likely that DRL will not replace currently successful control algorithms such as PID and MPC, but will rather take over some of the mundane tasks that humans perform to manage automation and control systems. For example, it appears that a DRL agent can learn how to tune PID loops effectively. Other possibilities include advising operators during transient and upset conditions, mitigating disturbances such as weather events, and detecting and mitigating unsafe operations.
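The trial-and-error character of learning to tune a PID loop can be illustrated with a deliberately simplified sketch. A full DRL approach, as in the cited patent, would use a neural-network policy; here a plain random-search "agent" stands in, and the first-order plant and all gains are hypothetical.

```python
# Toy illustration: learning PI gains by trial and error on a
# hypothetical first-order plant x+ = x + dt*(-x + u).
import random

def closed_loop_cost(kp, ki, setpoint=1.0, dt=0.05, steps=200):
    """Integral of squared error for a PI loop on the toy plant;
    lower cost corresponds to higher 'reward' for the agent."""
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        integ += e * dt
        u = kp * e + ki * integ   # PI control law
        x += dt * (-x + u)        # plant step
        cost += e * e * dt
    return cost

def tune(episodes=300, seed=0):
    """Trial-and-error search: perturb the gains, keep improvements."""
    rng = random.Random(seed)
    kp, ki = 1.0, 0.1
    best = closed_loop_cost(kp, ki)
    for _ in range(episodes):
        kp2 = max(0.0, kp + rng.gauss(0, 0.3))
        ki2 = max(0.0, ki + rng.gauss(0, 0.1))
        c = closed_loop_cost(kp2, ki2)
        if c < best:              # better closed-loop performance: keep
            kp, ki, best = kp2, ki2, c
    return kp, ki, best

kp, ki, cost = tune()
```

The essential loop is the same one a DRL tuner follows: run a closed-loop episode, score it, and adjust the tuning policy; DRL replaces the random perturbations with gradient-based updates to a neural policy that can generalize across loops.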
Y Liao, F Deschamps, EdFR Loures, LFP Ramos, "Past, present, and future of Industry 4.0 - a systematic literature review and research agenda proposal", Intl J Production Research, 55 (12), 3609-3629, (2017).
H Boyes, B Hallaq, J Cunningham, T Watson, "The industrial internet of things (IIoT): An analysis framework", Computers in Industry, 101, 1-12, (2018).
SJ Qin, TA Badgwell, "A survey of industrial model predictive control technology", Control Engineering Practice, 11 (7), 733-764, (2003).
JB Rawlings, D Angeli, CN Bates, "Fundamentals of economic model predictive control", 51st Conference on Decision and Control, 3851-3861, (2012).
JB Rawlings, NR Patel, MJ Risbeck, CT Maravelias, MJ Wenzel, RD Turney, "Economic MPC and real-time decision making with application to large-scale HVAC energy systems", Computers & Chemical Engineering, 114 (6), 89-98, (2018).
MJ Risbeck, CT Maravelias, JB Rawlings, "Unification of Closed-Loop Scheduling and Control: State-space Formulations, Terminal Constraints, and Nominal Theoretical Properties", Computers & Chemical Engineering, 129 (10), (2019).
RD Bartusiak, S Bitar, DL DeBari, BG Houk, D Stevens, B Fitzpatrick, P Sloan, "Open Process Automation: A Standards-Based, Open, Secure, Interoperable Process Control Architecture", Proc IFAC 2020 World Congress, (2020).
RS Sutton, AG Barto, "Reinforcement Learning: An Introduction", The MIT Press, (2018).
I Goodfellow, Y Bengio, A Courville, "Deep Learning", The MIT Press, (2016).
D Silver, A Huang, CJ Maddison, A Guez, L Sifre, G Van Den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, "Mastering the game of Go with deep neural networks and tree search", Nature, 529, 484-489, (2016).
J Shin, TA Badgwell, KH Liu, JH Lee, "Reinforcement Learning - Overview of recent progress and implications for process control", Computers & Chemical Engineering, 127, 282-294, (2019).
TA Badgwell, KH Liu, NA Subrahmanya, WD Liu, MH Kovalski, "Adaptive PID Controller Tuning via Deep Reinforcement Learning", U.S. patent 1095073, granted February 9, 2021.
- AIChE Member Credits: 0.5
- AIChE Graduate Student Members: Free
- AIChE Undergraduate Student Members: Free