
(199f) Optimal Control for Simulated Moving Bed Process with Deep Reinforcement Learning

Authors 

Oh, T. H. - Presenter, Seoul National University
Lee, J. M., Seoul National University

The Simulated Moving Bed (SMB) process is a continuous separation process that separates a mixture by exploiting the difference in the affinities of its solutes for the solid adsorbent [1]. Compared to batch chromatography, the SMB process typically achieves higher productivity and lower eluent consumption. However, determining a proper control scheme that drives the SMB process to optimality is still an open question. The optimal operating condition typically lies on the boundary of the purity constraints, so adaptive or robust optimal feedback control is unavoidable [2–4]. The main difficulty is the shortage of computation time, which restricts both proper state estimation and computation of the optimal input. Several approaches have been proposed, such as MPC, adaptive control, and two-stage control. However, all of these studies compromise on the computational load by using a simple approximate model, reducing the number of state and input dimensions, or applying the input with a time delay.

Recently, attempts have been made to apply Deep Reinforcement Learning (DRL) to chemical processes in order to achieve process optimality [5]. The main benefits of this approach are threefold: it can achieve optimality without any prior knowledge of the process, it can continuously adapt to the system, and, most importantly, the computation time needed to determine the input is negligible compared to any model-based multi-stage optimal control scheme. The downside is that a large amount of high-quality process data is required.

In this work, we apply DRL to the SMB process in order to calculate the optimal feedback input under a constraint on computation time. Instead of an actual plant, a rigorous first-principles model served as the plant, and process data were obtained by numerically solving the model. For this rigorous model, obtaining a single simulation trajectory may take several hours, so it is infeasible for a single machine to gather sufficient process data. One could ease this task by using an approximate model or sacrificing the accuracy of the simulation. In this study, we instead used computational resources from the Google Cloud service to run the simulations in parallel. An environment for handling the data from multiple machines was set up, and the update algorithm for the DRL agent was modified to fit this setting. The states, actions, and environment were defined so that the agent does not experience any jump in the states. A model-free algorithm utilizing an approximated Q-function was selected [6, 7]. The Q-function was approximated by a deep neural network with a specific structure that better captures the cyclic nature of the process. This Q-function was trained to predict the value of a given state, and discretized control inputs were selected by the greedy policy derived from the Q-function. Simulation tests were conducted on various scenarios, such as start-up and parameter changes, and the results indicate an improvement in process performance.
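To make the learning scheme concrete, the following is a minimal sketch (in Python with PyTorch) of a deep Q-network with a greedy policy over discretized control inputs, trained with the double Q-learning target of [7]. It is not the authors' implementation: the state dimension, the size of the discretized action grid, the reward, and the network architecture are illustrative assumptions, and the cycle-aware network structure and the parallel data-collection machinery described above are omitted.

```python
# Minimal sketch of a deep Q-network controller with a double Q-learning
# update, assuming discretized SMB control inputs. Dimensions are illustrative.

import torch
import torch.nn as nn

STATE_DIM = 8     # assumed: e.g., concentration measurements over one switching period
N_ACTIONS = 27    # assumed: discretized grid of zone flow rates / switching time

class QNetwork(nn.Module):
    """Feed-forward approximation of the Q-function Q(s, a) for all discrete a."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

q_net = QNetwork(STATE_DIM, N_ACTIONS)          # online network
target_net = QNetwork(STATE_DIM, N_ACTIONS)     # target network for double Q-learning
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
gamma = 0.99

def greedy_action(state: torch.Tensor) -> int:
    """Select the discretized control input with the highest predicted value."""
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())

def double_q_update(s, a, r, s_next, done):
    """One gradient step on a batch of transitions (s, a, r, s', done)."""
    # The online network selects the next action and the target network
    # evaluates it; this decoupling mitigates value overestimation [6, 7].
    with torch.no_grad():
        a_next = q_net(s_next).argmax(dim=1, keepdim=True)
        q_next = target_net(s_next).gather(1, a_next).squeeze(1)
        y = r + gamma * (1.0 - done) * q_next
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the setting described above, the transitions (s, a, r, s') would come from the parallel first-principles simulations, and the target network would be synchronized with the online network periodically.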

References

[1] A. Rajendran, G. Paredes, M. Mazzotti, “Simulated moving bed chromatography for the separation of enantiomers” Journal of Chromatography A 1216 (4) (2009) 709-738.

[2] G. Erdem, S. Abel, M. Morari, M. Mazzotti, M. Morbidelli, J. H. Lee, “Automatic control of simulated moving beds” Industrial & Engineering Chemistry Research 43 (2) (2004) 405-421.

[3] P. Suvarov, A. Kienle, C. Nobre, G. De Weireld, A. V. Wouwer, “Cycle to cycle adaptive control of simulated moving bed chromatographic separation processes” Journal of Process Control 24 (2) (2014) 357-367.

[4] A. A. Neto, A. Secchi, M. Souza Jr, A. Barreto Jr, “Nonlinear model predictive control applied to the separation of praziquantel in simulated moving bed chromatography” Journal of Chromatography A 1470 (2016) 42-49.

[5] J. H. Lee, J. Shin, M. J. Realff, “Machine learning: Overview of the recent progresses and implications for the process systems engineering field” Computers & Chemical Engineering 114 (2018) 111-121.

[6] S. Thrun, A. Schwartz, “Issues in using function approximation for reinforcement learning” in: Proceedings of the 1993 Connectionist Models Summer School, Hillsdale, NJ: Lawrence Erlbaum, 1993.

[7] H. Van Hasselt, A. Guez, D. Silver, “Deep reinforcement learning with double q-learning” in: Thirtieth AAAI Conference on Artificial Intelligence, 2016.
