Facilitate Process Optimization using Machine Learning Methodologies | AIChE


Process optimization is a key component of process development. Because of the complex nature of industrial processes, especially (semi-)batch processes, process optimization usually relies on knowledge-driven models developed from first principles. Over the last 50 years, the application of knowledge-driven models has increased considerably, especially for continuous chemical, petroleum, and petrochemical processes [1]. However, developing an accurate knowledge-driven model is not always achievable. Challenges that engineers and scientists frequently encounter include: 1) lack of knowledge about the inner workings of the process; 2) the high cost of collecting informative data to improve the model; 3) the difficulty of justifying the return on investment for model development; and 4) the long development time of a fundamental model for the process optimization task.

As an alternative, machine learning methodologies (also known as data-driven methodologies), which optimize the process based on its input-output relationships, can be applied to optimize processes in a timely and economical manner. It has been shown that, with a few well-designed experiments, a data-driven approach can achieve process performance similar to the optimum obtained with knowledge-driven models [2]. The limitation of data-driven methodologies is that their interpretability is lower than that of knowledge-driven approaches. Therefore, leveraging the advantages of both data-driven and knowledge-driven approaches to facilitate process optimization tasks is a high priority.
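To make the data-driven route concrete, the sketch below fits a quadratic response surface to a handful of experiments and then optimizes the fitted surface over the experimental region. The variables (reactor temperature T, feed amount F), their ranges, and the yield values are illustrative assumptions for this sketch, not data from the cited study [2].

```python
import numpy as np

# Hypothetical "experiments": temperature T and feed amount F vs. measured yield.
X = np.array([[340.0, 1.0], [360.0, 1.0], [340.0, 1.4],
              [360.0, 1.4], [350.0, 1.2], [350.0, 1.0], [350.0, 1.4]])
y = np.array([0.71, 0.75, 0.74, 0.80, 0.82, 0.76, 0.79])

def features(X):
    # Full quadratic response surface in (T, F)
    T, F = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(T), T, F, T * F, T**2, F**2])

# Least-squares fit of the response surface to the experimental data
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Optimize the fitted surface on a grid over the experimental region
Tg, Fg = np.meshgrid(np.linspace(340, 360, 81), np.linspace(1.0, 1.4, 41))
cand = np.column_stack([Tg.ravel(), Fg.ravel()])
pred = features(cand) @ coef
best = cand[np.argmax(pred)]
print("suggested (T, F):", best, "predicted yield:", pred.max())
```

In practice the experiments would come from a designed (dynamic) experimental plan rather than the ad hoc grid shown here.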

In this paper, we propose a hybrid methodology that combines an existing, yet not very accurate, knowledge-driven model with historical data collected from regular manufacturing activities to optimize the performance of a semi-batch process without running new experiments. Specifically, a Long Short-Term Memory (LSTM) network [3,4] is developed using both simulated data from the knowledge-driven model and the historical data. The LSTM model learns the inner workings of the process described by both datasets. The LSTM network then serves as an interactive environment in which a reinforcement learning (RL) agent [5,6] is trained to learn optimized fed-batch profiles. The RL agent suggests new optimal conditions with higher reactor temperature and higher reactant feed than the current ones. Implementing these optimal conditions in the knowledge-driven model yields an estimated 3% improvement in product value. In addition, the estimated improvement helps justify the investment in further developing a more accurate and complete first-principles model that could lead to greater process improvement.
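The surrogate-plus-agent loop can be sketched minimally as follows. Here a toy analytic function stands in for the trained LSTM surrogate, and a cross-entropy-style random search stands in for the PPO agent of [5]; all process variables, bounds, and coefficients are illustrative assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained LSTM surrogate: maps a fed-batch
# recipe (reactor temperature plus a feed-rate profile over 10 intervals)
# to a product-value score. In the paper, this role is played by the LSTM.
def surrogate_value(temp, feed_profile):
    conversion = np.tanh(0.02 * (temp - 330.0)) * feed_profile.sum()
    penalty = 0.05 * np.square(np.diff(feed_profile)).sum()  # favor smooth profiles
    return conversion - penalty

# Cross-entropy-style search standing in for the RL agent: sample recipes
# from a Gaussian "policy", keep the elites, refit the policy, repeat.
mean = np.concatenate([[345.0], np.full(10, 0.5)])  # [temp, feed profile]
std = np.concatenate([[5.0], np.full(10, 0.2)])
for _ in range(40):
    samples = rng.normal(mean, std, size=(64, 11))
    samples[:, 1:] = np.clip(samples[:, 1:], 0.0, 1.0)  # feed-rate bounds
    scores = np.array([surrogate_value(s[0], s[1:]) for s in samples])
    elites = samples[np.argsort(scores)[-8:]]           # top 8 recipes
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3

print("optimized temperature:", round(mean[0], 1))
print("optimized feed profile:", np.round(mean[1:], 2))
```

Because the toy surrogate rewards temperature and total feed, the search drifts toward higher temperature and higher feed, mirroring the direction of the conditions the RL agent suggests in the abstract; the real methodology would train against the LSTM environment rather than an analytic function.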

References

1. Bonvin, D.; Georgakis, C.; Pantelides, C.; Barolo, M.; Grover, M.; Rodrigues, D.; Schneider, R.; Dochain, D., Linking models and experiments. Industrial & Engineering Chemistry Research 2016, 55 (25), 6891-6903.

2. Georgakis, C.; Chin, S.-T.; Wang, Z.; Hayot, P.; Chiang, L. H.; Wassick, J. M.; Castillo, I., Data-Driven Optimization of an Industrial Batch Polymerization Process Using the Design of Dynamic Experiments Methodology. Industrial & Engineering Chemistry Research 2020.

3. Hochreiter, S.; Schmidhuber, J., Long short-term memory. Neural computation 1997, 9 (8), 1735-1780.

4. Ma, Y.; Wang, Z.; Castillo, I.; Rendall, R.; Bindlish, R.; Aschraft, B.; Bentley, D.; Benton, M.; Romagnoli, J. A.; Chiang, L. In Reinforcement Learning-Based Fed-Batch Optimization with Reaction Surrogate Model, American Control Conference (ACC), New Orleans, Louisiana, USA, 2021.

5. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O., Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 2017.

6. Rendall, R.; Ma, Y.; Castillo, I.; Wang, Z.; Peng, Y.; Chiang, L. In Applying Reinforcement Learning for Batch Trajectory Optimization in a Chemical Industrial Application, AIChE Spring Meeting, Virtual, 2021.