(12b) Development of Mass and Energy Constrained Neural Networks
AIChE Annual Meeting
Sunday, November 13, 2022 - 3:49pm to 4:08pm
The typical approach to developing physics-constrained neural networks (PCNNs) is to include an additional penalty term in the objective function, based on a specific parameterization or loss criterion [3]. Such hybrid first-principles/AI modeling approaches can suffer from excessive computational expense and slow convergence, depending on the complexity of the first-principles model. Moreover, data-driven models of complex nonlinear dynamic systems are trained against the available measurements, which may provide no information about the "true" data. Although many implementations of PCNNs can be found for solving systems of partial differential equations or for theoretical modeling examples in the electrical [4], metallurgical [5], and computational fluid dynamics [6,7] fields, no example can be traced in the existing literature that models a chemical process system by enforcing the laws of conservation of mass and energy. In this work, a novel class of network models is proposed, namely the Mass and Energy Constrained Neural Network (MECNN), which guarantees that the network outputs satisfy the steady-state mass and energy balance (first-principles) equations for the system, even if the training data violate them. Efficient training algorithms are also developed for optimal synthesis of the network and estimation of its parameters. The approach can be further extended to include other thermodynamic constraints specific to the system.
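As a purely illustrative sketch (not the authors' MECNN architecture, whose details are not given in this abstract), one common way to make network outputs satisfy linear steady-state balances of the form A y = b by construction is to append a projection step that applies the minimum-norm correction y = ŷ − Aᵀ(AAᵀ)⁻¹(Aŷ − b) to the raw outputs ŷ; the matrix A, vector b, and the toy flow variables below are assumptions for the example only:

```python
import numpy as np

def project_onto_balances(y_hat, A, b):
    """Least-squares projection of raw outputs y_hat onto {y : A y = b}.

    Applies the minimum-norm correction y = y_hat - A^T (A A^T)^{-1} (A y_hat - b),
    so the returned vector satisfies the linear balances exactly.
    """
    AAt = A @ A.T
    correction = A.T @ np.linalg.solve(AAt, A @ y_hat - b)
    return y_hat - correction

# Toy example: a single overall mass balance F_in = F_out1 + F_out2,
# written as A y = 0 with y = [F_in, F_out1, F_out2] (hypothetical streams).
A = np.array([[1.0, -1.0, -1.0]])
b = np.array([0.0])

y_hat = np.array([10.0, 6.5, 3.0])      # raw "network" output violates the balance by 0.5
y = project_onto_balances(y_hat, A, b)  # corrected outputs close the balance exactly
```

Because the projection is differentiable, such a layer can sit at the end of a network and be trained through with standard backpropagation, which is one route to hard (rather than penalty-based) constraint satisfaction.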
The proposed network structures are applied to model three nonlinear dynamic processes: the nonisothermal Van de Vusse reactor system, a pilot plant for post-combustion CO2 capture using a monoethanolamine solvent [8], and a supercritical boiler system. The outputs from the MECNN are observed to satisfy the first-principles equations even when the measurements used for training violate the mass and energy balances.
1. Su, H.-T., Bhat, N., Minderman, P. A. & McAvoy, T. J. Integrating Neural Networks With First Principles Models for Dynamic Modeling. Dynamics and Control of Chemical Reactors, Distillation Columns and Batch Processes (IFAC, 1993). doi:10.1016/b978-0-08-041711-0.50054-4.
2. Venkatasubramanian, V. The promise of artificial intelligence in chemical engineering: Is it here, finally? AIChE J. 65, 466–478 (2019).
3. Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
4. He, Q., Stinis, P. & Tartakovsky, A. Physics-constrained deep neural network method for estimating parameters in a redox flow battery. (2021) doi:10.1016/j.jpowsour.2022.231147.
5. Ghaderi, A., Morovati, V. & Dargazany, R. A physics-informed assembly of feed-forward neural network engines to predict inelasticity in cross-linked polymers. Polymers (Basel) 12, 1–20 (2020).
6. Zheng, H., Huang, Z. & Lin, G. PCNN: A physics-constrained neural network for multiphase flows. 1–21 (2021).
7. Kumar, A., Ridha, S., Narahari, M. & Ilyas, S. U. Physics-guided deep neural network to characterize non-Newtonian fluid flow for optimal use of energy resources. Expert Syst. Appl. 183, 115409 (2021).
8. Chinen, A. S., Morgan, J. C., Omell, B., Bhattacharyya, D. & Miller, D. C. Dynamic Data Reconciliation and Validation of a Dynamic Model for Solvent-Based CO2 Capture Using Pilot-Plant Data. Ind. Eng. Chem. Res. 58, 1978–1993 (2019).