(222g) Enhancing Operator's Trust in AI-Based Process Monitoring Technologies: Providing Explanations for Multi-Mode Processes
AIChE Annual Meeting
Monday, November 14, 2022 - 5:03pm to 5:20pm
Explainable Artificial Intelligence (XAI) is an emerging field of research that aims to explain the predictions of deep neural networks (DNNs). It seeks to make predictions interpretable to humans by assigning a relevance or contribution score to each input variable for a given sample. In XAI research, two separate approaches have been pursued, i.e., inherently interpretable AI methods (such as decision trees) and posthoc methods. Posthoc approaches are used to explain pre-developed models; they regard the model as a black box and have no effect on the model's performance. Recently, XAI-based strategies have successfully explained the results generated by DNNs for fault detection and diagnosis (FDD) using different methods, including Integrated Gradients and Layerwise Relevance Propagation (LRP).
Previously, we presented a method for interpreting DNNs used for fault diagnosis based on the Integrated Gradients (IG) method. It is motivated by the concept of the Shapley value derived from cooperative game theory. IG is a gradient-based attribution technique that explains a DNN's prediction by attributing it to the network's inputs. The resulting attributions sum to the difference between the target output and the output evaluated at the baseline. These attributions are used during real-time process monitoring to identify the key variables responsible for the fault. Explanations are calculated using a windowed attribution scheme to make them robust to process noise. An inseparability metric is also used to highlight the most pertinent explanatory variables.
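As a minimal illustration of the IG mechanism described above, the sketch below approximates the path integral with a Riemann sum for a toy differentiable function (standing in for a DNN) and checks the completeness property, i.e., that the attributions sum to the difference between the output at the input and at the baseline. The function and gradient here are illustrative placeholders, not the authors' model.

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=200):
    """Approximate IG attributions with a midpoint Riemann sum along
    the straight-line path from the baseline to the input x."""
    alphas = (np.arange(steps) + 0.5) / steps
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        total_grad += grad_f(baseline + a * (x - baseline))
    avg_grad = total_grad / steps
    # each input's attribution: displacement times average gradient
    return (x - baseline) * avg_grad

# toy differentiable "model": f(x) = x0^2 + 3*x1 (hypothetical stand-in)
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: np.array([2 * x[0], 3.0])

x = np.array([2.0, 1.0])
baseline = np.zeros(2)          # all-zero baseline, as in single-mode IG
attr = integrated_gradients(f, grad_f, x, baseline)
# completeness: attributions sum to f(x) - f(baseline)
assert np.isclose(attr.sum(), f(x) - f(baseline))
```

For a real DNN, `grad_f` would come from the framework's automatic differentiation, and the attribution vector would be averaged over a window of samples as in the windowed scheme mentioned above.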
The above IG-based method and other similar explainability techniques reported in the literature are limited to the situation where the process has a single nominal operating region. This assumption often does not hold for real industrial chemical processes, where several operating modes may prevail even during nominal operation. In such cases, the previously developed XAI methods fail to provide correct explanations even when the process is operating under normal conditions, since any deviation from the baseline is considered to be a change. We seek to overcome this limitation in the current work.
In this work, we report a novel strategy for generating explanations of a DNN developed for multi-mode operation. The proposed methodology comprises two distinct components: (a) a bank of XAI models, and (b) a real-time model selector. The bank of IG-based XAI models, one for each operating mode, is developed offline. Each of these XAI models relies on data and DNN results from one operating mode and uses a mode-specific baseline; hence it can provide accurate explanations, i.e., attributions to the input process variables, when the plant operates in the corresponding mode. The model-selector component utilizes real-time process data to identify the plant's operating mode dynamically and thus determines the XAI model that should be used to obtain explanations. The efficacy of the proposed methodology is demonstrated using the Tennessee-Eastman challenge process. In this paper, we will describe the overall architecture of the proposed technique. A comparison of the results from the multi-mode IG with those from a traditional IG will also be reported.
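The two-component architecture above can be sketched as follows. This is a simplified illustration under stated assumptions: the mode selector here is a hypothetical nearest-centroid rule over nominal mode baselines (the paper's actual selector may differ), and the mode names and centroid values are invented for the example.

```python
import numpy as np

# hypothetical per-mode baselines, learned offline from nominal data
# of each operating mode (one entry per XAI model in the bank)
mode_baselines = {
    "mode_A": np.array([0.0, 0.0]),
    "mode_B": np.array([5.0, 5.0]),
}

def select_mode(sample, baselines):
    """Toy model selector: pick the operating mode whose nominal
    baseline is closest to the current sample (Euclidean distance)."""
    return min(baselines, key=lambda m: np.linalg.norm(sample - baselines[m]))

# real-time step: identify the mode, then explain with that mode's baseline
sample = np.array([4.8, 5.3])
mode = select_mode(sample, mode_baselines)
baseline = mode_baselines[mode]   # fed to the mode-specific IG explainer
```

The key design point is that the baseline passed to IG is mode-dependent, so nominal operation in any mode yields near-zero attributions instead of being flagged as a deviation from a single global baseline.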
Z. Jiao, P. Hu, H. Xu, and Q. Wang, "Machine Learning and Deep Learning in Chemical Health and Safety: A Systematic Review of Techniques and Applications," ACS Chem. Health Saf., vol. 27, no. 6, pp. 316-334, 2020, doi: 10.1021/acs.chas.0c00075.
F. K. Dosilovic, M. Brcic, and N. Hlupic, "Explainable artificial intelligence: A survey," in 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2018, pp. 210-215, doi: 10.23919/MIPRO.2018.8400040.
A. Bhakte, V. Pakkiriswamy, and R. Srinivasan, "An Explainable Artificial Intelligence Based Approach for Interpretation of Fault Classification Results from Deep Neural Networks," Chem. Eng. Sci., p. 117373, 2021, doi: 10.1016/j.ces.2021.117373.
P. Agarwal, M. Tamer, and H. Budman, "Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes," Comput. Chem. Eng., vol. 154, p. 107467, 2021, doi: 10.1016/j.compchemeng.2021.107467.
L. S. Shapley, "17. A Value for n-Person Games," in Contributions to the Theory of Games (AM-28), Volume II, Princeton University Press, 2016, pp. 307-318, doi: 10.1515/9781400881970-018.
M. Sundararajan, A. Taly, and Q. Yan, "Axiomatic attribution for deep networks," in 34th International Conference on Machine Learning, ICML 2017, 2017, vol. 7, pp. 5109-5118. [Online]. Available: arXiv:1703.01365
S. J. Zhao, J. Zhang, and Y. M. Xu, "Monitoring of Processes with Multiple Operating Modes through Multiple Principal Component Analysis Models," Ind. Eng. Chem. Res., vol. 43, no. 22, pp. 7025-7035, Oct. 2004, doi: 10.1021/ie0497893.
R. Srinivasan, P. Viswanathan, H. Vedam, and A. Nochur, "A framework for managing transitions in chemical plants," Comput. Chem. Eng., vol. 29, no. 2, pp. 305-322, 2005, doi: 10.1016/j.compchemeng.2004.09.024.
J. J. Downs and E. F. Vogel, "A plant-wide industrial process control problem," Comput. Chem. Eng., vol. 17, no. 3, pp. 245-255, Mar. 1993, doi: 10.1016/0098-1354(93)80018-I.