An Explainable Artificial Intelligence Based Approach for Interpretation of Fault Detection Results from Deep Neural Networks

Source: AIChE
  • Pricing

    Individuals

    AIChE Member Credits 0.5
    AIChE Members $19.00
    AIChE Graduate Student Members Free
    AIChE Undergraduate Student Members Free
    Non-Members $29.00
  • Type:
    Conference Presentation
  • Conference Type:
    AIChE Annual Meeting
  • Presentation Date:
    November 19, 2020
  • Duration:
    14 minutes
  • Skill Level:
    Intermediate
  • PDHs:
    0.20

Data-driven Fault Detection and Diagnosis methods can play an important role in Abnormal Event Management by detecting and diagnosing faults in a timely manner, leveraging the enormous amount of data recorded in the chemical process industries. Recently, deep learning techniques have become a promising area of research in this field. Models built using deep learning methods provide better predictive performance on benchmark datasets such as the Tennessee Eastman Process than other statistical methods [1]. However, these techniques lack interpretability: we may be unable to determine how the model has reached a certain conclusion. As a result, it is difficult to diagnose how a fault occurred, and we cannot provide suitable recommendations to prevent its recurrence in the future.

eXplainable Artificial Intelligence (XAI) is a new field in machine learning that has been gaining popularity because of the need for solutions provided by AI technologies to be understood by human experts. Research progress in XAI has focused on two areas: building interpretable and transparent AI systems, and post-hoc methods. Post-hoc methods generate explanations from a previously developed model, treating the model as a black box, and therefore do not affect the model's performance [3]. One way to obtain post-modeling explanations is the Shapley Value framework, which is based on the Shapley concept from cooperative Game Theory. In Game Theory, the Shapley Value provides a solution for fairly distributing the gains and costs among all actors working in a coalition to achieve some outcome. It is essentially the average expected marginal contribution of one actor towards the outcome after all possible combinations are considered. In machine learning, the Shapley Value is becoming a popular method to attribute a model's prediction (the outcome) on an input to its base features (the actors). The Integrated Gradients technique is one of several ways to operationalize the Shapley Value for the feature attribution problem [4]. Integrated Gradients aggregates gradients along the straight-line path from a chosen baseline to the input. It is based on the Aumann-Shapley cost-sharing technique, which extends the discrete Shapley Value to the continuous setting and quantifies the importance of each input feature in a prediction [5].
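The path integral underlying Integrated Gradients can be approximated with a simple Riemann sum. The following sketch is a toy illustration, not the implementation used in this work: it applies the midpoint rule to a small function whose gradient is known analytically, whereas in practice the gradients would come from the trained neural network.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate IG attributions: (x - baseline) * average gradient
    along the straight-line path from baseline to x (midpoint rule)."""
    alphas = (np.arange(steps) + 0.5) / steps
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        total_grad += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * (total_grad / steps)

# toy model f(x) = x0*x1 + x2**2, with its analytic gradient
f = lambda x: x[0] * x[1] + x[2] ** 2
grad_f = lambda x: np.array([x[1], x[0], 2.0 * x[2]])

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)
# completeness axiom: attributions sum to f(x) - f(baseline)
print(attr, attr.sum(), f(x) - f(baseline))
```

The printed sum of attributions matches f(x) − f(baseline), which is the completeness axiom that makes Integrated Gradients an Aumann-Shapley-style attribution method.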

In this work, we present a novel approach to building an interpretable AI system for detecting and diagnosing faults. In this system, a deep neural network is trained to solve a fault detection problem. The Integrated Gradients technique is then applied to the trained deep neural network to understand the relative contributions of the input features in detecting faults, which helps identify the factors that contributed to the occurrence of a fault. We also perform Key Variable Selection, choosing an optimal subset of measured variables by removing those input variables that have negligible contributions to the model predictions. The approach is implemented using AI Platform Predictions on Google Cloud.
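One simple way to realize the key variable selection step described above is to rank features by their mean absolute attribution across samples and drop those below a threshold. The criterion and threshold here are hypothetical illustrations; the paper's exact selection rule may differ.

```python
import numpy as np

def select_key_variables(attributions, threshold=0.05):
    """Keep features whose normalized share of mean |attribution|
    meets the threshold.

    attributions: (n_samples, n_features) array of per-sample IG values.
    Returns (indices of retained features, normalized importances)."""
    importance = np.abs(attributions).mean(axis=0)
    importance = importance / importance.sum()
    keep = np.flatnonzero(importance >= threshold)
    return keep, importance

# illustrative attribution matrix: the third feature contributes ~nothing
rng = np.random.default_rng(0)
attr = np.column_stack([
    rng.normal(1.0, 0.1, 500),    # strong contributor
    rng.normal(-0.5, 0.1, 500),   # moderate contributor
    rng.normal(0.0, 0.01, 500),   # negligible -> should be dropped
])
keep, importance = select_key_variables(attr, threshold=0.05)
print(keep)  # the negligible third feature is excluded
```

Averaging absolute values rather than raw attributions prevents positive and negative contributions from cancelling across samples.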

The performance of the approach is demonstrated on three synthetic datasets, each containing 10,000 samples of four input features and one output variable. The output variable is related to various combinations of the input variables through both linear and nonlinear functions. A neural network was trained for fault detection on the data in each case study and could detect the faults with high accuracy. Next, using the proposed approach, we determined the input variable(s) that influenced the output for each test sample. In all three case studies, the XAI method correctly identified the influential input variable(s). In this paper, we will present the proposed methodology as well as the results from the three synthetic datasets. Results for fault detection on the benchmark Tennessee Eastman process will also be presented.
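A dataset of the shape described above can be sketched as follows. The specific linear and nonlinear functions, and the fault-labeling rule, are illustrative assumptions, since the abstract does not state them.

```python
import numpy as np

# illustrative construction: 10,000 samples, four input features, one output
rng = np.random.default_rng(42)
n_samples, n_features = 10_000, 4
X = rng.normal(size=(n_samples, n_features))

# output depends linearly on x0 and nonlinearly on x2; x1 and x3 are
# irrelevant, so a faithful attribution method should assign them ~0
y = 2.0 * X[:, 0] + np.sin(X[:, 2])

# label a sample as faulty when the output leaves a normal operating band
fault = (np.abs(y) > 2.0).astype(int)
```

Because only two of the four features drive the output, such a construction gives a known ground truth against which the attributions of the XAI method can be checked.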

REFERENCES

[1] Feifan Cheng, Q. Peter He, Jinsong Zhao. A novel process monitoring approach based on variational recurrent autoencoder. https://doi.org/10.1016/j.compchemeng.2019.106515

[2] Kaushik Ghosh, Manojkumar Ramteke, Rajagopalan Srinivasan. Optimal variable selection for effective statistical process monitoring. https://doi.org/10.1016/j.compchemeng.2013.09.014

[3] Filip Karlo Dosilovic, Mario Brcic, Nikica Hlupic. Explainable artificial intelligence: A survey. https://ieeexplore.ieee.org/abstract/document/8400040

[4] Mukund Sundararajan, Amir Najmi. The many Shapley values for model explanation. https://arxiv.org/abs/1908.08474

[5] Mukund Sundararajan, Ankur Taly, Qiqi Yan. Axiomatic Attribution for Deep Networks. https://arxiv.org/abs/1703.01365

