(504b) Utility Functions for Bayesian Design of Tests for Fault Detection and Isolation | AIChE


Authors 

Stefanidis, E. K. - Presenter, University of Connecticut
Bollas, G., University of Connecticut

Modern cyber-physical systems (CPS) are often characterized by complexity and corresponding parametric uncertainty. The latter necessitates advanced fault detection and isolation (FDI) algorithms that can handle the manifestation of uncertainty as false alarms or missed detections.1 When feasible, experimental design for active FDI can be an important asset. Prior approaches2–4 cast active FDI as a design-of-experiments exercise that generates tests maximizing the information extracted from the available system sensors. However, model-based active FDI approaches remain unpopular in the literature because of their vulnerability to parametric uncertainty and model error, which result in non-robust designs. Bayesian design of experiments (BDoE) for the design of FDI tests can maximize test information over the entire model/parameter space and provide a robust solution.5

Bayesian approaches rest on Bayes' theorem, whereby the posterior belief about the parameters is obtained by updating the prior information on the parameters with the likelihood of the observed data. In BDoE, an optimal design is found by maximizing the expected utility over the observed data. BDoE can employ a variety of metrics as the utility function, such as the Fisher Information Matrix (FIM) or the Shannon entropy of information.5 The choice of utility function affects both the robustness of the solution and the computational cost of the BDoE. For instance, adopting the FIM as a utility function leads to an alphabetic design criterion such as D- or Ds-optimality. However, these criteria require sensitivity analysis to calculate the FIM of the system deterministically and are therefore computationally expensive. On the other hand, when the Kullback-Leibler divergence from prior to posterior is used to make inferences on the parameters, the expected-utility calculation involves a nested Monte Carlo estimation, which affects both computation time and prediction quality.6
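To make the nested Monte Carlo structure concrete, the following is a minimal sketch of the Kullback-Leibler expected utility (expected information gain) in the style of reference 6, applied to a toy linear-Gaussian model of our own choosing (the model, its prior, and the noise level are illustrative assumptions, not the system used in the talk). The outer loop samples parameter/observation pairs from the prior predictive; the inner loop re-samples the prior to estimate the evidence for each simulated observation.

```python
import numpy as np

def expected_information_gain(design, n_outer=2000, n_inner=2000,
                              sigma=0.5, rng=None):
    """Nested Monte Carlo estimate of the expected KL divergence from
    prior to posterior, U(d) = E[ log p(y|theta,d) - log p(y|d) ].

    Toy model (an illustrative assumption):
        theta ~ N(0, 1),   y = design * theta + eps,   eps ~ N(0, sigma^2)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Outer loop: draw (theta_i, y_i) from the joint prior predictive.
    theta = rng.standard_normal(n_outer)
    y = design * theta + sigma * rng.standard_normal(n_outer)
    # Log-likelihood log p(y_i | theta_i, d) at the generating parameters.
    log_lik = (-0.5 * np.log(2 * np.pi * sigma**2)
               - (y - design * theta)**2 / (2 * sigma**2))
    # Inner loop: fresh prior samples estimate the evidence p(y_i | d).
    theta_in = rng.standard_normal(n_inner)
    resid = y[:, None] - design * theta_in[None, :]       # (n_outer, n_inner)
    log_lik_in = (-0.5 * np.log(2 * np.pi * sigma**2)
                  - resid**2 / (2 * sigma**2))
    log_evidence = np.log(np.mean(np.exp(log_lik_in), axis=1))
    return np.mean(log_lik - log_evidence)
```

The double loop is what drives the computational cost noted above: each of the `n_outer` samples requires `n_inner` fresh likelihood evaluations, and the inner average also introduces a bias that decays only as the inner sample size grows.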

Our discussion will begin with a formal comparison of frequentist (classical) experimental design and BDoE for different objective metrics. We will focus on the impact that different utility functions have on the optimal design and on their computation time. To illustrate the importance of BDoE, a variation of the benchmark three-tank system (Figure 1) is used as a case study, for which we will present the deterministic variance of the optimal design under different utility functions.7 We will show how different utility functions can be used for inference on parameters for FDI, depending on data availability and the associated computational cost.
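For readers unfamiliar with the benchmark, a generic three-tank model can be sketched as below. The geometry, flow coefficients, and fault-free parameter values here are illustrative assumptions only; the variation studied in the talk differs.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def three_tank_rhs(h, q1, q2, A=0.0154, Sp=5e-5,
                   c13=0.5, c32=0.5, c20=0.6):
    """Right-hand side of a generic three-tank benchmark (a sketch with
    assumed parameter values). Pumps feed tanks 1 and 2 (inflows q1, q2);
    tanks 1-3 and 3-2 are coupled by pipes; tank 2 drains to the outlet.
    """
    h1, h2, h3 = h
    # Torricelli-type inter-tank flows, signed by the head difference.
    q13 = c13 * Sp * np.sign(h1 - h3) * np.sqrt(2 * G * abs(h1 - h3))
    q32 = c32 * Sp * np.sign(h3 - h2) * np.sqrt(2 * G * abs(h3 - h2))
    q20 = c20 * Sp * np.sqrt(2 * G * max(h2, 0.0))
    return np.array([(q1 - q13) / A,          # dh1/dt
                     (q2 + q32 - q20) / A,    # dh2/dt
                     (q13 - q32) / A])        # dh3/dt
```

In an active FDI setting, faults such as leaks or pipe clogging enter through the flow coefficients, and the test design chooses the inflow profiles (q1, q2) so that the measured levels best discriminate faulty from fault-free parameter values.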


Figure 1: Three-Tank System

Acknowledgment

This work was sponsored by the UTC Institute for Advanced
Systems Engineering (UTC-IASE) of the University of Connecticut and the United
Technologies Corporation. Any opinions expressed herein are those of the
authors and do not represent those of the sponsor.

References

1. Venkatasubramanian, V., Rengaswamy, R., Yin, K. & Kavuri, S. N. A review of process fault detection and diagnosis: Part I: Quantitative model-based methods. Comput. Chem. Eng. 27, 293–311 (2003).

2. Palmer, K. A., Hale, W. T., Such, K. D., Shea, B. R. & Bollas, G. M. Optimal design of tests for heat exchanger fouling identification. Appl. Therm. Eng. 95, 382–393 (2016).

3. Palmer, K. A. & Bollas, G. M. Active fault diagnosis for uncertain systems using optimal test designs and detection through classification. ISA Trans. (2019). doi:10.1016/j.isatra.2019.02.034

4. Palmer, K. A., Hale, W. T. & Bollas, G. M. Active Fault Identification by Optimization of Test Designs. IEEE Trans. Control Syst. Technol. 1–15 (2018). doi:10.1109/TCST.2018.2867996

5. Chaloner, K. & Verdinelli, I. Bayesian experimental design: A review. Stat. Sci. 273–304 (1995).

6. Huan, X. & Marzouk, Y. M. Simulation-based optimal Bayesian experimental design for nonlinear systems. J. Comput. Phys. 232, 288–317 (2013).

7. Mesbah, A., Streif, S., Findeisen, R. & Braatz, R. D. Active fault diagnosis for nonlinear systems with probabilistic uncertainties. IFAC Proceedings Volumes (IFAC-PapersOnline) 19 (IFAC, 2014).