(710f) Control Loop Performance Assessment Using Detrended Fluctuation Analysis

Spinner, T., Texas Tech University
Rengasamy, R., Texas Tech University

Across the process industries there exist millions of automatic control loops, numbering several hundred for each process control engineer. Up to 60% of these controllers exhibit less than acceptable levels of performance, with a corresponding deleterious effect on profits that could range into the hundreds of millions of dollars annually. The first step toward improving control loop performance is to determine which loops are performing poorly. Process control engineers must divide their time between implementing new control assets and maintaining existing ones, so any detection method for poor performance should be fully automated. Integral of squared error and output variance are commonly monitored measures of loop performance, but these statistics are essentially meaningless unless compared against a benchmark. DeVries and Wu and Harris have proposed benchmarking the mean squared error of the output against the theoretical minimum variance. Significantly, the method of Harris uses knowledge of the process time delay, along with routine operating data, to find the controller-invariant part of the process disturbance and thereby calculate the minimum variance. This benchmark is based on the theoretical framework of minimum variance control developed by Åström. However, time delay determination (for computation of minimum variance) will in most cases require the control engineer to schedule an identification test with operating personnel, because routine plant operating data will not in general contain sufficient information for time delay determination unless there have been abrupt control signal changes or external excitations during data collection.
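To make the benchmark concrete, the Harris-style minimum variance index can be sketched as follows. This is an illustrative reconstruction of the standard time-series approach (fit an autoregressive model to routine output data, expand the first `delay` impulse-response terms, and ratio the implied minimum variance against the actual output variance), not the implementation used in any particular study; the function name and AR order are our own choices.

```python
import numpy as np

def harris_mvi(y, delay, ar_order=20):
    """Estimate a Harris-type minimum variance index (MVI) from routine
    closed-loop output data y, given the process time delay in samples.
    MVI near 1 means the loop is close to minimum variance control;
    values near 0 indicate large potential for improvement."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    N, p = len(y), ar_order

    # Fit an AR(p) model by least squares:
    # y_t = a1*y_{t-1} + ... + ap*y_{t-p} + e_t
    X = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ a
    sigma_a2 = resid.var()  # driving white-noise variance estimate

    # First `delay` impulse-response (MA) coefficients of 1/(1 - a(B));
    # these terms are invariant to feedback control.
    psi = np.zeros(delay)
    psi[0] = 1.0
    for i in range(1, delay):
        psi[i] = sum(a[j - 1] * psi[i - j]
                     for j in range(1, min(i, p) + 1))

    sigma_mv2 = sigma_a2 * np.sum(psi ** 2)  # minimum achievable variance
    return sigma_mv2 / y.var()
```

For example, a pure white-noise output with unit delay is already at minimum variance (index near 1), whereas a sluggish, strongly autocorrelated output yields an index well below 1.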

In this work, a new index is developed for assessing the performance of a single-input single-output (SISO) linear feedback control loop. The proposed metric is a specific scaling of the generalized Hurst exponent, computed through the method of detrended fluctuation analysis (DFA). We refer to this scaled exponent as the Hurst index. The new method compares favorably with the widely used minimum variance index (MVI), with both indices showing similar trends under changes in controller tunings during closed-loop simulations. The main advantage of the Hurst index over MVI and other existing performance measures is that its determination does not require a priori knowledge of any loop parameters. Instead, computation of the index relies solely upon process output data collected during routine plant operation. Therefore, this new technique could potentially allow engineers to more efficiently identify problematic control loops.
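DFA itself is a standard estimator of the Hurst exponent. A minimal sketch of first-order DFA is given below; it illustrates the underlying computation only, and does not include the specific scaling of the exponent that defines the authors' proposed Hurst index. Roughly, white noise yields an exponent near 0.5, while a random walk (integrated noise) yields an exponent near 1.5.

```python
import numpy as np

def dfa_hurst(y, scales=None, order=1):
    """Estimate the Hurst exponent of series y via detrended
    fluctuation analysis (DFA) with polynomial detrending."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # Build the "profile": cumulative sum of the mean-removed series.
    profile = np.cumsum(y - y.mean())
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(8), np.log10(N // 4), 20).astype(int))

    F = []
    t_full = np.arange(N)
    for s in scales:
        n_seg = N // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            # Remove the local polynomial trend within each window.
            coeffs = np.polyfit(t, seg, order)
            detrended = seg - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        F.append(np.mean(rms))

    # Hurst exponent = slope of log F(s) versus log s.
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h
```

Because the estimate uses only the output record, no plant model, time delay, or excitation test is needed, which is the practical motivation stated above.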