(315d) Comparison of Global, Stochastic Optimization Algorithms Using Toy Problems and Multi-Parameter Models to Kinetic Fermentation and Rheological Data | AIChE

Authors 

Armstrong, M. - Presenter, United States Military Academy
James, C., United States Military Academy
Miller, A., University of Nebraska
Estimating the multiple parameters of nonlinear dynamic simulation models by minimizing the sum of the squared differences between model predictions and experimental data is often a non-trivial step. Classical least-squares minimization methods are all local methods: they converge to a local minimum that is not necessarily the global one, with the answer depending crucially on the initial guess. Evaluating the sensitivity of the local minimization solution to the initial guess can be quite time-consuming, especially when the model predictions require integrating a set of ODEs, the time-integration of which itself carries a non-negligible computational cost. In this work we present a stochastic method to evaluate that dependence effectively, with application to parameter estimation in the dynamic simulation of a thixotropic concentrated suspension system using Large Amplitude Oscillatory Shear (LAOS) and other transient data, as well as kinetic data from a set of fermentation reactions. In addition, we compare several other contemporary stochastic methods using classic chemical engineering “toy problems” from the literature [1,2].

A classical approach to fitting parameters has been minimization of the sum of squared residuals, also known as the weighted least-squares technique, using a local gradient method. Several variations of this basic strategy are currently in use; examples include Newton’s method, trust-region methods, line-search methods, and Levenberg-Marquardt methods. Alternatively, stochastic methods have been developed to explore the available parameter space more thoroughly in search of a global minimum and/or to alleviate the dependence of the solution on the initial guess. These methods include simultaneous perturbation stochastic approximation (SPSA), simulated annealing, particle swarm optimization (PSO), and genetic algorithms (GA) [1,3].
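The initial-guess sensitivity of such local methods can be seen on a one-parameter toy problem. The following sketch (a hypothetical model and illustrative step sizes, not taken from the cited references) fits the frequency of sin(b·x) by finite-difference gradient descent; the least-squares surface in b is multimodal, so different starting points reach different minima:

```python
import math

# Toy data: y = sin(b*x) with true parameter b = 3.0. The least-squares
# surface in b is multimodal, so a local method depends on the initial guess.
xs = [0.1 * i for i in range(60)]
ys = [math.sin(3.0 * x) for x in xs]

def sse(b):
    """Sum of squared differences between model predictions and data."""
    return sum((math.sin(b * x) - y) ** 2 for x, y in zip(xs, ys))

def gradient_descent(b0, lr=5e-4, steps=4000, h=1e-6):
    """Plain local gradient method with a central finite-difference derivative."""
    b = b0
    for _ in range(steps):
        grad = (sse(b + h) - sse(b - h)) / (2 * h)
        b -= lr * grad
    return b

b_good = gradient_descent(2.8)  # guess near the truth -> converges to b = 3
b_bad = gradient_descent(0.5)   # poor guess -> trapped in a local minimum
print(b_good, sse(b_good))
print(b_bad, sse(b_bad))
```

Restarting from many random initial guesses would map this sensitivity, but each restart repeats the full model evaluation, which is exactly the cost the abstract's stochastic approach targets.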

The approach developed in the present work incorporates a hybrid parallel simulated annealing, Metropolis-like algorithm within a least-squares fitting framework recently developed by Armstrong et al. [1]. Parallel simulated annealing is used to generate a good initial guess in parameter space, using as the objective function the squared differences between model predictions and oscillatory-in-time experimental data obtained over a period. The selection of the initial guess is followed by a modified local least-squares minimization to locate the “global” minimum accurately. An important advantage of the proposed method is that all parameters needed to execute it are evaluated numerically from a few simulated annealing runs. A further advantage is the potential speedup from executing its most time-consuming step, the series of simulated annealing simulations, in parallel where possible.
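A minimal Python sketch of this two-stage idea (not the authors’ implementation; the one-parameter toy objective, cooling schedule, and step sizes below are illustrative assumptions):

```python
import math
import random

random.seed(1)

def objective(b):
    """Least-squares pseudo-energy on a multimodal one-parameter toy surface."""
    return sum((math.sin(b * 0.1 * i) - math.sin(3.0 * 0.1 * i)) ** 2
               for i in range(60))

def simulated_annealing(b0, t0=10.0, cooling=0.999, steps=4000, width=0.5):
    """Stage 1: Metropolis walk with geometric cooling to find a good region."""
    b, e, t = b0, objective(b0), t0
    best_b, best_e = b, e
    for _ in range(steps):
        cand = b + random.gauss(0.0, width)
        e_cand = objective(cand)
        # Downhill moves are always accepted; uphill moves are accepted
        # with Boltzmann probability exp(-dE/T).
        if e_cand < e or random.random() < math.exp(-(e_cand - e) / t):
            b, e = cand, e_cand
            if e < best_e:
                best_b, best_e = b, e
        t *= cooling
    return best_b

def local_refine(b0, lr=5e-4, steps=2000, h=1e-6):
    """Stage 2: local least-squares polish started from the annealed guess."""
    b = b0
    for _ in range(steps):
        b -= lr * (objective(b + h) - objective(b - h)) / (2 * h)
    return b

b_sa = simulated_annealing(0.5)  # global-ish search for an initial guess
b_fit = local_refine(b_sa)       # accurate local minimization from that guess
print(b_fit, objective(b_fit))
```

The annealing stage needs only crude convergence, since its job is to land in the right basin; the local stage then supplies the accuracy that a stochastic search alone lacks.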

The parallel simulated annealing algorithm proposed in the present work attempts to overcome the shortcomings of previous stochastic algorithms by exploring the parameter space more systematically, without the need for complicated adjustable parameters. It employs features of the more traditional simulated annealing ‘Monte Carlo’ energy-minimization approach, with the pseudo-energy to be minimized derived from the least-squares error function constructed between the model predictions and the available data. In contrast to the standard simulated annealing process, the procedure proposed here involves a series of stochastic simulated annealing runs, each built around its own Boltzmann energy level. However, the parallel simulated annealing sequences are not completely independent: from time to time, parameter information may be exchanged between ‘nearest-neighbor’ Boltzmann energy levels. When this exchange is conducted appropriately, it allows a maximally efficient search for the optimum parameter values over a wide parameter space, without requiring finely tuned parameters or a sensitive dependence on the initial guess [1, 4, 5, 6, 7].
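The replica-exchange idea described above can be sketched schematically in Python (the toy pseudo-energy, temperature ladder, and exchange frequency are illustrative assumptions, not the authors’ settings):

```python
import math
import random

random.seed(2)

def energy(b):
    """Pseudo-energy: least-squares error on a multimodal toy surface."""
    return sum((math.sin(b * 0.1 * i) - math.sin(3.0 * 0.1 * i)) ** 2
               for i in range(60))

# Ladder of Boltzmann temperature levels, one Metropolis walker per level.
temps = [0.2, 1.0, 5.0, 25.0]
replicas = [0.5] * len(temps)      # every chain starts from the same guess
energies = [energy(b) for b in replicas]
best_b, best_e = replicas[0], energies[0]

for sweep in range(5000):
    # Independent Metropolis move within each temperature level.
    for k, t in enumerate(temps):
        cand = replicas[k] + random.gauss(0.0, 0.3)
        e_cand = energy(cand)
        if e_cand < energies[k] or random.random() < math.exp(-(e_cand - energies[k]) / t):
            replicas[k], energies[k] = cand, e_cand
            if e_cand < best_e:
                best_b, best_e = cand, e_cand
    # Parameter-information exchange between a random nearest-neighbor pair,
    # accepted with the standard replica-exchange probability.
    k = random.randrange(len(temps) - 1)
    d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (energies[k] - energies[k + 1])
    if d >= 0 or random.random() < math.exp(d):
        replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
        energies[k], energies[k + 1] = energies[k + 1], energies[k]

print(best_b, best_e)
```

Swapping only between adjacent temperature levels keeps the exchange acceptance rate reasonable: hot chains cross energy barriers freely, while the cold chain refines whatever low-energy configurations are handed down to it. The per-level Metropolis moves are independent, which is what makes the time-consuming step parallelizable.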

[1] Armstrong et al., “An Adaptive Parallel Tempering Method for the Dynamic Data-Driven Parameter Estimation of Nonlinear Models.” AIChE J. (2016).

[2] Battles and Trefethen, “An Extension of MATLAB to Continuous Functions and Operators.” SIAM J. Sci. Comput. (2005).

[3] Armstrong, PhD Thesis, University of Delaware (2015).

[4] Spall, “Adaptive Stochastic Approximation by the Simultaneous Perturbation (SPSA) Method.” IEEE Transactions on Automatic Control (2000).

[5] Spall, J.C., Introduction to Stochastic Search and Optimization. Wiley-Interscience, Hoboken, NJ (2003).

[6] Earl, D.J., and M.W. Deem, “Parallel Tempering: Theory, Applications and New Perspectives.” Phys. Chem. Chem. Phys. 7, 3910-3916 (2005).

[7] Amar, J.G., “The Monte Carlo Method in Science and Engineering.” Computing in Science & Engineering (IEEE Computer Society and American Institute of Physics), 9-19 (Mar/Apr 2006).