(587f) Effective Importance Sampling in Sequential Monte Carlo Moving Horizon Estimation for Nonlinear Dynamic Systems | AIChE

Ungarala, S. - Presenter, Cleveland State University

State estimation from noisy measurements is a fundamental problem common to many fields of applied science, such as engineering, meteorology and remote sensing, as well as widely different fields such as agricultural sciences and economics. Adequate knowledge of the state of a dynamic system is crucial for forecasting, control and optimization of the system dynamics in all these fields. For general nonlinear dynamic systems characterized by non-Gaussian measurement and model uncertainties, analytical solutions to the state estimation problem do not exist. In recent decades there has been a great deal of interest in solving the nonlinear state estimation problem, and the focus has turned to sampling-based methods that can approximately implement the recursive Bayesian solution with significant computational efficiency; these methods are broadly known as the sequential Monte Carlo (SMC) numerical approach. Instead of a closed-form representation of the state conditional probability density function (pdf), the SMC approach maintains a dynamic set of samples drawn according to the conditional pdf, from which point estimates of the state trajectory can be easily computed.
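To illustrate the idea of carrying the conditional pdf as a sample set rather than a closed-form density, the following is a minimal sketch of a bootstrap particle filter on a toy scalar system. It is not the method of the paper; the function name `bootstrap_pf` and the toy dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, f, h, q_std, r_std, n=500, x0_std=1.0):
    """Minimal bootstrap particle filter (illustrative sketch): the
    conditional pdf is represented by a set of weighted samples, and
    point estimates are computed directly from the sample set."""
    x = rng.normal(0.0, x0_std, n)  # initial sample set
    means = []
    for yk in y:
        x = f(x) + rng.normal(0.0, q_std, n)           # propagate through dynamics
        w = np.exp(-0.5 * ((yk - h(x)) / r_std) ** 2)  # Gaussian likelihood weights
        w /= w.sum()
        means.append(np.dot(w, x))                     # point estimate from samples
        x = rng.choice(x, size=n, p=w)                 # multinomial resampling
    return np.array(means)

# toy scalar system (assumed for illustration):
# x_{k+1} = 0.9 x_k + w_k,  y_k = x_k + v_k
f = lambda x: 0.9 * x
h = lambda x: x
x_true = [1.0]
for _ in range(29):
    x_true.append(0.9 * x_true[-1] + rng.normal(0, 0.1))
y = np.array(x_true) + rng.normal(0, 0.2, 30)
est = bootstrap_pf(y, f, h, 0.1, 0.2)
```

The resampling step concentrates the sample set in high-probability regions of the state space, which is the property the moving horizon scheme below exploits.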

The SMC approach has recently been used to implement a sequential Monte Carlo moving horizon estimation (SMCMHE) [1]. The estimation of the most probable state trajectory conditioned on a moving window of measurements is achieved by maximizing the joint conditional pdf of the states in the trajectory. The resulting optimization problem is posed over an evolving discrete grid in state space, spanning the sampling instants inside a chosen horizon. Samples drawn from the joint conditional pdf naturally form a non-uniform, randomized dynamic grid, which is a suitable candidate for solving the optimization problem by dynamic programming. Such a dynamic grid of samples, evolving according to the state dynamics, effectively eliminates "the menace of expanding grids", a commonly encountered limitation in applications of dynamic programming. The samples are automatically clustered in high-probability regions of the state space. Furthermore, a fixed sample size is independent of the number of state variables, which protects the approach from "the curse of dimensionality". There are many statistical tests available to assess an effective sample size. The Viterbi algorithm has been effectively used for forward and backward sweep searches of the objective function values on the grid. The main computational cost lies in maintaining a grid of objective function values, appending new values at each sample time and dropping the earliest values as the horizon moves forward.
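The forward and backward Viterbi sweeps over the sample grid can be sketched as follows. This is a generic Viterbi recursion under assumed log-transition and log-likelihood inputs, not the paper's implementation; the name `viterbi_map_trajectory` and the Gaussian example are illustrative.

```python
import numpy as np

def viterbi_map_trajectory(grid, log_trans, log_lik):
    """Forward Viterbi sweep over a randomized sample grid.
    grid[k]: array of samples at time k (the non-uniform grid);
    log_trans(xa, xb): log p(xb | xa), broadcasting over sample arrays;
    log_lik[k][i]: log p(y_k | grid[k][i]).
    Returns the index path of the most probable trajectory."""
    T = len(grid)
    score = np.array(log_lik[0], dtype=float)  # stage-0 objective values
    back = []
    for k in range(1, T):
        # trans[i, j] = log p(grid[k][j] | grid[k-1][i])
        trans = log_trans(grid[k - 1][:, None], grid[k][None, :])
        cand = score[:, None] + trans          # candidate predecessor scores
        back.append(cand.argmax(axis=0))       # best predecessor per grid point
        score = cand.max(axis=0) + log_lik[k]
    # backward sweep: trace the argmax path through the stored pointers
    path = [int(score.argmax())]
    for bk in reversed(back):
        path.append(int(bk[path[-1]]))
    return path[::-1]

# illustrative run: samples clustered around an assumed true trajectory
rng = np.random.default_rng(1)
grid = [rng.normal(0.9 ** k, 0.3, 50) for k in range(5)]
y = [0.9 ** k for k in range(5)]
log_lik = [-0.5 * ((g - yk) / 0.2) ** 2 for g, yk in zip(grid, y)]
log_trans = lambda xa, xb: -0.5 * ((xb - 0.9 * xa) / 0.1) ** 2
path = viterbi_map_trajectory(grid, log_trans, log_lik)
```

As the horizon moves, only the newest column of objective values must be appended and the oldest dropped, which keeps the per-step cost bounded.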

In this paper the focus is on effective and efficient sampling of the conditional pdf to provide a randomized non-uniform grid on which a maximum a posteriori (MAP) state estimation problem is approximately solved. Since the actual conditional pdf is not available for sampling, one must resort to other approaches to draw samples, such as importance sampling. In the literature, many approaches have been proposed to identify suitable importance density functions, many of which are tailored to specific classes of problems. Examples include the use of approximate nonlinear filters such as the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). These are generally known as local linearization particle filters because each sample is drawn from a different importance density function, typically a bank of Gaussians propagated by linearized dynamics. They entail a considerable computational cost due to the burden of implementing a large bank of nonlinear filters in the background. In this paper a single non-Gaussian importance density is proposed, which is known analytically up to a constant. Two key features of this importance density can be readily established: its mode and its support. Establishing them involves solving an optimization problem and a nonlinear equation. Such knowledge of the importance density makes it amenable to effective sampling by established methods such as the Metropolis-Hastings algorithm, by centering the sampling process around the mode. This approach is readily extended to state estimation problems involving constraints on state variables. Both the mode and support of the importance density can be constrained, hence the randomized grid is automatically limited to the constraint hyper-surface in state space.
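A minimal sketch of the sampling step is given below: random-walk Metropolis-Hastings started at the mode of a density known only up to a normalizing constant. The target `exp(-(x^2 - 1)^2)` is an assumed stand-in for the proposed importance density, and the function name `mh_sample` is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mh_sample(log_target, mode, n, step=0.5):
    """Random-walk Metropolis-Hastings, initialized at the mode of the
    (unnormalized) importance density so the chain starts in a
    high-probability region."""
    x = float(mode)
    lp = log_target(x)
    out = np.empty(n)
    for i in range(n):
        xp = x + rng.normal(0.0, step)        # symmetric proposal around current state
        lpp = log_target(xp)
        if np.log(rng.random()) < lpp - lp:   # accept with probability min(1, ratio)
            x, lp = xp, lpp
        out[i] = x
    return out

# assumed unnormalized non-Gaussian target with modes near x = ±1
log_target = lambda x: -(x * x - 1.0) ** 2
samples = mh_sample(log_target, mode=1.0, n=2000)
```

Because only the ratio of target values enters the acceptance test, the unknown normalizing constant cancels; state constraints could be enforced in the same loop by rejecting proposals outside the feasible region.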

The presentation will include numerical examples of benchmark nonlinear problems from the literature to demonstrate the efficacy of the proposed approach. Results include the mean squared error of estimation and computational times averaged over multiple realizations. Comparisons will be provided with a traditional moving horizon estimation implementation in which the arrival cost is computed by the EKF.


[1]  Ungarala, S., Sequential Monte Carlo moving horizon estimation, AIChE Annual Meeting, Pittsburgh, PA, 2012.