(735f) Global Optimization of Nonsmooth Dynamic Systems
AIChE Annual Meeting
Thursday, October 20, 2011 - 4:55pm to 5:15pm
When dynamic optimization problems exhibit nonsmoothness in their embedded differential equations, conventional optimization methods that rely on differentiability assumptions may fail to apply. Nonsmoothness in the differential equations generally arises from modeling autonomous changes in the physical regime of a system, such as flow transitions, phase changes, irregularities in the geometry of vessels, and the action of nonreturn valves. In this presentation we examine differential equations with continuous but nondifferentiable right-hand sides.
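As a concrete illustration of such a right-hand side, the sketch below models a hypothetical nonreturn valve with a `max` term, so the right-hand side is continuous but nondifferentiable at x = 0. The specific ODE, initial condition, and explicit Euler integrator are illustrative assumptions, not examples from the presentation:

```python
def rhs(x, p):
    # Continuous but nondifferentiable right-hand side: outflow
    # max(x, 0) occurs only when the level x is positive, as with
    # a nonreturn valve (kink at x = 0).
    return p - max(x, 0.0)

def solve(p, x0=-1.0, t_final=3.0, n=3000):
    # Explicit Euler integration; the trajectory crosses the kink at
    # x = 0, so x(t_final) is a nonsmooth function of the parameter p.
    h = t_final / n
    x = x0
    for _ in range(n):
        x += h * rhs(x, p)
    return x
```

For p = 1 the level rises linearly until the valve opens at x = 0, after which the flow relaxes toward the inflow rate; a production code would use an integrator that detects the nondifferentiable event rather than stepping over it.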
Extended stationarity conditions are presented for a class of dynamic optimization problems with nonsmooth right-hand sides; satisfaction of these conditions is necessary at a local optimum. The conditions make use of the linear Newton approximation: a generalization of the gradient that satisfies several calculus properties sharply. A nonsmooth single shooting method is developed to locate such points; it consists of two main elements, the Objective and Constraint Evaluator and the Proximal Modified Bundle Method. During each iteration, the Objective and Constraint Evaluator solves the embedded ODE system at a candidate parameter value in order to compute an element of the linear Newton approximation of the objective function at that parameter value. The Proximal Modified Bundle Method then uses a subset of the linear Newton approximation elements obtained so far to determine whether the candidate parameter value satisfies the extended stationarity conditions. If so, the procedure terminates; if not, the method furnishes a new candidate parameter value for the next iteration.
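The two-element structure described above can be sketched in a scalar setting. The embedded ODE, the objective, and the measurable selection taken at the kink are all hypothetical choices, and a plain subgradient iteration stands in for the Proximal Modified Bundle Method (the real method aggregates past elements into a bundle and solves a proximal quadratic subproblem at each step):

```python
def evaluate(p, t_final=1.0, n_steps=1000):
    # "Objective and Constraint Evaluator" sketch: integrate the
    # illustrative embedded ODE
    #   dx/dt = p - max(x, 0),  x(0) = 0,
    # together with a sensitivity s = dx/dp, using a selection (here:
    # the right derivative) of the nonsmooth term at the kink x = 0.
    # Returns the objective J(p) = (x(T) - 0.5)**2 and one element of
    # its linear Newton approximation.
    h = t_final / n_steps
    x, s = 0.0, 0.0
    for _ in range(n_steps):
        dmax = 1.0 if x >= 0.0 else 0.0        # selection at the kink
        x, s = x + h * (p - max(x, 0.0)), s + h * (1.0 - dmax * s)
    J = (x - 0.5) ** 2
    g = 2.0 * (x - 0.5) * s                    # chain rule with the selection
    return J, g

def descend(p=2.0, tol=1e-6, max_iter=200):
    # Crude stand-in for the Proximal Modified Bundle Method: a plain
    # subgradient iteration with diminishing steps, terminating when
    # the scalar stationarity measure |g| is small.
    for k in range(1, max_iter + 1):
        _, g = evaluate(p)
        if abs(g) < tol:
            break
        p -= g / k
    return p, evaluate(p)[0]
```

In this toy instance the iterates approach the parameter value that drives x(T) to 0.5; the point of the sketch is only the division of labor between the evaluator, which returns (J, g) per candidate, and the outer nonsmooth optimization loop.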
Next, a technique is presented for constructing convex and concave relaxations of the objective function of dynamic optimization problems with nonsmooth right-hand sides over interval subsets of the parameter space. This technique extends the relaxation theory developed in [3, 4]. The relaxations are embedded in a branch-and-bound framework to yield an algorithm that solves such optimization problems to global optimality. Within this framework, each lower-bounding subproblem is a nonsmooth convex NLP, obtained as a convex relaxation of the original problem, which can be solved to global optimality using the nonsmooth single shooting method described above. Each upper bound is furnished either by a simple objective function evaluation or by any point satisfying the necessary conditions for a local minimum, located using the nonsmooth single shooting method. Convergence of the method is discussed.
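The branch-and-bound idea can be sketched in miniature. Here a crude constant interval bound replaces the relaxations of [3, 4], the nonsmooth objective `f` is a hypothetical stand-in for the value of a dynamic optimization problem at parameter p, and upper bounds come from midpoint evaluations; everything else follows the fathom/branch pattern described above:

```python
import heapq

def f(p):
    # Hypothetical nonsmooth objective in the parameter p.
    return abs(p - 0.7) + (p - 0.2) ** 2

def lower_bound(a, b):
    # Rigorous lower bound on f over [a, b], bounding each term by
    # interval analysis.  A constant bound is the crudest relaxation;
    # the presentation uses much tighter convex relaxations of the
    # ODE solution instead.
    lb_abs = 0.0 if a <= 0.7 <= b else min(abs(a - 0.7), abs(b - 0.7))
    lb_sq = 0.0 if a <= 0.2 <= b else min((a - 0.2) ** 2, (b - 0.2) ** 2)
    return lb_abs + lb_sq

def branch_and_bound(a=0.0, b=2.0, tol=1e-6):
    m = (a + b) / 2
    ubd, p_best = f(m), m                     # initial upper bound
    heap = [(lower_bound(a, b), a, b)]        # best-bound-first queue
    while heap:
        lbd, a, b = heapq.heappop(heap)
        if lbd > ubd - tol:
            continue                          # fathom this node
        m = (a + b) / 2
        if f(m) < ubd:                        # upper bound from evaluation
            ubd, p_best = f(m), m
        for lo, hi in ((a, m), (m, b)):       # bisect and bound children
            child = lower_bound(lo, hi)
            if child < ubd - tol:
                heapq.heappush(heap, (child, lo, hi))
    return p_best, ubd
```

As the intervals shrink, the lower bounds converge to the objective values and all nodes are eventually fathomed, returning the global minimum to within the tolerance.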
The presented techniques are applied to several illustrative examples, including a nonsmooth dynamic optimization problem based on an electric circuit model that has several suboptimal local optima.
[1] F. Facchinei, J.-S. Pang, Finite-dimensional variational inequalities and complementarity problems: Volume II, Springer-Verlag New York, Inc., New York, 2003.
[2] E. M. Izhikevich, Dynamical systems in neuroscience: The geometry of excitability and bursting, The MIT Press, Cambridge, 2007.
[3] J. K. Scott, M. D. Stuber, P. I. Barton, Generalized McCormick relaxations, J. Glob. Optim., in press (2011).
[4] J. K. Scott, B. Chachuat, P. I. Barton, Nonlinear convex and concave relaxations for the solutions of parametric ODEs, submitted (2009).