(183e) Derivative-Free Optimization: A Review of Algorithms and Comparison of Software Implementations

Authors 

Rios, L. M. - Presenter, Carnegie Mellon University


This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms.
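To make the setting concrete, the following is a minimal sketch of a derivative-free coordinate (compass) search for a bound-constrained black-box problem; it is an illustration of the algorithm class, not one of the solvers compared in the paper, and the objective, bounds, and step-size schedule below are assumptions chosen for the example.

```python
import numpy as np

def coordinate_search(f, x0, lower, upper, budget=2500, step=0.5, shrink=0.5, tol=1e-8):
    """Minimize f over the box [lower, upper] using only function values.

    Simple compass/coordinate search: poll +/- step along each coordinate,
    move to the first improving point; if no poll point improves, shrink the step.
    """
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    fx = f(x)
    evals = 1
    while evals < budget and step > tol:
        improved = False
        for i in range(x.size):
            for direction in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + direction * step, lower[i], upper[i])
                ft = f(trial)
                evals += 1
                if ft < fx:                  # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
                if evals >= budget:
                    return x, fx, evals
            if improved:
                break
        if not improved:
            step *= shrink                   # no improvement: refine the step size
    return x, fx, evals

if __name__ == "__main__":
    # Illustrative black-box objective: a shifted quadratic on [-1, 1]^5
    f = lambda x: float(np.sum((x - 0.3) ** 2))
    lo, hi = np.full(5, -1.0), np.full(5, 1.0)
    x_best, f_best, used = coordinate_search(f, np.zeros(5), lo, hi)
    print(x_best, f_best, used)
```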

Compared to derivative-based algorithms, derivative-free approaches are easier to implement and can be used even when analytical expressions for the objective and constraints are unavailable, as is the case when proprietary, "black-box" simulators are used. Fueled by a growing number of applications in science and engineering, derivative-free optimization algorithms have long been studied and have recently attracted renewed interest, mostly due to the appearance of convergence results and of new model-based approaches that seem to perform well in practice. Along with these algorithms, many software implementations have appeared. The paper presents a brief review of derivative-free algorithms, followed by a systematic comparison of 21 implementations on a test set of 518 problems. The basic questions we address are: (1) What is the quality of the solutions obtained by current derivative-free solvers for a given limit on the number of allowable function evaluations, and does solution quality drop significantly as problem size increases? (2) Which solvers are more likely to obtain global or near-global solutions for nonconvex problems, and what is the effect of a multi-start strategy on the relative performance of solvers in this regard? (3) Is there a subset of existing solvers that suffices to solve a large fraction of problems when all solvers are applied independently to all problems of interest, and, conversely, are there problems that can be solved by only one or a few solvers?
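The comparison protocol can be summarized by a short harness of the kind sketched below. This is a hypothetical illustration, not the study's actual test scripts: the solver-wrapper interface, the tolerances, and the "solved" criterion (best value within a small tolerance of the best value found by any solver on that problem) are all assumptions made for the example.

```python
def benchmark(solvers, problems, budget=2500, rel_tol=0.01, abs_tol=1e-6):
    """Run every solver on every problem under a common function-evaluation budget.

    solvers  : dict mapping solver name -> callable(problem, budget) that returns
               the best objective value the solver found within the budget
    problems : iterable of black-box problem instances (objective, bounds, start point)

    A solver is credited with "solving" a problem if its best value is within a
    tolerance of the best value obtained by any solver on that problem.
    """
    per_problem = []
    for prob in problems:
        row = {name: solve(prob, budget) for name, solve in solvers.items()}
        per_problem.append(row)

    solved = {name: 0 for name in solvers}
    for row in per_problem:
        f_best = min(row.values())          # best-known value for this problem instance
        cutoff = f_best + max(abs_tol, rel_tol * abs(f_best))
        for name, f_val in row.items():
            if f_val <= cutoff:
                solved[name] += 1
    return solved
```

A multi-start variant of the same harness would simply wrap each solver call in a loop over several random starting points within the bounds, splitting the evaluation budget among the starts and keeping the best value found.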

To answer these questions, the algorithms are tested under the same conditions and ranked under several criteria, including their ability to find global solutions to nonconvex problems. A total of 80,808 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size; yet, surprisingly good solutions are obtained for low-dimensional problems. For the problems used in this study, deterministic methods based on surrogate models are found to perform better, on average, in terms of solution quality within 2,500 function evaluations than other derivative-free solvers, including the far more popular stochastic approaches. Finally, to illustrate the potential of these algorithms, we present an application of derivative-free optimization to protein-ligand conformation prediction problems, for which we find that derivative-free algorithms return solutions that considerably outperform those returned by the popular AutoDock package.