(751b) Delaunay-Based Domain Exploration and Adaptive Placement for Surrogate-Based Black-Box Optimisation
A straightforward application of conventional gradient-based optimisation algorithms to black-box simulation models is impractical, as gradient information is not readily available and its numerical approximation is computationally prohibitive. Automated analytical derivatives are not feasible either, since the simulation (source) code is a closed box. These hurdles have inspired "Derivative-Free Optimization" (DFO), which does not need gradients. The earliest efforts on DFO involved simplex-based methods. These were complemented by directional direct search methods such as generalised pattern search (GPS), generating set search (GSS), and mesh adaptive direct search (MADS). Over the years, several modifications of these methods have been proposed; interested readers may refer to  for detailed expositions on the DFO literature. Due to their derivative-free nature, these methods cannot exploit the inherent characteristics of black-box systems and hence require many function evaluations. Moreover, most of them are designed to find a stationary point and thus cannot guarantee a global optimum. Therefore, a multi-start strategy becomes imperative, and the compute expense grows even further for global optimisation. Heuristic methods such as evolutionary algorithms (e.g. the Genetic Algorithm (GA), Particle Swarm Optimisation (PSO), Ant Colony Optimisation, and Simulated Annealing (SA)) are also essentially DFO methods, since they rely solely on function values. However, randomness is a key element of these methods, and guaranteeing a global optimum is not possible. In most instances, they simply provide a "better" rather than the "best" solution. In addition, they involve many arbitrary parameters that are usually set by trial and error, and they require many function evaluations. Thus, despite their simplicity (hence popularity), they are less attractive for the global optimisation of compute-intensive black-box systems.
Optimisation involving expensive black-box systems (functions) poses two key challenges: (i) minimising function evaluations and (ii) finding a global optimum. For the former, smooth and inexpensive surrogates are attractive, which has resulted in a class of practical and promising approaches called "Surrogate-Based Optimisation" (SBO). The surrogate offers inexpensive estimates of both function values and gradients. Although pinpointing the origin of SBO is rather difficult, its early footprints exist in the works of . While its fundamentals were known for a few decades, SBO did not attract attention until the late 1990s. Most SBO approaches developed thereafter adopted a structure involving three key steps: (1) sampling or Design of Experiments (DoE) for the initial surrogate, (2) selection and construction of a surrogate model, and (3) a new-sample generation strategy. A variety of surrogates, such as polynomial response surfaces, artificial neural networks (ANN), kriging, support vector regression (SVR), and radial basis functions (RBF), have been used over the years. Most SBO algorithms in the literature exploit surrogate-specific features. For instance, adaptive sample placement strategies rely primarily on kriging's expected improvement estimates. Recently, Garud et al.  highlighted that the best surrogate model form and the best modelling technique vary with intrinsic system characteristics; thus, surrogate-specific SBO algorithms may not be uniformly effective across different complex black-box systems. This motivated us to develop a comprehensive surrogate-assisted framework for black-box optimisation with the following key novelties. First, our framework is independent of the surrogate model form and/or modelling technique. Second, we iteratively generate and place new samples using a novel two-stage approach. The first stage exhaustively but efficiently divides the search domain into sub-regions using the Delaunay triangulation .
It then selects a promising sub-region by balancing global exploration against local exploitation. The second stage places a new point in the selected sub-region via optimisation rather than heuristics or predefined metrics. This optimisation trades off global surrogate accuracy against space filling. This two-stage strategy enables our framework to escape local traps and progress towards a global optimum.
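To make the sub-region construction concrete, the sketch below (Python, using SciPy's off-the-shelf `scipy.spatial.Delaunay` wrapper around Qhull, not the authors' implementation) triangulates a small hypothetical 2-D sample set. Each resulting simplex is one candidate sub-region, and its centroid is a natural reference point at which an exploration/exploitation score could be computed.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2-D sample set: four corners of the unit square plus one
# interior point, standing in for evaluated design points.
points = np.array([[0.0, 0.0], [1.0, 0.0],
                   [0.0, 1.0], [1.0, 1.0],
                   [0.4, 0.6]])

tri = Delaunay(points)

# Each row of tri.simplices holds the indices of one triangle's vertices;
# these triangles partition the convex hull of the samples into the
# sub-regions among which a promising one is selected.
for simplex in tri.simplices:
    vertices = points[simplex]
    centroid = vertices.mean(axis=0)
    print(simplex, centroid)
```

For this five-point set the triangulation yields four triangles; in higher dimensions the same call produces simplices with `dim + 1` vertices each.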
This work develops a surrogate-based optimisation algorithm to find the global minimum estimate within a maximum compute budget. The general steps of our algorithm are as follows:
- Select a surrogate modelling technique and model form.
- Generate an initial sample set and evaluate the costly function at these points.
- Construct the surrogate model using the input-output data set.
- Optimise the surrogate and update the data set by adding the surrogate optimiser.
- Construct simplicial (triangular) sub-regions using Delaunay triangulation.
- Select the most promising Delaunay simplex for sample placement.
- Add a new sample point in the promising sub-region via placement optimisation.
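The steps above can be sketched as a minimal loop. This is an illustrative approximation, not the authors' algorithm: it uses SciPy's `RBFInterpolator` with a cubic kernel as one possible surrogate, and it replaces both the sub-region selection criterion and the placement optimisation with a simple "lowest predicted centroid" heuristic. The test function `black_box` is a hypothetical stand-in for an expensive simulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial import Delaunay

def black_box(x):
    # Hypothetical expensive function (stand-in for a simulation);
    # its true minimum is 0 at x = (0.3, 0.3).
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
dim, budget = 2, 30

# Steps 1-2: initial sample set and costly evaluations.
X = rng.uniform(0.0, 1.0, size=(5, dim))
y = np.array([black_box(x) for x in X])

while len(X) < budget:
    # Step 3: fit the surrogate (cubic basis, as an example choice).
    surrogate = RBFInterpolator(X, y, kernel='cubic')

    # Steps 5-7: Delaunay sub-regions; pick the simplex whose centroid
    # the surrogate predicts to be lowest, and sample at that centroid.
    # (The framework instead selects and places via optimisation; the
    # centroid rule is only a cheap proxy for this sketch.)
    tri = Delaunay(X)
    centroids = X[tri.simplices].mean(axis=1)
    x_new = centroids[np.argmin(surrogate(centroids))]

    X = np.vstack([X, x_new])
    y = np.append(y, black_box(x_new))

best = X[np.argmin(y)]
print("best point:", best, "value:", y.min())
```

In the actual framework, the surrogate optimum is also added to the data set (step 4), and the placement point is found by an optimisation that trades off global surrogate accuracy against space filling; both are omitted here for brevity.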
Unlike some SBO algorithms in the literature, our proposed algorithm is surrogate-independent and does not exploit any specific features of a surrogate model or technique. We evaluate our algorithm on a battery of 20 test functions with varying characteristics, modalities, and complexities. Furthermore, we use six basis functions and two sampling methods for each test function. Although no single basis function is the best for all test functions, the cubic basis function exhibits the best performance for 17 of the 20 functions. Having shown that our algorithm succeeds on the test functions, we also compare its performance with seven commercially available global optimisation algorithms, including a surrogate-based approach. For this, we select Generalised Pattern Search (GPS), Generating Set Search (GSS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimisation (PSO), the Genetic Algorithm (GA), Simulated Annealing (SA), and Surrogate Optimisation (SO) from MATLAB's global optimisation toolbox. Our algorithm locates the global minimum for 18 of the 20 functions, i.e. a success rate of 90%. In contrast, the success rates of GPS, GSS, MADS, PSO, GA, SA, and SO are 45% (9), 40% (8), 60% (12), 35% (7), 5% (1), 10% (2), and 75% (15), respectively. Clearly, our algorithm performs better. While it cannot reach the exact global minima of a couple of functions, it gives closer solutions than any other algorithm. In terms of computational efficiency, our algorithm requires 60% less compute effort than SO, the best of the literature approaches. GA (the worst performer) requires about five times more compute effort than our approach, and the pattern search approaches demand roughly three times more effort. In summary, our algorithm outperforms the seven literature algorithms in both success rate and compute effort, and thus holds much promise for practical black-box systems.
 L. M. Rios and N. V. Sahinidis, "Derivative-free optimization: a review of algorithms and comparison of software implementations," Journal of Global Optimization, vol. 56, no. 3, pp. 1247–1293, 2013.
 H. J. Kushner, "A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise," Journal of Basic Engineering, vol. 86, no. 1, pp. 97–106, 1964.
 S. S. Garud, I. A. Karimi, and M. Kraft, "LEAPS2: Learning based Evolutionary Assistive Paradigm for Surrogate Selection," Computers & Chemical Engineering, 2018.
 B. Delaunay, "Sur la sphère vide," Izv. Akad. Nauk SSSR, Otdelenie Matematicheskikh i Estestvennykh Nauk, vol. 7, pp. 793–800, 1934.