(227c) Computing Lower Bounds in Global Optimization By Tractably Sampling Convex Relaxations
AIChE Annual Meeting
2021 Annual Meeting
Computing and Systems Technology Division
Advances in global optimization
Tuesday, November 9, 2021 - 8:38am to 8:57am
This presentation shows that, for a convex relaxation of n variables on a box domain, guaranteed affine underestimators and lower bounds may be constructed and evaluated tractably, without requiring a local NLP solver [1]. The construction evaluates the convex relaxation at most (2n+1) times and rearranges the results using a new variant of centered finite differencing; no further assumptions are required. If the underlying convex relaxations converge rapidly to the objective function as the box domain shrinks, then the new bounds inherit this convergence behavior. Noise in the relaxation evaluations is also handled easily. Several numerical examples are presented for illustration, including examples in which the new sampled relaxations are deployed effectively in nontrivial global optimization problems. Beyond eliminating the local NLP solves traditionally used to obtain lower bounds from convex relaxations, these developments also enable effective global optimization of sums of known nonconvex functions and black-box convex functions.
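To make the construction concrete, below is a minimal Python sketch of one such sampling scheme: the convex relaxation is treated as a black box, evaluated (2n+1) times (once at the box midpoint and twice per coordinate), and the centered-difference slopes are shifted downward by a correction constant that convexity alone justifies. The function name, the parameter alpha, and this particular correction constant are illustrative assumptions for this sketch; [1] develops its own (generally tighter) variant, and the noise-handling extension mentioned above is omitted here.

```python
import numpy as np

def sampled_affine_underestimator(f, lb, ub, alpha=1.0):
    """Construct a guaranteed affine underestimator of a convex function f on
    the box [lb, ub] from at most 2n+1 black-box samples of f.

    Returns (w, c, g0, L) such that g(x) = g0 + c @ (x - w) satisfies
    g(x) <= f(x) on the box, and L = min of g over the box is a lower bound
    for f there. A simplified sketch in the spirit of [1]; the exact
    correction term used there may differ.
    """
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    n = lb.size
    w = 0.5 * (lb + ub)              # box midpoint
    h = 0.5 * alpha * (ub - lb)      # per-coordinate sampling step
    f0 = f(w)                        # 1 midpoint evaluation

    c = np.zeros(n)                  # centered-difference slopes
    M = 0.0                          # convexity-based correction constant
    for i in range(n):
        step = np.zeros(n)
        step[i] = h[i]
        fp, fm = f(w + step), f(w - step)   # 2 evaluations per coordinate
        c[i] = (fp - fm) / (2.0 * h[i])
        # Convexity gives |c[i] - s[i]| <= (fp + fm - 2*f0) / (2*h[i]) for
        # any subgradient s of f at w; charging this slope error over the
        # box half-width yields a downward shift that keeps g below f.
        M += (fp + fm - 2.0 * f0) * (ub[i] - lb[i]) / (4.0 * h[i])

    g0 = f0 - M
    # The affine g is minimized over the box coordinate-wise.
    L = g0 - np.abs(c) @ (0.5 * (ub - lb))
    return w, c, g0, L

# Example: a convex relaxation stand-in f(x) = ||x||^2 on the box [-1, 2]^2.
# The computed L is a valid (if conservative) lower bound for min f = 0.
w, c, g0, L = sampled_affine_underestimator(lambda x: x @ x,
                                            lb=[-1.0, -1.0], ub=[2.0, 2.0])
print(c, g0, L)   # slopes [1. 1.], offset -4.0, lower bound -7.0
```

Because the correction constant reuses the same (2n+1) samples, the bound costs no extra relaxation evaluations, and for smooth relaxations the sampled curvature term fp + fm - 2*f0 shrinks with the box, consistent with the convergence behavior claimed above.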
Reference
[1] Y. Song, H. Cao, C. Mehta, and K.A. Khan, Bounding convex relaxations of process models from below by tractable black-box sampling, under review, 2021.