(253d) New Underestimator and Branching Scheme for the Global Optimization of General Nonconvex Problems

Authors 

Hasan, F. - Presenter, Texas A&M University
Bajaj, I., Texas A&M University
Deterministic global optimization of a nonconvex minimization problem rests on three key steps: (i) generating lower bounds using valid underestimators, (ii) obtaining feasible/local solutions and using them as upper bounds, and (iii) branching on variables to create new nodes that progressively update the lower and upper bounds until they converge to ϵ-global optimality. While local solvers can supply the upper bounds, the choice of underestimator and branching scheme is central to how quickly a branch-and-bound (B&B) algorithm converges. Underestimators are usually obtained by relaxing the original problem so that the relaxation is convex, which ensures that the solution of the relaxed problem is a valid lower bound. Although convex envelopes are the tightest convex underestimators, they are currently available only for a handful of functions with explicit forms. Furthermore, relaxing a highly nonconvex problem with general convex underestimators may not yield a tight bound, since the quality of the relaxation often depends on how close the original problem is to convexity. In terms of branching, the feasible space is generally divided into rectangular partitions by splitting along one of the edges. Different strategies for selecting the branching variable and branch point can lead to very different computational performance, especially when the number of variables is large.
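
For illustration, the following is a minimal sketch of such a best-first B&B loop in Python. The helper names lower_bound, local_solve, and select_branch are hypothetical placeholders for the relaxation, local solver, and branching rule discussed above, not any particular implementation.

```python
# Minimal best-first branch-and-bound sketch for box-constrained minimization.
# `lower_bound`, `local_solve`, and `select_branch` are placeholder callables
# standing in for a relaxation, a local solver, and a branching rule.
import heapq

def branch_and_bound(f, lower_bound, local_solve, select_branch,
                     box, eps=1e-3, max_nodes=10000):
    """Return an (upper bound, point) pair within eps of the global minimum of f over the box."""
    ub, x_best = local_solve(f, box)            # step (ii): incumbent upper bound
    heap = [(lower_bound(f, box), box)]         # step (i): root lower bound
    nodes = 0
    while heap and nodes < max_nodes:
        lb, node = heapq.heappop(heap)
        if lb >= ub - eps:                      # node cannot improve the incumbent
            continue
        nodes += 1
        val, x = local_solve(f, node)           # refine the upper bound on this node
        if val < ub:
            ub, x_best = val, x
        for child in select_branch(node):       # step (iii): partition the node
            child_lb = lower_bound(f, child)
            if child_lb < ub - eps:
                heapq.heappush(heap, (child_lb, child))
    return ub, x_best
```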

Our contribution is two-fold. First, instead of using convex underestimators, we propose to exploit a new class of underestimators that are edge-concave [1]. The underestimator is constructed by subtracting a positive quadratic expression such that all non-edge-concavities in the original function are overpowered by the subtracted term. Although the edge-concave underestimator is nonlinear, the linear facets of its vertex polyhedral convex envelope [2] lead to a linear programming (LP)-based relaxation of the original nonconvex problem. We will present theoretical results on this new class of underestimators and compare the performance of the LP relaxation with relaxations obtained from convex underestimators such as αBB and its variants. Second, we propose a novel branching technique based on a univariate projection of the original multi-dimensional space onto a single auxiliary variable [3]. The idea rests on the fact that when all function values are projected onto a univariate space t, defined by the sum of the decision variables x, a point-to-set map can be generated, and a univariate function exists on this map whose global minimum coincides with that of the original function. This allows us to devise a branching scheme that acts only on t rather than on x within a B&B framework. We will present separate results showing the efficacy of the proposed relaxation and branching schemes. Furthermore, results will be presented for cases where the proposed LP relaxation and the t-branching scheme are employed together in a B&B framework for fast convergence to ϵ-global optimality.
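
As a rough illustration of the first idea, the sketch below assumes a separable quadratic perturbation g(x) = f(x) − Σ αᵢ(xᵢ − cᵢ)², centred at the box midpoint, with each αᵢ at least half the maximum of ∂²f/∂xᵢ² over the box (e.g., from interval bounds on the Hessian diagonal); the exact construction in [1] and the envelope facets in [2] may differ. The names edge_concave_underestimator and vertex_polyhedral_lower_bound are illustrative only. For a pure box constraint, minimizing the vertex polyhedral convex envelope reduces to taking the smallest vertex value; with additional linear constraints it becomes an LP over convex multipliers of the vertices.

```python
# Hedged sketch: edge-concave underestimator of f over a box and the lower bound
# obtained from its vertex polyhedral convex envelope (box-constrained case).
import itertools
import numpy as np

def edge_concave_underestimator(f, alpha, box):
    """Return g(x) = f(x) - sum_i alpha_i*(x_i - c_i)^2, a valid underestimator of f
    on the box that is concave along each coordinate direction if alpha is large enough."""
    lo = np.array([l for l, _ in box])
    hi = np.array([u for _, u in box])
    c = 0.5 * (lo + hi)                          # perturbation centre (assumption)
    def g(x):
        x = np.asarray(x, dtype=float)
        return f(x) - np.dot(alpha, (x - c) ** 2)
    return g

def vertex_polyhedral_lower_bound(g, box):
    """Minimum of g over the 2^n box vertices; for an edge-concave g this equals the
    minimum of its vertex polyhedral convex envelope over the box."""
    return min(g(np.array(v, dtype=float)) for v in itertools.product(*box))

# Toy usage: f is nonconvex in x[0]; d2f/dx0^2 = 9*sin(3*x0) <= 9 on the box,
# so alpha = [4.5, 0.0] makes g concave along every coordinate direction.
f = lambda x: x[0] * x[1] - np.sin(3.0 * x[0])
box = [(-1.0, 1.0), (0.0, 2.0)]
g = edge_concave_underestimator(f, np.array([4.5, 0.0]), box)
print(vertex_polyhedral_lower_bound(g, box))     # valid lower bound on min f over the box
```

The second idea can be caricatured as follows. This is not the UNIPOPT algorithm of [3], which is derivative-free and constructs the univariate function from a point-to-set map; here the projection v(t) = min{f(x) : Σ xᵢ = t, x in the box} is merely approximated by multistart local solves with SciPy, purely for illustration, and the bisection-on-t loop stands in for a proper branching scheme on t.

```python
# Hedged sketch: branch on the scalar t = sum(x) instead of on the n variables x.
import numpy as np
from scipy.optimize import minimize

def v(f, t, box, n_starts=5, seed=0):
    """Approximate v(t) = min{ f(x) : sum(x) = t, x in box } by multistart local search."""
    rng = np.random.default_rng(seed)
    lo = np.array([l for l, _ in box])
    hi = np.array([u for _, u in box])
    best = np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, bounds=box,
                       constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - t}])
        if res.success and res.fun < best:
            best = res.fun
    return best

def branch_on_t(f, box, depth=5):
    """Bisect the interval [sum(lo), sum(hi)] of the projected variable t, always
    splitting the t-interval whose midpoint value of v is currently lowest."""
    t_lo = sum(l for l, _ in box)
    t_hi = sum(u for _, u in box)
    intervals = [(t_lo, t_hi)]
    for _ in range(depth):
        scores = [v(f, 0.5 * (a + b), box) for a, b in intervals]
        a, b = intervals.pop(int(np.argmin(scores)))   # most promising t-interval
        m = 0.5 * (a + b)
        intervals += [(a, m), (m, b)]
    return min(v(f, 0.5 * (a + b), box) for a, b in intervals)

# Toy usage on a nonconvex function of three variables:
f = lambda x: np.sum(np.cos(3.0 * np.asarray(x))) + 0.1 * np.sum(np.asarray(x) ** 2)
print(branch_on_t(f, box=[(-2.0, 2.0)] * 3))
```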

References:

[1] Hasan, M. M. F. An edge-concave underestimator for the global optimization of twice-differentiable nonconvex problems. Journal of Global Optimization, 1-18, 2018. https://doi.org/10.1007/s10898-018-0646-x.

[2] Tardella, F. On the existence of polyhedral convex envelopes. In: Floudas, C.A., Pardalos, P.M. (eds.) Frontiers in Global Optimization, pp. 563-573. Kluwer Academic Publishers, Dordrecht, 2003.

[3] Bajaj, I. and Hasan, M. M. F. UNIPOPT: Univariate projection-based optimization without derivatives, Under Review.