(275c) A New Approach to MINLP Containing Noisy Variables and Black-Box Functions
When accurate closed-form descriptions of chemical processes are unavailable, gradient-based optimization methods can fail due to the lack of reliable derivative information for these black-box systems. To overcome this problem, direct search techniques can be used, but convergence to an optimum can be slow. Convergence can be accelerated using surrogate model information, but it is possible for misleading search directions to be identified and/or for premature termination to occur, as can happen when the input-output data are noisy. Furthermore, the value of the information obtained must not be outweighed by model-building costs for the nonlinear program (NLP). This last consideration is very important if integer variables are also present, as the solution of many relaxed NLP subproblems may be required. In addition, a global optimum must be obtained as the solution to the corresponding NLP in order to guarantee a lower or upper bound; early termination at a local optimum can lead to a longer search and/or termination of the original problem at a suboptimal solution. Since local methods ensure a globally optimal solution only under convexity conditions, which cannot be verified a priori for black-box systems, these approaches are inefficient for solving mixed-integer NLPs (MINLPs) whose relaxed NLP subproblems are nonconvex. Let x, y, and z denote the vectors of continuous, integer, and output variables, respectively. The y variables are further classified as y1 if they are feasible over a range of integer values and y2 otherwise. Consider the problem of maximizing distillate purity. Design decisions represented by both x and y1 include feed rates, tower reflux, and feed tray locations. Similarly, synthesis decisions represented by y2 describe unit existence within cascade sequences.
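With this variable classification, the problem class can be sketched as follows; the symbols f, g, h, and the noise term are illustrative placeholders rather than the authors' notation:

```latex
\max_{x,\, y_1,\, y_2} \; f(x, y_1, y_2, z)
\quad \text{s.t.} \quad
z = g(x, y_1) + \varepsilon
\;\; \text{(noisy black-box response)},
\qquad
h(x, y_1, y_2) \le 0,
\qquad
y_1, y_2 \in \mathbb{Z}^{n_1} \times \{0,1\}^{n_2}
```

Here g is available only through noisy simulations or experiments, so neither g nor its derivatives can be written in closed form, which is what rules out standard gradient-based NLP solvers for the relaxed subproblems.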
This paper proposes a new method for determining optimal process synthesis and design decisions when the model equations for noisy outputs are described by black-box functions and depend on both x and y1. The new algorithm extends previous work in which a Branch-and-Bound Kriging-RSM algorithm was used to solve problems containing asymmetrical convex feasible regions and in which the black-box functions depended only on continuous variables (Davis and Ierapetritou, 2007). Kriging is a global modeling technique whose predictor approximates both deterministic and stochastic system components (Goovaerts, 1997). The model is built using sampling data dispersed throughout the feasible region. Prediction and variance estimates at unsampled points are generated according to a weighted sum of nearest-neighbor function values. After constructing the predictor and variance mappings, the kriging model is iteratively improved by incorporating additional sampling information. Once the global model is accurate, candidate optima are identified and locally optimized using response surfaces. Response surfaces are inexpensively fitted quadratic polynomials which accurately describe behavior near an optimum in the continuous space (Myers and Montgomery, 2002). The main advantages of applying kriging prior to RSM are that 1) no convexity assumptions are required in order to guarantee a global optimum, and 2) confidence in the global optimum can be increased by refining the kriging predictor until the expected improvement falls below a tolerance. In the proposed algorithm, y2 (synthesis decisions) are optimized using Branch-and-Bound after both x and y1 (continuous and design integer decisions) have been optimized by solving relaxed NLP subproblems.
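The kriging predictor described above (a prediction and a variance estimate formed from a correlation-weighted sum of sampled function values) can be sketched as ordinary kriging with a Gaussian correlation model. This is a minimal illustration, not the authors' implementation; the correlation parameter `theta` and the nugget value are assumptions:

```python
import numpy as np

def kriging_predict(X, y, x_new, theta=1.0):
    """Ordinary-kriging prediction and variance at x_new.

    X     : (n, d) array of sampled input points
    y     : (n,) array of observed responses
    x_new : (d,) query point
    """
    n = len(y)
    # Gaussian correlation matrix between sampled points
    d = X[:, None, :] - X[None, :, :]
    R = np.exp(-theta * np.sum(d ** 2, axis=2))
    R += 1e-10 * np.eye(n)  # small nugget for numerical stability
    # Correlation vector between x_new and the samples
    r = np.exp(-theta * np.sum((X - x_new) ** 2, axis=1))
    ones = np.ones(n)
    Rinv = np.linalg.inv(R)
    # Generalized least-squares estimate of the constant trend
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)
    # Predictor: trend plus correlation-weighted residuals
    y_hat = mu + r @ Rinv @ (y - mu * ones)
    # Process variance and ordinary-kriging variance at x_new
    sigma2 = (y - mu * ones) @ Rinv @ (y - mu * ones) / n
    u = 1.0 - ones @ Rinv @ r
    var = sigma2 * (1.0 - r @ Rinv @ r + u ** 2 / (ones @ Rinv @ ones))
    return y_hat, max(var, 0.0)
```

The variance output is what drives the refinement loop in the abstract: new samples are placed where `var` is large or where the predictor value is promising, and refinement stops once the maximum variance falls below a tolerance.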
At the root node, the kriging predictor is generated from an initial sampling set (x,y1,y2) and refined using additional sampling information obtained at regions of high variance, minimal prediction, and high prediction error over consecutive iterations. Refinement stops when the mean predictor value converges or when the maximum estimated variance falls below a tolerance. After determining the most promising set of candidate solutions over the global region, the y1 variables are optimized using integer methods at fixed values of the corresponding continuous and synthesis 0-1 variables. Once the integer optimal y1 vector is attained, the continuous variables are optimized using sequential response surfaces and the vector (x,y1,y2)opt is established as the relaxed NLP solution. Additional subproblems are created if integrality in y2 is not satisfied. For each new subproblem, the kriging predictor is refined over the corresponding feasible subregion in order to determine the best "warm start" locations for further design optimization. The procedure terminates when the list of candidate subproblems is exhausted, after which the MINLP solution is established as the best integer-feasible solution in terms of both y1 and y2. Numerical examples and case studies are presented to illustrate the steps of the proposed methodology.
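The sequential response-surface step used to polish the continuous variables can be sketched as follows: fit a quadratic to (possibly noisy) samples around the current point, then move to the quadratic's maximizer within the sampling interval. This one-dimensional sketch is illustrative only; the step size `delta` and sample count `n` are assumptions, not values from the paper:

```python
import numpy as np

def rsm_step(f, x0, delta=0.5, n=7):
    """One sequential response-surface iteration (1-D sketch):
    fit a quadratic surrogate to samples of f near x0 and return
    the surrogate's maximizer, clipped to the sampling interval."""
    xs = np.linspace(x0 - delta, x0 + delta, n)
    ys = np.array([f(x) for x in xs])
    a, b, c = np.polyfit(xs, ys, 2)  # least-squares fit y ~ a x^2 + b x + c
    if a < 0:
        # Concave fit: the stationary point is the surrogate maximizer
        x_new = -b / (2 * a)
    else:
        # Non-concave fit: fall back to the best sampled point
        x_new = xs[np.argmax(ys)]
    # Trust-region clip keeps the step inside the sampled interval
    return float(np.clip(x_new, x0 - delta, x0 + delta))
```

Iterating this step from a kriging "warm start" location drives the continuous variables toward a local optimum, which is the role the response surfaces play after the global kriging search in the procedure above.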
Davis, E. and M. Ierapetritou, 2007, AIChE J., Submitted for Publication.
Davis, E. and M. Ierapetritou, 2007, Adv. Glob. Opt.: Methods and App. Conf.
Goovaerts, P., 1997, Geostatistics for Natural Resources Evaluation.
Myers, R. and D. Montgomery, 2002, Response Surface Methodology.