(589b) Alamo: Automatic Learning of Algebraic Models for Optimization | AIChE

Authors 

Cozad, A. - Presenter, Carnegie Mellon University
Sahinidis, N., Carnegie Mellon University
Miller, D. C., National Energy Technology Laboratory



We address a central problem in machine learning, namely that of learning an algebraic model from data obtained from simulations or experiments. The problem arises naturally in situations in which a computationally intensive model must be replaced with a cheaper-to-compute surrogate or reduced-order model. It also arises when experimental measurements are used to construct a theoretical model. We are interested in developing a technique that learns models that are (a) as accurate as possible and (b) as simple as possible. Requirement (a) is obvious, while requirement (b) is driven by our desire to use the developed model in further studies, for instance by embedding it in a larger multi-scale model for optimization, simulation, or analysis. Finally, we are interested in a methodology that achieves the above goals without requiring too many simulations or experiments.

We present a methodology aimed at achieving the above requirements. The proposed approach begins by building a low-complexity surrogate model. The model is built using a best-subset technique that leverages a mixed-integer linear programming formulation, which makes it possible to consider a very large number of candidate functional components in the model without enumerating all of their possible combinations. The model is then tested, exploited, and improved through the use of derivative-free optimization solvers that adaptively sample new simulation or experimental points.
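The two-phase loop described above — select the simplest accurate subset of basis functions, then sample where the surrogate disagrees most with the underlying model — can be sketched as follows. Note that this is only an illustration: ALAMO poses the subset-selection step as a mixed-integer linear program solved by an optimization solver, whereas the sketch below substitutes brute-force enumeration over a tiny basis pool, and it replaces the derivative-free solvers with crude random search. The `simulate` function, the basis pool, and all parameters are invented for the example.

```python
import itertools
import numpy as np

# Hypothetical expensive "simulation" that we want a cheap surrogate for.
def simulate(x):
    return np.exp(0.5 * x) + 0.3 * x**2

# Small illustrative pool of candidate basis functions. ALAMO's pool is far
# richer, and selection is done with a MILP rather than by enumeration.
BASIS = [
    ("1",       lambda x: np.ones_like(x)),
    ("x",       lambda x: x),
    ("x^2",     lambda x: x**2),
    ("x^3",     lambda x: x**3),
    ("e^x",     lambda x: np.exp(x)),
    ("ln(1+x)", lambda x: np.log1p(x)),
]

def fit_subset(cols, y):
    """Least-squares fit over one subset of basis columns; returns (coef, SSE)."""
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((A @ coef - y) ** 2))
    return coef, sse

def best_subset(x, y, max_terms=3):
    """Brute-force stand-in for ALAMO's MILP-based best-subset search:
    try subsets of increasing size and keep the lowest-error fit, stopping
    early if a small model already fits essentially exactly."""
    best = None
    for k in range(1, max_terms + 1):
        for combo in itertools.combinations(BASIS, k):
            coef, sse = fit_subset([f(x) for _, f in combo], y)
            if best is None or sse < best[0] - 1e-9:
                best = (sse, combo, coef)
        if best is not None and best[0] < 1e-6:
            break  # a simpler model is already accurate enough
    return best

def surrogate(model, x):
    _, combo, coef = model
    return sum(c * f(x) for c, (_, f) in zip(coef, combo))

# Initial design: a handful of samples on [0, 2].
x = np.linspace(0.0, 2.0, 5)
y = simulate(x)

for it in range(3):
    model = best_subset(x, y)
    # Adaptive sampling step: search (here by dense random sampling, as a
    # crude derivative-free stand-in) for the point where the surrogate
    # disagrees most with the simulation, and add it to the data set.
    cand = np.random.default_rng(it).uniform(0.0, 2.0, 200)
    errs = np.abs(simulate(cand) - surrogate(model, cand))
    x_new = cand[np.argmax(errs)]
    x = np.append(x, x_new)
    y = np.append(y, simulate(x_new))

sse, combo, coef = best_subset(x, y)
print("selected terms:", [name for name, _ in combo])
print("SSE:", sse)
```

The early-exit on subset size mirrors the simplicity requirement (b): among models of comparable accuracy, the search prefers the one with fewer terms.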

Finally, we describe ALAMO, the computational implementation of the proposed methodology, along with extensive computational comparisons between ALAMO and a variety of learning techniques, including Latin hypercube sampling, simple least-squares regression, and the lasso.