(253b) Advances in Decomposition Strategies for Non-Convex Nonlinear Programs

Rodriguez, J. S., Purdue University
Nicholson, B., Center for Computing Research, Sandia National Laboratories
Laird, C. D., Sandia National Laboratories
Zavala, V. M., University of Wisconsin-Madison
Optimization plays an important role in a variety of areas, including process design, operation, and control. Large-scale nonconvex optimization problems that commonly arise in these areas need to be solved efficiently. These problems are characterized by large numbers of equations and variables and can become computationally intractable, exceeding the capabilities of a serial algorithm on a single workstation. However, these problems often have structure that can be exploited by decomposition algorithms.

In this presentation, we describe two decomposition strategies for solving large-scale nonconvex structured optimization problems. In the first strategy, we decompose all scale-dependent operations of a serial algorithm to achieve parallelization. We focus on parallelizing the interior-point (IP) algorithm and use the Schur complement to decompose the solution of the KKT system at each iteration of the IP algorithm. The second decomposition strategy partitions the large-scale optimization problem into subproblems and coordinates their solutions to find the overall optimal solution. For this second type of decomposition, we focus on the Alternating Direction Method of Multipliers (ADMM). We present the pros and cons of both approaches and propose acceleration techniques for each.
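The Schur-complement step described above can be illustrated on a block-bordered ("arrowhead") linear system, the structure that arises when scenario or time-period blocks are coupled through a small set of shared variables. The following sketch (not the authors' implementation; block names `A_k`, `B_k`, `C` are illustrative) forms the Schur complement against the coupling block and recovers the block solutions; note that each per-block solve is independent, which is what enables parallelization:

```python
import numpy as np

def schur_solve(A_blocks, B_blocks, C, r_blocks, r_c):
    """Solve the block-bordered (arrowhead) system
        [A_1            B_1] [x_1]   [r_1]
        [      ...      ...] [...] = [...]
        [          A_K  B_K] [x_K]   [r_K]
        [B_1^T ... B_K^T  C] [ y ]   [r_c]
    via the Schur complement S = C - sum_k B_k^T A_k^{-1} B_k.
    The per-block solves in the loop are independent and could run
    in parallel across workers."""
    S = C.copy()
    rhs = r_c.copy()
    AinvB, Ainvr = [], []
    for A, B, r in zip(A_blocks, B_blocks, r_blocks):
        AB = np.linalg.solve(A, B)   # A_k^{-1} B_k
        Ar = np.linalg.solve(A, r)   # A_k^{-1} r_k
        S -= B.T @ AB
        rhs -= B.T @ Ar
        AinvB.append(AB)
        Ainvr.append(Ar)
    y = np.linalg.solve(S, rhs)              # coupled (coordination) solve
    x_blocks = [Ar - AB @ y for AB, Ar in zip(AinvB, Ainvr)]  # back-substitution
    return x_blocks, y
```

The coupled solve involves only the (small) coupling dimension, so the serial bottleneck is limited to forming and factorizing `S`.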

We demonstrate these two decomposition strategies on several nonconvex stochastic programming and dynamic optimization problems. We also study the convergence of ADMM on nonconvex problems and present connections between ADMM and the classical augmented Lagrangian (AL) method. We summarize our results in terms of a Lyapunov function and primal and dual feasibility metrics, and demonstrate that performing multiple coordination steps at each ADMM iteration approaches the behavior of the AL method.
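The connection between ADMM and the AL method can be seen in a toy consensus problem. The sketch below (illustrative only; a scalar two-block problem with closed-form updates, not the problems studied in the talk) exposes an `n_inner` parameter: with `n_inner = 1` it is standard scaled-form ADMM, while repeating the primal sweep before each dual update drives the subproblems toward the joint minimizer of the augmented Lagrangian, i.e., the classical method of multipliers:

```python
def admm_consensus(a, b, rho=1.0, n_inner=1, iters=300):
    """Scaled-form ADMM for  min (x1 - a)^2 + (x2 - b)^2  s.t. x1 = x2,
    whose solution is x1 = x2 = (a + b) / 2.
    n_inner > 1 repeats the primal (coordination) sweep before each dual
    update; as n_inner grows, the iteration approaches the classical
    augmented Lagrangian (method of multipliers)."""
    x1 = x2 = u = 0.0
    for _ in range(iters):
        for _ in range(n_inner):
            # closed-form minimizers of the augmented Lagrangian blocks
            x1 = (2 * a + rho * (x2 - u)) / (2 + rho)
            x2 = (2 * b + rho * (x1 + u)) / (2 + rho)
        u += x1 - x2  # scaled dual update on the consensus residual
    return x1, x2
```

For fixed `iters`, larger `n_inner` tightens primal feasibility per outer iteration at the cost of more subproblem solves, mirroring the trade-off between ADMM and the AL method.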