(406d) MFIX-Exa: A CFD-DEM Code for Exascale Computer Architectures
2018 AIChE Annual Meeting
Particle Technology Forum
Industrial Application of Computational and Numerical Approaches to Particle Flow
Tuesday, October 30, 2018 - 4:24pm to 4:42pm
The development of MFIX-Exa, a code that enables large-scale CFD-DEM simulations on exascale computers, began in November 2016. This development project takes place under the Exascale Computing Project (ECP), a collaborative effort of two US Department of Energy organizations, the Office of Science and the National Nuclear Security Administration. ECP was established to accelerate delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 50 times more performance than the most powerful supercomputers in use in the US today. ECP's work encompasses applications, system software, hardware technologies, and architectures.
The first version of MFIX-Exa has been developed by migrating the core hydrodynamic models from the widely used, open-source code MFIX into AMReX, a new ECP software framework that uses block-structured adaptive mesh refinement algorithms to solve systems of partial differential equations with complex boundary conditions on exascale architectures. The correctness of MFIX-Exa was established by comparing its solutions against MFIX (version 2016-1) solutions for a set of benchmark cases that represent the flow regimes encountered in the project's challenge problem, a 1 MWe chemical looping reactor. The parallel performance of the code has been improved by implementing logical tiling, enabling OpenMP threading over tiles, and employing a dual-grid approach for fluid and particle load balancing. To handle complex geometries in the simulations, AMReX embedded boundary (EB) data structures and iterators were incorporated, and an algorithm for resolving particle collisions with EB walls was implemented. The first major performance milestone for MFIX-Exa has been reached: a simulation of 1 billion particles undergoing homogeneous cooling for 0.025 s of physical time completed in 12.9 hours of wall time on 6912 processors of NREL's Peregrine supercomputer.
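As a rough illustration of the logical tiling and OpenMP-threading-over-tiles approach mentioned above, the C++ sketch below shows the generic AMReX pattern of iterating over the tiles of a MultiFab inside an OpenMP parallel region. It is not MFIX-Exa source code; the field name ep_g, the 128^3 domain, the 32^3 box size, and the constant fill value are illustrative assumptions.

// Sketch of AMReX logical tiling with OpenMP threading over tiles.
// Not MFIX-Exa source: names, sizes, and the fill value are illustrative.
#include <AMReX.H>
#include <AMReX_Print.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MFIter.H>

// Fill every cell of a (hypothetical) fluid field, one tile per loop pass.
void fill_field (amrex::MultiFab& ep_g)
{
    // MFIter with tiling enabled (second argument true) splits each grid box
    // into logical tiles; under OpenMP, the tiles are divided among the
    // threads of the enclosing parallel region.
#ifdef _OPENMP
#pragma omp parallel
#endif
    for (amrex::MFIter mfi(ep_g, true); mfi.isValid(); ++mfi)
    {
        const amrex::Box& tbx = mfi.tilebox();            // this tile only
        amrex::Array4<amrex::Real> const& a = ep_g.array(mfi);

        const auto lo = amrex::lbound(tbx);
        const auto hi = amrex::ubound(tbx);
        for (int k = lo.z; k <= hi.z; ++k) {
            for (int j = lo.y; j <= hi.y; ++j) {
                for (int i = lo.x; i <= hi.x; ++i) {
                    a(i,j,k) = 1.0;                       // placeholder update
                }
            }
        }
    }
}

int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    {
        // Illustrative 128^3 domain decomposed into 32^3 boxes.
        amrex::Box domain(amrex::IntVect(0), amrex::IntVect(127));
        amrex::BoxArray ba(domain);
        ba.maxSize(32);
        amrex::DistributionMapping dm(ba);

        amrex::MultiFab ep_g(ba, dm, 1, 0);   // 1 component, 0 ghost cells
        fill_field(ep_g);

        amrex::Print() << "min value = " << ep_g.min(0) << "\n";
    }
    amrex::Finalize();
}

In this pattern the tile size, not the box size, bounds the working set of each inner loop, which is what makes the OpenMP threading over tiles effective on cache-based CPU nodes.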