(291b) Large Scale Coupled Eulerian-Lagrangian Simulation of Fluidized Bed | AIChE

    Coupled Eulerian-Lagrangian simulation of multiphase flow has been widely used to analyze dense particulate flows. In this approach, the fluid is treated as a continuum while the solid phase is modeled using the Discrete Element Method (DEM). In DEM, particle-particle and particle-wall interactions are resolved explicitly, and time integration is carried out using Newton's second law of motion. This approach offers better accuracy than the traditional Eulerian-Eulerian two-fluid method used for modeling multiphase flows. However, the computational expense of DEM is very high, and hence previous studies have been limited to small-scale (mostly 2D) problems. In this study, a parallel algorithm for DEM was developed for the open-source code Multiphase Flow with Interphase Exchanges (MFIX). MFIX, developed at NETL, has been widely used to simulate the hydrodynamics, heat transfer, and chemical reactions occurring in bubbling and circulating fluidized beds.
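The DEM step described above can be sketched in a few lines. The following is an illustrative soft-sphere example, not MFIX's actual contact model: a linear spring-dashpot normal contact force, followed by integration of Newton's second law. The stiffness `k_n`, damping `eta`, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def dem_step(pos, vel, radii, mass, dt, k_n=1.0e4, eta=5.0, gravity=-9.81):
    """One explicit DEM time step (illustrative sketch): resolve pairwise
    normal contacts with a linear spring-dashpot model, then integrate
    Newton's second law with a semi-implicit Euler update."""
    n = len(pos)
    force = np.zeros_like(pos)
    force[:, 1] += mass * gravity                  # gravity acts in -y
    for i in range(n):
        for j in range(i + 1, n):                  # check every particle pair
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                      # particles in contact
                normal = rij / dist                # unit vector from i to j
                vrel_n = np.dot(vel[i] - vel[j], normal)
                f_i = -(k_n * overlap + eta * vrel_n) * normal
                force[i] += f_i                    # equal and opposite forces
                force[j] -= f_i
    vel = vel + force / mass[:, None] * dt         # Newton's second law
    pos = pos + vel * dt
    return pos, vel
```

A production DEM code would also track tangential (frictional) contacts and rotational degrees of freedom; this sketch shows only the normal-force skeleton that the particle time step is built around.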

    Most previous DEM parallelization efforts in the literature are based on a domain-decomposition framework for the fluid and a mirror-domain method for the DEM. In this approach, each processor holds the information of all the particles in the system but carries out computations only on a subset. The dataset is synchronized via interchange between the processors after every particle time step. The disadvantages of this method are the large memory requirement for holding all particle information on each processor and the high communication cost associated with the synchronization. An alternative approach[1] is to employ the domain-decomposition technique for the fluid and a graph-partitioning algorithm, based on contacts between particles, for the DEM. In this approach, the fluid variables required by the DEM have to be mirrored. The main advantage of this scheme is that load balancing is ensured for all configurations. However, the communication cost associated with mirroring the flow variables and with the global summation required for the particle-fluid interaction terms is high, which limits scalability for large-scale simulations at large processor counts.
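The mirror-domain pattern above can be made concrete with a short sketch. This is an illustrative serial emulation of the parallel pattern, not an MPI implementation: every "rank" keeps a full copy of the particle array, advances only its own slice, and the slices are then recombined (the role an `MPI_Allgatherv` would play in a real code). The function `advance` and all names here are hypothetical.

```python
import numpy as np

def mirror_domain_update(full_positions, nranks, advance):
    """Emulate the mirror-domain method: each rank holds ALL particles
    (per-rank memory is O(n), not O(n/nranks)), computes on its own
    subset, then the global array is rebuilt on every rank."""
    n = len(full_positions)
    rank_copies = [full_positions.copy() for _ in range(nranks)]
    chunks = np.array_split(np.arange(n), nranks)
    updated = []
    for rank, idx in enumerate(chunks):
        local = rank_copies[rank]
        local[idx] = advance(local[idx])   # compute only on this rank's subset
        updated.append(local[idx])
    # Synchronization step after every particle time step
    return np.concatenate(updated)
```

The `rank_copies` list makes the memory disadvantage visible: total storage grows with the number of ranks, and the final concatenation stands in for the per-step global synchronization whose cost the abstract identifies.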

    In the present study, a hybrid MPI/OpenMP parallelization architecture is employed. Both the fluid and the DEM are solved using the standard domain-decomposition framework, along with selective OpenMP parallelization directives for critical routines. The use of OpenMP offers flexibility in decomposing the flow based on the number of computational cells and the DEM based on particle counts. Each processor holds only the information related to its own subdomain, so the memory requirement is low. Further, the communication cost is low, as only information near the interfaces between processors has to be exchanged, and no mirroring is required for either fluid or particle data.
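The interface-only exchange described above can be sketched for a 1D field. This is a serial illustration of the ghost-cell (halo) idea, assuming one ghost cell per neighbor; in a real MPI code each slab would live on a different rank and the ghost values would arrive via point-to-point sends and receives rather than array indexing.

```python
import numpy as np

def decompose_with_halos(field, nranks):
    """Split a 1D field into per-rank slabs, each padded with one ghost
    cell per neighbor. Only these interface values need to be exchanged;
    no rank ever stores the full array (contrast with mirroring)."""
    slabs = np.array_split(field, nranks)
    padded = []
    for r, slab in enumerate(slabs):
        left = slabs[r - 1][-1] if r > 0 else np.nan           # ghost from left neighbor
        right = slabs[r + 1][0] if r < nranks - 1 else np.nan  # ghost from right neighbor
        padded.append(np.concatenate(([left], slab, [right])))
    return padded
```

Per-rank storage here is O(n/nranks) plus a constant-width halo, and the exchanged data scales with the interface size rather than with the total particle or cell count, which is the communication saving the abstract claims.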

    For validation, the pseudo-2D fluidized bed experiment of Muller[2] was simulated at two superficial velocities (0.6 and 0.9 m/s). The average voidage and velocity profiles at different axial locations were compared against the experimental results. The scalability of the code was studied using a fluidized bed with 4.5 million particles, with the number of processors ranging from 16 to 128. Both pure-MPI and hybrid MPI/OpenMP simulations were carried out, along with TAU profiling. The study shows that the scaling of the pure-MPI simulation degrades between 32 and 64 processors due to the increased MPI communication required by the flow solver, while the hybrid simulation shows strong scalability up to 128 processors owing to its reduced communication overhead.
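The strong-scaling behavior reported above is conventionally quantified by speedup and parallel efficiency relative to the smallest processor count. The following helper computes both; the inputs in the usage example are illustrative numbers, not the measured MFIX timings.

```python
def strong_scaling_efficiency(procs, times):
    """Strong-scaling metrics for a fixed problem size: speedup S(p) = t0/t(p)
    and parallel efficiency E(p) = S(p) * p0 / p, both relative to the
    smallest processor count p0 (ideal scaling gives E = 1)."""
    p0, t0 = procs[0], times[0]
    speedup = [t0 / t for t in times]
    efficiency = [s * p0 / p for s, p in zip(speedup, procs)]
    return speedup, efficiency
```

For example, hypothetical wall-clock times of 100, 50, 25, and 12.5 time units on 16, 32, 64, and 128 processors would give speedups of 1, 2, 4, and 8 and an efficiency of 1.0 throughout; the pure-MPI degradation described above would instead appear as efficiency falling below 1 beyond 32 processors.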