GPU MCMC Developments: CBMC Nonpolar Molecules, Verlet Lists, and Architectural Optimizations

Authors 

Ibrahem, K., Wayne State University
Russo, V., Wayne State University
Potoff, J., Wayne State University


A Monte Carlo simulation engine capable of simulating linear chain molecules is presented.  The code is written in a mixture of C/C++ and NVIDIA's CUDA GPU-programming API and includes a GPU-accelerated version of the configurational-bias algorithm.  Its performance is demonstrated through the calculation of vapor-liquid coexistence curves and chemical potentials for linear alkanes from methane to decane as a function of system size, for systems of 128 to 128,000 interaction sites.  The efficiency of the proposed GPU-based algorithms is assessed through comparison to state-of-the-art, special-purpose serial CPU codes.
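
For orientation, a configurational-bias move grows a chain one bead at a time, scoring k trial positions and selecting among them with probability proportional to their Boltzmann weights.  The sketch below shows how the trial-energy evaluation maps naturally onto the GPU (one thread per trial position); all kernel names, signatures, and parameters are illustrative assumptions, not taken from the code described in this abstract.

```cuda
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

struct Site { float x, y, z; };

// One thread per CBMC trial position: accumulate the truncated
// Lennard-Jones energy of that trial bead against all existing sites.
// (Periodic boundary handling is omitted for brevity.)
__global__ void trialEnergies(const Site* sites, int nSites,
                              const Site* trials, int nTrials,
                              float eps, float sig2, float rcut2,
                              float* energy)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nTrials) return;
    Site p = trials[t];
    float u = 0.0f;
    for (int i = 0; i < nSites; ++i) {
        float dx = p.x - sites[i].x;
        float dy = p.y - sites[i].y;
        float dz = p.z - sites[i].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        if (r2 < rcut2) {
            float s6 = sig2 / r2;   // (sigma/r)^2
            s6 = s6 * s6 * s6;      // (sigma/r)^6
            u += 4.0f * eps * (s6 * s6 - s6);
        }
    }
    energy[t] = u;
}

// Host side: standard Rosenbluth selection among k trials at inverse
// temperature beta; also returns the bead weight w = sum_j exp(-beta*u_j),
// which enters the move's acceptance criterion.
int selectTrial(const float* u, int k, float beta, float* w)
{
    *w = 0.0f;
    for (int j = 0; j < k; ++j) *w += expf(-beta * u[j]);
    float r = *w * (float)rand() / (float)RAND_MAX;  // roulette wheel
    float acc = 0.0f;
    for (int j = 0; j < k; ++j) {
        acc += expf(-beta * u[j]);
        if (r <= acc) return j;
    }
    return k - 1;  // guard against floating-point round-off
}
```

A full move repeats this for each bead, multiplies the per-bead weights into a chain Rosenbluth weight W, and accepts the regrowth with probability min(1, W_new/W_old).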

This work details refinements to Monte Carlo simulations performed on the GPU [1] that enable the rapid simulation of 100,000-atom systems on typical desktop workstations.  Neighbor lists [2] are adapted to a form suited to the GPU's memory hierarchy [3], which consists of small, high-speed shared memory and registers plus large, low-speed global memory.  By keeping track of nearby molecules, the number of interactions to be considered is reduced significantly, which cuts the looping required of the GPU's limited supply of CUDA cores.  Further improvements include moving logic currently on the CPU side, such as pseudo-random number generation (PRNG), onto the GPU device, and optimizing the parallel displacement, volume swap, and particle insertion moves, in an effort to minimize the computationally expensive transfer of information between the device and the CPU over the PCI Express bus.  Additional speed gains are realized by using threads that fall idle during the tree summation of energies to calculate part of the pair interactions implied by the next random draws in the PRNG sequence; each kernel call for a given move thus performs the necessary arithmetic for the current move and part of the next.  The benefits of these improvements are highlighted by simulations of large (N > 100,000 particles) systems in the canonical and Gibbs ensembles.  Results of very-large-scale simulations near the critical point [4] of a tail-corrected Lennard-Jones fluid are also presented.
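
As a point of reference for the neighbor-list adaptation, below is a minimal sketch of a GPU-built Verlet list with a fixed per-site capacity; the names and the capacity are assumptions for illustration.  Each thread records the sites within rcut plus a skin distance, and the list remains valid until some site has moved farther than half the skin.

```cuda
#include <cuda_runtime.h>

#define MAX_NEIGHBORS 128   // assumed fixed capacity per site

// One thread per interaction site: record the indices of all other
// sites within (rcut + skin), so later energy kernels loop over this
// short list instead of all n sites.  (Minimum-image wrapping for
// periodic boundaries is omitted for brevity.)
__global__ void buildVerletList(const float4* pos, int n,
                                float listCut2,   // (rcut + skin)^2
                                int* nbrCount, int* nbrIdx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    int count = 0;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        if (dx * dx + dy * dy + dz * dz < listCut2 && count < MAX_NEIGHBORS)
            nbrIdx[i * MAX_NEIGHBORS + count++] = j;
    }
    nbrCount[i] = count;
}
```

In practice one would presumably store the list transposed (the neighbor slot varying slowest) so that consecutive threads read consecutive global-memory addresses; layout choices of this kind are what adapting the list to the GPU's memory amounts to.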
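
The tree summation mentioned above is the standard shared-memory parallel reduction; a minimal sketch follows, assuming a power-of-two block size, with illustrative names.  The threads that drop out of the loop after each halving are the idle threads the abstract proposes to reuse for pre-computing the next move's pair terms.

```cuda
// Sum per-thread partial pair energies within a block via a
// log2(blockDim.x)-step tree in shared memory.
__global__ void reduceEnergy(const float* partial, float* blockSum, int n)
{
    extern __shared__ float s[];          // one float per thread
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    s[tid] = (i < n) ? partial[i] : 0.0f;
    __syncthreads();

    // Each pass halves the number of active threads; after the first
    // pass, half the block is idle and could do speculative work.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) s[tid] += s[tid + stride];
        __syncthreads();
    }
    if (tid == 0) blockSum[blockIdx.x] = s[0];
}
```

Launched as reduceEnergy<<<blocks, threads, threads * sizeof(float)>>>(d_partial, d_blockSum, n), with a second single-block pass (or a short host-side loop over blockSum) producing the final total.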

1. Mick, J.R., Potoff, J.J., Hailat, E., Russo, V., and Schwiebert, L.  GPU Accelerated Monte Carlo Simulations in the Gibbs and Canonical Ensembles.  AIChE Annual Conference, Minneapolis, MN, 2011.

2. Verlet, L.  Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules.  Physical Review, 1967, 159(1): 98-103.

3. Wang, P.  Short Range Molecular Dynamics on GPU.  GPU Tech Conf., 2006.

4. Potoff, J.J. and Panagiotopoulos, A.Z.  Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture.  Journal of Chemical Physics, 1998, 109(24): 10914-10920.
