Computer Physics Communications Program Library: Programs in Physics & Physical Chemistry

Distribution file: adxp_v2_0.tar.gz (24480 Kbytes)

Manuscript Title: mm_par2.0: An object-oriented molecular dynamics simulation program parallelized using a hierarchical scheme with MPI and OPENMP

Authors: Kwang Jin Oh, Ji Hoon Kang, Hun Joo Myung

Program title: mm_par2.0

Catalogue identifier: ADXP_v2_0

Distribution format: tar.gz

Journal reference: Comput. Phys. Commun. 183 (2012) 440

Programming language: C++.

Computer: Any system running Linux or Unix.

Operating system: Linux.

Has the code been vectorised or parallelized?: The code has been parallelized using both MPI and OpenMP.

Keywords: Molecular dynamics, Langevin dynamics, dissipative particle dynamics, object-oriented programming, parallel computing.

PACS: 02.70.Ns.

Classification: 7.7.

External routines: Wrappers are provided for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] and Mersenne Twister [5] random number generators; and a space-filling curve routine [6].

Does the new version supersede the previous version?: Yes

Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales.

Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles; Langevin dynamics simulation; dissipative particle dynamics simulation.

Reasons for new version: First, the code has been rewritten in an object-oriented style, which follows the open/closed principle (open for extension, closed for modification) and eases maintenance. Second, version 1.0 relied on atom decomposition and domain decomposition schemes [7] for parallelization. Atom decomposition scales poorly, and although domain decomposition scales much better, it cannot exploit the very large core counts of recent petascale computers because each domain must be larger than the potential cutoff distance. To go beyond this limitation, the new version adopts a hierarchical parallelization scheme implemented with MPI [8] and OpenMP [9].

Summary of revisions:
- Object-oriented programming has been used.
- A hierarchical parallelization scheme has been adopted.
- The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [10].
Acknowledgments: KJO thanks Mr. Seung Min Lee for useful discussions on programming and debugging.

Running time: Running time depends on the system size and the methods used. Timing statistics for some example tests are given in the document mm_par.pdf, which is included in the distribution file.

References: | ||

[1] http://www.fftw.org.

[2] http://software.intel.com/en-us/articles/intel-math-kernel-library-documentation/.

[3] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, New York, 2007.

[4] http://sprng.cs.fsu.edu/.

[5] http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html.

[6] http://www.tiac.net/~sw/2008/10/Hilbert/moore/index.html.

[7] K.J. Oh, M.L. Klein, Comput. Phys. Commun. 174 (2006) 263.

[8] http://www.mcs.anl.gov/research/projects/mpi/.

[9] http://openmp.org/wp/.

[10] K.J. Oh, Y. Deng, Comput. Phys. Commun. 177 (2007) 426.
