aeri_v1_0.tar.gz (2430 Kbytes)
Manuscript Title: MDMC2: A molecular dynamics code for investigating the fragmentation dynamics of multiply charged clusters
Authors: David A. Bonhommeau, Marie-Pierre Gaigeot
Program title: MDMC2
Catalogue identifier: AERI_v1_0
Distribution format: tar.gz
Journal reference: Comput. Phys. Commun. 185 (2014) 684
Programming language: Fortran 90 with MPI extensions for parallelization.
Computer: x86 and IBM platforms.
Operating system:
  1. CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2.
  2. CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2.
  3. Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI.
  4. IBM Power 6 4.7 GHz, xlf + PESS (IBM parallel library).
Has the code been vectorized or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: up to 9999.
RAM (per CPU core): 5 - 10 MB.
Keywords: Molecular dynamics simulations, Mesoscopic coarse-grained models, Charged clusters and droplets, Electrospray ionisation, Evaporation, Fission.
PACS: 02.70.Ns, 36.40.Qv, 36.40.Wa.
Classification: 3, 16.13, 23.

Nature of problem:
We provide a general parallel code for performing molecular dynamics simulations of multiply charged clusters, and a serial conjugate gradient code for locally minimising configurations obtained during the dynamics. Both programs are compatible with the input and output files of the MCMC2 code [1].

Solution method:
Parallel molecular dynamics simulations are performed by integrating the classical equations of motion; all derivatives are computed analytically, whatever the details of the potential-energy surface. The parallelization distributes independent trajectories over different CPU cores, which makes the parallel efficiency close to optimal: up to 9999 trajectories can be run at the same time. A conjugate gradient program is also provided to locate the local minima of the energy landscape explored during MD or MC simulations performed with MDMC2 or MCMC2, respectively.
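A minimal sketch of this trajectory-level parallelization in Fortran 90 with MPI is given below. It is ours, not an excerpt from the MDMC2 sources: each MPI rank integrates one independent trajectory, and run_trajectory is a hypothetical placeholder for the integrator.

  ! Sketch only: one independent trajectory per MPI rank, as described above.
  program traj_parallel
    use mpi
    implicit none
    integer :: ierr, rank, nprocs

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    if (rank == 0) print '(a,i0,a)', 'Running ', nprocs, ' independent trajectories'

    ! Each core integrates its own trajectory; no communication is needed
    ! during the run, hence the near-optimal parallel efficiency.
    call run_trajectory(rank + 1, 1000)

    call MPI_Finalize(ierr)
  contains
    subroutine run_trajectory(jtraj, nstp)
      integer, intent(in) :: jtraj, nstp   ! trajectory index, number of MD steps
      integer :: istep
      do istep = 1, nstp
        ! one integration step with analytic forces would go here
      end do
    end subroutine run_trajectory
  end program traj_parallel

Since the trajectories never communicate, launching with, e.g., mpiexec -n 12 assigns one trajectory per core, as in the 12-core timing example quoted in the Running time section below.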

Restrictions:
The current version of the code uses Lennard-Jones interactions as the main cohesive interaction between spherical particles, together with electrostatic interactions (charge-charge and polarisation terms). The simulations are performed in the NVE ensemble. There is no confining container, which allows the user to study the fragmentation of the clusters (if any fragmentation occurs), our primary goal. Unlike MCMC2, which includes a large choice of histograms for interpreting simulations (such as radial and angular histograms), MDMC2 does not include these features.
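As an illustration of the interaction forms named above, the function below combines a Lennard-Jones term with a charge-charge Coulomb term for one pair of particles. It is a sketch in reduced units; the polarisation contribution is omitted, and the parameter names and unit conventions are ours, not those of the MDMC2 input.

  ! Pair energy: Lennard-Jones cohesion plus charge-charge electrostatics.
  pure function pair_energy(r, qi, qj, eps, sigma) result(e)
    implicit none
    double precision, intent(in) :: r          ! interparticle distance
    double precision, intent(in) :: qi, qj     ! particle charges
    double precision, intent(in) :: eps, sigma ! LJ well depth and diameter
    double precision :: e, sr6
    sr6 = (sigma / r)**6
    ! 4*eps*[(sigma/r)^12 - (sigma/r)^6] + qi*qj/r (Gaussian-like units assumed)
    e = 4.0d0 * eps * (sr6 * sr6 - sr6) + qi * qj / r
  end function pair_energy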

Unusual features:
The input and output configuration files are fully compatible with the files generated by MCMC2, which makes MDMC2 (+CGMC2) a useful companion to MCMC2 for modelling the structural, thermodynamic, and dynamical properties of multiply charged clusters. All the derivatives, even those involving polarisation, are computed analytically in order to prevent inaccuracies due to numerical differentiation. MDMC2 is provided with one random number generator from the LAPACK library.
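For reference, one LAPACK random number routine is dlarnv; the snippet below shows its calling convention. This is only a sketch with arbitrary seed values, and the specific generator setup bundled with MDMC2 may differ.

  program rng_demo
    implicit none
    integer, parameter :: n = 5
    integer :: iseed(4)
    double precision :: x(n)
    external dlarnv
    iseed = (/ 12, 345, 678, 901 /)   ! entries in [0,4095]; iseed(4) must be odd
    call dlarnv(1, iseed, n, x)       ! idist = 1: uniform deviates on (0,1)
    print '(5f10.6)', x
  end program rng_demo

Link against LAPACK to build it (e.g., gfortran rng_demo.f90 -llapack).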

Running time:
The running time depends on the number of molecular dynamics steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off). For instance, a 12-trajectory MD simulation composed of 2 × 10^6 time steps (δt = 10^4) performed for A_100^(+100) clusters, without inclusion of polarisation, and running on 12 Intel Xeon E5520 2.27 GHz CPU cores lasts 16 minutes. The same kind of MD simulation performed on the same type of processors for A_309^(+309) clusters lasts a little under 3 hours. The physical memory used by the code also increases, from about 44 MB to 74 MB for the whole job.

References:
[1] D. A. Bonhommeau, M.-P. Gaigeot, Comput. Phys. Commun. 184 (2013) 873-884.