LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator.

LAMMPS has potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the mesoscale or continuum levels.

For more information see the LAMMPS homepage.

Usage

First you need to decide on the version of the software you want to use. Use 

module avail lammps

to check which versions are available. We normally recommend using the latest version available. For example, to load version 11Aug17 of LAMMPS use

module load lammps/11Aug17

For more details on using modules see our modules help guide.

LAMMPS runs under the PBS batch system using a job script similar to the following file, jobscript:

#!/bin/bash
#PBS -P your_project_code
#PBS -l walltime=10:00:00
#PBS -l ncpus=4
#PBS -l mem=400MB
#PBS -l jobfs=1GB
#PBS -l software=lammps
#PBS -l wd

module load lammps/11Aug17

mpirun lmp_openmpi -i input_filename > output

To submit the job to PBS, run the following command:

qsub jobscript

If the number of CPUs you are requesting is less than 16, you need to add the --bind-to-none option to mpirun, like this:

mpirun --bind-to-none lmp_openmpi -i input_filename > output
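
Putting this together, a complete job script for a small run might look like the sketch below. It is the same script as above with the extra mpirun option; the input file name is a placeholder.

#!/bin/bash
#PBS -P your_project_code
#PBS -l walltime=10:00:00
#PBS -l ncpus=4
#PBS -l mem=400MB
#PBS -l jobfs=1GB
#PBS -l software=lammps
#PBS -l wd

module load lammps/11Aug17

# Fewer than 16 CPUs requested, so disable core binding
mpirun --bind-to-none lmp_openmpi -i input_filename > output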

Using LAMMPS OMP

Starting from version 16Feb16, the LAMMPS build includes the USER-OMP package, which enables multi-threading. Our tests show that this can give around a 10% speed-up over pure MPI calculations (possibly more, depending on the system and method). A typical PBS script for this looks like the following:

#!/bin/bash
#PBS -P your_project_code
#PBS -l walltime=10:00:00
#PBS -l ncpus=64
#PBS -l mem=64GB
#PBS -l jobfs=1GB
#PBS -l software=lammps
#PBS -l wd

module unload intel-fc intel-cc openmpi
module load openmpi/1.10.2
module load lammps/16Feb16

n=8

mpirun -map-by ppr:$((8/$n)):socket:PE=$n lmp_openmpi -sf omp -pk omp $n -i input_filename > output

This runs $n OpenMP threads per MPI process (8 with the setting above). Our small tests indicate that n=8 or n=4 gives the best performance. However, we encourage you to run your own tests.
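
To illustrate the arithmetic in the -map-by option, here is a sketch with n=4, assuming (as the constant 8 in the expression implies) 8 cores per socket on the compute nodes; the input file name is again a placeholder. The expression $((8/$n)) evaluates to 2, so mpirun places 2 MPI processes per socket, binds 4 cores to each (PE=4), and each process runs 4 OpenMP threads.

n=4

# Equivalent to: mpirun -map-by ppr:2:socket:PE=4 lmp_openmpi -sf omp -pk omp 4 -i input_filename > output
mpirun -map-by ppr:$((8/$n)):socket:PE=$n lmp_openmpi -sf omp -pk omp $n -i input_filename > output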

Using the GPU queue

For lammps/7Dec15-gpu, the executable is lmp_gpu, built in double precision:

module load lammps/7Dec15-gpu
ngpus=$(( PBS_NGPUS<8?PBS_NGPUS:8 ))
mpirun -np $PBS_NCPUS lmp_gpu -sf gpu -pk gpu $ngpus -i input_filename > output
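
A minimal sketch of a complete GPU job script is given below. The gpu queue name and the ngpus resource request are assumptions inferred from the PBS_NGPUS variable used above, and the resource amounts are placeholders only; check the site documentation for the exact names and limits.

#!/bin/bash
#PBS -P your_project_code
#PBS -q gpu
#PBS -l walltime=10:00:00
#PBS -l ncpus=12
#PBS -l ngpus=2
#PBS -l mem=32GB
#PBS -l jobfs=1GB
#PBS -l software=lammps
#PBS -l wd

module load lammps/7Dec15-gpu

# Use at most 8 GPUs per run, as in the snippet above
ngpus=$(( PBS_NGPUS<8?PBS_NGPUS:8 ))
mpirun -np $PBS_NCPUS lmp_gpu -sf gpu -pk gpu $ngpus -i input_filename > output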