LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator.
LAMMPS has potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the mesoscale or continuum levels.
For more information see the LAMMPS homepage.
First you need to decide on the version of the software you want to use. Use
module avail lammps
to check what versions are available. We normally recommend using the latest version available. For example, to load version 3Mar20 of LAMMPS, use
module load lammps/3Mar20
For more details on using modules see our modules help guide.
LAMMPS runs under the PBS batch system using a job script similar to the following (saved here as jobscript):
#!/bin/bash
#PBS -P your_project_code
#PBS -l walltime=10:00:00
#PBS -l ncpus=4
#PBS -l mem=4GB
#PBS -l jobfs=1GB
#PBS -l software=lammps
#PBS -l wd

module load lammps/3Mar20
mpirun lmp_openmpi -i input_filename > output
To submit the job to PBS run the following command
qsub jobscript
Our LAMMPS build includes the USER-OMP package, which enables multi-threading. Our tests show that this can give around a 10% speed-up over pure MPI calculations (possibly more, depending on the system and method). A typical PBS script for this looks like the following:
#!/bin/bash
#PBS -P your_project_code
#PBS -l walltime=10:00:00
#PBS -l ncpus=96
#PBS -l mem=100GB
#PBS -l jobfs=1GB
#PBS -l software=lammps
#PBS -l wd

module load lammps/3Mar20
n=12
mpirun -map-by ppr:$((12/$n)):numa:PE=$n lmp_openmpi -sf omp -pk omp $n -i input_filename > output
With n=12, this runs one MPI process per NUMA node, each with 12 OpenMP threads. Our small tests indicate that n=8 or n=4 gives the best performance. However, we encourage you to run your own tests.
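The -map-by expression in the script divides the cores of each NUMA node between MPI ranks and OpenMP threads: ppr (processes per resource) is 12/n and each rank is bound to n processing elements. As a rough sketch, assuming 12 cores per NUMA node (the constant used in the script above), the layout for a few thread counts is:

```shell
#!/bin/bash
# Sketch of the -map-by arithmetic from the script above, assuming
# 12 cores per NUMA node. For each OpenMP thread count n, print how
# many MPI ranks land on each NUMA node.
# Note: bash integer division truncates, so a value of n that does
# not divide 12 evenly (e.g. n=8 gives 12/8 = 1) leaves cores unbound.
for n in 12 6 4 2; do
  echo "n=$n: $((12 / n)) MPI rank(s) per NUMA node, $n OpenMP thread(s) each"
done
```

This is only an illustration of the mapping arithmetic; the actual placement is done by mpirun at job launch.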
The GPU build, lammps/7Dec15-gpu, provides the lmp_gpu executable (double precision):
module load lammps/7Dec15-gpu
ngpus=$(( PBS_NGPUS<8?PBS_NGPUS:8 ))
mpirun -np $PBS_NCPUS lmp_gpu -sf gpu -pk gpu $ngpus -i input_filename > output