
Overview

DFTB+ is a fast and efficient stand-alone implementation of the Density Functional based Tight Binding (DFTB) method.

It was developed in Paderborn in the group of Professor Frauenheim and is the successor of the old DFTB and Dylax codes.

For more information, see the official DFTB+ site.

Users of the program are advised to sign up to the DFTB-Plus-User mailing list. You may also find answers to questions from other DFTB+ users in the mailing list archive.

How to use


To use this version of the DFTB+ package, load the appropriate dftbplus modulefile with the command

 $ module load dftbplus/20.1

For more details on using modules see our software applications guide.
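If you are not sure which versions are installed, you can list the available dftbplus modulefiles first. This is a small sketch using the standard module command:

 $ module avail dftbplus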

The three main binary executables of the package are dftb+, waveplot and modes. These executables are built with support for OMP parallelism. Additionally, MPI-OMP versions of the binaries are provided (dftb+.mpi, waveplot.mpi and modes.mpi). These binaries can be used for either pure MPI or hybrid MPI-OMP jobs. The hybrid MPI-OMP regime is preferred for large multi-node calculations on Gadi.
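As an illustration, a direct hybrid MPI-OMP launch inside a PBS job could look like the sketch below. The split of 8 MPI processes with 6 OMP threads each (48 CPUs in total) is an assumed example and should be matched to your own resource request:

export OMP_NUM_THREADS=6
mpirun -np 8 dftb+.mpi > output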

To make the binaries easier to use for beginners, we provide an auxiliary script, run.sh, which decides which version of the binary (OMP or MPI-OMP) to run and sets up all OMP and MPI environment settings based on the number of CPUs requested from PBS.

However, some parallel options must be provided via the input file (see the sketch below); it is left to the user to supply these options when necessary. If the MPI settings chosen for the job (printed in the first lines of the job log) are not what you want, you can make your own settings by invoking the mpirun command directly, i.e. without using the run.sh script.
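As a hedged illustration, hybrid MPI-OMP runs are normally told to use OMP threads through a Parallel block in dftb_in.hsd, along the following lines. The group count shown is only an assumed example; check the manual of this DFTB+ version for the exact keywords it supports:

Parallel {
  Groups = 2            # assumed example: split the MPI processes into two independent groups
  UseOmpThreads = Yes   # allow the hybrid MPI-OMP binaries to spawn OMP threads
}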

The script input arguments are:

  • %1 is the binary name; the default is dftb+.
  • %2 is the number of MPI processes; the default is the number of requested nodes.
  • %3 is the number of OMP threads per MPI process; the default is the lesser of the number of cores per node and the number of CPUs requested through PBS.

In most cases, the first argument (binary name) is enough. The script will set the number of MPI and OMP threads for you based on available PBS resources.
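For example, both of the following invocations are possible inside a PBS job; the explicit process and thread counts in the second line are illustrative only:

run.sh dftb+ > output            # let the script choose the MPI/OMP split
run.sh dftb+.mpi 4 12 > output   # assumed example: 4 MPI processes, 12 OMP threads each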

Here is an example of a parallel DFTB+ job run under PBS. The example file dftb+.pbs, using project a99, asks for 16 CPUs, 1 hour of walltime, 16 GiB of memory and 1 GiB of fast jobfs disk space.

#!/bin/bash
 
#PBS -P a99
#PBS -l ncpus=16
#PBS -l mem=16GB
#PBS -l jobfs=1GB
#PBS -l walltime=01:00:00
#PBS -l wd
 
# Load module, always specify version number.
module load dftbplus/20.1
 
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`
 
run.sh dftb+ > output

The input file dftb_in.hsd must be located in the directory from which the job is submitted. To submit the job to the queuing system:

$ qsub dftb+.pbs
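The job status can then be checked with the usual PBS command, for example:

$ qstat -u $USER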

The input file must provide the path to the Slater-Koster parameter sets. A large number of Slater-Koster parameter sets is available in the directory /apps/dftbplus/slako/
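As an illustration, the path is usually supplied through a SlaterKosterFiles block inside the Hamiltonian section of dftb_in.hsd. The fragment below is a hedged sketch only: the mio-1-1 sub-directory is an assumed example of what may sit under /apps/dftbplus/slako/, and the rest of the Hamiltonian block (MaxAngularMomentum, etc.) is omitted:

Hamiltonian = DFTB {
  SlaterKosterFiles = Type2FileNames {
    Prefix = "/apps/dftbplus/slako/mio-1-1/"   # assumed sub-directory; point this at the set you need
    Separator = "-"
    Suffix = ".skf"
  }
}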

A sample input, PBS submission script, and output files for the DFTB+ and Waveplot programs are available in the directory /apps/dftbplus/20.1/first-calc.

Read the read_me file inside that directory for the protocol for running DFTB+ and Waveplot on NCI machines.
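A simple way to try the samples is to copy the directory into your own space and submit the included script. The destination path and the assumption that the sample PBS script is called dftb+.pbs are illustrative only:

$ cp -r /apps/dftbplus/20.1/first-calc ~/first-calc
$ cd ~/first-calc
$ qsub dftb+.pbs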

A number of Python utilities for processing DFTB+ results is also provided; see the contents of /apps/dftbplus/20.1/bin. The Python utilities require the python/3.7.4 module to be loaded first.
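For example, assuming the standard dptools utilities such as dp_dos are among those installed, a typical post-processing session might look like the sketch below; the file names are illustrative:

$ module load python/3.7.4
$ ls /apps/dftbplus/20.1/bin
$ dp_dos band.out dos_total.dat   # e.g. convert band.out into a total density of states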

The user's manual, manual.pdf, for DFTB+ and the included utility programs can be found in the directory $DFTBPLUS_ROOT. A large set of documentation, including the manual, recipes and tutorials, is available online on the developers' website.


Authors: Ivan Rostov, Mohsin Ali