
Overview

The ABAQUS suite of engineering analysis software packages is used to simulate the physical response of structures and solid bodies to load, temperature, contact, impact, and other environmental conditions.

ABAQUS is distributed by Simulia.

How to use 


To access the ABAQUS software package, you must first confirm that you agree to the licence conditions detailed below, and then request membership of the abaqus software group on my.nci.org.au.
After you have been added to this group, you will be able to use the ABAQUS package on Gadi after loading the abaqus module.

$ module load abaqus/2020
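
If you are not sure which ABAQUS versions are currently installed, you can first list the available modules (standard Environment Modules usage; the exact list depends on what is installed at the time):

$ module avail abaqus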

ABAQUS simulations should always be run in PBS jobs. The following example script creates the environment file required for an ABAQUS simulation from parameters supplied to PBS, and can therefore be reused between jobs.

#!/bin/bash
 
#PBS -l ncpus=12
#PBS -l walltime=10:00:00
#PBS -l mem=48GB
#PBS -l jobfs=10GB
#PBS -l software=abaqus
 
# Load modules, always specify version number.
module load abaqus/2020
module load intel-mpi/2019.8.254
 
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`
 
# Copy input file from submission directory to jobfs.
cp $PBS_O_WORKDIR/myinput.inp $PBS_JOBFS
  
# Change to the jobfs directory.
cd $PBS_JOBFS
 
# Construct Abaqus environment file.
cat << EOF > abaqus_v6.env
mp_rsh_command="/opt/pbs/default/bin/pbs_tmrsh -n -l %U %H %C"
mp_mpi_implementation = IMPI
mp_mpirun_path = {IMPI: "$INTEL_MPI_ROOT/intel64/bin/mpiexec.hydra"}
memory = "$(bc<<<"$PBS_VMEM*90/100") b"
cpus = $PBS_NCPUS
EOF
  
# Run Abaqus.
/opt/nci/bin/pid-ns-wrapper.x -w -- abaqus analysis job=jobname input=myinput scratch=$PBS_JOBFS
 
# Make "results" directory in submission directory.
mkdir -p $PBS_O_WORKDIR/results.$PBS_JOBID
 
# Copy everything back from jobfs to results directory.
cp * $PBS_O_WORKDIR/results.$PBS_JOBID
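
As a usage sketch, if the script above is saved as, say, abaqus_single_node.sh (a file name chosen here for illustration), it can be submitted and monitored with the usual PBS commands; replace ab12 with your own project code:

$ qsub -P ab12 abaqus_single_node.sh
$ qstat -u $USER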

It is also possible to run ABAQUS on multiple nodes. You will need to add the line mp_file_system=(LOCAL,LOCAL) to the abaqus_v6.env file. An example PBS script is below:

#!/bin/bash
 
#PBS -l ncpus=96
#PBS -l walltime=10:00:00
#PBS -l mem=390GB
#PBS -l jobfs=700GB
#PBS -l software=abaqus
 
# Load modules, always specify version number.
module load abaqus/2021
module load intel-mpi/2019.9.304
 
# Copy input file from submission directory to jobfs.
cp $PBS_O_WORKDIR/myinput.inp $PBS_JOBFS
  
# Change to the jobfs directory.
cd $PBS_JOBFS
 
# Construct Abaqus environment file.
cat << EOF > abaqus_v6.env
mp_rsh_command="/opt/pbs/default/bin/pbs_tmrsh -n -l %U %H %C"
mp_mpi_implementation = IMPI
mp_mpirun_path = {IMPI: "$INTEL_MPI_ROOT/intel64/bin/mpiexec.hydra"}
mp_file_system=(LOCAL,LOCAL)
memory = "$(bc<<<"$PBS_VMEM*90/100") b"
cpus = $PBS_NCPUS
EOF
  
# Run Abaqus.
/opt/nci/bin/pid-ns-wrapper.x -w -- abaqus analysis job=jobname input=myinput scratch=$PBS_JOBFS
 
# Make "results" directory in submission directory.
mkdir -p $PBS_O_WORKDIR/results.$PBS_JOBID
 
# Copy everything back from jobfs to results directory.
cp * $PBS_O_WORKDIR/results.$PBS_JOBID

Note that ABAQUS places output in the jobfs directory of the first node and uses the jobfs disks of the other nodes for temporary files only. This is why the above script only copies data back from the jobfs of the first node.

Keep in mind that giving ABAQUS more CPUs does not necessarily mean that it will run faster. For example, one test system completed in 37 minutes on one node (48 CPUs) but needed 2 hours 8 minutes on two nodes (96 CPUs).

Always check ABAQUS performance on a single node!
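
To make this comparison for your own model, one simple check (a sketch, assuming an Abaqus/Standard analysis) is to look at the job time summary that ABAQUS writes at the end of the .dat file; here results.<jobid> stands for the results directory created by the scripts above, and the exact wording of the summary may vary between versions:

$ grep -A 4 "JOB TIME SUMMARY" results.<jobid>/jobname.dat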

Note the use of the pid-ns-wrapper.x application to launch ABAQUS. This application is a workaround for a bug in the 2020 version of ABAQUS and may not be required for future versions.

We suggest that you run each single-node ABAQUS job in the $PBS_JOBFS directory as indicated above. The local SSDs on each compute node outperform the /scratch and /g/data file systems for small, non-sequential read/write operations.

If you wish to run the double-precision version of ABAQUS, add the keyword double to the abaqus analysis command line. You may also need to add output_precision=full to the command line.
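
For example, the run line in the scripts above would then become something like the following (a sketch only; jobname and myinput are the same placeholders as above):

/opt/nci/bin/pid-ns-wrapper.x -w -- abaqus analysis job=jobname input=myinput double output_precision=full scratch=$PBS_JOBFS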

Both Abaqus/Standard and Abaqus/Explicit can be run in parallel on more than 1 CPU core. The parallelism is based on MPI for some parts of the Abaqus analysis and is threaded for other parts. Running on more cores is not a guarantee that your simulation will run faster. We advise that you run a shorter representative simulation (e.g. same mesh, but fewer time steps) at several different core counts, and record the timing and SU usage. Then run your full-length simulations on the number of cores that minimised either walltime or SU usage, depending on whether you are constrained by time or compute allocation.
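
As a minimal sketch of such a scaling test, assuming the single-node script above has been saved as abaqus_test.sh (a name chosen here for illustration, with ab12 again a placeholder project code), the same shortened test case can be submitted at several core counts; resource requests given on the qsub command line override the #PBS directives in the script:

# Submit the shortened test case at 12, 24 and 48 cores,
# scaling the memory request with the core count
# (4 GB per core, as in the examples above).
for n in 12 24 48; do
    qsub -P ab12 -l ncpus=$n -l mem=$((4*n))GB abaqus_test.sh
done

When the test jobs finish, compare the wallclock times (for example with the grep shown earlier) and the resource usage summary at the end of each job's PBS output file before choosing the core count for your production runs.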

Licence requirements 


Simulia Australia have imposed several conditions on the Abaqus licence at the NCI National Facility. To be granted access to Abaqus you must confirm that you will use Abaqus only for academic research. You must also confirm that you do not receive funding from any commercial source. This includes CSIRO and CRCs. You should also be aware that we are required to provide Simulia with a list of Abaqus users every month. Once you agree to these conditions you can be added to the Abaqus group.

Staff members of RMIT who are part of the abaqus_rmit software group can access their own licence files to run ABAQUS. To do this, members of abaqus_rmit will need to modify the beginning of the above example submission script as shown below.

#!/bin/bash
 
#PBS -l ncpus=12
#PBS -l walltime=10:00:00
#PBS -l mem=48GB
#PBS -l jobfs=100GB
#PBS -l software=abaqus_rmit
#PBS -l wd
 
# Load modules, always specify version number.
module load abaqus/2020
module load intel-mpi/2019.8.254
module load abaqus_licence/rmit
...

Additional Notes

GUI access
Most users will prefer to run the GUI (the Viewer or CAE) on their own desktop machine using the abaqus cae command. However, if you need to use the Viewer over the network from our systems, you will need a client that supports OpenGL calls. We recommend using a VNC viewer; SSVNC can be used to access the cluster via VNC.
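
Inside a VNC desktop session on the cluster, the GUI can then be started in the usual way, for example (a sketch only; the -mesa option switches CAE to software rendering, which may help if hardware-accelerated OpenGL is not available in the VNC session):

$ module load abaqus/2020
$ abaqus cae -mesa
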
Authors: Yue Sun, Dale Roberts, Mohsin Ali, Andrey Bliznyuk