
Overview

IPM is a low-overhead, widely used MPI profiler; many sites enable it by default for every MPI job.

The level of detail is selectable at runtime and presented through a variety of text and web reports.

More information: http://ipm-hpc.sourceforge.net

How to use 


You can check the versions installed on Gadi with a module query:

$ module avail ipm
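
The output lists the installed versions. It will look something like the following (the module path and version shown here are illustrative; run the command to see the current list):

------------------- /apps/Modules/modulefiles -------------------
ipm/2.0.6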

We normally recommend using the latest version available, and we always recommend specifying the version number with the module command:

$ module load ipm/2.0.6

For more details on using modules see our software applications guide.

An example PBS job submission script named ipm_job.sh is provided below. It requests 48 CPUs, 128 GiB memory, and 400 GiB local disk on a compute node on Gadi from the normal queue, for 30 minutes, against the project a00. It also requests that the job start in the directory from which it was submitted (the wd option).

This script should be saved in the working directory from which the analysis will be done. To change the number of CPU cores, memory, or jobfs required, simply modify the appropriate PBS resource requests at the top of the job script according to the information available in our queue structure guide.

Note that if your application does not run in parallel, set the number of CPU cores to 1 and adjust the memory and jobfs requests accordingly to avoid wasting compute resources (see the example after the job script below).

#!/bin/bash
  
#PBS -P a00
#PBS -q normal
#PBS -l ncpus=48
#PBS -l mem=128GB
#PBS -l jobfs=400GB
#PBS -l walltime=00:30:00
#PBS -l wd
  
# Load modules, always specify version number.
module load openmpi/4.0.2

# Note: load ipm after the MPI module so that it can detect the
# correct library to LD_PRELOAD from the environment.
module load ipm/2.0.6
  
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`
  
# Generate full report
export IPM_REPORT=full

# If your program uses Fortran bindings to MPI, add the following:
export LD_PRELOAD=$IPM_BASE/lib/ompi/libipmf.so:$LD_PRELOAD
  
# Run application
mpirun -np $PBS_NCPUS <your MPI exe>
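
If your application runs on a single core, a minimal sketch of the adjusted resource requests (the values below are illustrative; scale them to your application) is:

#PBS -l ncpus=1
#PBS -l mem=4GB
#PBS -l jobfs=10GB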

To submit the job, use the PBS command:

$ qsub ipm_job.sh
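
qsub prints the JobID of the submitted job, and you can then monitor it with a standard PBS query, for example:

$ qstat <JobID>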

Profile Data


By default IPM prints a summary of the application's performance information to stdout. In this case the report is added to the job output file <JobScriptName>.o<JobID> when the job finishes. IPM also generates an XML data file named <UserID>.**********.******.ipm.xml.
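
For example, once the job has finished you can inspect both outputs from the working directory (the file names below follow the naming pattern above):

# The IPM summary appears in the job output file
$ cat ipm_job.sh.o<JobID>

# The XML data file is written to the working directory
$ ls *.ipm.xml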

Parser and Viewer


The XML data file can be parsed to generate a text report in the following way:

# Load module, always specify version number.
$ module load ipm/2.0.6
 
# Parse the XML file and show the report
$ ipm_parse -full <UserID>.**********.******.ipm.xml

The XML data file can be parsed to generate a graphical HTML webpage in the following way:

# Load module, always specify version number.
$ module load ipm/2.0.6
 
# Parse the XML file and generate a graphical HTML webpage
$ ipm_parse -html <UserID>.**********.******.ipm.xml

The graphical HTML webpage is generated under a directory named <ExecutableName>_<NumberOfCores>_<UserID>.**********.******.ipm.xml_ipm_<JobID>.gadi-pbs. You can copy this directory to your local computer and open the index.html file inside it with your favourite web browser to view the report.
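
For example, assuming the standard Gadi login host gadi.nci.org.au and placeholder path and directory names, the copy and viewing steps on your local computer would look like:

# Copy the report directory from Gadi (run on your local computer)
$ scp -r <username>@gadi.nci.org.au:/path/to/<ReportDirectory> .

# Open the report in a web browser, e.g. on Linux
$ xdg-open <ReportDirectory>/index.html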

Some sample webpage reports are available at http://ipm-hpc.sourceforge.net/examples/ex1/, http://ipm-hpc.sourceforge.net/examples/ex2/ and http://ipm-hpc.sourceforge.net/examples/ex3/.

Authors: Mohsin Ali