

mpiP is a lightweight profiling library for MPI applications.

In addition to the MPI summary profiling provided by IPM, mpiP can provide "call site" statistics showing which calls in the code are dominating MPI execution time.  


How to use 

You can check the versions installed on Gadi with a module query:

$ module avail mpiP

We normally recommend using the latest version available, and we always recommend specifying the version number with the module command:

$ module load mpiP/3.4.1

For more details on using modules see our software applications guide.

Using mpiP does not require recompiling your code (although compiling with -g is recommended so that call sites can be resolved to source lines), but it does require adding some libraries at link time. The following libraries need to be added:

-lmpiP -lm -lbfd -liberty -lunwind

For example, the compile and link step may look like this:

# Load modules, always specify version number.
$ module load openmpi/4.0.2
$ module load mpiP/3.4.1
$ mpicc -g -o mpip_linked_mpi_program mpi_program.c -lmpiP -lm -lbfd -liberty -lunwind

An example PBS job submission script is provided below. It requests 48 CPUs, 128 GiB of memory, and 400 GiB of local disk on a Gadi compute node in the normal queue for 30 minutes, charged to project a00. It also asks PBS to enter the working directory once the job starts. Save this script in the working directory from which the analysis will be run.

To change the number of CPU cores, memory, or jobfs required, simply modify the appropriate PBS resource requests at the top of the job script files according to the information available in our queue structure guide.

Note that if your application does not run in parallel, you should set the number of CPU cores to 1 and adjust the memory and jobfs requests accordingly to avoid wasting compute resources.

#!/bin/bash
#PBS -P a00
#PBS -q normal
#PBS -l ncpus=48
#PBS -l mem=128GB
#PBS -l jobfs=400GB
#PBS -l walltime=00:30:00
#PBS -l wd
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`

# Load modules, always specify version number.
module load openmpi/4.0.2
module load mpiP/3.4.1

# Run application
mpirun -np $PBS_NCPUS ./mpip_linked_mpi_program
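The amount of detail mpiP collects can be tuned through the MPIP environment variable, exported in the job script before the mpirun line. A minimal sketch follows; the flag values here are illustrative only, and the full option list is in the mpiP documentation:

```shell
# Tune mpiP via the MPIP environment variable (values are illustrative):
#   -t 10.0  only report call sites using at least 10% of aggregate MPI time
#   -k 2     set the stack traceback depth used to identify call sites
export MPIP="-t 10.0 -k 2"
echo "MPIP options: $MPIP"
```

Exporting the variable before mpirun ensures every MPI rank inherits the same profiling options.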

To run the job, submit the script with the PBS qsub command:

$ qsub <jobscript>

Profile Data

By default, the mpiP profiler generates a text-based output file named <ExecutableName>.<NumberOfCores>.*******.*.mpiP when the job finishes, with the remaining fields filled in at run time.


The text-based output file can be viewed in any text editor, for example:

$ vim <ExecutableName>.<NumberOfCores>.*******.*.mpiP
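Because the report is plain text, its sections can also be filtered from the command line. The sketch below builds a tiny illustrative report fragment (a real report is much larger; the section header and column layout here only approximate the mpiP text-report format) and then extracts the aggregate-time section:

```shell
# Create a small illustrative report fragment (not real profiling data):
cat > sample.mpiP <<'EOF'
@--- Aggregate Time (top twenty, descending, milliseconds) ---------------
Call                 Site       Time    App%    MPI%     Count  COV
Allreduce               1   1.23e+04   40.10   62.50      9600 0.02
Barrier                 2   5.10e+03   16.60   25.90      4800 0.01
EOF
# Show the aggregate-time section header and the rows that follow it:
grep -A 3 'Aggregate Time' sample.mpiP
```

The same grep pattern applied to a real report shows at a glance which MPI calls dominate the aggregate MPI time.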
Authors: Mohsin Ali