
The mpi4py (MPI for Python) package provides Python bindings for the Message Passing Interface (MPI) standard, allowing Python applications to exploit multiple processors on workstations, clusters and supercomputers.

You can run mpi4py-enabled Python scripts via the NCI-data-analysis module (2022.06 and later), either in batch mode on Gadi or interactively in an ARE JupyterLab session.

Gadi

You can submit a PBS job to run your script in batch mode on Gadi.

An example job script is shown below (make sure gdata/dk92 is included in your storage request):

#!/bin/bash
 
#PBS -l ncpus=4
#PBS -l mem=16GB
#PBS -l jobfs=20GB
#PBS -q normal
#PBS -l walltime=02:00:00
#PBS -P a00
#PBS -l storage=gdata/dk92+gdata/a00+scratch/a00
#PBS -l wd
  
module use /g/data/dk92/apps/Modules/modulefiles
module load NCI-data-analysis/2022.06

mpirun python3 helloworld.py >& output.log
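Save the job script as, say, run_mpi4py.sh (any name will do) and submit it with qsub run_mpi4py.sh. Because of the #PBS -l wd directive, the MPI output redirected to output.log appears in the directory the job was submitted from.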

The helloworld.py script is shown below:

#!/usr/bin/env python
"""
Parallel Hello World
"""
from mpi4py import MPI
import sys

# Query the MPI execution environment: the total number of ranks,
# this process's rank, and the hostname of the node it is running on.
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()

sys.stdout.write(
    "Hello, World! I am process %d of %d on %s.\n" % (rank, size, name))

You will get output in output.log, with each MPI rank printing its rank ID:

Hello, World! I am process 3 of 4 on gadi-cpu-clx-0489.gadi.nci.org.au.
Hello, World! I am process 2 of 4 on gadi-cpu-clx-0489.gadi.nci.org.au.
Hello, World! I am process 1 of 4 on gadi-cpu-clx-0489.gadi.nci.org.au.
Hello, World! I am process 0 of 4 on gadi-cpu-clx-0489.gadi.nci.org.au.
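
Beyond printing rank IDs, ranks can exchange Python objects. Below is a minimal sketch of point-to-point communication with mpi4py (the file name sendrecv.py is just an example); it runs with the same job script by replacing helloworld.py with sendrecv.py and needs at least two MPI ranks.

#!/usr/bin/env python
"""
Point-to-point communication sketch
"""
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 sends a Python dictionary (pickled under the hood) to rank 1.
    data = {"step": 1, "value": 3.14}
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    # Rank 1 receives the object from rank 0.
    data = comm.recv(source=0, tag=11)
    print("Rank 1 received %s from rank 0" % data)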

You can also request multiple nodes for your mpi4py script in a batch job on Gadi by scaling up the ncpus (and mem) requests in the PBS directives; mpirun will then launch ranks across all of the allocated nodes.

You can find more mpi4py examples here.

ARE

You can also run mpi4py scripts in your ARE JupyterLab session. Please note that they can only run within a single node.
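
As a minimal sketch, assuming helloworld.py is in your working directory and your ARE JupyterLab session was requested with at least 4 CPU cores, you could launch it from a notebook cell with the IPython shell escape (the same mpirun command also works from a JupyterLab terminal):

# Launch the script across 4 ranks on the single node backing this session;
# do not request more ranks than the cores allocated to the ARE session.
!mpirun -np 4 python3 helloworld.py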

