
The Message Passing Interface (MPI) is arguably the primary programming model for internode parallelism in applications. Developed by the NCI Training Team, the Introduction to MPI workshop demonstrates MPI procedures based on the latest MPI Standard, version 4.0, using the hands-on finite difference exercise introduced in the Introduction to OpenMP workshop.

In this workshop, participants are shown various MPI communication mechanisms for exchanging boundary information in a parallel finite difference method; a sketch of such a halo exchange follows below. MPI-IO and MPI profiling are also discussed. Event registration is now closed.
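
For orientation, the sketch below shows the kind of boundary (halo) exchange the exercise builds towards: each rank owns a slab of a 1D grid plus one ghost cell on each side and swaps boundary values with its neighbours. The array size and neighbour logic here are illustrative assumptions, not the workshop's actual exercise code.

#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 8   /* interior points per rank (assumed for illustration) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* u[0] and u[N_LOCAL+1] are ghost cells holding the neighbours' boundary values. */
    double u[N_LOCAL + 2];
    u[0] = u[N_LOCAL + 1] = 0.0;
    for (int i = 1; i <= N_LOCAL; i++)
        u[i] = (double)rank;

    /* Edge ranks talk to MPI_PROC_NULL, which turns the call into a no-op. */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Exchange boundary values with both neighbours; MPI_Sendrecv avoids
     * the deadlock a naive ordering of blocking sends and receives can cause. */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[N_LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[N_LOCAL], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: ghost cells %.1f %.1f\n", rank, u[0], u[N_LOCAL + 1]);

    MPI_Finalize();
    return 0;
}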

If you have any questions regarding this training, please contact training.nci@anu.edu.au.


Date/Time

Introduction to MPI is now available online at NCI Teachable.


Prerequisites

The workshop demonstrates examples in C. Only basic experience with C/C++ is required: knowledge of C functions, pointers and memory management is sufficient.

Serial codes will be provided for the exercises; the training focuses on MPI programming, with C programming secondary.

The training session runs on the Australian Research Environment (ARE) and Gadi. Attendees are encouraged to review the following page for background information.


Objectives

The training is designed as a first MPI programming course for scientists. As such, it aims to help attendees

  • Understand the MPI programming model,
  • Become familiar with the semantic terms in the MPI Standard,
  • Perform various MPI communication operations.


Learning outcomes

At the completion of this training session, you will be able to

  • Know when to use MPI for parallelisation,
  • Prepare buffers for communication calls,
  • Distinguish and use blocking and non-blocking communications,
  • Understand the different communication modes,
  • Overlap communication and computation,
  • Perform basic one-sided communications,
  • Output data in parallel with MPI-IO (see the sketch after this list),
  • Profile MPI applications,
  • Approach more advanced parallel programming with confidence.
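
As a taste of the MPI-IO material, here is a minimal sketch of collective parallel output, assuming each rank writes one contiguous block of doubles at an offset derived from its rank; the file name output.dat and the block size are invented for the example.

#include <mpi.h>

#define N_LOCAL 8   /* doubles written per rank (assumed for illustration) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double data[N_LOCAL];
    for (int i = 0; i < N_LOCAL; i++)
        data[i] = rank + 0.1 * i;

    /* All ranks open the same file together. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes its block at its own offset in one collective call. */
    MPI_Offset offset = (MPI_Offset)rank * N_LOCAL * sizeof(double);
    MPI_File_write_at_all(fh, offset, data, N_LOCAL, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Because MPI_File_write_at_all is collective, every rank in the communicator must call it, which lets the MPI library coordinate and combine the writes.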


Covered topics
  • MPI semantics
  • Point-to-point communication
  • Blocking communication
  • Nonblocking communication
  • Persistent communication
  • Collective communication
  • One-sided communication
  • Overlapping communication and computation (see the sketch after this list)
  • MPI-IO
  • Profiling MPI codes
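
For illustration, here is a minimal sketch of how nonblocking point-to-point calls can overlap the halo exchange with computation on interior points. As with the earlier sketches, the neighbour logic, tags and local array size are assumptions made for the example, not the workshop's actual exercise code.

#include <mpi.h>

#define N_LOCAL 8   /* interior points per rank (assumed for illustration) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double u[N_LOCAL + 2];
    u[0] = u[N_LOCAL + 1] = 0.0;
    for (int i = 1; i <= N_LOCAL; i++)
        u[i] = (double)rank;

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Start the halo exchange without blocking. */
    MPI_Request reqs[4];
    MPI_Irecv(&u[0],           1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&u[N_LOCAL + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&u[1],           1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&u[N_LOCAL],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

    /* While the messages are in flight, update interior points that
     * do not depend on the ghost cells. */
    double unew[N_LOCAL + 2];
    for (int i = 2; i < N_LOCAL; i++)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

    /* Wait for the exchange to finish before touching boundary points. */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    unew[1]       = 0.5 * (u[0] + u[2]);
    unew[N_LOCAL] = 0.5 * (u[N_LOCAL - 1] + u[N_LOCAL + 1]);

    MPI_Finalize();
    return 0;
}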


