Many scientific modelling programs rely on iterative numerical methods, e.g. the finite difference method or the conjugate gradient method, or on stochastic methods such as Monte Carlo simulation. All of these methods are iteration-heavy, and more often than not those iterations are the bottleneck of the code's performance.

OpenMP is a directive-based API (application programming interface) for writing parallel programs on shared-memory systems. It provides parallelism by running multiple threads concurrently. The most common use case is to accelerate nested loops by sharing the workload among threads, which was its primary capability before OpenMP 3.0.
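
As a minimal sketch of this loop-level parallelism (the array, its size, and the loop body are illustrative examples, not taken from the workshop material), a single directive is enough to create a team of threads and share a loop's iterations among them:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];

        /* Create a team of threads and share the N iterations
           among them; each thread handles a disjoint chunk. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * i;

        printf("a[%d] = %f (up to %d threads)\n",
               N - 1, a[N - 1], omp_get_max_threads());
        return 0;
    }

With GCC, such a program is compiled with the -fopenmp flag, and the number of threads can be controlled through the OMP_NUM_THREADS environment variable.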

This workshop introduces some of the most common yet powerful OpenMP practices, enabling scientists to quickly turn a serial iterative C code into a parallel one.

If you have any questions regarding this training, please contact training.nci@anu.edu.au.

Date/Time

Registration for 2022 is now closed. A new session on 6 December 2023 is open for registration.


Prerequisites

Only basic experience with C/C++ is required: familiarity with C preprocessor directives, functions, and pointer arrays is sufficient.

Serial codes will be provided for the exercises; the training focuses on OpenMP programming, with C programming playing a secondary role.

The training session runs on the Australian Research Environment (ARE) and Gadi. Attendees are encouraged to review the following page for background information.


Objectives


The training is designed as a first parallel programming course for scientists. As such, it aims to help attendees

  • understand the multithreaded programming model,
  • convert serial loops into parallel ones,
  • avoid errors arising from common misuses of OpenMP clauses.


Learning Outcomes

At the completion of this training session, you will be able to

  • know when to use OpenMP,
  • create a parallel construct,
  • create a team of threads,
  • identify potential data race conditions,
  • distinguish data storage attributes,
  • understand how to split loop iterations to improve efficiency,
  • understand the limitations of multithreaded programming,
  • feel confident advancing to the next level of parallel programming.


Topics Covered
  • Threading in OpenMP
  • Shared-memory vs. distributed-memory systems
  • Loop parallelism methodologies
  • Parallel construct
  • Worksharing-loop construct
  • Reduction (see the sketch after this list)
  • Data race conditions
  • OpenMP library routines
  • Synchronisations
  • Data storage attributes
  • Loop scheduling
  • Profiling OpenMP
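
As a taste of how several of these topics fit together (the series being summed and the variable names are illustrative, not taken from the course material), the sketch below combines the parallel construct, a worksharing loop, and a reduction clause to avoid the data race that a naively shared accumulator would cause:

    #include <stdio.h>

    #define N 100000000

    int main(void)
    {
        double sum = 0.0;

        /* Each thread accumulates into a private copy of sum;
           OpenMP combines the partial results when the loop ends,
           so no two threads ever write the shared variable at once. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 1; i <= N; i++)
            sum += 1.0 / ((double)i * i);

        printf("sum = %.12f (converges to pi^2/6)\n", sum);
        return 0;
    }

Without the reduction clause, every thread would update sum concurrently and the result would be unpredictable; this is exactly the kind of data race condition the session covers.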

