
Many scientific modelling programs are written in C, one of the most popular programming languages in the HPC community. This course covers the basic concepts of C programming, including syntax, core features, and best practices. It targets students and researchers who have little to no programming background but would like to learn a low-level programming language in depth. Those with some basic C knowledge will also likely learn something new.

Note that although C++ is not covered directly, the vast majority of the content is directly transferable as a starting point for learning C++.

If you have any questions regarding this training, please contact training.nci@anu.edu.au.

Date/Time

Nov 8-9, 2022, 13:00-16:00 AEDT. Register via this link.


Prerequisites

A placeholder to be modified: The training session is conducted on the Australian Research Environment (ARE) and Gadi. Attendees are encouraged to review the following page for background information.


Objectives

A placeholder to be modified: The training is designed to be the first parallel programming course for scientists. As such, it aims to help attendees

  • understand the multithreaded programming model,
  • convert serial loop iterations into parallel ones,
  • avoid common errors arising from misuse of OpenMP clauses.


Learning Outcomes

A placeholder to be modified: At the completion of this training session, you will be able to

  • know when to use OpenMP,
  • create a parallel construct,
  • create a team of threads,
  • identify potential data race conditions,
  • distinguish data storage attributes,
  • understand how to split loop iterations to improve efficiency,
  • understand the limitations of multithreaded programming,
  • feel confident advancing to the next level of parallel programming.


Topics Covered

A placeholder to be modified:

  • Threading in OpenMP
  • Shared-memory vs. distributed-memory systems
  • Loop parallelism methodologies
  • Parallel construct
  • Worksharing-loop construct
  • Reduction
  • Data race conditions
  • OpenMP library routines
  • Synchronisations
  • Data storage attributes
  • Loop scheduling
  • Profiling OpenMP

