Intel oneAPI has replaced the Intel Parallel Studio XE software suite as the source of the Intel compilers and other Intel software development tools. On Gadi, the Base Toolkit and HPC Toolkit are available. Components of the Intel oneAPI toolkits are accessed on Gadi in the same way as components of Intel Parallel Studio XE, by loading the appropriate modules. The Intel oneAPI variants of the Intel software development tools are distinguished by their version numbers: a version of 2021 or later denotes an Intel oneAPI software product, while 2020 or earlier denotes an Intel Parallel Studio XE product. All Intel oneAPI tools have been integrated with NCI's modules and compiler wrapper environment, and should work in the same manner as their Intel Parallel Studio XE counterparts. Please notify the NCI help desk if you see a link or runtime error while using an Intel oneAPI software product that was not present when using its Intel Parallel Studio XE counterpart.

The full set of tools available under Intel oneAPI, and the modules used to access them, are listed below. Note that a version number of 2021.1.1 is used in these examples; however, any version of these modules dated 2021 or later corresponds to Intel oneAPI components. We always recommend specifying a version number when loading modules. Intel oneAPI has adopted semantic versioning for all of its components, hence the difference in version-numbering schemes between oneAPI components and Intel Parallel Studio XE components.
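For example, a oneAPI component is loaded with an explicit version as follows (the version shown is illustrative; run module avail on Gadi to see the versions actually installed):

```shell
# Load a specific oneAPI compiler release; always pin the version.
module load intel-compiler/2021.1.1

# List the Intel oneAPI modules installed on the system.
module avail intel-
```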

The Intel compilers are freely available software as of version 2021.1.1 and do not require a licence. If your PBS job uses Intel compilers from an Intel oneAPI installation, you do not need to request a compiler licence in your job submission.


Do not use the scripts provided in the top-level intel-oneapi directory. These scripts work on the flawed assumption that there is only a single version of Intel oneAPI available on the system and, as a result, search for and run the setup scripts of every installation of every component of Intel oneAPI. This results in unknown and possibly conflicting environment settings, and causes extreme load on Gadi's Lustre metadata servers, especially when run from multiple nodes. The environment modules listed below set the environment exactly as if those scripts had been used, as well as performing extra configuration (e.g. selecting the underlying compilers for MPI) that the scripts do not.




Performance profiling and software development tools

Intel oneAPI compilers 

The 2021-and-onwards intel-compiler modules contain two sets of C, C++ and Fortran compilers. The compilers present in Intel Parallel Studio XE have been renamed the "Intel Compiler Classic", and the new LLVM-based oneAPI compilers are now known as the "Intel Compilers".

The table below shows the "Classic" compiler name, and its oneAPI equivalent. Note that a Data Parallel C++ compiler is new to oneAPI, and was not present in Intel Parallel Studio XE.


Intel Compiler Classic    Intel Compiler (oneAPI)
icc                       icx
icpc                      icpx
ifort                     ifx
(no equivalent)           dpcpp
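As an illustration, the same source files can be built with either compiler set once the intel-compiler module is loaded (the file names and flags here are placeholders):

```shell
# Intel Compiler Classic
icc   -O2 -o hello_c hello.c
ifort -O2 -o hello_f hello.f90

# Intel Compilers (oneAPI, LLVM-based)
icx -O2 -o hello_c hello.c
ifx -O2 -o hello_f hello.f90
```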


Note that despite the name, the "classic" compilers are still under active development, and will be present in every Intel oneAPI installation on Gadi for the foreseeable future. When the intel-compiler module is loaded on Gadi, a number of environment variables are set that control the underlying compilers used by OpenMPI, Intel MPI and the HDF5 compiler wrappers. By default, the intel-compiler module will continue to set these variables such that those libraries will use the "classic" compiler. If you wish to switch to the oneAPI compilers while using an OpenMPI, Intel MPI or HDF5 module present in /apps, the table below shows the variables you need to set after loading the compiler module in order to effect this change.


Language   Open MPI                     Intel MPI                       HDF5 wrappers
C          OMPI_CC=icx                  I_MPI_CC=icx                    HDF5_CC=icx; HDF5_CLINKER=icx
C++        OMPI_CXX=icpx                I_MPI_CXX=icpx                  HDF5_CXX=icpx; HDF5_CXXLINKER=icpx
Fortran    OMPI_FC=ifx; OMPI_F77=ifx    I_MPI_F90=ifx; I_MPI_F77=ifx    HDF5_FC=ifx; HDF5_FLINKER=ifx
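For example, to switch all three wrapper families to the oneAPI compilers after loading the compiler module, the variables can be exported in your shell or job script (a sketch; set only the variables for the wrappers you actually use):

```shell
# Point Open MPI, Intel MPI and the HDF5 wrappers at the oneAPI compilers.
export OMPI_CC=icx  OMPI_CXX=icpx  OMPI_FC=ifx  OMPI_F77=ifx
export I_MPI_CC=icx I_MPI_CXX=icpx I_MPI_F90=ifx I_MPI_F77=ifx
export HDF5_CC=icx  HDF5_CXX=icpx  HDF5_FC=ifx
export HDF5_CLINKER=icx HDF5_FLINKER=ifx
```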


Please note that, at time of writing, the oneAPI Intel Fortran Compiler, ifx, is still in beta stage, and our testing has shown that it fails to build several application benchmarks due to internal compiler errors.

Any errors of the type "catastrophic error: **Internal compiler error: ..." should be reported directly to Intel along with a reproducer. Please also note that several flags commonly used with the Intel Compiler Classic are not supported by the Intel oneAPI compilers. Please check the compiler man pages (man icx or man ifx) for equivalent flags if you see warnings of the form: ifx: command line warning #10148: option '-no-prec-div' not supported.

As of the 2022 release of Intel oneAPI, the intel-compiler-llvm module has been provided that sets all of the above environment variables to their oneAPI compiler settings. Both this module and the intel-compiler module provide access to the Classic and oneAPI compilers, the only difference between the two modules are the values of the environment variables in this section.

This release also marks the divergence of the version numbers of the Classic and oneAPI compilers. The Classic compilers retain the 2021 major version number, whereas the oneAPI compilers have been assigned the 2022 major version number. For example, version 2021.5.0 of the Classic compilers and version 2022.0.0 of the oneAPI compilers were provided in the same release of the oneAPI HPC Toolkit. Intel has also changed the way LLVM components are accessed in the 2022 release of Intel oneAPI; see below for more information.

The README.txt file found in $INTEL_COMPILER_LLVM_BASE/linux/bin-llvm provides further information. In most cases, this change will have no effect on regular application building, however, if you are building static libraries, and require access to llvm-ar directly, you will need to access it using the path provided by the command dpcpp --print-prog-name=llvm-ar. The method for doing this will vary depending on the build system your project uses. If your project uses autotools, you can modify your makefile as follows:

 AR := $(shell dpcpp --print-prog-name=llvm-ar)

If your project uses CMake, you can add the following argument to your cmake invocation:

$ cmake -DCMAKE_AR=$( dpcpp --print-prog-name=llvm-ar )

oneAPI Tool Variants 

Several Intel oneAPI tools have variants available due to incompatible requirements when supporting multiple languages and threading models. A variant is selected by setting an environment variable before loading the appropriate module. The tables below detail which tools have variants available, what those variants are, and which environment variable to set to switch between them. A warning will be issued and the module will not load if the environment variable in the table header is set to a value other than one in the "Variant" column of the table. The (default) variant will be loaded if the environment variable is not set.

Module variants

Intel Collective Communications Library: intel-ccl. Environment variable: NCI_ONEAPI_CCL_VARIANT
  cpu_icc (default): CPU-only variant, compatible with Intel C Compiler Classic
  cpu_gpu_dpcpp: Data Parallel C++ variant, only compatible with the oneAPI DPC++ compiler

Intel Deep Neural Network Library: intel-dnnl. Environment variable: NCI_ONEAPI_DNNL_VARIANT
  cpu_iomp (default): CPU-only variant with the Intel OpenMP threading model
  cpu_gomp: CPU-only variant with the GNU OpenMP threading model
  cpu_tbb: CPU-only variant with the Intel Threading Building Blocks threading model
  cpu_dpcpp_gpu_dpcpp: Data Parallel C++ variant, only compatible with the oneAPI DPC++ compiler

Intel MPI: intel-mpi. Environment variable: I_MPI_LIBRARY_KIND
  release (default): enables the optimised MPI library, limited to the MPI_THREAD_SERIALIZED threading level
  release_mt: enables the multi-threaded optimised MPI library; the MPI_THREAD_MULTIPLE threading level is available
  debug: enables the debugging MPI library, limited to the MPI_THREAD_SERIALIZED threading level
  debug_mt: enables the multi-threaded debugging MPI library; the MPI_THREAD_MULTIPLE threading level is available

Running module help on any of these modules will display the list of variants and the associated environment variable used to select a variant. Note that to change to a different variant after a module has been loaded, it is necessary to first unload the module, then set the environment variable, then load the module again.
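For example, switching an already-loaded intel-mpi module to the multi-threaded variant follows the unload, set, reload pattern described above (the version number is illustrative):

```shell
# The variant is read only at load time, so unload the module first.
module unload intel-mpi
export I_MPI_LIBRARY_KIND=release_mt
module load intel-mpi/2021.1.1
```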

Device Compatibility 

At time of writing Data Parallel C++ applications are only compatible with CPU host devices on Gadi. At this stage, the Intel oneAPI DPC++ toolchain only provides device support for Intel GPUs and FPGAs, neither of which are currently available on Gadi.
Authors: Dale Roberts, Mohsin Ali