Cori GPU Nodes Software¶
The software stack optimized for the Cori GPU nodes is maintained in a separate module tree. You can access this stack by loading the cgpu module:

module load cgpu

Ideally, run module purge before loading the cgpu module; this removes the default Cori stack intended for the production (Haswell and KNL) nodes.
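For example, a session on the GPU nodes might begin as sketched below. The cuda module name is illustrative; run module avail after loading cgpu to see which compilers, MPI libraries, and CUDA toolkits are actually provided.

```
module purge        # drop the default Haswell/KNL software stack
module load cgpu    # switch to the GPU-optimized module tree
module avail        # list what the cgpu tree provides (compilers, MPI, CUDA, ...)
module load cuda    # load a CUDA toolkit from the cgpu tree (name/version illustrative)
```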
Compilers, MPI, and GPU Offloading¶
This page offers information about which compilers and MPI libraries are available for use on the Cori GPU nodes. It also describes methods of offloading code onto GPUs (CUDA, OpenMP, OpenACC, etc.) with the available system software.
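As a rough sketch of what offloading can look like once the cgpu stack is loaded, the commands below compile a CUDA source and an OpenMP target-offload source. The file names are hypothetical, and the compilers, versions, and flags actually available depend on the modules provided on the GPU nodes; consult that page for the supported combinations.

```
# Compile a CUDA C++ source with nvcc (requires a cuda module to be loaded).
nvcc -O2 -o saxpy_cuda saxpy.cu

# Compile a C source that uses OpenMP target offload with a GCC build that
# supports NVPTX offloading (the flag shown is GCC's; other compilers differ).
gcc -O2 -fopenmp -foffload=nvptx-none -o saxpy_omp saxpy_omp.c
```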
Math Libraries¶
Notes about using Intel MKL, Thrust, and other libraries are on this page.
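For instance, a C program that calls MKL might be linked against MKL's single dynamic library as sketched below; this assumes an Intel/MKL module has been loaded so that MKLROOT is set, and the source file name is hypothetical. Thrust, by contrast, is header-only and is picked up automatically when compiling with nvcc.

```
# Link against MKL via the single dynamic library (mkl_rt);
# assumes $MKLROOT is set by a loaded Intel/MKL module.
gcc -O2 dgemm_test.c -I"${MKLROOT}/include" \
    -L"${MKLROOT}/lib/intel64" -lmkl_rt -lpthread -lm -ldl -o dgemm_test
```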
Python¶
On the Cori GPU nodes, we recommend that users build a custom conda environment for the Python GPU framework they would like to use; instructions are detailed on this page.
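A minimal sketch of building such an environment is shown below. The environment name, Python version, and package (cupy here) are only examples, and the channel you need depends on the framework you choose, so follow that page for the recommended recipe.

```
module load python                     # NERSC-provided conda-based Python module
conda create -y -n my-gpu-env python=3.8
source activate my-gpu-env
conda install -y -c conda-forge cupy   # example GPU package; channel/package illustrative
```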
Shifter with CUDA¶
Instructions for using Shifter with CUDA on the Cori GPU nodes are provided on this page.
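As a rough sketch (the container image is illustrative, and the exact flags needed for GPU access from Shifter are covered on that page), pulling and running a CUDA-enabled container might look like:

```
# Pull a CUDA-enabled image into NERSC's Shifter image gateway.
shifterimg pull docker:nvidia/cuda:10.2-devel-ubuntu18.04

# Run a command inside the container on a GPU node via Slurm.
srun -C gpu -N 1 -G 1 shifter --image=docker:nvidia/cuda:10.2-devel-ubuntu18.04 nvidia-smi
```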
Profiling¶
The Cori GPU nodes provide several tools for profiling GPU code; this page describes how to use them, with examples.
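For example, with a CUDA toolkit module loaded, NVIDIA's command-line profilers can be run as sketched below; ./my_app is a placeholder for your executable, and which profilers are installed depends on the CUDA version, as discussed on that page.

```
# Nsight Systems: collect a timeline of CPU and GPU activity.
nsys profile -o my_app_report ./my_app

# Nsight Compute: collect per-kernel hardware metrics.
ncu -o my_app_kernels ./my_app
```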
Debugging¶
Several tools are available on the Cori GPU nodes which can aid in debugging GPU code; this page offers examples and guidance.
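As a hedged example (the executable and source names are placeholders, and the tools available track the loaded CUDA module), device code can be built with debug information and run under NVIDIA's debugging tools as follows:

```
# Build with host (-g) and device (-G) debug symbols.
nvcc -g -G -o my_app my_app.cu

# Check for out-of-bounds and misaligned device memory accesses.
cuda-memcheck ./my_app

# Step through host and device code interactively.
cuda-gdb ./my_app
```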
Known Issues¶
There are a few known issues regarding the Cori GPU nodes; these are documented on this page.