
Python

There are many options for using Python on GPUs, each with their own set of pros/cons. We have tried to provide a brief overview of several frameworks here. The Python GPU landscape is changing quickly so please check back periodically for more information.

On the Cori GPU nodes, we recommend that users build a custom conda environment for the Python GPU framework they would like to use. You can find instructions for building a custom conda environment here. Make sure that you are on corigpu when you build your environment and install the packages you need.

In all cases you'll need to:

  1. Make sure you have activated your conda environment via source activate mypythonenv
  2. Run your code with the general format srun -n 1 python yourscript.py args...

CuPy

  • module load python cuda
  • Build a custom conda environment
  • pip install cupy into your environment following the directions here
  • As of May 2020, our default CUDA module is 10.2. Your CuPy and CUDA versions must match, so you'll need to pip install cupy-cuda102
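Once installed, CuPy is largely a drop-in replacement for NumPy. A minimal sketch of the usage pattern; the NumPy fallback is our addition for illustration on machines without a GPU, not part of CuPy:

```python
import numpy as np

# Minimal CuPy sketch: element-wise math on the GPU. The NumPy fallback
# below is only for illustration where no GPU (or no CuPy) is present.
try:
    import cupy as xp
    ON_GPU = True
except ImportError:
    xp = np
    ON_GPU = False

x = xp.arange(1_000_000, dtype=xp.float32)
y = xp.sqrt(x) * 2.0  # runs on the device when ON_GPU is True

# CuPy arrays live on the device; .get() copies them back to the host.
y_host = y.get() if ON_GPU else y
print(float(y_host[4]))  # sqrt(4) * 2 = 4.0
```

Because the APIs match, the same code can target either library by swapping the import.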

Numba CUDA

  • module load python cuda
  • Build a custom conda environment
  • conda install numba cudatoolkit into your environment following the directions here
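A minimal sketch of a Numba CUDA kernel (SAXPY, out = a*x + y); the CPU fallback is our addition for machines without a GPU:

```python
import numpy as np

# Minimal Numba CUDA sketch: a SAXPY kernel. The NumPy fallback below is
# only for illustration where no GPU is present.
try:
    from numba import cuda
    HAVE_GPU = cuda.is_available()
except ImportError:
    HAVE_GPU = False

x = np.arange(8, dtype=np.float32)
y = np.ones(8, dtype=np.float32)
out = np.zeros_like(x)

if HAVE_GPU:
    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)          # global thread index
        if i < x.size:
            out[i] = a * x[i] + y[i]

    saxpy[1, 32](2.0, x, y, out)  # launch 1 block of 32 threads
else:
    out = 2.0 * x + y

print(out[3])  # 2*3 + 1 = 7.0
```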

PyOpenCL

  • module load python
  • Build a custom conda environment with conda install -n pyopencl-env -c conda-forge pyopencl cudatoolkit
  • Then create a symlink (needed only once) to the NVIDIA OpenCL vendor driver via
    ln -s /etc/OpenCL/vendors/nvidia.icd ~/.conda/envs/pyopencl-env/etc/OpenCL/vendors/nvidia.icd
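With the environment and ICD link in place, a minimal PyOpenCL sketch; the pure-Python fallback is our addition for machines without an OpenCL runtime:

```python
# Minimal PyOpenCL sketch: double an array on an OpenCL device. The
# fallback branch is only for illustration where no OpenCL runtime exists.
try:
    import numpy as np
    import pyopencl as cl
    import pyopencl.array as cl_array

    ctx = cl.create_some_context(interactive=False)
    queue = cl.CommandQueue(ctx)
    a = cl_array.to_device(queue, np.arange(8, dtype=np.float32))
    result = (2 * a).get()        # computed on the device
except Exception:                  # pyopencl missing or no ICD configured
    result = [2.0 * i for i in range(8)]

print(result[3])  # 6.0
```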
    

PyCUDA

  • module load python cuda
  • Build a custom conda environment
  • pip install pycuda
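A minimal PyCUDA sketch using gpuarray; the NumPy fallback is our addition for machines without a GPU:

```python
import numpy as np

# Minimal PyCUDA sketch: double an array with gpuarray. The NumPy fallback
# below is only for illustration where no GPU (or no PyCUDA) is present.
try:
    import pycuda.autoinit             # importing this creates a CUDA context
    import pycuda.gpuarray as gpuarray

    a = gpuarray.to_gpu(np.arange(8, dtype=np.float32))
    result = (2 * a).get()             # computed on the device
except Exception:                       # PyCUDA missing or no GPU visible
    result = 2 * np.arange(8, dtype=np.float32)

print(result[3])  # 6.0
```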

JAX

  • module load python cuda
  • Build a custom conda environment
  • JAX installation is somewhat complex. You can use this script on corigpu:
    #!/usr/bin/env bash
    
    # install jaxlib
    PYTHON_VERSION=cp37  # alternatives: cp36, cp38
    CUDA_VERSION=cuda102  # alternatives: cuda92, cuda100, cuda101
    PLATFORM=linux_x86_64  # only linux_x86_64 is available
    BASE_URL='https://storage.googleapis.com/jax-releases'
    pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-0.1.46-$PYTHON_VERSION-none-$PLATFORM.whl
    
    pip install --upgrade jax  # install jax
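Once jaxlib and jax are installed, a minimal sketch of jit and grad; the hand-computed fallback is our addition for machines without JAX:

```python
# Minimal JAX sketch: a jit-compiled function and its gradient. The
# analytic fallback below is only for illustration where JAX is absent.
try:
    import jax.numpy as jnp
    from jax import grad, jit

    f = jit(lambda x: jnp.sum(x ** 2))          # f(x) = sum of squares
    df = grad(f)                                 # df/dx = 2x
    g3 = float(df(jnp.asarray(3.0, dtype=jnp.float32)))
except ImportError:
    g3 = 2.0 * 3.0                               # analytic gradient at x = 3

print(g3)  # 6.0
```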
    

RAPIDS

We provide a RAPIDS kernel which you will find at jupyter.nersc.gov.

If you would like your own custom conda environment and/or Jupyter kernel that contains RAPIDS, you can follow the directions below.

1) Make sure you are on a Cori GPU node

2) module load python cuda

3) conda create -n rapids_env python=3.7

4) source activate rapids_env

5) Using the release selector at https://rapids.ai/start.html, we generated the following command to install RAPIDS into your rapids_env:

conda install -c rapidsai -c nvidia -c conda-forge \
    -c defaults rapids=0.13 python=3.7 cudatoolkit=10.2

6) If you intend to use RAPIDS from scripts or the command line, you're ready to go. If you would like your own RAPIDS kernel in Jupyter, you'll also need to conda install ipykernel and then run python -m ipykernel install --user --name rapids_env --display-name rapids

7) You'll need to restart your Jupyter server. When you log in, you should now see your rapids kernel as an option.

8) If you need other libraries like matplotlib, you can install them during your original conda install command (see step 5) or later via pip install --user. Either approach helps you avoid dependency problems.
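Once the environment is active, a minimal cuDF sketch; cuDF's API mirrors pandas, and the pandas fallback here is our addition for machines without a GPU:

```python
# Minimal RAPIDS sketch: a groupby in cuDF. Because cuDF mirrors the pandas
# API, the same code runs on CPU via pandas when cuDF is not installed.
try:
    import cudf as df_lib              # GPU DataFrames
except ImportError:
    import pandas as df_lib            # CPU fallback, same API

df = df_lib.DataFrame({"key": ["a", "b", "a", "b"], "val": [1, 2, 3, 4]})
totals = df.groupby("key").sum()       # runs on the GPU with cuDF
print(int(totals.loc["a", "val"]))     # 1 + 3 = 4
```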

For more information about how to use NVIDIA RAPIDS, please see our Examples page.

MPI4py

You can build mpi4py and install it into a conda environment on Cori to be used with one of the MPI implementations available for use with the GPU nodes. First, request an interactive session on a GPU node:

module purge; module load python esslurm
salloc -C gpu -A <account> -t 30 -G 1 -c 10

Then, on the GPU node, create or activate a conda environment, load your MPI implementation of choice (including relevant compiler), download mpi4py, and build/install the software using the mpicc wrapper:

user@cgpu12:~> conda create -n mpi4pygpu python=2.7
user@cgpu12:~> source activate mpi4pygpu
(mpi4pygpu) user@cgpu12:~> module load gcc/7.3.0 cuda mvapich2  # or pgi/intel instead of gcc
(mpi4pygpu) user@cgpu12:~> wget https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-3.0.0.tar.gz
(mpi4pygpu) user@cgpu12:~> tar zxvf mpi4py-3.0.0.tar.gz
(mpi4pygpu) user@cgpu12:~> cd mpi4py-3.0.0
(mpi4pygpu) user@cgpu12:~> python setup.py build --mpicc=mpicc
(mpi4pygpu) user@cgpu12:~> python setup.py install
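Once built, a minimal mpi4py sketch you could launch with srun (e.g. srun -n 4 python yourscript.py); the serial fallback is our addition for machines without MPI:

```python
# Minimal mpi4py sketch: sum each rank's ID across the communicator. The
# serial fallback below is only for illustration where mpi4py is absent.
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    total = comm.allreduce(rank, op=MPI.SUM)   # same result on every rank
except Exception:                               # mpi4py or MPI runtime missing
    rank, size, total = 0, 1, 0

# With N ranks, the sum of 0..N-1 is N*(N-1)/2 on every rank.
print(total == size * (size - 1) // 2)  # True
```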

Deep Learning Software

TensorFlow:

module load tensorflow/gpu-1.13.1-py36

PyTorch:

module load pytorch/v1.1.0-gpu
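After loading one of these modules, a quick way to confirm the framework can see a GPU. A PyTorch sketch; the CPU fallback is our addition so the same code runs anywhere:

```python
# Minimal PyTorch sketch: pick the GPU when one is visible, otherwise fall
# back to the CPU; the same tensor code runs in both cases.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.ones(4, device=device)
    total = float((x * 3).sum())       # computed on the chosen device
except ImportError:                     # torch not on this machine
    device, total = "cpu", 4 * 3.0

print(device, total)
```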