There are 18 GPU nodes. Each GPU node contains the following:
- two sockets, each with a 20-core Intel Xeon Gold 6148 ('Skylake') CPU @ 2.40 GHz
- 384 GB DDR4 memory
- 930 GB on-node NVMe storage
- 8 NVIDIA V100 ('Volta') GPUs, each with 16 GB HBM2 memory
- GPUs interconnected via NVLink
- 4 dual-port Mellanox MT27800 (ConnectX-5) EDR InfiniBand network cards
Each Cori GPU node contains 8 GPUs connected to each other in a 'hybrid cube-mesh' topology. In this arrangement, each GPU has a single NVLink connection to each of two GPUs, and a doubly-bonded NVLink connection to each of two more GPUs, with twice the bandwidth of a single NVLink connection. So, each GPU is connected directly to 4 others. All GPUs are connected to the Skylake CPUs and the InfiniBand network interface cards (NICs) via PCIe 3.0; there are 4 PCIe switches per node connecting the GPUs, NICs, and CPUs at a peak bandwidth of 16 GB/s in each direction. A diagram of this topology is provided below.
In the above diagram, one arrow represents one NVLink connection with a peak bandwidth of 25 GB/s per direction. A parallel set of arrows represents two NVLink connections which combine for a peak bandwidth of 50 GB/s. So, for example, on any node, GPU 0 has point-to-point connections with GPUs 1 and 2 at a peak bandwidth of 25 GB/s. Additionally, as illustrated, GPU 0 has point-to-point connections with GPUs 3 and 4 at a peak bandwidth of 50 GB/s.
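As a quick arithmetic check of the bandwidth figures above, the following Python sketch tabulates GPU 0's NVLink connections and the resulting peak per-direction bandwidths. The link-count map is taken directly from the description (single links to GPUs 1 and 2, double links to GPUs 3 and 4); the variable names are illustrative, and the link partners for the other seven GPUs follow the same cube-mesh pattern but are not enumerated here.

```python
# Peak per-direction NVLink bandwidth of a single link, in GB/s,
# per the topology description above.
NVLINK_BW_GBPS = 25

# Number of NVLink connections from GPU 0 to each directly connected
# peer GPU (illustrative map based on the diagram described in the text).
links_from_gpu0 = {1: 1, 2: 1, 3: 2, 4: 2}

for peer, n_links in links_from_gpu0.items():
    bw = n_links * NVLINK_BW_GBPS
    print(f"GPU 0 -> GPU {peer}: {n_links} link(s), {bw} GB/s per direction")

# GPU 0 has 6 NVLink connections in total, so its aggregate peak
# per-direction NVLink bandwidth is 6 x 25 GB/s = 150 GB/s.
total_links = sum(links_from_gpu0.values())
print(f"aggregate: {total_links} links, "
      f"{total_links * NVLINK_BW_GBPS} GB/s per direction")
```

On a live node, the actual link matrix can be inspected with `nvidia-smi topo -m`, which reports the NVLink and PCIe connectivity between each pair of GPUs and NICs.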
Image and information adapted from NVIDIA.