University of Bath

Research Computing Team (DDaT)

Anatra HPC Documentation

Hardware

Overview:

  • Teaching: 10 min
  • Exercises: 0 min

Questions

  • What hardware is available?

Objectives

  • Understand what compute resources are available on Anatra

Slurm partitions

The new HPC service, Anatra, provides users with access to an array of different compute instances. These instances are accessed through different Slurm partitions.

To list the partitions, issue the following command:

sinfo

which lists each partition, its nodes, their availability, and their current state:

[bad45@loginnode1 ~]$
❯ sinfo
PARTITION    AVAIL  TIMELIMIT  NODES  STATE NODELIST
nodes*        up   infinite      8   idle node-[001-008]
chemistry     up   infinite      5    mix node-[009-012,014]
chemistry     up   infinite      3  alloc node-[013,015-016]
chemgpu       up   infinite      1   idle node-018
lifesci       up   infinite      1   idle node-017
lifescigpu    up   infinite      1   idle node-019

Hardware

The Anatra cluster consists of a login node and nineteen compute nodes.

Login node

The login node has the following resources:

  • 16 Cores
  • 100 GB memory
  • 2 TB of shared storage space

Compute nodes

The compute nodes have the following resources:

  • 96 Cores (node-0[01-13])
  • 48 Cores (node-0[14-16])
  • 384 Cores (node-017/High Mem)
  • 256 GB memory (node-0[01-08])
    • ~2.7 GB / core
    • 4 NUMA zones per socket
  • 384 GB memory (node-0[09-13])
    • ~4 GB / core
  • 768 GB memory (node-0[14-16])
    • ~16 GB / core
  • 2.2 TB memory (node-017)
    • ~5.7 GB / core
  • ChemGpu (node-018)
    • 96 Cores
    • ~8 GB / core
    • GPU: 1 × L40S (gpu:l40s:1)
  • LifesciGpu (node-019)
    • 256 Cores
    • ~4.3 GB / core
    • GPU: 2 × H100 (gpu:h100:2)
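GPU nodes are typically requested through Slurm's generic resources (GRES) mechanism. The batch script below is a minimal sketch only: the partition and GRES names come from the tables above, while the core count, walltime and job name are illustrative placeholders, not site policy.

```shell
#!/bin/bash
#SBATCH --job-name=gpu-test        # illustrative job name
#SBATCH --partition=chemgpu        # partition from the sinfo listing above
#SBATCH --gres=gpu:l40s:1          # one L40S GPU, matching the GRES name above
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8          # placeholder core count
#SBATCH --time=01:00:00            # placeholder walltime

# nvidia-smi reports the GPU(s) allocated to the job
nvidia-smi
```

Submit it with `sbatch <script>`; for the lifescigpu partition the GRES line would become `--gres=gpu:h100:2` (or `:1` for a single H100).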

It is worth noting that jobs which span multiple NUMA zones or sockets may see a notable decrease in performance if they are memory-bandwidth limited. As such, it is rarely advisable to request 25 cores where 24 would suffice.
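The approximate per-core memory figures above are simply total node memory divided by core count; for example, for node-0[01-08]:

```shell
# 256 GB of memory shared across 96 cores
awk 'BEGIN { printf "%.1f GB/core\n", 256/96 }'   # → 2.7 GB/core
```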

Key Points:

  • All compute resources are currently accessed via the nodes*, chemistry, chemgpu, lifesci and lifescigpu partitions, whilst VNC sessions run on the login node