The Anatra High Performance Compute (HPC) system provides University of Bath researchers with specialized computing infrastructure tailored to diverse research requirements. The cluster comprises 20 compute nodes organized into dedicated partitions, each optimized for specific research domains and computational workloads.
The system is structured into five distinct partitions, each serving specific research communities:
| Partition Name | Node Range | Purpose | Access |
|---|---|---|---|
| nodes | node001–node008 | General-purpose computing | All users |
| chemistry | node009–node016 | Chemistry computations | Chemistry users |
| lifesci | node017 | Life Sciences workloads | Life Sciences users |
| chemgpu | node018, node020 | GPU-accelerated computing | Chemistry users |
| lifescigpu | node019 | GPU-accelerated computing | Life Sciences users |
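A batch job is directed to one of these partitions with SLURM's `--partition` flag. The sketch below is a minimal, illustrative job script: the partition names come from the table above, but the job name, time limit, and GPU request are placeholder values, not verified Anatra settings.

```shell
#!/bin/bash
# Minimal illustrative job script -- resource values are placeholders,
# not verified Anatra defaults.
#SBATCH --job-name=example
#SBATCH --partition=chemgpu     # one of: nodes, chemistry, lifesci, chemgpu, lifescigpu
#SBATCH --gres=gpu:1            # GPU partitions only
#SBATCH --time=01:00:00

echo "Running on $(hostname)"
```

Note that access control applies: submitting to `chemistry` or `chemgpu`, for example, requires a Chemistry user account.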
After connecting to the Anatra HPC system, users can query detailed information about nodes and cluster partitions using the following SLURM commands:
```
sinfo -Nel
```
Command breakdown:
- `sinfo` — Display node and partition information
- `-N` — Show node-specific details rather than partition summaries
- `-e` — List each distinct node configuration separately rather than grouping similar nodes
- `-l` — Use long listing format for comprehensive output

Use case: Provides a complete overview of all partitions and nodes, including their current states and hardware capabilities.
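The node-centric output lends itself to post-processing with standard tools. As a sketch, the one-liner below counts idle nodes per partition; the here-document stands in for real `sinfo -Nel` output (the sample node data is hypothetical), so on the cluster you would pipe `sinfo -Nel` into the same awk program instead.

```shell
# Count idle nodes per partition from `sinfo -Nel`-style output.
# In -Nel output, column 3 is the partition and column 4 the state;
# the sample lines below are illustrative, not real Anatra data.
awk '$4 == "idle" { idle[$3]++ }
     END { for (p in idle) print p, idle[p] }' <<'EOF'
NODELIST NODES PARTITION STATE CPUS S:C:T MEMORY
node001 1 nodes idle 64 2:32:1 256000
node002 1 nodes alloc 64 2:32:1 256000
node009 1 chemistry idle 64 2:32:1 256000
EOF
```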
```
scontrol show nodes
```
Command breakdown:
- `scontrol` — SLURM control utility for viewing and modifying cluster configuration
- `show nodes` — Display comprehensive hardware details for each node

Information provided: CPU counts and socket/core/thread layout, total and allocated memory, node state, available features, and generic resources (GRES) such as GPUs.
Use case: Obtain detailed hardware-level information about cluster resources for resource planning and job optimization.
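Because `scontrol show nodes` emits `Key=Value` pairs, specific fields are easy to pull out with `grep`. The sketch below extracts node names, CPU totals, and memory; the here-document is an illustrative stand-in with hypothetical values, so on the cluster you would pipe `scontrol show nodes` into the same command instead.

```shell
# Pull NodeName, CPUTot, and RealMemory fields out of
# `scontrol show nodes`-style Key=Value output.
# The sample lines are illustrative, not real Anatra data.
grep -Eo '(NodeName|CPUTot|RealMemory)=[^ ]+' <<'EOF'
NodeName=node001 Arch=x86_64 CPUTot=64 RealMemory=256000 State=IDLE
NodeName=node009 Arch=x86_64 CPUTot=64 RealMemory=512000 State=ALLOCATED
EOF
```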
```
sinfo -o "%P %N %t %C %m %G"
```
Command breakdown:
- `sinfo` — Display node and partition information
- `-o` — Specify a custom output format using field specifiers

Output columns:
- `%P` — Partition name
- `%N` — Node list
- `%t` — Node state (idle, alloc, down, drain, etc.)
- `%C` — CPU allocation summary (format: Allocated/Idle/Other/Total)
- `%m` — Memory available per node (in MB)
- `%G` — GPU type and count (if equipped)

Use case: Quickly assess available resources across all partitions in a compact, tabular format for efficient job submission planning.
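The `%C` field packs four counts into a single `Allocated/Idle/Other/Total` token, so a short awk `split` recovers, for example, the idle-CPU count per partition. The sample lines below are hypothetical output of the custom format above (with `-h`, sinfo's no-header flag, assumed so there is no header row to skip):

```shell
# Sum idle CPUs per partition from the %C (Allocated/Idle/Other/Total)
# column. On the cluster, pipe in real data instead of the here-doc:
#   sinfo -h -o "%P %N %t %C %m %G" | awk ...
# The sample lines are illustrative, not real Anatra data.
awk '{ split($4, c, "/"); idle[$1] += c[2] }
     END { for (p in idle) print p, idle[p] }' <<'EOF'
nodes node[001-008] idle 0/512/0/512
chemistry node[009-016] mix 256/256/0/512
chemgpu node018 alloc 64/0/0/64
EOF
```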
To complete the workshop you should be familiar with:
Approximate timings for the lesson:
| Time | Episode | Description |
|---|---|---|
| -:-- | Setup | Setup for the lesson |
| 0:05 | Accessing Anatra | Logging onto the system |
| 0:20 | Slurm | A brief overview of slurm |
| 0:30 | Hardware | Overview of available hardware |
| 0:40 | Storage | Storage set-up and where to keep your data |
| 0:50 | Software | Using software modules |
| 0:55 | Running Gaussian Jobs | Submitting Gaussian Jobs |
| 0:55 | Starting a VNC session | Running VNC Sessions |
| 0:55 | Running GPU Jobs | Submitting GPU jobs for CUDA applications |
| 0:55 | Apptainer Containers | Running Containerized Jobs with Apptainer |
| 0:55 | Running ORCA Jobs | Submitting ORCA Jobs |