University of Bath

Research Computing Team (DDaT)

Anatra HPC Documentation

Introduction to the Anatra HPC service

Anatra High Performance Computing System

Overview

The Anatra High Performance Computing (HPC) system provides University of Bath researchers with specialized computing infrastructure tailored to diverse research requirements. The cluster comprises 20 compute nodes organized into dedicated partitions, each optimized for specific research domains and computational workloads.


Partition Architecture

The system is structured into five distinct partitions, each serving specific research communities:

Partition Name   Node Range         Purpose                      Access
nodes            node001–node008    General-purpose computing    All users
chemistry        node009–node016    Chemistry computations       Chemistry users
lifesci          node017            Life Sciences workloads      Life Sciences users
chemgpu          node018, node020   GPU-accelerated computing    Chemistry users
lifescigpu       node019            GPU-accelerated computing    Life Sciences users

Partition Descriptions

  • nodes: General-purpose partition available to all researchers across the university for standard computational tasks
  • chemistry: Dedicated resources for computational chemistry applications and molecular simulations
  • lifesci: Specialized node for life sciences research, including bioinformatics and genomics workflows
  • chemgpu: GPU-enabled computing for accelerated chemistry calculations and molecular dynamics
  • lifescigpu: GPU-enabled computing for life sciences applications requiring graphics processing capabilities
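Jobs are directed to a partition at submission time. A minimal job script might look like the following sketch; the partition name comes from the table above, while the node count, time limit, and program name are placeholder assumptions:

```shell
#!/bin/bash
# Hypothetical job script: "chemistry" is a partition from the table above;
# the node count, time limit, and program name are illustrative assumptions.
#SBATCH --partition=chemistry
#SBATCH --nodes=1
#SBATCH --time=01:00:00

srun ./my_program
```

Submitted with `sbatch jobscript.sh`, Slurm will reject the job if your account does not have access to the requested partition.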

System Information Commands

After connecting to the Anatra HPC system, users can query detailed information about nodes and partitions using the following SLURM commands:

Display All Partitions and Nodes

sinfo -Nel

Command breakdown:

  • sinfo — Display node and partition information
  • -N — Show node-specific details rather than partition summaries
  • -e — Report each distinct node configuration separately rather than grouping nodes with similar configurations
  • -l — Use long listing format for comprehensive output

Use case: Provides a complete overview of all partitions and nodes, including their current states and hardware capabilities.
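Because the output is one line per node, it is easy to post-process with standard shell tools. As a sketch, the following tallies nodes by state; the here-document contains hypothetical sample lines standing in for real `sinfo -Nel` output (in practice you would pipe `sinfo -Nel` straight into the awk command):

```shell
# Tally nodes by the STATE column (4th field of sinfo -Nel output).
# The here-document below is hypothetical sample output, not live data.
awk 'NR > 1 { count[$4]++ } END { for (s in count) print s, count[s] }' <<'EOF'
NODELIST NODES PARTITION STATE CPUS
node001 1 nodes idle 128
node002 1 nodes alloc 128
node003 1 nodes idle 128
EOF
```

This prints each state alongside the number of nodes in that state (here, two idle and one allocated).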


View Detailed Node Hardware Specifications

scontrol show nodes

Command breakdown:

  • scontrol — SLURM control utility for viewing and modifying cluster configuration
  • show nodes — Display comprehensive hardware details for each node

Information provided:

  • CPU architecture, core count, and socket configuration
  • Total and available memory
  • Node operational state (idle, allocated, drained, down)
  • GPU specifications (where applicable)
  • Real memory, temporary disk space, and other hardware attributes

Use case: Obtain detailed hardware-level information about cluster resources for resource planning and job optimization.
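The output of scontrol is a series of Key=Value pairs, which shell tools can filter down to the attributes of interest. The record below is a hypothetical stand-in, not real Anatra output:

```shell
# Pick CPU and memory attributes out of a scontrol-style record.
# The record is a hypothetical stand-in for `scontrol show node node001`.
record='NodeName=node001 CPUTot=64 RealMemory=256000 State=IDLE'
echo "$record" | tr ' ' '\n' | grep -E '^(CPUTot|RealMemory)='
# prints:
# CPUTot=64
# RealMemory=256000
```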


Generate Resource Summary Report

sinfo -o "%P %N %t %C %m %G"

Command breakdown:

  • sinfo — Display node and partition information
  • -o — Specify custom output format using field specifiers

Output columns:

  • %P — Partition name
  • %N — Node list
  • %t — Node state (idle, alloc, down, drain, etc.)
  • %C — CPU allocation summary (format: Allocated/Idle/Other/Total)
  • %m — Memory available per node (in MB)
  • %G — GPU type and count (if equipped)

Use case: Quickly assess available resources across all partitions in a compact, tabular format for efficient job submission planning.
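The %C column packs four numbers into one field, so extracting a single figure (such as total idle CPUs) takes a small split step. The sketch below sums the Idle component across partitions; the here-document is hypothetical sample output of the sinfo -o command above, with the header line omitted:

```shell
# Sum the Idle figure from the %C column (Allocated/Idle/Other/Total).
# The here-document is hypothetical sample sinfo -o output (header omitted).
awk '{ split($4, c, "/"); idle += c[2] } END { print idle, "idle CPUs" }' <<'EOF'
nodes node[001-008] idle 0/512/0/512 192000 (null)
chemistry node[009-016] mix 256/256/0/512 192000 (null)
EOF
# prints: 768 idle CPUs
```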


Prerequisites

To complete the workshop you should be familiar with:

  • The Linux command line
  • Accessing and submitting jobs to High Performance Computing clusters as a user

Schedule

Approximate timings for the lesson:

Time  Episode                  Description
-:--  Setup                    Setup for the lesson
0:05  Accessing Anatra         Logging onto the system
0:20  Slurm                    A brief overview of Slurm
0:30  Hardware                 Overview of available hardware
0:40  Storage                  Storage set-up and where to keep your data
0:50  Software                 Using software modules
0:55  Running Gaussian Jobs    Submitting Gaussian jobs
0:55  Starting a VNC session   Running VNC sessions
0:55  Running GPU Jobs         Submitting GPU jobs for CUDA applications
0:55  Apptainer Containers     Running containerized jobs with Apptainer
0:55  Running ORCA Jobs        Submitting ORCA jobs