University of Bath

Research Computing Team (DDaT)

Anatra HPC Documentation

Running Gaussian Jobs

Gaussian 09

Below is an example script for a Gaussian 09 job, which should be submitted from the /scratch/projects/[project-code]/ storage area.

#!/bin/bash

#SBATCH --job-name=g09-test
#SBATCH --account=[your-account]

#SBATCH --partition=nodes
#SBATCH --output=%j.out
#SBATCH --error=%j.err

# Job time
#SBATCH --time=1-0            # 1 day. Max wall time is 5 days

#SBATCH --mem-per-cpu=2700    # Requesting more memory than this will
                              # allocate more CPUS to the job

## Gaussian uses single nodes
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12

module purge
module load gaussian/09/A.02

source $g09profile

export GAUSS_PDEF=${SLURM_CPUS_PER_TASK}
export GAUSS_MDEF=$((SLURM_MEM_PER_CPU*SLURM_CPUS_PER_TASK))MB

echo $GAUSS_SCRDIR
mkdir -p $GAUSS_SCRDIR
chmod 700 $GAUSS_SCRDIR

infile="inputs/example.gjf"


g09 < "$infile" > output.log

#Clean Up
rm -rf $GAUSS_SCRDIR
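The GAUSS_PDEF and GAUSS_MDEF lines above tell Gaussian how many CPUs and how much memory to use, computed from the Slurm allocation rather than hard-coded in the input file. As a standalone sketch of that arithmetic, with the values from the #SBATCH directives (2700 MB per CPU, 12 CPUs) filled in by hand:

```shell
# Stand-ins for the variables Slurm sets inside a real job,
# matching --mem-per-cpu=2700 and --cpus-per-task=12 above
SLURM_MEM_PER_CPU=2700
SLURM_CPUS_PER_TASK=12

export GAUSS_PDEF=${SLURM_CPUS_PER_TASK}
export GAUSS_MDEF=$((SLURM_MEM_PER_CPU * SLURM_CPUS_PER_TASK))MB

echo "$GAUSS_MDEF"   # 32400MB, i.e. 2700 MB x 12 CPUs
```

Because these environment variables set the CPU count and memory, the corresponding %NProcShared and %Mem Link 0 lines can be left out of the input file.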

Gaussian 16

Below is an example script for a Gaussian 16 job, which should be submitted from the /scratch/projects/[project-code]/ storage area.

#!/bin/bash

#SBATCH --job-name=g16-test
#SBATCH --account=[your-account]

#SBATCH --partition=chemistry
#SBATCH --output=%j.out
#SBATCH --error=%j.err

# Job time
#SBATCH --time=1-0            # 1 day. Max wall time is 3 days

#SBATCH --mem-per-cpu=2700    # Requesting more memory than this will
                              # allocate more CPUS to the job

## Gaussian uses single nodes
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12

module purge
module load gaussian/16/C.01

source $g16profile

export GAUSS_PDEF=${SLURM_CPUS_PER_TASK}
export GAUSS_MDEF=$((SLURM_MEM_PER_CPU*SLURM_CPUS_PER_TASK))MB

echo $GAUSS_SCRDIR
mkdir -p $GAUSS_SCRDIR
chmod 700 $GAUSS_SCRDIR

infile="inputs/example.gjf"


g16 < "$infile" > output.log

# Clean Up
rm -rf $GAUSS_SCRDIR
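Both scripts read their input from inputs/example.gjf. For reference, a minimal Gaussian input file might look like the following (the route section and geometry here are purely illustrative, not a recommended calculation):

```
%Chk=example.chk
#P B3LYP/6-31G(d) Opt

Water geometry optimisation

0 1
O  0.000000  0.000000  0.117300
H  0.000000  0.757200 -0.469200
H  0.000000 -0.757200 -0.469200

```

Note the blank line at the end of the file, which Gaussian requires.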

Assuming one of these scripts is saved as example.slm, it can be submitted with the command:

sbatch example.slm

GAUSS_SCRDIR

As part of the pre-job commands for Slurm, a temporary directory is created at /tmp/<user_id>/[id], where [id] is the Slurm ID of the job. This directory is automatically exported as $GAUSS_SCRDIR and should not be altered. Using it improves the performance of your job by performing write-heavy operations on the local NVMe drives of the compute node, prevents /scratch from being slowed down, and cleans up files for you after the job has completed.

If for some reason you need these files, you have read access to the directory while the job is running, but its contents will be deleted when the Slurm job ends.
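If you need to keep any of these intermediate files (for example a checkpoint), copy them out of $GAUSS_SCRDIR before the clean-up step at the end of your job script. A minimal sketch, using throwaway directories created with mktemp in place of the real paths Slurm provides:

```shell
# Throwaway stand-ins for the directories a real job would have
GAUSS_SCRDIR=$(mktemp -d)
SLURM_SUBMIT_DIR=$(mktemp -d)
touch "$GAUSS_SCRDIR/example.chk"   # pretend Gaussian wrote a checkpoint

# Copy checkpoint files back to the submission directory
# before they are deleted at the end of the job
cp "$GAUSS_SCRDIR"/*.chk "$SLURM_SUBMIT_DIR"/
ls "$SLURM_SUBMIT_DIR"
```

In a real job script, the cp line would go just before the `rm -rf $GAUSS_SCRDIR` clean-up step, with both variables already set by Slurm and the Gaussian module.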