Slurm


Slurm batch software

The science cluster is switching to Slurm for batch management. At the moment the TCM and HEF nodes have all been switched to Slurm; the other nodes will follow.


Submitting your first job

To execute your job on the cluster you need to write it in the form of a shell script (don't worry, this is easier than it sounds). A shell script in its most basic form is just a list of commands that you would otherwise type on the command line. The first line of the script tells the system which type of shell is supposed to execute it. Unless you really need something else, you should always use bash. So without further ado, here is your first shell script.

 #! /bin/bash
 echo "Hello world!"

Type this into an editor (such as nano or vi) and save it as hello.sh. To execute it on the headnode, give it executable permissions (not needed when submitting)

 $ chmod u+x hello.sh

and run it.

 $ ./hello.sh

It will print (//echo//) "Hello world!" to the screen (also called //standard out// or stdout). If anything goes wrong, an error message will be sent to //standard error// (stderr), which in this case is also the screen.
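
To see the difference between the two streams, here is a minimal sketch of a script that writes one line to each (the >&2 redirection sends that echo to stderr instead of stdout):

 #! /bin/bash
 echo "This goes to stdout"
 echo "This goes to stderr" >&2

On the command line both lines appear on your screen, but when run as a batch job the two streams can be directed to different files, as shown further below.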

To execute this script as a //job// on one of the compute nodes, we submit it to the cluster //scheduler//. This is done with the command sbatch.

 $ sbatch hello.sh

The scheduler will put your job in the default job //partition// and respond by giving you the //job number// (10 in this example).

 Submitted batch job 10
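
You can verify that your job is in the queue with squeue; the -u option limits the listing to the jobs of one user. sinfo shows the available partitions and the state of the nodes. Both are standard Slurm commands:

 $ squeue -u $USER
 $ sinfo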

Your job will now wait until a slot on one of the compute nodes becomes available. It will then be executed, and the output will be written to a file slurm-10.out in your home directory (unless you specify otherwise, as explained later).
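
As a preview of how to specify otherwise: options to sbatch can be embedded in the script itself on lines that start with #SBATCH. Here is a minimal sketch that renames the output files; --output and --error are standard sbatch options, the %j placeholder is replaced by the job number, and the file names are just an example:

 #! /bin/bash
 #SBATCH --output=hello-%j.out
 #SBATCH --error=hello-%j.err
 echo "Hello world!"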