Job Options
| Description | Option |
| --- | --- |
| Account to be charged for resources used | `-A, --account=<account>` |
| Job array specification (sbatch only) | `-a, --array=<indexes>` |
| Initiate job after specified time | `-b, --begin=<time>` |
| Required node features | `-C, --constraint=<features>` |
| Bind tasks to specific CPUs (srun only) | `--cpu-bind=<type>` |
| Number of CPUs required per task | `-c, --cpus-per-task=<count>` |
| Defer job until specified jobs reach specified state | `-d, --dependency=<state:jobid>` |
| Specify distribution methods for remote processes | `-m, --distribution=<method[:method]>` |
| File in which to store job error messages (sbatch and srun only) | `-e, --error=<filename>` |
| Specify host names to exclude from job allocation | `-x, --exclude=<hostnames>` |
| Reserve all CPUs and GPUs on allocated nodes | `--exclusive` |
| Export specified environment variables (e.g., all, none) | `--export=<name=value>` |
| Number of GPUs required per task | `--gpus-per-task=<list>` |
| Job name | `-J, --job-name=<name>` |
| Prepend task ID to output (srun only) | `-l, --label` |
| E-mail notification type (e.g., begin, end, fail, requeue, all) | `--mail-type=<type>` |
| E-mail address | `--mail-user=<address>` |
| Memory required per allocated node (e.g., 16GB) | `--mem=<size>[units]` |
| Memory required per allocated CPU (e.g., 2GB) | `--mem-per-cpu=<size>[units]` |
| Specify host names to include in job allocation | `-w, --nodelist=<hostnames>` |
| Number of nodes required for the job | `-N, --nodes=<count>` |
| Number of tasks to be launched | `-n, --ntasks=<count>` |
| Number of tasks to be launched per node | `--ntasks-per-node=<count>` |
| File in which to store job output (sbatch and srun only) | `-o, --output=<filename>` |
| Partition in which to run the job | `-p, --partition=<names>` |
| Signal job when approaching time limit | `--signal=[B:]<num>[@time]` |
| Limit for job run time | `-t, --time=<time>` |
These options can be used on the command line; in a submission script, they must be preceded by `#SBATCH`.
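As a reference point, here is a minimal sketch of a submission script combining several of these options; the partition, e-mail address, and program name are placeholders, not site defaults.

```bash
#!/bin/bash
#SBATCH -J example_job           # job name
#SBATCH -p standby               # partition (placeholder)
#SBATCH -N 1                     # one node
#SBATCH -n 4                     # four tasks
#SBATCH --mem=16GB               # memory per node
#SBATCH -t 01:00:00              # one-hour run time limit
#SBATCH -o job.%j.out            # output file (%j expands to the job ID)
#SBATCH --mail-type=end,fail     # e-mail when the job ends or fails
#SBATCH --mail-user=you@example.edu

srun ./my_program                # launch the tasks (placeholder program)
```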
Job submission
| Description | Command |
| --- | --- |
| Submit a batch script | `sbatch` |
| Request allocation for interactive job | `salloc` |
| Request allocation and run an application | `srun` |
sbatch and salloc examples
```bash
# Request interactive job with 4 CPUs on the standby partition
salloc -p standby -c 4
# Request interactive job with 3 GPUs
salloc -p comm_gpu_inter --ntasks=1 --gpus=3
# Submit batch job
sbatch runjob.slurm
```
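Since `srun` both requests an allocation and runs an application, the same kind of request can be made in a single step; a short sketch, reusing the placeholder partition and program names from above:

```bash
# Request an allocation and run an application in one step
srun -p standby -N 1 -n 4 ./my_program
# Request an interactive shell on a compute node
srun -p standby -c 4 --pty bash
```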
sprio options
| Description | Option |
| --- | --- |
| Output format to display | `-o, --format=<options>` |
| Filter by job IDs (csl) | `-j, --jobs=<job_id_list>` |
| Show more available information | `-l, --long` |
| Show the normalized priority factors | `-n, --norm` |
| Filter by partitions (csl) | `-p, --partition=<partition_list>` |
| Filter by users (csl) | `-u, --user=<user_list>` |
csl = comma-separated list
sprio examples
```bash
# View normalized job priorities for your own jobs
sprio -nu $USER
# View normalized job priorities for specified partition in long format
sprio -nlp standby
```
scancel examples
```bash
# Cancel specific job
scancel 314159
# Cancel all your own jobs
scancel -u $USER
# Cancel your own jobs on specified partition
scancel -u $USER -p standby
# Cancel your own jobs in specified state
scancel -u $USER -t pending
```
scancel options
| Description | Option |
| --- | --- |
| Restrict to the specified account | `-A, --account=<account>` |
| Restrict to jobs with specified name | `-n, --name=<job_name>` |
| Restrict to jobs using the specified host names (csl) | `-w, --nodelist=<hostnames>` |
| Restrict to the specified partition | `-p, --partition=<partition>` |
| Restrict to the specified user | `-u, --user=<username>` |
csl = comma-separated list
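These restrictions can be combined to narrow what gets cancelled; a brief sketch, where the job name is a hypothetical placeholder:

```bash
# Cancel only your own jobs named test_run on the standby partition
scancel -u $USER -n test_run -p standby
```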
squeue examples
```bash
# View your own job queue
squeue --me
# View your own job queue with estimated start times for pending jobs
squeue --me --start
# View job queue on specified partition in long format
squeue -lp epyc-64
```
squeue options
| Description | Option |
| --- | --- |
| Filter by accounts (csl) | `-A, --account=<account_list>` |
| Output format to display | `-o, --format=<options>` |
| Filter by job IDs (csl) | `-j, --jobs=<job_id_list>` |
| Show more available information | `-l, --long` |
| Filter by your own jobs | `--me` |
| Filter by job names (csl) | `-n, --name=<job_name_list>` |
| Filter by partitions (csl) | `-p, --partition=<partition_list>` |
| Sort jobs by priority | `-P, --priority` |
| Show the expected start time and resources to be allocated for pending jobs | `--start` |
| Filter by states (csl) | `-t, --states=<state_list>` |
| Filter by users (csl) | `-u, --user=<user_list>` |
csl = comma-separated list
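The `-o, --format` option takes printf-style field specifiers; a brief sketch (the fields and widths shown are arbitrary choices, not defaults):

```bash
# Show job ID, partition, job name, user, state, and elapsed time
squeue -u $USER -o "%.10i %.9P %.12j %.8u %.2t %.10M"
```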
Job Management
| Description | Command |
| --- | --- |
| View information about jobs in queue | `squeue` |
| Signal or cancel jobs, job arrays, or job steps | `scancel` |
| View job scheduling priorities | `sprio` |
Partition and node information
| Description | Command |
| --- | --- |
| View information about nodes and partitions | `sinfo` |
| View or modify configuration and state | `scontrol` |
sinfo options
| Description | Option |
| --- | --- |
| Output format to display | `-o, --format=<options>` |
| Show more available information | `-l, --long` |
| Show information in a node-oriented format | `-N, --Node` |
| Filter by host names (comma-separated list) | `-n, --nodes=<hostnames>` |
| Filter by partitions (comma-separated list) | `-p, --partition=<partition_list>` |
| Filter by node states (comma-separated list) | `-t, --states=<state_list>` |
| Show summary information | `-s, --summarize` |
sinfo examples
```bash
# View all partitions and nodes by state
sinfo
# Summarize node states by partition
sinfo -s
# View nodes in idle state
sinfo --states=idle
# View nodes for specified partition in long, node-oriented format
sinfo -lNp standby
```
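Like `squeue`, `sinfo` accepts printf-style field specifiers through `-o, --format`; a brief sketch (the fields and widths are arbitrary choices):

```bash
# Show partition, availability, time limit, node count, and node state
sinfo -o "%.12P %.5a %.11l %.6D %.8t"
```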
scontrol actions and options
| Description | Action/Option |
| --- | --- |
| Show more details | `-d, --details` |
| Show information on one line | `-o, --oneliner` |
| Show partition | `show partition <partition>` |
| Show node | `show node <hostname>` |
| Show job | `show job <job_id>` |
| Hold jobs | `hold <job_list>` |
| Release jobs | `release <job_list>` |
| Show host names | `show hostnames` |
scontrol examples
```bash
# View information for specified partition
scontrol show partition standby
# View information for specified node
scontrol show node tcocs002
# View detailed information for running job
scontrol -d show job 314159
# View host names for job (one name per line)
scontrol show hostnames
```
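Inside a job, `scontrol show hostnames` expands the allocation's node list (from `SLURM_JOB_NODELIST`); a sketch that writes one host name per line to a machine file (the file name is a placeholder):

```bash
# Inside a batch job: build a machine file from the allocated nodes
scontrol show hostnames "$SLURM_JOB_NODELIST" > machines.txt
```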
Slurm environment variables
| Description | Variable |
| --- | --- |
| Number of tasks in job array | `SLURM_ARRAY_TASK_COUNT` |
| Job array task ID | `SLURM_ARRAY_TASK_ID` |
| Number of CPUs requested per task | `SLURM_CPUS_PER_TASK` |
| Account used for job | `SLURM_JOB_ACCOUNT` |
| Job ID | `SLURM_JOB_ID` |
| Job name | `SLURM_JOB_NAME` |
| List of nodes allocated to job | `SLURM_JOB_NODELIST` |
| Number of nodes allocated to job | `SLURM_JOB_NUM_NODES` |
| Partition used for job | `SLURM_JOB_PARTITION` |
| Number of job tasks | `SLURM_NTASKS` |
| MPI rank of current process | `SLURM_PROCID` |
| Directory from which job was submitted | `SLURM_SUBMIT_DIR` |
| Number of job tasks per node | `SLURM_NTASKS_PER_NODE` |
Slurm environment variables examples
```bash
# Specify OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Specify MPI tasks
srun -n $SLURM_NTASKS ./mpi_program
```
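Putting these together, a minimal sketch of a batch script that uses several of the variables above; the program name ./hybrid_program is a placeholder:

```bash
#!/bin/bash
#SBATCH -N 2                    # two nodes
#SBATCH --ntasks-per-node=4     # four MPI tasks per node
#SBATCH -c 8                    # eight CPUs per task
#SBATCH -t 00:30:00

echo "Job $SLURM_JOB_ID on $SLURM_JOB_NUM_NODES nodes: $SLURM_JOB_NODELIST"
cd "$SLURM_SUBMIT_DIR"                        # run from the submission directory
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # one OpenMP thread per allocated CPU
srun -n $SLURM_NTASKS ./hybrid_program        # launch one MPI rank per task
```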