Batch Scheduler Rosetta
User Commands | PBS | Slurm | LSF |
---|---|---|---|
Job Submission | qsub [script_file] | sbatch [script_file] | bsub < [script_file] |
Job Deletion | qdel [job_id] | scancel [job_id] | bkill [job_id] |
Job status (by job) | qstat [job_id] | squeue [job_id] | bjobs [job_id] |
Job status (by user) | qstat -u [user_name] | squeue -u [user_name] | bjobs -u [user_name] |
Job hold | qhold [job_id] | scontrol hold [job_id] | bstop [job_id] |
Job release | qrls [job_id] | scontrol release [job_id] | bresume [job_id] |
Queue list | qstat -Q | squeue | bqueues |
Node list | pbsnodes -l | sinfo -N OR scontrol show nodes | bhosts |
Cluster status | qstat -a | sinfo | bqueues |
GUI | xpbsmon | sview | xlsf OR xlsbatch |
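
The command rows above can be distilled into a small lookup table for scripts that must drive whichever scheduler a site runs. The helper below is a hypothetical sketch (not part of any scheduler's tooling), covering a few of the actions from the table:

```python
# Minimal command rosetta distilled from the table above.
# The action names ("submit", "hold", ...) are our own labels, not scheduler terms.
COMMANDS = {
    "submit":  {"pbs": "qsub",  "slurm": "sbatch",           "lsf": "bsub"},
    "delete":  {"pbs": "qdel",  "slurm": "scancel",          "lsf": "bkill"},
    "hold":    {"pbs": "qhold", "slurm": "scontrol hold",    "lsf": "bstop"},
    "release": {"pbs": "qrls",  "slurm": "scontrol release", "lsf": "bresume"},
}

def equivalent(action, scheduler):
    """Return the command for `action` under `scheduler` ('pbs', 'slurm', or 'lsf')."""
    return COMMANDS[action][scheduler.lower()]

print(equivalent("hold", "lsf"))  # prints: bstop
```

A wrapper script would then build e.g. `equivalent("delete", sched) + " " + job_id` instead of hard-coding one scheduler's CLI.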

Environment | PBS | Slurm | LSF |
---|---|---|---|
Job ID | $PBS_JOBID | $SLURM_JOBID | $LSB_JOBID |
Submit Directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR | $LSB_SUBCWD |
Submit Host | $PBS_O_HOST | $SLURM_SUBMIT_HOST | $LSB_SUB_HOST |
Node List | $PBS_NODEFILE | $SLURM_JOB_NODELIST | $LSB_HOSTS/LSB_MCPU_HOST |
Job Array Index | $PBS_ARRAYID | $SLURM_ARRAY_TASK_ID | $LSB_JOBINDEX |
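
Because each scheduler exports a distinct job-ID variable, checking which one is set is a simple way to make a job script scheduler-agnostic. A minimal sketch (the function name is our own, not a library API):

```python
import os

# Job-ID environment variables from the table above, one per scheduler.
_JOBID_VARS = {"PBS_JOBID": "pbs", "SLURM_JOBID": "slurm", "LSB_JOBID": "lsf"}

def detect_scheduler(env=os.environ):
    """Return 'pbs', 'slurm', or 'lsf' based on which job-ID variable is set,
    or None when running outside any batch job."""
    for var, name in _JOBID_VARS.items():
        if var in env:
            return name
    return None

print(detect_scheduler({"SLURM_JOBID": "12345"}))  # prints: slurm
```

The same pattern extends to the other rows, e.g. picking `$PBS_O_WORKDIR` vs. `$SLURM_SUBMIT_DIR` vs. `$LSB_SUBCWD` for the submit directory.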

Job Specification | PBS | Slurm | LSF |
---|---|---|---|
Script Directive | #PBS | #SBATCH | #BSUB |
Queue | -q [queue] | -p [queue] | -q [queue] |
Node Count | -l nodes=[count] | -N [min[-max]] | -n [count] |
CPU Count | -l ppn=[count] OR -l mppwidth=[PE_count] | -n [count] | -n [count] |
Wall Clock Limit | -l walltime=[hh:mm:ss] | -t [min] OR -t [days-hh:mm:ss] | -W [hh:mm] |
Standard Output File | -o [file_name] | -o [file_name] | -o [file_name] |
Standard Error File | -e [file_name] | -e [file_name] | -e [file_name] |
Combine stdout/err | -j oe (both to stdout) OR -j eo (both to stderr) | (use -o without -e) | (use -o without -e) |
Copy Environment | -V | --export=[ALL \| NONE \| variables] | |
Event Notification | -m abe | --mail-type=[events] | -B OR -N |
Email Address | -M [address] | --mail-user=[address] | -u [address] |
Job Name | -N [name] | --job-name=[name] | -J [name] |
Job Restart | -r [y\|n] | --requeue OR --no-requeue (NOTE: configurable default) | -r |
Working Directory | N/A | --workdir=[dir_name] | (submission directory) |
Resource Sharing | -l naccesspolicy=singlejob | --exclusive OR --shared | -x |
Memory Size | -l mem=[MB] | --mem=[mem][M\|G\|T] OR --mem-per-cpu=[mem][M\|G\|T] | -M [MB] |
Account to charge | -W group_list=[account] | --account=[account] | -P [account] |
Tasks per Node | -l mppnppn=[PEs_per_node] | --ntasks-per-node=[count] | |
CPUs per task | | --cpus-per-task=[count] | |
Job Dependency | -W depend=[type:job_id] | --depend=[state:job_id] | -w [done \| exit \| finish] |
Job Project | | --wckey=[name] | -P [name] |
Job host preference | | --nodelist=[nodes] AND/OR --exclude=[nodes] | -m [nodes] |
Quality of Service | -l qos=[name] | --qos=[name] | |
Job Arrays | -t [array_spec] | --array=[array_spec] (Slurm version 2.6+) | -J "name[array_spec]" |
Generic Resources | -l other=[resource_spec] | --gres=[resource_spec] | |
Licenses | | --licenses=[license_spec] | -R "rusage[license_spec]" |
Begin Time | -a [[[[CC]YY]MM]DD]hhmm[.SS] | --begin=YYYY-MM-DD[THH:MM[:SS]] | -b [[year:][month:]day:]hour:minute |
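
Putting the directive rows together, here is the same small job sketched in Slurm form, with the PBS and LSF equivalents from the table shown as comments. The queue name `batch` and the job name `demo` are placeholders, not site defaults:

```shell
#!/bin/bash
#SBATCH --job-name=demo   # PBS: #PBS -N demo              LSF: #BSUB -J demo
#SBATCH -p batch          # PBS: #PBS -q batch             LSF: #BSUB -q batch
#SBATCH -N 2              # PBS: #PBS -l nodes=2           LSF: #BSUB -n 2  (LSF counts slots, not nodes)
#SBATCH -t 01:00:00       # PBS: #PBS -l walltime=01:00:00 LSF: #BSUB -W 01:00
#SBATCH -o demo.out       # same -o flag in all three

cd "${SLURM_SUBMIT_DIR:-$PWD}"   # PBS: $PBS_O_WORKDIR    LSF: $LSB_SUBCWD
echo "job ${SLURM_JOBID:-<none>} running on $(hostname)"
```

Submission then follows the first table: `sbatch job.sh`, `qsub job.sh`, or `bsub < job.sh` (note that LSF reads the script from stdin so that `#BSUB` directives are parsed).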