The BULLx DLC cluster uses so-called front-end nodes (login nodes) running Linux for interactive access and for the submission of batch jobs through SLURM. Parallel applications must be cross-compiled on the front-end nodes and can only be executed on the partitions residing on the BULLx/B720 compute nodes. Access is controlled automatically by the SLURM batch system, which chooses the appropriate partition depending on the requested resources. Serial jobs are executed on the front-end node.
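As a minimal sketch of such a batch job, the script below requests resources and launches a parallel application; the job name, resource values, and executable are placeholders and should be adapted to your case:

    #!/bin/bash
    #SBATCH --job-name=hello          # job name shown in the queue
    #SBATCH --nodes=2                 # number of compute nodes requested
    #SBATCH --ntasks-per-node=24      # MPI tasks per node
    #SBATCH --time=00:30:00           # wall-clock limit (hh:mm:ss)
    #SBATCH --output=hello_%j.out     # output file; %j expands to the job ID

    # launch the cross-compiled parallel application on the allocated nodes
    srun ./hello_mpi

Submitting this script from a login node with sbatch lets SLURM place the job on the appropriate compute-node partition.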
This section will be available soon. Please feel free to contact us in the meantime.
The batch system on LIGER is SLURM, an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system. The batch system is responsible for managing jobs on the machine, returning job output to the user, and providing job control at the user's or the system administrator's request. SLURM has three key functions:

- it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work;
- it provides a framework for starting, executing, and monitoring work (typically a parallel job) on the set of allocated nodes;
- it arbitrates contention for resources by managing a queue of pending work.
A quick-start explanation of SLURM is given here.
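As a minimal quick-start sketch, the most frequently used user commands are listed below (the job ID and script name are illustrative):

    sbatch job.slurm          # submit a batch script to the scheduler
    squeue -u $USER           # list your pending and running jobs
    scancel 12345             # cancel job 12345
    sinfo                     # show the state of partitions and nodes
    scontrol show job 12345   # display detailed information about a job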
All jobs must be run through the SLURM scheduler. If a job would exceed any of the limits below, it is held until it becomes eligible to run. Jobs submitted by new users are given priority over jobs from users who already have jobs running. The consumable resource configured is the core.
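For example, a held job appears in squeue in the pending (PD) state, with the reason it is not yet eligible shown in the last column (the job ID, user, partition name, and reason below are illustrative):

    $ squeue -u $USER
      JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
      12345   compute    hello     jdoe PD       0:00      2 (QOSMaxCpuPerUserLimit)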
Jobs are further prioritized through the SLURM scheduler based on a multifactor priority policy; on LIGER, the contributing factors are weighted in a specific order of importance.
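As a sketch, the contribution of each priority factor to a pending job can be inspected with the sprio command (the job ID and values shown are illustrative):

    $ sprio -j 12345
      JOBID   PRIORITY        AGE  FAIRSHARE    JOBSIZE  PARTITION        QOS
      12345       2135        135       1200        300        300        200

The job's total priority is the weighted sum of these factors.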
See → Compiling and Tuning Applications for how to compile Fortran, C, or C++ programs.