Interaction with the supercomputer is typically performed with command-line tools, which are run from a command prompt, also known as a shell. SSH is used to establish a secure shell session with the supercomputer. In general, users should log in via the hostname liger.ec-nantes.fr.
Programs can be tested from the interactive nodes, but anything left running for more than an hour will be killed automatically. Read the batch job submission tutorial when you are ready to run your jobs.
Windows does not have SSH capabilities built in. Download an SSH client such as PuTTY, Bitvise Tunnelier or MobaXterm. Enter the hostname liger.ec-nantes.fr, click Connect, and enter your username and password when prompted. Once connected you can run commands.
Linux and Mac OS have SSH built in. Open a terminal, which gives you a command prompt on the local system. To connect to liger.ec-nantes.fr, run
ssh -X <login>@liger.ec-nantes.fr
<login> is your username. Once connected you can run commands.
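If you connect frequently, an entry in your SSH client configuration can save typing. This is an optional sketch; the alias liger is a hypothetical name of your own choosing:

```
# ~/.ssh/config -- optional convenience entry (the alias "liger" is hypothetical)
Host liger
    HostName liger.ec-nantes.fr
    User <login>
    ForwardX11 yes
```

With this in place, ssh liger is equivalent to the full command above.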
Note that you may need to add the -X option to the above ssh command in order to enable transparent forwarding of X applications to your local screen (this assumes that you have an X server running on your local machine). The OpenSSH version of ssh sometimes requires that you use -Y instead of -X (try -Y if applications don't appear or die with errors). These options to ssh direct the X communication through an encrypted tunnel, and should “just work”. In the bad old days before ssh, the highly insecure method of persuading X windows to display on remote screens involved commands such as xhost and manually setting the DISPLAY variable. You do not and never will need to use either of these with ssh, and if you try to use them you may open up everything you type (including passwords) to be read, and even changed, by evil-doers.
The extreme factory computing studio (XCS) secure portal gives users and system administrators alike direct access to all resources and applications. The user interface is born out of a continual dialogue between Bull and its customers, so extreme factory computing studio truly reflects users’ expectations when it comes to HPC applications. Once a user is logged on, he or she can access a complete work environment, customized for their job role and applications. From there, they can load and manage data, set parameters for the simulation, run the calculation, track its progress, and then proceed to post-processing and visualization of results.
Use the same username/password credentials as for an interactive SSH session on the login node to access the XCS portal. To connect, browse to the following URL:
As a single gateway to the HPC environment, extreme factory computing studio simplifies the handling and management of an HPC cluster, and lets you focus on the actual calculation and results.
It is possible to change your initial password on the XCS web portal, or by running the script Ichpasswd on a login node. Note that the security of both users' data and the service itself depends strongly on choosing the password sensibly, which in the age of automated cracking programs unfortunately means the following:
It is also possible to connect to a remote VNC desktop session on a Liger login node. Please see the page on XCS Web Portal for details.
Please contact support before transferring large amounts of data from the login nodes: ICI-SC does not have a dedicated data transfer server with high-bandwidth connectivity.
Any method of file transfer that operates over SSH (e.g. scp, sftp, rsync) should work to or from liger, provided SSH access works in the same direction. Thus systems from which it is possible to login should likewise have no difficulty using scp/sftp/rsync, and from Liger out to remote machines such connections should also work provided the other system does not block SSH (unfortunately, some sites do, and even more unfortunately, some even block SSH coming out). In whichever direction the initial connection is made, files can then be transferred in either direction. Note that obsolete and insecure methods such as ftp and rcp will not work (nor should you wish to use such things).
Any UNIX-like system (such as a Linux or Mac OS X machine) should already have scp, sftp and rsync (or be able to install them from native media). Similarly, these tools can be installed on Windows systems as part of the Cygwin environment. An alternative providing drag-and-drop operation under Windows is WinSCP, and in the same vein Mac OS X or Windows users might consider Cyberduck.
Of the command-line tools mentioned here, rsync is possibly the fastest, the most sophisticated and also the most dangerous. The man page is extensive, but as an example, the following command will copy a directory called results in your home directory on liger to the directory from_liger/results on the local side (here rsync is run on your local machine and your username is assumed to be abc123):
rsync -av abc123@liger.ec-nantes.fr:results from_liger
Note that a final / on the source directory is significant for rsync - it would indicate that only the contents of the directory should be transferred (so specifying results/ in the above example would result in the contents being copied straight into from_liger instead of to from_liger/results). A pleasant feature of rsync is that repeating the same command will lead to only files which appear to have been updated (based on the size and modification timestamp) being transferred. Rsync also validates each actual transfer by comparing checksums.
On directories containing many files rsync can be slow (as it has to examine each file individually). A less sophisticated but faster way to transfer such things may be to pipe tar through ssh, although the final copy should probably be verified by explicitly computing and comparing checksums, or perhaps by using
rsync -avc between the original and the copy (which will do the equivalent thing and automatically re-transfer any files which fail the comparison). For example, here is the same copy of
/home/abc123/results on liger copied to
from_liger/results on the local machine using this method:
cd from_liger
ssh -e none -o cipher=arcfour abc123@liger.ec-nantes.fr 'cd /home/abc123 ; tar -cf - results' | tar -xvBf -
In the above, the cd command is not actually necessary, but serves to illustrate how to navigate to a transfer directory in a different location to the home directory.
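To see how the pipe fits together without involving a remote machine, here is a purely local sketch of the same construction; the /tmp paths are hypothetical and the ssh stage is replaced by a subshell, so it can be tried anywhere:

```shell
# Local sketch of the tar-over-a-pipe copy; the remote (ssh) side is
# replaced by a subshell, and the directories are hypothetical.
mkdir -p /tmp/origin/results /tmp/copy
echo "sample output" > /tmp/origin/results/run1.dat
# same pipe structure as the ssh example above
(cd /tmp/origin && tar -cf - results) | (cd /tmp/copy && tar -xf -)
# verify the copy by comparing the files directly
cmp /tmp/origin/results/run1.dat /tmp/copy/results/run1.dat && echo "copy verified"
```

The final cmp plays the role of the checksum comparison recommended above for real transfers.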
We use the modules environment extensively. A module can for instance be associated with a particular version of Intel compiler, or different MPI libraries etc. Loading a module establishes the environment required to find the related include and library files at compile-time and run-time.
By default the environment is such that the most commonly required modules are already loaded. You can see which modules are loaded with the command module list:
Currently Loaded Modulefiles:
  1) python/3.4.3/gcc/4.4.7
The above shows that a Python module is loaded (as a result of loading the default module, which is loaded automatically on login): version 3.4.3, compiled with gcc 4.4.7. To permanently change which modules are loaded by default, edit your
~/.bashrc file, e.g. adding
module load python/3.4.3/gcc/4.4.7
module load <module>     load module
module unload <module>   unload module
module purge             unload all modules
module list              show currently loaded modules
module avail             show available modules
module whatis            show available modules with brief explanation
Your account has membership in one or more Unix groups. On Liger, groups are usually (but not always) organized by lab group and project name. The primary purpose of these groups is to facilitate sharing of files with other users through the Unix permissions system. To see your Unix groups, try the following command:
user1# groups
group1 group2 L121212
In the example above, user1 is a member of three groups, one of which is a project group.
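As a sketch of how group membership enables file sharing, the commands below open a directory to fellow group members. The directory name is hypothetical, and you would substitute your real project group (e.g. L121212) for the primary group used here only so the sketch runs anywhere:

```shell
# Hypothetical directory to share with a project group.
# Substitute your project group (e.g. L121212) for $(id -gn),
# the primary group used here as a stand-in.
mkdir -p "$HOME/shared_results"
chgrp -R "$(id -gn)" "$HOME/shared_results"  # give the group ownership
chmod -R g+rX "$HOME/shared_results"         # group may read files and enter directories
chmod g+s "$HOME/shared_results"             # new files inherit the directory's group
```

The g+s (setgid) bit on the directory ensures that files created there later also belong to the shared group, not to each creator's primary group.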
The default environment should be correctly established automatically via the modules system and the shell initialization scripts. For example, essential system software for compilation, credit and quota management, job execution and scheduling, error-correcting wrappers and MPI recommended settings are all applied in this way. This works by setting the PATH and LD_LIBRARY_PATH environment variables, amongst others, to particular values. Please be careful when editing your ~/.bashrc file, if you wish to do so, as this can wreck the default settings and create many problems if done incorrectly, potentially rendering the account unusable until administrative intervention. In particular, if you wish to modify PATH or LD_LIBRARY_PATH please be sure to preserve the existing settings, e.g. do
export PATH=/your/custom/path/element:$PATH
export LD_LIBRARY_PATH=/your/custom/librarypath/element:$LD_LIBRARY_PATH
and don't simply overwrite the existing values, or you will have problems. If you are trying to add directories relating to centrally-installed software, please note that there is probably a module available which can be loaded to adjust the environment correctly.
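As a concrete sketch (the directory names are hypothetical), the following prepends custom directories while preserving whatever was already set; the ${VAR:+:$VAR} expansion avoids a stray trailing colon when the variable was previously empty:

```shell
# Prepend hypothetical custom directories while preserving existing values.
export PATH="$HOME/custom/bin:$PATH"
# avoid a trailing ":" (which would add the current directory to the search
# path) if LD_LIBRARY_PATH was unset or empty
export LD_LIBRARY_PATH="$HOME/custom/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# quick sanity check: the old contents must still be present
echo "$PATH" | grep -q "/usr/bin" && echo "existing PATH preserved"
```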
Note that no modules are loaded by default; you must load the modules you need, which are listed by this command:
module avail
----------------------------------------- /usr/share/Modules/modulefiles -----------------------------------------
cmake/2.8.9/gcc/4.4.7      hdf5/1.8.15/intel-2015.3.187   petsc/intelmpi-5.0.3/3.1-p8
cmake/3.1.0/gcc/4.4.7      iciplayer/1.2                  petsc/intelmpi-5.0.3/3.1-p8-debug
cmake/3.2.3/gcc/4.4.7      intel/2015.3.187               python/2.7.10/gcc/4.4.7
gcc/4.9.3                  intelmpi/5.0.3.048             python/3.4.3/gcc/4.4.7
hdf5/1.8.15/gcc-4.9.3      lapack/3.5.0/gcc/4.9.3
---------------------------------------- /opt/mellanox/bupc/2.20.2/modules ---------------------------------------
bupc/2.20.2
When compiling code, it is usually possible to remove any direct MPI library references in your Makefile, as the mpicc and mpif90 wrappers take care of these details. In the Makefile, simply set CC = mpicc, FC = mpif90, etc., or define the equivalent environment variables before compilation.
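For example, a minimal Makefile along these lines (the target and source names are hypothetical) needs no explicit MPI include or library paths, because the wrapper compilers supply them:

```makefile
# Hypothetical Makefile sketch: the MPI wrapper compilers supply the
# include and library paths themselves.
CC     = mpicc
FC     = mpif90
CFLAGS = -O2

mpi_program: mpi_program.c
	$(CC) $(CFLAGS) -o $@ $<
```

Remember to load the appropriate compiler and MPI modules (e.g. intel and intelmpi) before running make, so that mpicc and mpif90 are on your PATH.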
If some required libraries are missing, please let us know and we can try to install them centrally (as a module).
A computing task submitted to a batch system is called a job. Jobs can be submitted in two ways:
Liger runs the SLURM batch system.
To create a new job script (also called a submit script or a submit file) you need to:
There are several examples and more information about using the batch system and writing scripts in the subsection for the batch system.
Here we will demonstrate the usage of the batch system directives on the following simple submit file example.
You can also put these options inside your slurm job scripts in the following form
#!/bin/sh
#SBATCH [option 1]
#SBATCH [option 2]
...
#SBATCH [option N]
...
# put here shell commands and variables
...
srun name_of_executable
#!/bin/bash
#SBATCH -A <account>
#SBATCH --ntasks=8
# use --exclusive to get the whole nodes exclusively for this job
#SBATCH --exclusive
#SBATCH --time=01:00:00

# Load module for MPI and compiler, as needed/desired.
module purge
module add intel
module add intelmpi

srun -n 8 ./mpi_program
Batch system directives start with #SBATCH. The first line says that the Linux shell bash will be used to interpret the job script.
There is a set of batch system commands available to users for managing their jobs. The following is a list of commands useful to end-users:
sbatch <submit_file>: submits a job to the batch system (if there were no syntax errors in the submit file, the job is processed and inserted into the job queue, and the integer job ID is printed on the screen);
squeue: shows the current job queue (grouped by running, idle and blocked jobs); the columns shown are:
JOBID PARTITION NAME USER STATE TIME NODES QOS PRIORITY NODELIST(REASON)
scontrol show job <jobid>: shows detailed information about a specific job;
scancel <jobid>: deletes a job from the queue;
sinfo -s: shows the Liger resource queues available.
Here you can find useful everyday commands.
Run the user-command
This reports the current balance of credits, i.e. the number of core hours still available in the accounting period that began at the start of the current year (including expired credits if present).