



Registration

If you are eligible to use the LASC, you must first register to obtain an account. After registration, you will be provided with a personal username and password.


System Access

To access the cluster, use the LASC IP address 5.179.5.87. Note that the cluster is accessible only via the SSH (Secure Shell) protocol, version 2. In a UNIX/Linux environment, you can connect to the cluster using ssh:

    ssh username@5.179.5.87        or        ssh -l username 5.179.5.87
File transfer between your computer and the cluster can be done using sftp.
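For example, a short sftp session might look like this (the file names are illustrative):

    sftp username@5.179.5.87
    sftp> put input.dat
    sftp> get results.dat
    sftp> quit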

Software Environment

After logging in, you will be at a Red Hat Linux shell prompt. You type commands at the prompt, the shell interprets them, and the shell then tells the operating system what to do. Experienced users can write shell scripts to extend their capabilities even further.
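As a simple illustration, a shell script is just a file of commands. The sketch below (its name and contents are hypothetical, not a cluster-provided script) archives the .dat files in the current directory:

    #!/bin/bash
    # archive_data.sh -- illustrative example only
    # Collect all .dat files in the current directory into a dated archive.
    archive=data_$(date +%Y%m%d).tar.gz
    tar -czf "$archive" *.dat
    echo "Created $archive"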

Help on using a command can be obtained by reading its man page; just type

    man command_name
at a shell prompt.

The default shell for Red Hat Linux is the Bourne Again Shell, or bash. You can learn more about bash by reading the bash man page (type man bash at a shell prompt).

Several frequently used commands are described below (a short example session follows the list):
    To log in to another cluster node, use the rsh command.
    To see the cluster status, use the clrun -a command.
    To change directories, use the cd command.
    Using the ls command, you can display the contents of your current directory.
    You can compress/uncompress files with the compression tools gzip/gunzip, bzip2/bunzip2, or zip/unzip.
    The tar command allows you to collect several files and/or directories into one file. This is a good way to create backups and archives.
    To copy files, use the cp command.
    To move files, use the mv command.
    You can create directories with the mkdir command.
    To delete files or directories, use the rm command.
    To close the session (exit from the system), use the exit command.
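For illustration, a short session using some of these commands might look like this (directory and file names are hypothetical):

    mkdir project1
    cd project1
    cp $HOME/data.dat .
    ls -l
    tar -czf backup.tar.gz data.dat
    exit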


Setting up SSH Environment for MPI use

Before running MPI programs, you must first set up the SSH environment so that you can connect to any cluster node without a password. This can be done by following these steps (a quick check over all nodes is sketched after the list):

    1) ssh-keygen -t dsa
    2) cd .ssh
    3) cp id_dsa.pub authorized_keys
    4) cp id_dsa.pub authorized_keys2
    5) ssh mpich* and answer "yes" to the host key prompt (here * stands for a node number, from 1 to the last one)
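Once the keys are in place, password-less access to every node can be verified with a simple loop like the one below (a sketch, assuming nodes mpich1 through mpich147 as listed under System Resources; each node should print its hostname without asking for a password):

    for i in $(seq 1 147); do ssh mpich$i hostname; done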


Managing Your Allocation

To run your application in interactive mode, simply type at a shell prompt

    application_name

To start an application in background mode, type
    application_name &

If you need your application to keep running in the background after you log out from the system, type
    nohup full_application_path/application_name &

To measure the run time of your application, use the time command (see man time for details).
For example, use the following command to run a time-consuming application from your home directory:
    nohup time -p -o time.lst $HOME/application_name &
The file time.lst will contain the following information (in seconds):
1) the elapsed real time;
2) the total number of CPU-seconds that the process spent in user mode;
3) the total number of CPU-seconds that the process spent in kernel mode.
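With the -p (POSIX) format, the contents of time.lst will look roughly like this (the values are illustrative):

    real 3605.12
    user 3590.45
    sys 10.02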


System Resources Available to Users

When you are logged into the cluster (see System Access above), you can access all cluster nodes via the Gigabit Ethernet network (192.168.2.*) using the RSH and SSH protocols.

The names and IP addresses of the nodes are (all in lowercase!):
lasc1 192.168.2.1
... ...
lasc147 192.168.2.89
gateway 192.168.2.254

Here gateway means the firewall used to connect the cluster to the Internet. The gateway is "transparent" to users, which means you cannot log on to it. The node lasc1 is the one you are logged into first. To connect to other nodes, you must use the rsh (or ssh) command. For example, use the following command to connect to node lasc50:

    rsh  lasc50

N.B. Always use the above lasc* names to log into the nodes: they are automatically recognized.

After login, you will have access to your /home directory. The /home directory is exported to the other nodes via NFS over 6-link aggregated Gigabit Ethernet channels.
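You can check how much space is left on the /home file system with the df command, for example:

    df -h $HOME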

Besides the /home directory, several other directories are also accessible to all users on all nodes.

There is a local /work directory on each node for use by MPI; its size differs from node to node.
Please remove your own files from it when they are no longer needed.
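For example, assuming you keep your files on a node in a /work subdirectory named after your username (an illustrative layout, not a cluster convention), they can be removed with:

    rsh lasc50 rm -rf /work/username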

Additionally, a local /public directory, accessible to all users, is available on each node. It is intended for data exchange between users within a node. Please do not use it for MPI or similar applications, since this directory is located on the system hard disk.
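For example, to hand a result file over to another user on the same node (the file name is illustrative):

    cp $HOME/results.dat /public/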


OpenMPI is available on all nodes. It uses a dedicated Gigabit Ethernet network (192.168.1.*) with the following node names and IP addresses (all in lowercase!):
mpich1 192.168.1.1
... ...
mpich147 192.168.1.147
Please use the above names or IP addresses in your MPI "machines" files.
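For example, a "machines" file could simply list the node names, one per line (here the first four nodes; the process count and application name below are illustrative):

    mpich1
    mpich2
    mpich3
    mpich4

To launch an MPI application on these nodes with OpenMPI:

    mpirun -np 4 --hostfile machines $HOME/application_name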



These pages are maintained by Alexei Kuzmin ([email protected]). Comments and suggestions are welcome.