Requesting an Account

If you would like an account on the HPC cluster, please send an email with your request to help@lafayette.edu.

Logging on to the Cluster

Once you have an account, the primary means of connecting to the cluster is SSH (Secure Shell). If you are off-campus, you must be connected to the VPN using GlobalProtect in order to access the cluster! On Mac and Linux, SSH is built into the system and is accessed via the Terminal program; once Terminal is running, connect to the cluster with:

ssh hpc.lafayette.edu
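
If the username on your local machine does not match your Lafayette NetID, you can specify the NetID explicitly using the standard SSH user@host form (replace yournetid below with your own NetID):

ssh yournetid@hpc.lafayette.edu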

On Windows, most users tend to use PuTTY or the Bitvise SSH Client, either of which can act as an SSH client. Installing and configuring these is beyond the scope of this document but is generally straightforward. Reach out via help@lafayette.edu if you require further assistance.
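
Recent versions of Windows also include a built-in OpenSSH client that can be run from PowerShell or the Command Prompt; if it is available on your system, you can connect the same way as on Mac and Linux:

ssh hpc.lafayette.edu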

This is sufficient if you intend to use command-line, non-graphical software or scripting. If, however, you want to use the interactive GUI versions of software that offers such an interface (such as Stata, Mathematica, or MATLAB), you must have an X server running on your local system. On Macs, you need to install XQuartz; for Windows, we recommend MobaXterm, which acts as both an SSH client and an X server. In addition, you need to forward X commands from the cluster to your local system over SSH. This is done by adding the -Y flag to SSH when you connect:

ssh -Y hpc.lafayette.edu

In either case, you will be prompted for your regular NetID password when you connect to the cluster.
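
If you connected with -Y to use GUI software, you can do a quick sanity check once you are logged in: the DISPLAY variable should be set to a non-empty value when X forwarding is active. The xclock program shown here is a common test client, though it is not guaranteed to be installed on every system:

echo $DISPLAY
xclock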

The Login Node

Once a connection has been established with the cluster, you’ll be presented with the notice below. As the name suggests, the “cluster” is a collection of computers, or nodes. These comprise a login node and multiple compute nodes. Intensive computational workloads should be run only on the compute nodes, so that they do not impact other users’ ability to access and work on the login node.

NOTICE:
This login node is not to be used for running resource-intensive
processing tasks. You may use it for short test runs to assist in
code optimization, or to compile your code, edit and move files,
etc. All processing jobs should be submitted as jobs to the batch
scheduler. If you don’t know how to do that, please see the Slurm
user guide:

Any resource-intensive tasks found running on the login node
may be terminated immediately without notice.

Working within Linux

The cluster runs on the Linux operating system. If you are unfamiliar with it, Linux can appear archaic and frustrating. In reality, however, it is extremely efficient and provides an extraordinary amount of flexibility and power. A complete tutorial is beyond the scope of this document, but several excellent resources for learning the Linux command line are available online.
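
In the meantime, the following minimal sketch illustrates a few everyday commands you are likely to use in a shell session (the file and directory names are just placeholders):

pwd                    # print the directory you are currently in
ls -l                  # list the files in the current directory
mkdir myproject        # create a new directory named myproject
cd myproject           # change into that directory
cp myfile backup.txt   # copy myfile to a new file named backup.txt
rm backup.txt          # delete backup.txt
man ls                 # read the manual page for a command (here, ls)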

Transferring files

You can copy data between your local system and the cluster using scp (secure copy). To transfer a file from your local system to your home directory on the cluster, run the following from your local system:

scp myfilename hpc.lafayette.edu:~

Note that the tilde (~) is a shortcut that represents the path to “my home directory,” so the above command will copy myfilename from the local system into your home directory on the cluster. To copy files from your home directory on the cluster to your local system, run the following command from your local system (not from the cluster):

scp hpc.lafayette.edu:~/myfilename .

The period (.) above is another shortcut that represents “the current directory,” so the above command will copy myfilename from within your home directory on the cluster and save it in whichever directory you are currently in on your local system.
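
The same pattern works for entire directories if you add the -r (recursive) flag to scp; myproject below is simply a placeholder for a directory on your local system:

scp -r myproject hpc.lafayette.edu:~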

Using software

The following instructions describe how to start a basic shell for doing work on the cluster (which typically means running some kind of software package). For more information and instructions related to the various software packages available for use, please see the hpc software page.

In general, you should not run software directly on the login node! While it is acceptable in some cases to do so for limited testing purposes, you must connect to one of the compute nodes through Slurm before running any software for an extended time. The compute nodes provide far greater processing and memory resources than are available on the login node. To connect to a compute node, you will use Slurm; it is typically used to submit batch jobs to the compute nodes, but it can also be used to start an interactive shell on one of them:

srun -t 240 --mem=16gb --pty /bin/bash

The above command will allocate a bash shell on one of the compute nodes, providing 16gb of available memory and a single processor core for 4 hours. This is simply an example and can be customized. For instance, the -t flag takes time in minutes by default, so you can adjust it to whatever time you require (the interactive shell session will terminate when the time runs out, killing any running tasks), and you can likewise adjust the amount of memory you need. Note also that this is a very basic example; for more information, see our dedicated Slurm tutorial.
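
As an illustration of such customization, a hypothetical request for a two-hour session with 8gb of memory and four processor cores might look like the following (the exact limits you may request depend on the cluster’s Slurm configuration):

srun -t 120 --mem=8gb --cpus-per-task=4 --pty /bin/bash

When you are finished, type exit to end the interactive session and release the allocated resources.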
