If you would like an account on the HPC cluster, please send an email with your request to email@example.com.
Once you have an account, the primary means of connecting to the cluster is SSH. If you are off-campus, you must be connected to the VPN using GlobalProtect in order to access the cluster! On Mac and Linux, SSH is built into the system and is accessed via the Terminal program; once Terminal is running, this is how you connect to the cluster:
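ssh hpc.lafayette.edu
If the username on your local system differs from your college username, specify it explicitly, for example ssh yourusername@hpc.lafayette.edu (where yourusername is just a placeholder for your own username).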
On Windows, most users tend to use PuTTY or Bitvise SSH Client, either of which can act as an SSH client. Installing and configuring these is beyond the scope of this document but is generally straightforward. Reach out via firstname.lastname@example.org if you require further assistance.
This is sufficient if you intend to use command-line, non-graphical software or scripting. If, however, you want to use interactive GUI versions of software that offers such an interface (such as Stata, Mathematica, MATLAB, etc.), you must have an X server running on your local system. On Macs, you need to install XQuartz, and for Windows, we recommend MobaXterm, which acts both as an SSH client and as an X server. In addition, you need to forward X commands from the cluster to your local system over SSH. This is done by adding the -Y flag to SSH when you connect:
ssh -Y hpc.lafayette.edu
In either case, when you connect to the cluster you will be prompted for your regular password.
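To confirm that X forwarding is working, you can try launching a simple graphical program once you are logged in, assuming one such as xclock is available on the node:
xclock
If a small clock window appears on your local screen, X forwarding is configured correctly.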
The cluster runs on the Linux operating system. If you are unfamiliar with it, it can appear archaic and frustrating. In reality, however, it is extremely efficient and provides an extraordinary amount of flexibility and power. A complete tutorial is beyond the scope of this document, but several excellent resources are available online. While not exhaustive, here are a few that may be useful:
You can also copy over data from your local system to the cluster using scp (secure copy). To transfer files from your local system to your home directory on the cluster:
scp myfilename hpc.lafayette.edu:~
Note that the tilde (~) is simply a shortcut that means “my home directory,” so the above command will copy myfilename from the local system into your home directory on the cluster. To copy files from your home directory on the cluster to your local system, run the following command from your local system (not from the cluster):
scp hpc.lafayette.edu:~/myfilename .
The period (.) above is just another shortcut that means “the current directory,” so the above command will copy myfilename from within your home directory on the cluster and save it in whichever directory you are currently in on your local system.
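scp can also copy entire directories when given the -r (recursive) flag. For example, to copy a local directory, here called myproject as a placeholder, into your home directory on the cluster:
scp -r myproject hpc.lafayette.edu:~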
The following instructions discuss how to create a basic shell to do “work” on the cluster (which typically means using some kind of software package). For more information and instructions related to the various software packages available for use, please see the software page.
You should not generally run software directly from the login node! While it is acceptable in some cases to do so for testing purposes, in general you must connect to one of the computational nodes before running any software for an extended time (plus, there are greater resources available on the compute nodes anyway). To connect to a compute node, you will use Slurm; this is typically used to submit batch jobs to the computational nodes, but it can also be used to initialize an interactive shell on them:
srun -t 240 --mem=16gb --pty /bin/bash
The above command will allocate a bash shell on one of the computational nodes, providing you with 16 GB of available memory and a single processor core for 4 hours. This is simply an example, however, and can be customized. For instance, the -t flag takes time in minutes by default, so you can adjust it to whatever time you require (note that your session will terminate when the time runs out), and you can likewise customize the amount of memory you request. Note also that this is a very basic example; for more information, see our dedicated Slurm tutorial.
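As an illustration of how these options can be combined (the values below are arbitrary placeholders, so adjust them to your job), the following requests four processor cores and 32gb of memory for 8 hours:
srun -t 480 --mem=32gb -c 4 --pty /bin/bash
Here -c is the short form of --cpus-per-task and controls how many cores are allocated to your shell.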