If you would like an account on the Firebird HPC cluster, please send an email with your request to help@lafayette.edu.
Once you have an account, the primary means to connect to the cluster is SSH (Secure Shell). If you are off-campus, you must be connected to the VPN using GlobalProtect in order to access the cluster. On Mac and Linux, SSH is built into the system and is accessed via the Terminal program; once running, this is how you connect to the cluster:
ssh netID-laf@firebird.lafayette.edu
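If you connect from a Mac or Linux system often, you can save the connection details in your local OpenSSH configuration file so that a short alias works in place of the full command. The sketch below is only an example; the alias name firebird is arbitrary, and netID should be replaced with your own Lafayette netID:

# ~/.ssh/config on your local system
Host firebird
    HostName firebird.lafayette.edu
    User netID-laf

With this in place, ssh firebird is equivalent to typing the full command above.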
On Windows, several terminal programs are available; two common options are PuTTY and the Bitvise SSH Client. Installing and configuring these is beyond the scope of this document but is generally straightforward. Reach out via help@lafayette.edu if further assistance is needed.
A suitable terminal program is sufficient for accessing Firebird if your work involves only command-line (non-graphical) software or scripting. If, however, you want to use the interactive graphical (GUI) interface offered by some packages (such as Stata, Mathematica, or MATLAB), you must have an X server running on your local system. On Macs, install XQuartz; for Windows, we recommend MobaXterm, which can act as both an SSH client and an X server.
With the required software in place, to run GUI versions of software you will need to forward X commands from the cluster to your local system over SSH. This is done by adding the -Y flag to SSH when you connect:
ssh -Y netID-laf@firebird.lafayette.edu
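As a quick sanity check after connecting with -Y, you can confirm that X forwarding is active by checking that the DISPLAY environment variable is set; if the output is empty, graphical programs will not be able to open a window on your local screen:

echo $DISPLAY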
In either case, the local system you are connecting from must have your private SSH key in place before you can successfully connect to the cluster.
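If your private key is not stored in one of the default locations SSH checks (such as ~/.ssh/id_rsa or ~/.ssh/id_ed25519), you can point SSH at it explicitly with the -i flag. The filename below is only a placeholder; use the path of the key you actually have:

chmod 600 ~/.ssh/firebird_key
ssh -i ~/.ssh/firebird_key netID-laf@firebird.lafayette.edu

The chmod command ensures the key file is readable only by you, which SSH requires before it will use the key.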
Once a connection has been established with the cluster, you’ll be presented with the below notice. As the name suggests, the “cluster” is a collection of computers or nodes. These comprise a login node and multiple compute nodes. Intensive computational workloads are only to be run on the compute nodes so as not to impact the ability of others to access and work with the login node.
NOTICE:
This login node is not to be used for running resource-intensive
processing tasks. You may use it for short test runs to assist in
code optimization, or to compile your code, edit and move files,
etc. All processing tasks should be submitted as jobs to the batch
scheduler (Slurm).
Any resource-intensive tasks found running on the login node
may be terminated immediately without notice.
The Firebird cluster runs the Linux operating system. If you are unfamiliar with Linux, it can appear archaic and frustrating; in reality, it is extremely efficient and provides an extraordinary amount of flexibility and capability. A complete tutorial is beyond the scope of this document, but several excellent tutorials and resources are available online.
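As a starting point, a handful of standard Linux commands cover most day-to-day navigation and file management. The file and directory names below are placeholders; this is purely a quick reference:

pwd                      # print the directory you are currently in
ls -l                    # list the files in that directory with details
cd myproject             # change into a directory named myproject
mkdir results            # create a new directory named results
less output.log          # page through a file (press q to quit)
cp input.dat backup.dat  # copy a file
rm old.dat               # delete a file (there is no undo)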
You can copy data between your local system and the cluster using scp (secure copy). To transfer a file from your local system to your home directory on the cluster:
scp myfilename netID-laf@firebird.lafayette.edu:~
Note that the tilde (~) is a shortcut that represents the path to “my home directory,” so the above command will copy myfilename from the local system into your home directory on the cluster. To copy files from your home directory on the cluster to your local system, run the following command from your local system (not from the cluster):
scp netID-laf@firebird.lafayette.edu:~/myfilename .
The period (.) above is another shortcut that represents “the current directory,” so the above command will copy myfilename from within your home directory on the cluster and save it in whichever directory you are currently in on your local system.
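scp can also copy entire directories if you add the -r (recursive) flag. For example, to copy a local directory named myproject (a placeholder name) into your home directory on the cluster:

scp -r myproject netID-laf@firebird.lafayette.edu:~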
The following instructions discuss how to create a basic shell to do “work” on the cluster (which typically means using some kind of software package). For more information and instructions related to the various software packages available for use, please see the HPC software page.
You should not generally run software directly on the login node! While it is acceptable in some cases to do so for limited testing purposes, in general you must connect to one of the computational (worker) nodes through Slurm before running any software for an extended time. The computational nodes provide far greater processing and memory resources than are available on the login node. To connect to a compute node, you will use Slurm; it is typically used to submit batch jobs to the computational nodes, but it can also be used to start an interactive shell on them:
srun -t 240 --mem=16gb --pty /bin/bash
The above command will allocate a bash shell on one of the computational nodes, providing 16 GB of memory and a single processor core for 4 hours. This is simply an example, however, and can be customized. For instance, the -t flag takes time in minutes by default, so you can adjust that to whatever time you require (the interactive shell session will terminate when the time runs out, killing all running tasks), and you can likewise customize the amount of memory you need. Note also that this is a very basic example; for more information, see our dedicated Slurm tutorial.
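For non-interactive work, the more common pattern is to write a short batch script and hand it to Slurm with sbatch. The script below is only a minimal sketch using standard Slurm directives; the job name, resource amounts, output filename, and the command it runs are placeholders you would replace with your own:

#!/bin/bash
#SBATCH --job-name=myjob        # a label of your choosing
#SBATCH --time=04:00:00         # wall-clock limit (hh:mm:ss)
#SBATCH --mem=16gb              # memory for the job
#SBATCH --cpus-per-task=1       # number of processor cores
#SBATCH --output=myjob_%j.out   # %j expands to the job ID

# commands to run on the compute node go here
./my_analysis_program input.dat

Save the script (for example as myjob.sh), submit it with sbatch myjob.sh, and check its status in the queue with squeue -u $USER.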