Quick Start Guide

Requesting access

The Kamiak high-performance computing (HPC) cluster is available for use in research by all faculty, students, and staff at any of WSU’s campuses, at no cost.

User accounts. To use Kamiak, you first need to submit a request to create a Kamiak login account at Service Requests. Every user account is associated with a faculty sponsor; before the request is granted, a faculty member must agree to sponsor your account.

Logging into Kamiak

Once you have access to Kamiak, you can log in from a terminal window using your WSU NetID and password. This connects you to one of the Kamiak login nodes, from which you can manage your files and submit jobs to the compute nodes.

Okta is used for authentication when logging into Kamiak. You will need to install a client-side program called opkssh on your laptop or endpoint to initiate authentication with Okta. You only need to authenticate once every 24 hours; after that you use ssh and scp as normal. Under the hood, opkssh uses short-lived ssh certificates to log into Kamiak. Below are the instructions for installing opkssh and using it to log into Kamiak.

Install opkssh on your laptop or endpoint.

Run the installation script below as the user who will be logging into Kamiak. You do not need to be an administrator to install or use opkssh.

Windows.

MacOS or Linux.

  • Install opkssh by downloading the install script and running it in a terminal window. It installs the opkssh binary and creates the configuration file.
    Download the script to install opkssh on MacOS or Linux
    chmod +x install_opkssh_mac_or_linux.sh
    ./install_opkssh_mac_or_linux.sh
Use opkssh to log into Kamiak.
  1. Authenticate using Okta.
    You only need to do this once every 24 hours.
    opkssh login kamiak
  2. Run ssh or scp as normal.
    ssh as many times as you wish. You will not be prompted to authenticate again.
    ssh your.name@kamiak.wsu.edu
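Once authenticated, file transfers work the same way. For example, files can be copied with scp (the filenames and paths here are placeholders; substitute your own):

```shell
# Copy a local file to your home directory on Kamiak
scp results.csv your.name@kamiak.wsu.edu:~/

# Copy a directory recursively from Kamiak back to your laptop
scp -r your.name@kamiak.wsu.edu:~/myproject ./myproject
```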

Job submission

Kamiak uses the Slurm job scheduler to coordinate the use of its resources. To run a job on Kamiak, you first create a Slurm job submission script describing the job’s resource requirements, then submit it using the sbatch command. Alternatively, you can launch an interactive session on a compute node using idev and see shell commands executed as you type them. Monitor the status of your jobs with the squeue command, and kill a submitted job with scancel followed by its job ID.
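A minimal job submission script might look like the following sketch. The resource values, job name, and program name are illustrative only; adjust them to your own workload and partition:

```shell
#!/bin/bash
#SBATCH --partition=kamiak      # queue to submit to (here, the shared backfill partition)
#SBATCH --job-name=myjob        # name shown in squeue (example name)
#SBATCH --time=01:00:00         # wall-clock time limit (HH:MM:SS)
#SBATCH --nodes=1               # number of nodes
#SBATCH --ntasks=1              # number of tasks
#SBATCH --cpus-per-task=4       # CPU cores per task
#SBATCH --mem=8G                # memory per node

# Commands below run on the allocated compute node
echo "Running on $(hostname)"
srun ./my_program               # my_program is a placeholder for your application
```

Save this as, say, myjob.sh and submit it with `sbatch myjob.sh`; check its status with `squeue -u $USER`, and cancel it with `scancel` followed by the job ID that sbatch printed.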

As a courtesy to other users, please do not run applications or software installs on the login nodes; use sbatch or idev to run them on the compute nodes. Also note that, for security reasons, users are not allowed to have superuser privileges on the login or compute nodes.

For more information on how to submit jobs to the Kamiak HPC cluster, please see the Training Slides, the Training Video, the Cheat Sheet, the Quick Start Guide, or the more extensive User’s Guide.

Partitions and job queues

Kamiak is an HPC cluster whose compute nodes are purchased by investors and colleges. The nodes are grouped into partitions named after their lab group or college. Jobs submitted to a partition are placed into a “queue” of the same name, and wait to run until the resources requested by the job are available, using fairshare scheduling. The allocation of resources and running of jobs is performed by the Slurm job scheduler.

In addition to the investor and college partitions, there is an overarching backfill partition named “kamiak” that contains all nodes, and is available for use by the entire research community. The “kamiak” partition allows users to utilize idle compute resources across all compute nodes.

Investors have priority access to the nodes they own. Any backfill job running on an investor node will be preempted if the investor’s job needs its cores to run. Preemption means the job is canceled and automatically resubmitted to the backfill queue.

For more information on the partitions on Kamiak see the partition list.

Storage

Home storage

Each user is provided a home folder of 100GB at no cost, and given access to the shared “kamiak” partition that includes all nodes.

Lab storage

In addition, each faculty sponsor is provided a lab folder of 500GB at no cost, located at /data/lab/myLabName. Additional lab storage beyond the no-cost allocation can be rented through the CIRC service desk. Compute nodes are also available for purchase, on which the investors are given prioritized access.

Scratch storage

Users may also take advantage of temporary scratch storage, with a 10TB per-user limit and a two-week lifetime for each scratch folder.

Software

Modules

Kamiak uses environment modules to control the Linux software environment. Users can run the module avail command to see a list of available software modules. Software can be added to the Linux environment using the command module load <package name>. For more information see environment modules in the Kamiak users guide.
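A typical module session looks like the following (the python3 module name is an example; run module avail to see what is actually installed on Kamiak):

```shell
module avail            # list available software modules
module load python3     # add a package to your environment (example name)
module list             # show currently loaded modules
module unload python3   # remove it again
```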

Conda

Conda can also be used to manage software environments and install packages. Kamiak provides both anaconda3 and miniconda3 modules, which can be used to create and manage conda environments. For more information see python_and_conda in the Kamiak users guide.
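A minimal conda workflow on Kamiak might look like this sketch (the environment name, Python version, and package are examples):

```shell
module load miniconda3              # make conda available (anaconda3 also works)
conda create -n myenv python=3.11   # create a new environment (example name/version)
conda activate myenv                # activate it (or "source activate myenv" on older setups)
conda install numpy                 # install packages into the environment
```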

Apptainer

Apptainer is another tool for managing software environments using containers. It is the successor to Singularity, and similar in concept to Docker and Podman. For more information see Apptainer in the Kamiak users guide.
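For instance, a container image can be pulled from Docker Hub and run under Apptainer. A sketch (the module name and image are assumptions; check module avail for what Kamiak actually provides):

```shell
module load apptainer                     # load Apptainer (module name is an assumption)
apptainer pull docker://ubuntu:22.04      # convert the Docker image to ubuntu_22.04.sif
apptainer exec ubuntu_22.04.sif cat /etc/os-release   # run a command inside the container
```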

Getting help

For any questions or issues regarding Kamiak, please submit a service request at Service Requests. In addition, CIRC hosts an interactive help desk on Tuesdays and Thursdays via Zoom video conferencing. Appointments can be made in advance at Help Desk Appointments.