Cluster Resources

The Center for Institutional Research Computing (CIRC) provides high-performance computing (HPC) resources for the WSU research community. These resources are available for use in research by all faculty, students, and staff at any of WSU’s campuses, at no cost.

All user accounts must be sponsored by a faculty member; for further details, please see Requesting Access.

The Kamiak HPC Cluster

Kamiak is a high-performance computing (HPC) cluster that follows a condominium model, in which faculty investors and colleges purchase nodes that provide the resources for research computing. The nodes, which are high-powered computers, are grouped into partitions, each owned by a faculty investor or college. There is also an overarching shared partition named “kamiak” that includes all nodes and provides open access to idle compute resources for the entire WSU research community. Note, however, that jobs submitted to the shared kamiak partition are subject to preemption (i.e., cancellation and requeueing) by faculty investor or college jobs whenever the owners need their nodes to run their own work.
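As an illustration of how the shared partition is used, below is a minimal batch script sketch for Slurm (the scheduler implied by the sinfo and scontrol commands referenced later on this page). The job name, program, and resource values are placeholders, and the --requeue directive simply asks Slurm to requeue the job automatically if it is preempted by an owner's job; check Kamiak's own documentation for the site's actual defaults.

    #!/bin/bash
    #SBATCH --partition=kamiak      # shared partition; jobs here can be preempted
    #SBATCH --job-name=my_job       # hypothetical job name
    #SBATCH --nodes=1               # illustrative resource request
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00
    #SBATCH --requeue               # ask Slurm to requeue this job if it is preempted

    srun ./my_program               # placeholder for the actual workload

Submit the script with sbatch, e.g. sbatch my_job.sh.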

Access to faculty investor or college partitions is restricted to users who are affiliated with the owner of the resources. To gain access to a partition, please contact the owner for approval.

Kamiak Cluster Configuration

Kamiak has a total of 138 compute nodes, of which 13 have Nvidia GPU accelerators. The compute nodes range from 20 to 76 cores and from 128 GB to 2 TB of memory, with dual-socket Intel Xeon processors spanning Ivy Bridge to the latest Intel generation. The GPU nodes include Nvidia K80s, V100s, A100s, and soon the latest Nvidia H100s.

The HPC infrastructure includes 4 fully virtualized head nodes, an HDR100 InfiniBand network for fast storage access and inter-node communication, and 1 PB of all-flash NVMe shared data storage with a state-of-the-art WekaIO parallel file system.

The compute nodes include:

Node Count | Cores | Memory           | Model                | GPU
78         | 20    | 128 GB to 256 GB | Ivy Bridge/Haswell   |
4          | 24    | 192 GB to 256 GB | Haswell/Skylake      |
18         | 28    | 128 GB to 512 GB | Broadwell            |
14         | 40    | 192 GB to 386 GB | Skylake/Cascade Lake |
1          | 60    | 2 TB             | Ivy Bridge           |
5          | 64    | 512 GB to 1 TB   | Ice Lake             |
5          | 76    | 1 TB             | Ice Lake             |
10         | 24    | 256 GB           | Haswell              | 44 Nvidia K80s total
1          | 40    | 192 GB           | Skylake              | 1 Nvidia V100
1          | 56    | 386 GB           | Cascade Lake         | 2 Nvidia A100s
1          | 76    | 2 TB             | Ice Lake             | 4 Nvidia A100s
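As a sketch of how one of the GPU nodes above might be requested: the generic --gres=gpu:1 request is standard Slurm syntax, but the GPU type labels and any dedicated GPU partition names configured on Kamiak are not listed on this page, so confirm them with the scontrol commands shown below before relying on this example.

    #!/bin/bash
    #SBATCH --partition=kamiak      # shared partition named above; owner partitions differ
    #SBATCH --gres=gpu:1            # request one GPU (type label omitted; site-specific)
    #SBATCH --cpus-per-task=4       # illustrative values only
    #SBATCH --mem=16G
    #SBATCH --time=02:00:00

    nvidia-smi                      # report which GPU was allocated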

For detailed cluster information, please use the “sinfo”, “scontrol show partition”, and “scontrol show node” commands.
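For example (the node name below is a placeholder; the shared kamiak partition is named above):

    sinfo                             # summary of partitions and node states
    scontrol show partition kamiak    # details of the shared kamiak partition
    scontrol show node                # details of every compute node
    scontrol show node <node-name>    # details of a single node (placeholder name)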