Cluster Resources
The Center for Institutional Research Computing (CIRC) provides high-performance computing (HPC) resources for the WSU research community. These resources are available at no cost for research use by all faculty, students, and staff at any WSU campus.
All user accounts must be sponsored by a faculty member; for further details, please see Requesting Access.
The Kamiak HPC Cluster
Kamiak is a high-performance computing (HPC) cluster that follows a condominium model, in which faculty investors and colleges purchase the nodes that provide the resources for research computing. The nodes, which are high-powered computers, are grouped into partitions, each owned by a faculty investor or a college. There is also an overarching shared partition named “kamiak” that includes all nodes and provides open access to idle compute resources for the entire WSU research community. Note, however, that jobs submitted to the shared kamiak partition are subject to preemption (cancellation and requeueing) whenever a faculty investor or college needs its own nodes to run jobs.
Access to faculty investor or college partitions is restricted to users who are affiliated with the owner of the resources. To gain access to a partition, please contact the owner for approval.
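For example, a minimal batch script for the shared kamiak partition might look like the following sketch (the job name, resource requests, and program name are placeholders; --requeue simply asks Slurm to requeue the job rather than cancel it outright if it is preempted, subject to the cluster's Slurm configuration):

    #!/bin/bash
    #SBATCH --partition=kamiak      # shared partition; jobs here may be preempted
    #SBATCH --job-name=example      # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00
    #SBATCH --requeue               # allow requeueing after preemption

    srun ./my_program               # placeholder executable

Submit the script with sbatch; to target a faculty investor or college partition instead, change --partition to that partition's name once you have been granted access.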
Kamiak Cluster Configuration
Kamiak has a total of 138 compute nodes, of which 13 have Nvidia GPU accelerators. The compute nodes have 20 to 76 cores and 128 GB to 2 TB of memory each, with dual-socket Intel Xeon processors spanning Ivy Bridge to the latest Intel generation. The GPU nodes include Nvidia K80s, V100s, A100s, and soon the latest Nvidia H100s.
The HPC infrastructure includes 4 fully virtualized head nodes, an HDR100 InfiniBand network for fast storage access and inter-node communication, and 1 PB of all-flash NVMe shared data storage with a state-of-the-art WekaIO parallel file system.
The compute nodes include:
Node Count | Cores | Memory | Processor Model | GPUs |
---|---|---|---|---|
78 | 20 | 128 GB to 256 GB | Ivy Bridge/Haswell | |
4 | 24 | 192 GB to 256 GB | Haswell/Skylake | |
18 | 28 | 128 GB to 512 GB | Broadwell | |
14 | 40 | 192 GB to 386 GB | Skylake/Cascade Lake | |
1 | 60 | 2 TB | Ivy Bridge | |
5 | 64 | 512 GB to 1 TB | Ice Lake | |
5 | 76 | 1 TB | Ice Lake | |
10 | 24 | 256 GB | Haswell | 44 Nvidia K80s in total |
1 | 40 | 192 GB | Skylake | 1 Nvidia V100 |
1 | 56 | 386 GB | Cascade Lake | 2 Nvidia A100s |
1 | 76 | 2 TB | Ice Lake | 4 Nvidia A100s |
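To run on one of the GPU nodes, a job must request a GPU through Slurm's generic resource (GRES) mechanism. The following is a minimal sketch, assuming one GPU of any available type is acceptable; the program name is a placeholder:

    #!/bin/bash
    #SBATCH --partition=kamiak      # or a GPU owner's partition, if you have access
    #SBATCH --gres=gpu:1            # request one GPU of any available type
    #SBATCH --time=02:00:00

    srun ./my_gpu_program           # placeholder executable

A specific GPU model can be requested with a typed GRES string such as --gres=gpu:v100:1, provided the type name matches the cluster's GRES configuration (the defined types can be checked with “scontrol show node”).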
For detailed cluster information, please use the “sinfo”, “scontrol show partition”, and “scontrol show node” commands.
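For example (the partition and node names shown are illustrative):

    sinfo -p kamiak                  # list nodes and their states in the kamiak partition
    scontrol show partition kamiak   # show the partition's limits and settings
    scontrol show node cn32          # show a node's cores, memory, and GPUs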