Cluster Resources
The Center for Institutional Research Computing (CIRC) provides high-performance computing (HPC) resources for the WSU research community. These resources are available at no cost for research use by all faculty, students, and staff at any of WSU’s campuses.
All user accounts must be sponsored by a faculty member; for further details, please see Requesting Access.
The Kamiak HPC Cluster
Kamiak is a high-performance computing (HPC) cluster that follows a condominium model, in which faculty investors and colleges purchase the nodes that provide the resources for research computing. The nodes, which are high-powered computers, are grouped into partitions, each owned by a faculty investor or a college. There is also an overarching shared partition named “kamiak” that includes all nodes and provides open access to idle compute resources for the entire WSU research community. Note, however, that jobs submitted to the shared kamiak partition are subject to preemption (cancellation and requeue) by investor or college jobs whenever the owners need their nodes to run their own work.
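For illustration, a minimal Slurm batch script targeting the shared partition might look like the sketch below. The partition name comes from this page; the job name, resource requests, and program are placeholders, and the standard --requeue directive asks Slurm to automatically requeue the job if it is preempted.

```bash
#!/bin/bash
#SBATCH --job-name=example      # placeholder job name
#SBATCH --partition=kamiak      # shared backfill partition (preemptable)
#SBATCH --requeue               # requeue automatically if preempted by an owner's job
#SBATCH --nodes=1               # placeholder resource requests
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00

srun ./my_program               # placeholder executable
```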
Access to faculty investor or college partitions is restricted to users who are affiliated with the owner of the resources. To gain access to an investor partition, please contact the owner for approval.
Kamiak Cluster Resources
Kamiak has a total of 143 compute nodes, 16 of which have Nvidia GPU accelerators. The compute nodes range from 20 to 80 cores and from 128GB to 2TB of memory, with dual-socket Intel Xeon processors spanning generations from Ivy Bridge to Sapphire Rapids. The GPU nodes include Nvidia K80, V100, A100, and H100 accelerators.
The HPC infrastructure includes 4 fully virtualized head nodes, an HDR100 InfiniBand network for fast storage access and inter-node communication, and 1 PB of all-flash NVMe shared data storage with a state-of-the-art WekaIO parallel file system.
For detailed cluster information, please use the “sinfo”, “scontrol show partition”, and “scontrol show node” commands.
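For example, the following standard Slurm commands, run from a login node, show the shared partition described above and the individual nodes:

```bash
sinfo                            # summary of all partitions and node states
sinfo -p kamiak                  # restrict the summary to the shared kamiak partition
scontrol show partition kamiak   # detailed settings for a single partition
scontrol show node               # detailed hardware and state information for every node
```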
Shared Backfill Partition
| Partition | Node Count | Cores | Memory | Model | GPU |
|---|---|---|---|---|---|
| kamiak | 3 | 80 | 1TB | Sapphire Rapids | |
| | 3 | 64 | 512GB | Sapphire Rapids | |
| | 1 | 76 | 2TB | Ice Lake | 4 Nvidia A100 |
| | 5 | 76 | 1TB | Ice Lake | |
| | 5 | 64 | 512GB (4), 1TB (1) | Ice Lake | |
| | 1 | 56 | 384GB | Cascade Lake | 2 Nvidia A100 |
| | 10 | 40 | 192GB (2), 384GB (8) | Cascade Lake | |
| | 4 | 40 | 384GB | Skylake | |
| | 1 | 40 | 192GB | Skylake | 1 Nvidia V100 |
| | 3 | 24 | 192GB | Skylake | |
| | 18 | 28 | 128GB (9), 256GB (6), 512GB (3) | Broadwell | |
| | 1 | 28 | 256GB | Broadwell | 2 Nvidia K80 |
| | 9 | 24 | 256GB | Haswell | 20 Nvidia K80 (total) |
| | 1 | 24 | 256GB | Haswell | |
| | 47 | 20 | 128GB (41), 256GB (6) | Haswell | |
| | 30 | 20 | 256GB | Ivy Bridge | |
| | 1 | 60 | 2TB | Ivy Bridge | |
College Partitions
| Partition | Node Count | Cores | Memory | Model | GPU |
|---|---|---|---|---|---|
| cahnrs (College of Agricultural, Human, and Natural Resource Sciences) | 11 | 20 | 256GB | Ivy Bridge | |
| | 1 | 60 | 2TB | Ivy Bridge | |
| | 1 | 24 | 256GB | Haswell | 2 Nvidia K80 |
| cas (College of Arts and Sciences) | 4 | 76 | 1TB | Ice Lake | |
| | 1 | 76 | 2TB | Ice Lake | 4 Nvidia A100 |
| | 1 | 64 | 1TB | Ice Lake | |
| | 1 | 64 | 512GB | Ice Lake | |
| | 11 | 20 | 256GB | Ivy Bridge | |
| coe (College of Education) | 1 | 40 | 384GB | Skylake | |
| vcea (Voiland College of Engineering and Architecture) | 3 | 20 | 256GB | Ivy Bridge | |
GPU Partition
There is also a GPU partition named “camas”, sponsored by an NSF MRI grant. As with the shared kamiak partition, it is open to all researchers; however, jobs submitted there that use GPUs are not subject to preemption. Fairshare scheduling based on usage history ensures that all researchers receive an equitable share of the compute resources. An example GPU job submission is sketched after the table below.
| Partition | Node Count | Cores | Memory | Model | GPU |
|---|---|---|---|---|---|
| camas | 3 | 80 | 1TB | Sapphire Rapids | 4 Nvidia H100 |
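As a rough sketch of a GPU job on this partition: the partition name is taken from the table above, the GPU count and other resource values are placeholders, and the exact --gres syntax for requesting a specific GPU type depends on the site’s Slurm configuration.

```bash
#!/bin/bash
#SBATCH --partition=camas   # shared GPU partition (GPU jobs are not preempted)
#SBATCH --gres=gpu:1        # request one GPU (placeholder count)
#SBATCH --cpus-per-task=8   # placeholder resource requests
#SBATCH --mem=64G
#SBATCH --time=04:00:00

nvidia-smi                  # confirm the allocated GPU is visible to the job
srun ./my_gpu_program       # placeholder executable
```

Because scheduling on camas is governed by fairshare, the standard Slurm sshare command can be used to view your current fairshare usage and priority factors.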