Cluster Resources
The Center for Institutional Research Computing (CIRC) provides high-performance computing (HPC) resources for the WSU research community. These resources are available for use in research by all faculty, students, and staff at any of WSU’s campuses, at no cost.
All user accounts must be sponsored by a faculty member; for further details, please see Requesting Access.
The Kamiak HPC Cluster
Kamiak is a high-performance computing (HPC) cluster that follows a condominium model, in which faculty investors and colleges purchase the nodes that provide the resources for research computing. The nodes, which are high-powered computers, are grouped into partitions, each owned by a faculty member or college. There is also an overarching shared partition named “kamiak” that includes all nodes and provides open access to idle compute resources for the entire WSU research community. Note, however, that jobs submitted to the shared kamiak partition are subject to preemption (cancel and requeue) by faculty investor or college jobs that need those nodes in order to run. Lastly, fairshare scheduling determines a job’s priority in the queue based on usage history, ensuring that all researchers receive an equitable share of the compute resources.
Access to faculty investor or college partitions is restricted to users who are affiliated with the owner of the resources. To gain access to an investor partition, please contact the owner for approval.
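As an illustration, a minimal Slurm batch script targeting the shared kamiak partition might look like the following sketch; the job name, resource requests, and workload are placeholders, not site-mandated values:

```shell
#!/bin/bash
#SBATCH --partition=kamiak      # shared backfill partition; jobs here may be preempted
#SBATCH --job-name=example      # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# Because jobs on the shared partition can be cancelled and requeued,
# the workload should checkpoint or be safe to restart from the beginning.
srun hostname
```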
Kamiak Cluster Resources
Kamiak has a total of 148 compute nodes, 6 of which have Nvidia GPU accelerators. The compute nodes have 20 to 128 cores and 128GB to 2TB of memory, with dual-socket Intel Xeon processors spanning Ivy Bridge to the latest Intel generation. The GPU nodes include A100s and H100s.
The HPC infrastructure includes 4 fully virtualized head nodes, an HDR100 InfiniBand network for fast storage access and inter-node communication, and 1 PB of all-flash NVMe shared data storage with a state-of-the-art WekaIO parallel file system.
For detailed cluster information, please use the `sinfo`, `scontrol show partition`, and `scontrol show node` commands.
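For example, the following invocations query partition and node details (output varies with cluster state; the partition argument and node name are placeholders):

```shell
# Summarize partitions, node counts, and node states
sinfo

# Show limits and settings for one partition (partition name as an example)
scontrol show partition kamiak

# Show detailed hardware and state for one node (substitute a real node name)
scontrol show node <nodename>
```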
Shared Backfill Partition
| Partition | Node Count | Cores | Memory | Model | GPU |
|---|---|---|---|---|---|
| kamiak | 3 | 80 | 1TB | Sapphire Rapids | 4X Nvidia H100 80GB |
| | 1 | 64 | 512GB | Sapphire Rapids | 2X Nvidia H100 NVL 84GB |
| | 11 | 128 | 1TB (5), 1.5TB (6) | Granite Rapids | |
| | 33 | 64 | 512GB (17), 1TB (16) | Sapphire Rapids | |
| | 1 | 76 | 2TB | Ice Lake | 4X Nvidia A100 |
| | 5 | 76 | 1TB | Ice Lake | |
| | 5 | 64 | 512GB (4), 1TB (1) | Ice Lake | |
| | 1 | 56 | 384GB | Cascade Lake | 2X Nvidia A100 |
| | 10 | 40 | 192GB (2), 384GB (8) | Cascade Lake | |
| | 4 | 40 | 384GB | Skylake | |
| | 3 | 24 | 192GB | Skylake | |
| | 19 | 28 | 128GB (9), 256GB (7), 512GB (3) | Broadwell | |
| | 3 | 24 | 256GB | Haswell | |
| | 46 | 20 | 128GB (40), 256GB (6) | Haswell | |
College Partitions
| Partition | Node Count | Cores | Memory | Model | GPU |
|---|---|---|---|---|---|
| cahnrs (College of Agricultural, Human, and Natural Resource Sciences) | 1 | 24 | 256GB | Haswell | |
| cas (College of Arts and Sciences) | 1 | 64 | 512GB | Sapphire Rapids | |
| | 4 | 76 | 1TB | Ice Lake | |
| | 1 | 76 | 2TB | Ice Lake | 4X Nvidia A100 |
| | 1 | 64 | 1TB | Ice Lake | |
| | 1 | 64 | 512GB | Ice Lake | |
| | 1 | 28 | 256GB | Broadwell | |
| | 3 | 20 | 128GB | Haswell | |
| | 1 | 20 | 256GB | Haswell | |
| coe (College of Education) | 1 | 40 | 384GB | Skylake | |
| vcea (Voiland College of Engineering and Architecture) | 3 | 64 | 512GB | Sapphire Rapids | |
| | 1 | 64 | 384GB | Cascade Lake | 2X Nvidia A100 |
GPU Partition
There is also a GPU partition named “camas”, sponsored by an NSF MRI grant, that is open to all researchers. Jobs submitted to “camas” that use GPUs have priority over non-GPU jobs and are not subject to preemption. Non-GPU jobs submitted to “camas” will be redirected to the preemptible kamiak backfill partition.
| Partition | Node Count | Cores | Memory | Model | GPU |
|---|---|---|---|---|---|
| camas | 3 | 80 | 1TB | Sapphire Rapids | 4X Nvidia H100 80GB |
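A batch script requesting a GPU on camas might look like the following sketch; the GPU count, CPU, memory, and time values are placeholder examples, and some sites accept `--gpus=1` in place of the `--gres` syntax shown:

```shell
#!/bin/bash
#SBATCH --partition=camas       # open GPU partition; GPU jobs here are not preempted
#SBATCH --job-name=gpu-example  # placeholder job name
#SBATCH --gres=gpu:1            # request one GPU on the allocated node
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=04:00:00

# Placeholder workload: report the GPU(s) allocated to this job
srun nvidia-smi
```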