Cluster Resources

The Center for Institutional Research Computing (CIRC) provides high-performance computing (HPC) resources for the WSU research community. These resources are available at no cost for research use by all faculty, students, and staff at any of WSU’s campuses.

All user accounts must be sponsored by a faculty member; for further details, please see Requesting Access.

The Kamiak HPC Cluster

Kamiak is a high-performance computing (HPC) cluster that follows a condominium model, in which faculty investors and colleges purchase the nodes that provide the resources for research computing. The nodes, which are high-powered computers, are grouped into partitions, each owned by a faculty investor or college. There is also an overarching shared partition named “kamiak” that includes all nodes and provides open access to idle compute resources for the entire WSU research community. Note, however, that jobs submitted to the shared kamiak partition are subject to preemption (i.e., cancellation and requeueing) whenever a faculty investor or college needs its own nodes to run its jobs. Lastly, fairshare scheduling determines each job’s priority in the queue based on usage history, ensuring that all researchers receive an equitable share of the compute resources.
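
As a sketch of what this looks like in practice, a Slurm batch script submitted to the shared backfill partition might resemble the following (the job name, resource amounts, and program name are placeholders, not recommendations):

    #!/bin/bash
    #SBATCH --partition=kamiak       # shared backfill partition; jobs here may be preempted
    #SBATCH --job-name=example       # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00
    #SBATCH --requeue                # allow the job to be requeued rather than cancelled outright
                                     # if preempted (behavior depends on the cluster's Slurm configuration)

    srun ./my_program                # replace with your own application

Submit the script with the sbatch command; if the job is preempted, it is cancelled and returned to the queue, where fairshare priority again applies.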

Access to faculty investor or college partitions is restricted to users who are affiliated with the owner of the resources.  To gain access to an investor partition, please contact the owner for approval.

Kamiak Cluster Resources

Kamiak has a total of 151 compute nodes, of which 16 have Nvidia GPU accelerators. The compute nodes have 20 to 80 cores and 128GB to 2TB of memory each, with dual-socket Intel Xeon processors spanning generations from Ivy Bridge to Sapphire Rapids. The GPU nodes include Nvidia K80s, V100s, A100s, and H100s.

The HPC infrastructure includes 4 fully virtualized head nodes, an HDR100 InfiniBand network for fast storage access and inter-node communication, and 1 PB of all-flash NVMe shared data storage with a state-of-the-art WekaIO parallel file system.

For detailed cluster information, please use the “sinfo”, “scontrol show partition”, and “scontrol show node” commands.
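
For example (the output of each command reflects the cluster state at the time it is run):

    sinfo                              # list partitions with node counts and states
    scontrol show partition kamiak     # full settings for the shared kamiak partition
    scontrol show node                 # hardware details for every compute node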

Shared Backfill Partition

Partition | Node Count | Cores per Node | Memory per Node | Model | GPU
kamiak | 3 | 80 | 1TB | Sapphire Rapids | 4 Nvidia H100 80GB
| 6 | 64 | 512GB | Sapphire Rapids | -
| 1 | 76 | 2TB | Ice Lake | 4 Nvidia A100
| 5 | 76 | 1TB | Ice Lake | -
| 5 | 64 | 512GB (4), 1TB (1) | Ice Lake | -
| 1 | 56 | 386GB | Cascade Lake | 2 Nvidia A100
| 10 | 40 | 192GB (2), 386GB (8) | Cascade Lake | -
| 4 | 40 | 386GB | Skylake | -
| 1 | 40 | 192GB | Skylake | 1 Nvidia V100
| 3 | 24 | 192GB | Skylake | -
| 18 | 28 | 128GB (9), 256GB (6), 512GB (3) | Broadwell | -
| 1 | 28 | 256GB | Broadwell | 2 Nvidia K80
| 9 | 24 | 256GB | Haswell | 20 Nvidia K80 (total)
| 1 | 24 | 256GB | Haswell | -
| 47 | 20 | 128GB (41), 256GB (6) | Haswell | -
| 30 | 20 | 256GB | Ivy Bridge | -
| 1 | 60 | 2TB | Ivy Bridge | -

College Partitions

Partition | Node Count | Cores per Node | Memory per Node | Model | GPU
cahnrs (College of Agricultural, Human, and Natural Resource Sciences) | 11 | 20 | 256GB | Ivy Bridge | -
| 1 | 60 | 2TB | Ivy Bridge | -
| 1 | 24 | 256GB | Haswell | 2 Nvidia K80
cas (College of Arts and Sciences) | 4 | 76 | 1TB | Ice Lake | -
| 1 | 76 | 2TB | Ice Lake | 4 Nvidia A100
| 1 | 64 | 1TB | Ice Lake | -
| 1 | 64 | 512GB | Ice Lake | -
| 11 | 20 | 256GB | Ivy Bridge | -
coe (College of Education) | 1 | 40 | 384GB | Skylake | -
vcea (Voiland College of Engineering and Architecture) | 3 | 20 | 256GB | Ivy Bridge | -

GPU Partition

There is also a GPU partition named “camas”, sponsored by an NSF MRI grant, that is open to all researchers. Jobs submitted to “camas” that use GPUs have priority over non-GPU jobs and are not subject to preemption. Non-GPU jobs submitted to “camas” will be redirected to the preemptable kamiak backfill partition.
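
As a sketch under the same Slurm conventions, a GPU job could be submitted to camas along these lines (the GPU count, other resource amounts, and program name are placeholders; consult Kamiak’s documentation for the exact GRES names configured on the cluster):

    #!/bin/bash
    #SBATCH --partition=camas        # NSF MRI GPU partition; GPU jobs are not preempted
    #SBATCH --gres=gpu:1             # request one GPU; adjust the count as needed
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=02:00:00

    srun ./my_gpu_program            # replace with your own GPU application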

Partition | Node Count | Cores per Node | Memory per Node | Model | GPU
camas | 3 | 80 | 1TB | Sapphire Rapids | 4 Nvidia H100 80GB