Cluster Resources

Kamiak is a condominium-style high-performance computing (HPC) cluster in which investors purchase nodes on which they receive non-preemptable service. An overarching “backfill” partition also provides open access to idle resources for the entire WSU research community.

Access to each sponsor and investor queue is restricted to users associated with the owner of the resources in that queue. To gain access to a queue, contact its owner for approval, as described in Kamiak’s FAQ.

Below we describe the architecture of the backfill, college, and investor partitions.

Definitions:

Max CPU Cores per User: The maximum number of CPU cores in use across all of a user’s running jobs. Newly submitted jobs that would exceed this limit are automatically queued, pending the completion of the user’s older jobs.

Max Memory per User: The maximum amount of memory in use across all of a user’s running jobs. Newly submitted jobs that would exceed this limit are automatically queued, pending the completion of the user’s older jobs.

Scheduler Feature Tags: Tags that can be used in job submissions (with --constraint=tag) to specify the type of node a job should run on.
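For example, a feature tag can be combined with a partition name in a Slurm job script. A minimal sketch (the script body and resource amounts are illustrative; `my_program` is a hypothetical executable):

```bash
#!/bin/bash
#SBATCH --partition=kamiak       # backfill partition
#SBATCH --constraint=broadwell   # only schedule on nodes tagged "broadwell"
#SBATCH --cpus-per-task=4        # illustrative resource request
#SBATCH --mem=16G
#SBATCH --time=1-00:00:00        # must not exceed the partition's wall clock limit

srun my_program                  # hypothetical executable
```

Multiple tags can be combined in one `--constraint` expression if a job needs several features at once.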

Backfill Partition

| Queue Name | Node Count | GPUs Total | CPU Cores Total | Memory Total | Wall Clock Limit per Job | Max CPU Cores per User | Max Memory per User |
|---|---|---|---|---|---|---|---|
| kamiak | 138 | 48 | 3860 | 38970 GB | 7-00:00:00 | 120 | 1.5TB |

The following nodes are contained in the “kamiak” backfill partition.

| Node Count | Accelerators per Node | CPU Model | CPU Cores per Node | Memory per Node | Scheduler Feature Tags |
|---|---|---|---|---|---|
| 40 | None | Intel Xeon E5-2660 v3 @ 2.60GHz | 20 | 128GB | haswell, e5-2660-v3-2.60ghz, avx2 |
| 33 | None | Intel Xeon E5-2680 v2 @ 2.80GHz | 20 | 256GB | ivybridge, e5-2680-v2-2.80ghz, avx |
| 10 | None | Intel Xeon E5-2660 v4 @ 2.00GHz | 28 | 128GB | broadwell, e5-2660-v4-2.00ghz, avx2 |
| 8 | 2 NVIDIA Tesla K80 (4 GPUs total) | Intel Xeon E5-2670 v3 @ 2.30GHz | 24 | 256GB | haswell, e5-2670-v3-2.30ghz, avx2 |
| 6 | None | Intel Xeon E5-2660 v3 @ 2.60GHz | 20 | 256GB | haswell, e5-2660-v3-2.60ghz, avx2 |
| 6 | None | Intel Xeon E5-2660 v4 @ 2.00GHz | 28 | 256GB | broadwell, e5-2660-v4-2.00ghz, avx2 |
| 3 | None | Intel Xeon Gold 6230 @ 2.10GHz | 40 | 384GB | cascadelake, gold-6230-2.10ghz, avx-512, connectX-6 |
| 5 | None | Intel Xeon Platinum 8368 @ 2.40GHz | 76 | 1TB | icelake, platinum-8368-2.40ghz, avx-512, connectX-6, mem1TB, cores76 |
| 4 | None | Intel Xeon Gold 6230 @ 2.10GHz | 40 | 192GB | cascadelake, gold-6230-2.10ghz, avx-512, connectX-6 |
| 3 | None | Intel Xeon E5-2660 v4 @ 2.00GHz | 28 | 512GB | broadwell, e5-2660-v4-2.00ghz, avx2 |
| 3 | None | Intel Xeon Gold 6338 @ 2.00GHz | 64 | 512GB | icelake, gold-6338-2.00ghz, avx-512, connectX-6, mem512GB, cores64 |
| 2 | None | Intel Xeon Gold 6138 @ 2.00GHz | 40 | 384GB | skylake, gold-6138-2.00ghz, avx2 |
| 2 | None | Intel Xeon Silver 4116 @ 2.10GHz | 24 | 192GB | skylake, silver-4116-2.10ghz, avx-512, connectX-5 |
| 1 | None | Intel Xeon Gold 6230 @ 2.10GHz | 40 | 384GB | cascadelake, gold-6230-2.10ghz, avx-512, connectX-6, nvme |
| 2 | None | Intel Xeon Gold 6138 @ 2.00GHz | 40 | 384GB | skylake, gold-6138-2.00ghz, avx-512, connectX-5 |
| 2 | None | Intel Xeon Gold 6338 @ 2.00GHz | 64 | 512GB | icelake, platinum-8368-2.40ghz, avx-512, connectX-6 |
| 1 | None | Intel Xeon E7-4880 v2 @ 2.50GHz | 60 | 2TB | ivybridge, e7-4880-v2-2.50ghz, avx |
| 1 | 4 NVIDIA Tesla K80 (8 GPUs total) | Intel Xeon E5-2670 v3 @ 2.30GHz | 24 | 256GB | haswell, e5-2670-v3-2.30ghz, avx2 |
| 1 | 2 NVIDIA Tesla K80 (4 GPUs total) | Intel Xeon E5-2660 v4 @ 2.00GHz | 28 | 256GB | broadwell, e5-2660-v4-2.00ghz, avx2 |
| 1 | Intel Xeon Phi coprocessor | Intel Xeon E5-2670 v3 @ 2.30GHz | 24 | 256GB | haswell, e5-2670-v3-2.30ghz, avx2 |
| 1 | None | Intel Xeon Silver 4116 @ 2.10GHz | 24 | 192GB | skylake, silver-4116-2.10ghz, avx2 |
| 1 | None | Intel Xeon Gold 6138 @ 2.00GHz | 40 | 384GB | skylake, gold-6138-2.00ghz, avx-512, connectX-5, nvme |
| 1 | 1 NVIDIA V100 Tensor Core GPU | Intel Xeon Gold 6138 @ 2.00GHz | 40 | 192GB | skylake, gold-6138-2.00ghz, avx-512, v100, volta |
| 1 | None | Intel Xeon Gold 6338 @ 2.00GHz | 64 | 1TB | icelake, gold-6338-2.00ghz, avx-512, connectX-6, mem1TB, cores64 |
| 1 | 4 NVIDIA A100 | Intel Xeon Platinum 8368 @ 2.40GHz | 76 | 2TB | icelake, platinum-8368-2.40ghz, avx-512, connectX-6, a100, ampere, mem2TB, cores76 |
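To run on one of the GPU nodes above through the backfill partition, a job can request a GPU using Slurm’s generic resources. A minimal sketch (`--gres=gpu:1` is standard Slurm usage, but the exact GRES configuration on Kamiak is an assumption here; the `volta` feature tag is taken from the table above):

```bash
#!/bin/bash
#SBATCH --partition=kamiak   # backfill partition
#SBATCH --gres=gpu:1         # request one GPU (GRES name assumed)
#SBATCH --constraint=volta   # optional: pin to the V100 node via its feature tag
#SBATCH --time=0-04:00:00

srun nvidia-smi              # show the GPU allocated to the job
```

Dropping the `--constraint` line lets the scheduler place the job on any backfill node that can satisfy the GPU request.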

College Partitions

| Queue Name | Node Count | CPU Cores per Node | Memory per Node | Accelerators per Node | Wall Clock Limit per Job | Max CPU Cores per User | Max Memory per User |
|---|---|---|---|---|---|---|---|
| cahnrs | 11 | 20 | 256GB | None | 7-00:00:00 | 120 | 1.5TB |
| cahnrs_bigmem | 1 | 60 | 2TB | None | 7-00:00:00 | | |
| cahnrs_gpu | 1 | 24 | 256GB | 2 NVIDIA Tesla K80 (4 GPUs total) | 7-00:00:00 | | |
| cas | 10 | 20 | 256GB | None | 7-00:00:00 | | |
| cas | 5 | 76 | 1TB | None | 7-00:00:00 | | |
| cas | 1 | 76 | 2TB | 4 NVIDIA A100 | 7-00:00:00 | | |
| cas | 1 | 64 | 1TB | None | 7-00:00:00 | | |
| coe | 1 | 40 | 384GB | None | 7-00:00:00 | | |
| free | 3 | 24 | 256GB | 2 NVIDIA Tesla K80 (4 GPUs total) | 7-00:00:00 | | |
| vcea | 3 | 20 | 256GB | None | 7-00:00:00 | | |

Investor Partitions

| Queue Name | Node Count | CPU Cores per Node | Memory per Node | Accelerators per Node | Wall Clock Limit per Job | Max CPU Cores per User | Max Memory per User |
|---|---|---|---|---|---|---|---|
| adam | 1 | 40 | 384GB | None | 14-00:00:00 | | |
| adam | 2 | 28 | 128GB | None | 14-00:00:00 | | |
| awn | 1 | 40 | 384GB | None | 7-00:00:00 | | |
| beckman | 35 | 20 | 128GB | None | 7-00:00:00 | | |
| catalysis_gpu | 1 | 24 | 256GB | 2 NVIDIA Tesla K80 (4 GPUs total) | 7-00:00:00 | | |
| catalysis_long | 3 | 20 | 256GB | None | 7-00:00:00 | | |
| catalysis_long | 5 | 40 | 384GB | None | 7-00:00:00 | | |
| clark | 1 | 28 | 256GB | None | 7-00:00:00 | | |
| clark | 3 | 20 | 128GB | None | 7-00:00:00 | | |
| clark | 1 | 64 | 512GB | None | 7-00:00:00 | | |
| cook | 1 | 24 | 256GB | 4 NVIDIA Tesla K80 (8 GPUs total) | 7-00:00:00 | | |
| fernandez | 1 | 40 | 192GB | None | 7-00:00:00 | | |
| ficklin | 1 | 40 | 192GB | 1 NVIDIA V100 Tensor Core GPU | 30-00:00:00 | | |
| ficklin | 5 | 24 | 256GB | 2 NVIDIA Tesla K80 (4 GPUs total) | 30-00:00:00 | | |
| hipps | 5 | 24 | 82176GB | None | 7-00:00:00 | | |
| hpc_club | 1 | 28 | 256GB | 2 NVIDIA Tesla K80 (4 GPUs total) | 7-00:00:00 | | |
| hpc_club | 3 | 28 | 256GB | None | 7-00:00:00 | | |
| katz | 1 | 20 | 256GB | None | 7-00:00:00 | | |
| lee | 1 | 28 | 256GB | None | 7-00:00:00 | | |
| lee | 2 | 28 | 128GB | None | 7-00:00:00 | | |
| lofgren | 1 | 20 | 128GB | None | 14-00:00:00 | | |
| lofgren | 2 | 24 | 192GB | None | 14-00:00:00 | | |
| mainlab | 2 | 20 | 256GB | None | 7-00:00:00 | | |
| neibergs | 1 | 40 | 192GB | None | 7-00:00:00 | | |
| pacbio | 3 | 40 | 384GB | None | 7-00:00:00 | | |
| pddms | 5 | 28 | 128GB | None | 7-00:00:00 | | |
| peters | 1 | 28 | 256GB | None | 7-00:00:00 | | |
| peters | 1 | 28 | 512GB | None | 7-00:00:00 | | |
| popgenom | 3 | 20 | 256GB | None | 14-00:00:00 | | |
| rajagopalan | 1 | 40 | 384GB | None | 7-00:00:00 | | |
| ssl | 2 | 20 | 128GB | None | 7-00:00:00 | | |
| stockle | 2 | 20 | 256GB | None | 7-00:00:00 | | |
| storfer | 2 | 28 | 512GB | None | 7-00:00:00 | | |
| tanner | 1 | 24 | 192GB | None | 7-00:00:00 | | |