Storage

Overview

Kamiak provides several types of storage that are available to users and lab groups.

Name      Location    Snapshots?  Backed up?          Description
Home      /home       Yes         Yes, semiannually   Storage provided to all users (100GB per user).
Lab       /data/lab   Yes         Yes, semiannually   Storage provided to all PIs (500GB per PI). Additional lab storage is available for annual rental in 500GB units.
Scratch   /scratch    No          No                  Scratch storage with a 2-week folder lifetime, limited to 10TB per user.

All storage is a shared resource that can be accessed on every login and compute node. If you create a file in your home, lab, or scratch directories on a login or compute node, that same file is also available on all other login and compute nodes.

The shared storage system is implemented using the WekaIO parallel file system on all-flash commodity hardware, with a total usable size of approximately 1PB. There is no difference in access speed among home, lab, and scratch storage. In particular, scratch performance is no different from that of home or lab storage; scratch is intended strictly as a temporary large storage area for the duration of a job.

Data on home and lab folders is protected through daily snapshots retained for 3 days. In addition, home and lab data is backed up twice a year. Beyond that, the data owners (you) are responsible for protecting their own data by copying it as warranted out of Kamiak onto another system.

Please note that the above storage sizes are in gigabytes (GB), which are 1000³ bytes, not gibibytes (GiB), which are 1024³ bytes and thus slightly larger. Terabytes (TB) are similarly 1000⁴ bytes, as opposed to tebibytes (TiB), which are 1024⁴ bytes. When checking your storage usage, be sure to use df -H or du -h --si, which report sizes in gigabytes and terabytes; without those options the commands report gibibytes and tebibytes.
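As a quick sanity check of the difference, the 100GB home quota works out to roughly 93 GiB:

```shell
# 100 GB is 100 * 1000*1000*1000 bytes; one GiB is 1024*1024*1024 bytes.
echo $(( 100 * 1000 * 1000 * 1000 / (1024 * 1024 * 1024) ))   # prints 93
```

This is why a directory that plain df -h reports as 93G can nonetheless be at its 100GB quota limit.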

Home Storage

All Kamiak users are provided with a home directory. Home directories are named using your WSU netid. Your usage can be viewed using the df -H command, which shows your quota limit and how much you have used.

$ echo $HOME
/home/myWSU.netid
$ cd $HOME
$ df -H .

Detailed usage by subfolder can be viewed using du -h --si:

$ cd $HOME
$ du -h --si -d 1 --total .

The size of your home directory cannot be increased. If you need more storage, you can use your PI’s lab folder or scratch.

Lab Storage

Each PI’s lab group also gets a folder in /data/lab named after their lab group, with a default size specified in the table at the top of this page. Additional storage is available for rent on an annual basis through the CIRC service center.

For more information about renting additional lab storage, see the Become an Investor page.

Scratch Storage

The cluster also provides temporary scratch storage that all users can use without cost. The scratch storage resides on the cluster’s shared file system. The following guidelines apply to its use:

  • To use scratch, you must create a folder (called a “workspace”) in which to keep your data. You must use “mkworkspace” as described below to create a scratch folder.
  • Each workspace and all data within it has a maximum lifetime of 2 weeks, after which it is removed automatically.
  • Each user is limited to 10TB of scratch space. User quotas are in effect, and will warn you if you exceed the limit.
  • This is a shared resource with finite space. Care should be taken to ensure its appropriate use.

Creating and listing scratch folders

To use scratch storage, you’ll need to create a “workspace” folder in which to store your data. The maximum lifetime of a scratch workspace is 2 weeks. Scratch storage is not intended for permanent data and users should consider lab space for that use.

Let’s look at an example of creating and listing a workspace. Run each command with the option --help to see what options are available.

Use mkworkspace to create a scratch folder. Run the command either on a login node, within an interactive idev session on a compute node, or within a batch job.

$ myscratch="$(mkworkspace)"

$ echo $myscratch
/scratch/user/myWSU.netid/20210422_170154

You can also specify a prefix to the system-generated scratch folder name:

$ myscratch="$(mkworkspace -n myprefix)"
$ echo $myscratch
/scratch/user/myWSU.netid/myprefix_20210422_170154

For example, a prefix can be used to prepend the job number to a scratch folder created inside a job:

$ myscratch="$(mkworkspace -n $SLURM_JOBID)"
$ echo $myscratch
/scratch/user/myWSU.netid/31743729_20210422_170154

where 31743729 is the job number. The other fields are the date and time.

If you need to use a scratch folder across jobs, but don’t want to change your job scripts every time the scratch folder name changes, you can either create a symbolic link to the generated unique name or set an environment variable, and use that in your job scripts. However, remember that the link or variable has to be changed when the scratch folder expires.
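A minimal sketch of the symbolic-link approach (here mktemp -d stands in for mkworkspace so the example runs anywhere; on Kamiak you would use mkworkspace itself, and the link name is illustrative):

```shell
# On Kamiak: myscratch="$(mkworkspace)". mktemp -d is a stand-in here.
myscratch="$(mktemp -d)"
# Point a stable name at the current workspace; job scripts refer to the link.
ln -sfn "$myscratch" my_scratch_link
# The link resolves to the workspace until the workspace expires.
readlink my_scratch_link
```

When the workspace expires, create a new one with mkworkspace and rerun the ln -sfn command to repoint the link; the job scripts themselves stay unchanged.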

Use lsworkspace to list all your workspaces and their expiration dates.

$ lsworkspace
/scratch/user/myWSU.netid/20210422_170154
Created: Apr 22 2021
Expires: May 06 2021

Use rm to delete the contents of a scratch folder.

$ rm -r -I /scratch/user/myWSU.netid/20210422_170154/*

While you can delete the contents within a scratch folder, you cannot delete the scratch folder itself.

Shortly after a workspace expires, the folder and all of the contents within it will be automatically deleted.

It is good practice to remove workspace contents as soon as they are no longer needed rather than allowing them to expire. This frees up the storage space for others to use.

Snapshots and Data Recovery

Snapshots are created for both /home and /data folders, but not for scratch. The snapshots are created daily and preserved for 3 days. The snapshots hold a read-only copy of the original data at that point in time, and can be used to recover data that has been deleted or modified since the snapshot was taken.

The snapshots are stored in “/home/.snapshots” and “/data/.snapshots”, in subfolders named by date. For example,

/home/.snapshots/daily.2021-06-30_0000/myWSU.netid

It is imperative that when data loss occurs users act quickly to recover it from a snapshot. Otherwise, the data will be lost permanently when the snapshots expire and are automatically removed.

Listing snapshots

Snapshots are contained in a hidden folder called .snapshots in /home and /data. Note the leading dot and the fact that this directory is invisible under ls (even with the -A option). However, its contents can be accessed explicitly:

$ ls -1 /home/.snapshots
daily.2021-06-28_0000
daily.2021-06-29_0000
daily.2021-06-30_0000
$ ls -1 /data/.snapshots
daily.2021-06-28_0000
daily.2021-06-29_0000
daily.2021-06-30_0000

Recovering data

To recover data from a snapshot, simply identify which snapshot contains the data, then copy it out of the snapshot and into your storage:

$ rm -f test.py # oops
$ ls test.py
ls: cannot access test.py: No such file or directory
$ cp /home/.snapshots/daily.2021-06-30_0000/myWSU.netid/test.py test.py
$ ls test.py
test.py

In the above example “daily.2021-06-30_0000” is the most recent snapshot, so that’s where we copy our data from. It is important to act quickly and copy the data out of the snapshot before it expires and the data is lost forever.

File Permissions

Storage on Kamiak uses Linux file ownership and permissions for data security. An explanation of how ownership and permissions behave in Linux can be found in many guides available online.

As data owners on Kamiak, you are responsible for the security of your data and must allow or restrict access to it as needed. Importantly, granting “other” the write permission (e.g. chmod 777) is considered dangerous. You are highly encouraged to use file permission groups to grant specific additional users or groups access to your data.

Each file and directory has three sets of permissions: one for the owning user (u), one for the owning group (g), and one for everyone else (o). Each set is some combination of read (r), write (w), and execute file/traverse folder (x). Each user belongs to a primary group, which becomes the group owner of files that user creates, and also belongs to one or more supplemental groups, which can also be used to access files. You can see the groups you are in using id.
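The three permission sets appear in ls -l output as a single mode string, which GNU stat can also print. A small sketch, using an illustrative directory name:

```shell
# Create a directory and restrict it so only the user and group have access.
mkdir -p perm_demo
chmod u=rwx,g=rx,o= perm_demo
# The mode string is: file type + user triad + group triad + other triad.
stat -c '%A' perm_demo    # prints drwxr-x---
```

Reading left to right: d marks a directory, the user has rwx, the group has r-x (read and traverse, but not write), and everyone else has no access.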

Let’s look at an example:

$ cd /data/myLabName/

$ mkdir example

$ ls -ld example/
drwxr-xr-x 2 my.NID its_p_sys_ur_kam-its 4096 Aug 10 08:14 example

In that example we created a new directory in lab space. The directory is owned by my user and my lab group. However, the permissions rwxr-xr-x show that the group has only read and traverse access, not write. Let’s allow our group to write data into the directory:

$ chmod g+w example/

$ ls -ld example/
drwxrwxr-x 2 my.NID its_p_sys_ur_kam-its 4096 Aug 10 08:14 example/

We can also secure the directory by preventing anyone else (users not in our lab group) from accessing data in the directory:

$ chmod o-rwx example/

$ ls -ld example/
drwxrwx--- 2 my.NID its_p_sys_ur_kam-its 4096 Aug 10 08:14 example/

Default file permissions and umask

By default, files and directories you create are readable by your group, but not readable or writable by any other user. To give new files different permissions, you can set your umask, which determines the permissions of newly created files and folders. To see what umask is in effect, type umask -S.

  • The default, which restricts access to members of your lab, is umask u=rwx,g=rx,o= which removes “other” access and gives the group owner read and execute access.
  • To allow group write permission, type umask u=rwx,g=rwx,o=.
  • To allow “other” read access, type umask u=rwx,g=rx,o=rx.
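A short sketch of how the umask shapes new files and directories (run in a throwaway directory; the names are illustrative, and octal 027 is used as it is equivalent to u=rwx,g=rx,o=):

```shell
# Lab-only default: no "other" access on anything created from here on.
umask 027                 # same as: umask u=rwx,g=rx,o=
mkdir umask_demo_dir
touch umask_demo_file
# Directories start from 777 and files from 666, minus the umask bits.
stat -c '%a %n' umask_demo_dir umask_demo_file
# prints 750 for the directory and 640 for the file
```

Note that the umask only affects files created after it is set; it never changes existing files, which is why chmod is still needed for those.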

In order for your new umask to be permanent, add one of the above umask commands to the login script in your home directory. This is typically .bash_profile or .bashrc (note the leading dot in the file names) but differs if you use a shell other than bash. For example, to change your umask to allow “other” read access, type the following to append the umask command to your .bashrc:

$ echo "umask u=rwx,g=rx,o=rx" >> ~/.bashrc