SNIC is building a storage infrastructure to complement the computational resources.
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.
The aim of the nationally accessible storage is to build a robust, flexible and expandable system that can be used in most cases where access to large scale storage is needed. To the user it should appear as a single large system, while it is desirable that some parts of the system are distributed across all SNIC centra to benefit from the advantages of, among other things, locality and cache effects. The system is intended as a versatile long-term storage system.
Supported access protocol
- Today SweStore supports the following protocols:
- srm://, gsiftp://, http:// (read-only), https:// (read-only), WebDAV (read-write)
- Support for the following protocols is planned:
- NFS4.1, iRODS
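The currently supported protocols map directly onto standard command-line tools. As an illustrative sketch, a publicly readable file could be fetched over the read-only HTTPS interface with cURL; the project path `/snic/myproject/data.tar.gz` below is hypothetical and should be replaced by your own allocation's path:

```shell
# Download a publicly readable file over the read-only HTTPS interface.
# NOTE: the path /snic/myproject/data.tar.gz is a hypothetical example.
curl -O https://webdav.swegrid.se/snic/myproject/data.tar.gz
```

Private data requires a client certificate (see below) and the read-write WebDAV endpoint.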
- Apply for storage
- Please follow the instructions here
- Get a client certificate.
- Follow the instructions here to get your client certificate. For Terena certificates, please make sure you also export the certificate for use with grid tools. For Nordugrid certificates, please make sure to also install your client certificate in your browser.
- Request membership in the SweGrid VO.
- Follow the instructions here to get added to the SweGrid virtual organisation.
Download and upload data
- Browse and download data
- SweStore is accessible from your web browser at https://webdav.swegrid.se/. To browse private data you must first install your certificate in your browser (see above). Your data is available at
- Upload and delete data
- Use the ARC client. Please see the instructions for Accessing SweStore national storage with the ARC client.
- Use cURL. Please see the instructions for Accessing SweStore national storage with cURL.
- Use lftp. Please see the instructions for Accessing SweStore national storage with lftp.
- Use globus-url-copy. Please see the instructions for Accessing SweStore national storage with globus-url-copy.
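For the cURL route, upload and delete are plain WebDAV operations authenticated with your client certificate. The sketch below assumes the certificate and key have been exported to `~/.globus/` (a common grid convention; your paths may differ) and uses a hypothetical allocation directory `/snic/myproject`:

```shell
# Upload a file over WebDAV using a grid client certificate.
# The certificate paths and the /snic/myproject directory are assumptions.
curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
     --upload-file results.tar.gz \
     https://webdav.swegrid.se/snic/myproject/results.tar.gz

# Delete the same file with an HTTP DELETE request.
curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
     --request DELETE \
     https://webdav.swegrid.se/snic/myproject/results.tar.gz
```

See the linked per-tool instructions above for the authoritative command forms.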
Examples of storage projects
Below are some examples of projects that are using SweStore today.
| Allocation name | Size in TB | Project full name |
|---|---|---|
| uppnex | 140 | UPPmax NExt Generation Sequencing Cluster & Storage |
| brain_protein_atlas | 10 | Mouse brain protein atlas project |
| scims2lab | 20 | Identification of novel gene models by matching mass spectrometry data against 6-frame translations of the human genome |
| subatom | | Low-energy nuclear theory and experiment |
| genomics-gu | 10 | Genomics Core Facility, Sahlgrenska Academy at the University of Gothenburg |
| Chemo | 5 | Genetic interaction networks in human disease |
| cesm1_holocene | 30 | Arctic sea ice in warm climates |
- SweStore introduction
- Per-project monitoring of SweStore usage
- Accessing SweStore national storage with the ARC client
If you have any issues using SweStore, please do not hesitate to contact swestore-support.
Tools and scripts
A number of externally developed tools and utilities can be useful. Here are some links:
- ARC_Tools - Convenience scripts for the arc client (Only a recursive rmdir so far).
- ARC Graphical Clients - Contains the ARC Storage Explorer (SweStore supported development).
- Transfer script, swetrans_arc, provided by Adam Peplinski / Philipp Schlatter
- Documentation of the ARC Python API (PDF)
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature across all centra. Unlike cluster storage, which is tightly associated with a single cluster and therefore has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, or even when the storage hardware itself is replaced.
To make the usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources:
- SNIC_BACKUP – the user's primary directory at the centre (the part of the centre storage that is backed up)
- SNIC_NOBACKUP – recommended directory for project storage without backup (also on the centre storage)
- SNIC_TMP – recommended directory for best performance during a job (local disk on nodes, if applicable)
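In a batch job, these variables are typically combined: input data is staged to the fast node-local disk, the job runs there, and results are copied back to centre storage before the job ends. A minimal sketch, in which the program name `my_simulation` and the directory layout are hypothetical:

```shell
#!/bin/bash
# Hypothetical job-script sketch using the SNIC environment variables.
# Stage input to fast node-local disk, run there, copy results back.

# Stage input from no-backup project storage to node-local scratch.
cp "$SNIC_NOBACKUP/input.dat" "$SNIC_TMP/"
cd "$SNIC_TMP"

# Run the computation on local disk (my_simulation is a placeholder).
./my_simulation input.dat > output.dat

# Copy results back before the job ends; SNIC_TMP is node-local
# and is not preserved after the job finishes.
cp output.dat "$SNIC_BACKUP/results/"
```

The choice of `SNIC_NOBACKUP` versus `SNIC_BACKUP` for results depends on whether the data needs backup; large intermediate files usually belong in the no-backup area.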