From SNIC Documentation
Revision as of 12:18, 18 April 2013 by Lars Viklund (HPC2N)

SNIC is building a storage infrastructure to complement the computational resources.

Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.

Swestore is run in collaboration with ECDS, SND, Bioimage Sweden, BILS, UPPNEX, WLCG and Naturhistoriska Riksmuseet.

National storage

The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable long-term storage system aimed at storing the large amounts of data produced by various Swedish research projects. It is based on the dCache storage system and is distributed across the SNIC centres C3SE, HPC2N, Lunarc, NSC, PDC and Uppmax.

Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues, ranging from a simple crash of a storage element to the loss of an entire site, while still providing access to the stored data. To protect against silent data corruption, dCache checksums all stored data and periodically verifies the data against those checksums.
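dCache installations commonly record Adler32 checksums for stored files (the specific algorithm is an assumption here, not stated above). A minimal sketch of computing the same checksum locally, so a file can be compared against what the server reports before or after a transfer:

```python
import zlib

def adler32_hex(path, chunk_size=1 << 20):
    """Compute the Adler32 checksum of a file, chunk by chunk, and
    return it as an 8-digit hex string (the form dCache-style tools
    usually report)."""
    checksum = 1  # Adler32's defined starting value, matching zlib's seed
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            checksum = zlib.adler32(chunk, checksum)
    return format(checksum & 0xFFFFFFFF, "08x")
```

Reading in chunks keeps memory use flat even for very large files, which matters at the data volumes discussed here.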

The system does NOT yet provide protection against user errors such as inadvertent file deletion.

One of the major advantages of the distributed nature of dCache is the excellent aggregated transfer rate it makes possible. This is achieved by bypassing a central node and sending transfers directly to/from the storage elements when the protocol allows it. Swestore can achieve aggregated transfer rates in excess of 100 Gbit/s, but in practice throughput is limited by each university's connectivity (usually 10 Gbit/s) and by the per-connection rate (typically at most 1 Gbit/s per file/connection).
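Since a single connection tops out well below the aggregate capacity, high throughput comes from transferring many files concurrently. A sketch of that pattern, where `transfer_one` is a stand-in for whatever per-file transfer you actually invoke (for example a subprocess call to one of the tools described below):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def transfer_all(files, transfer_one, workers=8):
    """Run one transfer per file across `workers` parallel connections.

    `transfer_one` is a placeholder for your real per-file transfer;
    running several at once is what lets aggregate throughput exceed
    the roughly 1 Gbit/s per-connection cap."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(transfer_one, f): f for f in files}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```

Threads are sufficient here because each worker mostly waits on I/O; the worker count is a tuning knob, not a recommendation.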

Access protocols

Currently supported protocols
GridFTP - gsiftp://gsiftp.swestore.se/
Storage Resource Manager - srm://srm.swegrid.se/
Hypertext Transfer Protocol (read-only) and Web Distributed Authoring and Versioning (WebDAV) - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/
Protocols in evaluation/development

For most of the access protocols, authentication is not by username/password but by X.509 client certificates, typically acquired from TCS eScience.

Getting access

1. Apply for storage
   Please follow the instructions here.
2. Get a client certificate
   Follow the instructions here to get your client certificate. For Terena certificates, please make sure you also export the certificate for use with grid tools. For Nordugrid certificates, please make sure to also install your client certificate in your browser.
3. Request membership in the SweGrid VO
   Follow the instructions here to get added to the SweGrid virtual organisation.
4. Transmit and prepare the certificate
   In order to use the client certificate on SNIC resources for generating proxy certificates and using command line tools, the certificate needs to be converted into PEM files on the target cluster, if it is not already in that format.
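The conversion is commonly done with openssl. A hedged sketch, assuming the certificate was exported from the browser as a PKCS#12 bundle and that the conventional ~/.globus file locations are wanted (check your centre's instructions for the exact paths they expect):

```python
import os

def pem_conversion_commands(p12_path, globus_dir="~/.globus"):
    """Return the two openssl invocations that split a PKCS#12 bundle
    into the usercert.pem/userkey.pem pair that grid command-line
    tools conventionally read from ~/.globus. Run each with
    subprocess.run(cmd, check=True); openssl prompts for the bundle's
    export password."""
    globus_dir = os.path.expanduser(globus_dir)
    cert = os.path.join(globus_dir, "usercert.pem")
    key = os.path.join(globus_dir, "userkey.pem")
    return [
        # Certificate only, no private key; safe to leave world-readable.
        ["openssl", "pkcs12", "-in", p12_path, "-clcerts", "-nokeys", "-out", cert],
        # Private key only; afterwards run os.chmod(key, 0o400) --
        # grid tools refuse keys that other users can read.
        ["openssl", "pkcs12", "-in", p12_path, "-nocerts", "-out", key],
    ]
```

The same two commands can of course be typed directly in a shell; the function form just makes them easy to embed in a setup script.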

Download and upload data

Interactive browsing and manipulation of single files
SweStore is accessible in your web browser in two ways, as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. To browse private data you must first install your certificate in your browser (see above). Projects are organized under the /snic directory as https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/.
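Because project URLs follow the fixed /snic layout above, scripts can assemble them mechanically. A tiny sketch (the project name is a placeholder for your own):

```python
from urllib.parse import quote

WEBDAV_BASE = "https://webdav.swestore.se"

def project_url(project, *parts):
    """Build the WebDAV URL for a path inside a project under /snic,
    percent-encoding each path component so spaces and other special
    characters survive."""
    segments = ["snic", project, *parts]
    return WEBDAV_BASE + "/" + "/".join(quote(s) for s in segments)
```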
Upload and delete data interactively or with automation

There are several tools that are capable of using the protocols provided by SweStore national storage. For interactive usage on SNIC clusters we recommend using the ARC tools which should be installed on all SNIC resources. As an integration point for building scripts and automated systems we suggest using the curl program and library.

Use the ARC client. Please see the instructions for Accessing SweStore national storage with the ARC client.
Use lftp. Please see the instructions for Accessing SweStore national storage with lftp.
Use cURL. Please see the instructions for Accessing SweStore national storage with cURL.
Use globus-url-copy. Please see the instructions for Accessing SweStore national storage with globus-url-copy.
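As a sketch of the curl integration point mentioned above, one plausible shape for a scripted, certificate-authenticated WebDAV upload is shown below. The PEM paths are the conventional ~/.globus locations and the URL is a placeholder; the linked instructions are authoritative for the exact forms SweStore expects.

```python
import os

def curl_upload_cmd(local_path, remote_url,
                    cert="~/.globus/usercert.pem",
                    key="~/.globus/userkey.pem"):
    """Build a curl invocation that PUTs a local file to a WebDAV URL,
    authenticating with an X.509 client certificate. -T performs an
    HTTP PUT of the file; --fail makes curl exit non-zero on HTTP
    errors so calling scripts can detect failed transfers."""
    return ["curl", "--fail",
            "--cert", os.path.expanduser(cert),
            "--key", os.path.expanduser(key),
            "-T", local_path, remote_url]
```

Pass the result to subprocess.run(cmd, check=True); for downloads the same flags apply with `-o outfile URL` in place of `-T`.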

More information

If you have any issues using SweStore please do not hesitate to contact swestore-support.

Tools and scripts

A number of externally developed tools and utilities can be useful. Here are some links:

Center storage

Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem the same way on all computational resources at a centre, and a unified structure and nomenclature across all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their data when clusters are decommissioned, not even when the storage hardware itself is replaced.

Unified environment

To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources:

  • SNIC_BACKUP – the user's primary directory at the centre
    (the part of the centre storage that is backed up)
  • SNIC_NOBACKUP – recommended directory for project storage without backup
    (also on the centre storage)
  • SNIC_TMP – recommended directory for best performance during a job
    (local disk on nodes if applicable)
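A script that should run unchanged on any SNIC resource can pick its working directories from these variables. A minimal sketch with fallbacks for machines where they are unset (the fallback choices are an assumption for local testing, not part of the convention):

```python
import os
import tempfile

def snic_dirs():
    """Resolve the three conventional SNIC storage locations, falling
    back to local defaults when a variable is unset (e.g. when testing
    outside a SNIC resource)."""
    home = os.path.expanduser("~")
    return {
        "backup": os.environ.get("SNIC_BACKUP", home),        # backed-up centre storage
        "nobackup": os.environ.get("SNIC_NOBACKUP", home),    # project storage, no backup
        "tmp": os.environ.get("SNIC_TMP", tempfile.gettempdir()),  # fast node-local scratch
    }
```

Writing job output to the "tmp" location and copying results to the "nobackup" location at the end of a job follows the performance guidance above.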