[[Category:Storage]]
[[Category:SweStore]]

'''This is not official yet'''

SNIC is building a storage infrastructure to complement its computational resources.

Many forms of automated measurement can produce large amounts of data. In scientific areas such as high-energy physics (the Large Hadron Collider at CERN), climate modelling, bioinformatics and bioimaging, the demand for storage is increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy that takes the needs at many levels into account and creates a unified storage infrastructure, which is now being implemented.

Swestore is developed in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], Bioimage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG] and [http://www.nrm.se/ Naturhistoriska riksmuseet].

= National storage =
The Swestore Nationally Accessible Storage system, commonly called just Swestore, is a robust, flexible and expandable long-term storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache] and [http://www.irods.org iRODS] storage systems.

Swestore is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc], [http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax]. Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with problems ranging from a simple crash of a storage element to the loss of an entire site, while still providing access to the stored data.

One of the major advantages of the distributed nature of dCache and iRODS is the excellent aggregated transfer rates that are possible. These are achieved by bypassing any central node and having transfers go directly to and from the storage elements when the protocol allows it. Swestore can achieve aggregated transfer rates in excess of 100 Gbit/s, but in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or by the number of files (typically at most 1 Gbit/s per file/connection).

== Support ==

If you have any issues using SweStore, please do not hesitate to contact [mailto:support@swestore.se support@swestore.se].

== Getting access ==
; Apply for storage
: Please follow the instructions on the [[Apply for storage on SweStore]] page.

=== dCache: Acquire an eScience client certificate ===
: Follow the instructions on [[Grid_certificates#Requesting_a_certificate|Requesting a certificate]] to get your client certificate. This step can be performed while waiting for the storage application to be approved and processed. Of course, if you already have a valid eScience certificate you do not need to acquire another one.
:; For Terena certificates
:: If you intend to access SweStore from a SNIC resource, please make sure you also [[Exporting_a_client_certificate|export the certificate]], transfer it to the intended SNIC resource and [[Preparing_a_client_certificate|prepare it for use with grid tools]] (not necessarily needed with ARC 3.x; see [[Grid_certificates#Creating_a_proxy_certificate_using_the_Firefox.2FThunderbird_credential_store|proxy certificates using the Firefox credential store]]).
:; For Nordugrid certificates
:: Please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].
; Request membership in the SweGrid VO
: Follow the instructions on [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|Requesting membership in the SweGrid VO]] to get added to the SweGrid Virtual Organisation (VO) and to request membership in your allocated storage project. Once the certificate and VO membership are in place, you create a short-lived proxy certificate for use with the grid tools, as shown below.
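
A minimal sketch of the proxy step with the ARC client, assuming your certificate and key have been prepared under <code>~/.globus/</code>; the VO string <code>swegrid.se</code> is an assumption here and may differ for your project:
<pre>
# Create a proxy certificate with a SweGrid VOMS extension,
# valid for 12 hours (the VO name is a placeholder).
arcproxy -S swegrid.se -c validityPeriod=12h

# Inspect the current proxy and its remaining lifetime.
arcproxy -I
</pre>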

=== iRODS: Acquire a SweStore YubiKey ===

For authentication in SweStore iRODS, [http://www.yubico.com/products/yubikey-hardware/yubikey/ YubiKey] one-time passwords (OTP) are used. With a simple touch of a button, a 44-character one-time password is generated and sent to the system.

When you apply for storage, please provide your email address and the physical address to which the YubiKey should be sent.

== Differences between dCache and iRODS ==

: dCache uses certificates for authentication.
: iRODS uses YubiKeys for authentication.

== dCache ==
To protect against silent data corruption, the dCache storage system checksums all stored data and periodically verifies the data against these checksums.

The system does NOT yet provide protection against user errors such as inadvertent file deletions.

=== Access protocols ===
; Currently supported protocols
: GridFTP - gsiftp://gsiftp.swestore.se/
: Storage Resource Manager (SRM) - srm://srm.swegrid.se/
: Hypertext Transfer Protocol (read-only) and Web Distributed Authoring and Versioning (WebDAV) - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/
: NFS 4.1

For authentication, eScience certificates are used, which provide a higher level of security than legacy username/password schemes. A basic transfer via the WebDAV endpoint can be scripted with curl, as sketched below.
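
A minimal curl sketch for the authenticated WebDAV endpoint, assuming a PEM certificate/key pair under <code>~/.globus/</code>; the project name <code>myproject</code> and the file name are placeholders:
<pre>
# List the contents of a project directory.
curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
     https://webdav.swestore.se/snic/myproject/

# Download a single file into the current directory.
curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
     -O https://webdav.swestore.se/snic/myproject/results.tar.gz
</pre>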

=== Download and upload data ===
; Interactive browsing and manipulation of single files
: SweStore is accessible in your web browser in two ways: as a directory index at https://webdav.swestore.se/ and as an interactive file manager at https://webdav.swestore.se/browser/. '''Note''' that the interactive file manager has many features and functions that are not supported by SweStore; only the basic file transfer features are supported.
: To browse private data you need to have your certificate installed in your browser (the default with Terena certificates, see above). Projects are organized under the <code>/snic</code> directory as <code><nowiki>https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/</nowiki></code>.
; Upload and delete data interactively or with automation
There are several tools capable of using the protocols provided by SweStore national storage. For interactive usage on SNIC clusters we recommend the ARC tools, which should be installed on all SNIC resources. As an integration point for building scripts and automated systems we suggest the curl program and library. A short ARC example follows the list below.
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]]. '''Recommended''' method when logged in on SNIC resources.
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]].
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].
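
A rough sketch of the ARC workflow, assuming a valid proxy certificate (see above); the project name <code>myproject</code> and the file names are placeholders:
<pre>
# Upload a local file to SweStore over GridFTP.
arccp results.tar.gz gsiftp://gsiftp.swestore.se/snic/myproject/results.tar.gz

# List the remote directory to verify the upload.
arcls gsiftp://gsiftp.swestore.se/snic/myproject/

# Copy the file back from SweStore under a new local name.
arccp gsiftp://gsiftp.swestore.se/snic/myproject/results.tar.gz restored.tar.gz

# Remove the remote copy.
arcrm gsiftp://gsiftp.swestore.se/snic/myproject/results.tar.gz
</pre>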

=== Tools and scripts ===

There are a number of externally developed tools and utilities that can be useful. Here are some links:

* [https://github.com/samuell/arc_tools ARC_Tools] - convenience scripts for the ARC client (only a recursive rmdir so far).
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - contains the ARC Storage Explorer (SweStore-supported development).
* Transfer script, [[SweStore/swetrans_arc|swetrans_arc]], provided by Adam Peplinski / Philipp Schlatter.
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]

=== Slides and more ===

[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from the seminar for Lund users on April 18th]

=== Usage monitoring ===
* [http://status.swestore.se/munin/monitor/monitor/ Per-project monitoring of Swestore usage]

== iRODS ==

=== Supported clients ===

: iDROP web - point your web browser to [https://iweb.swestore.se iweb.swestore.se]
: Command-line client [http://eirods.org/download/ eirods icommands] - see [[SweStore/iRODS_icommand|How to use icommands on SNIC clusters]]

=== The SweStore iRODS system ===

The SweStore iRODS system is hosted at NSC and runs on two physical servers as a collection of virtual machines.

The iCAT server handles the metadata. It runs a Postgres database which contains information about where to find any particular file in the system.

There are four storage servers, which have a small amount of local disk space and use the dCache system via NFS4 to store larger amounts of data.

=== Using the SweStore iRODS system ===

Detailed documentation, papers and resources are available from the e-iRODS web site, http://www.eirods.org.

The web site for the community iRODS is http://www.irods.org.

To use the system you need to have the iRODS command-line client installed, or you can use iDROP web. For Unix systems the iRODS command-line client is available as an installable package for various Linux platforms from the downloads section of the e-iRODS website.

The community iRODS client should also work, but you need to modify its build configuration (iRODS/config/config.mk) before compiling:
<pre>
# Enable PAM authentication (SweStore uses PAM for YubiKey OTP logins).
PAM_AUTH = 1
PAM_AUTH_NO_EXTEND = 1
# Encrypt client-server traffic with SSL.
USE_SSL = 1
</pre>

==== Command line client ====

The command-line client is natural to use for Unix users. There are versions of the usual ls, rm, mv, mkdir, pwd and rsync commands prefixed with an i for iRODS, i.e. ils, irm, imv, imkdir etc.

As expected, iput and iget move files to and from the iRODS system, as in the example below. All of these commands print a short help text when given the -h option.
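
A minimal session sketch; the file names are placeholders:
<pre>
# Upload a local file into the current iRODS collection.
iput data.tar.gz

# List the contents of the current collection.
ils

# Fetch the file back under a different local name.
iget data.tar.gz restored.tar.gz

# Remove the file from iRODS.
irm data.tar.gz
</pre>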

To use these, we first need to initialize the iRODS environment. There is an environment file, .irodsEnv, in the .irods subdirectory of the home directory, which contains information about where and how to access the iRODS metadata (iCAT) server.

It looks like this (placeholders are in <>):
<pre>
irodsHost 'irods.swestore.se'
irodsPort 1247
irodsDefResource 'snicdefResc'
irodsHome '/snicZone/home/<email address>'
irodsCwd '/snicZone/home/<email address>'
irodsUserName '<email address>'
irodsZone 'snicZone'
irodsAuthScheme 'PAM'
</pre>

The iCAT server is irods.swestore.se. The default iRODS zone name is snicZone. The default resource is snicdefResc.

With the correct environment file, all we need is a YubiKey and we can run the iinit command to authenticate to the iCAT server, as in the sketch below. After that we can use the usual iCommands. The ticket is valid for 8 hours.
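
A sketch of the login step, assuming the environment file above is in place:
<pre>
# Authenticate to the iCAT server; at the password prompt,
# touch the YubiKey button to send a one-time password.
iinit

# Show the environment the client is actually using.
ienv

# Log out and discard the cached credential when done.
iexit full
</pre>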

More details on the iCommands are available at https://www.irods.org/index.php/icommands

==== iDROP web client ====

The web client is accessible via the URL https://iweb.swestore.se/. A login screen will be presented first, and your YubiKey should be used to log in.

= Center storage =
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.

== Unified environment ==
To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources, as illustrated in the sketch after this list:

* <code>SNIC_BACKUP</code> – the user's primary directory at the centre<br>(the part of the centre storage that is backed up)
* <code>SNIC_NOBACKUP</code> – recommended directory for project storage without backup<br>(also on the centre storage)
* <code>SNIC_TMP</code> – recommended directory for best performance during a job<br>(local disk on nodes if applicable)
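
A hypothetical batch-job fragment showing how the variables are meant to be combined; the project name <code>myproject</code>, the file names and the <code>analyse</code> program are placeholders:
<pre>
# Stage input data onto fast node-local disk for the duration of the job.
cp "$SNIC_NOBACKUP/myproject/input.dat" "$SNIC_TMP/"

# Run the computation against the node-local copy (placeholder program).
cd "$SNIC_TMP"
./analyse input.dat > output.dat

# Save the results to project storage before the job ends;
# anything that needs backup belongs under $SNIC_BACKUP instead.
cp output.dat "$SNIC_NOBACKUP/myproject/"
</pre>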