SweStore introduction

== Technical overview ==
 
 
 
The SweStore National Storage infrastructure is implemented using the distributed storage solution [http://www.dcache.org dCache].

Slides on SweStore National Storage: [[File:TekniskbeskrivningSweStore.pdf]]
 
The core services of the system are located at HPC2N at Umeå University. There are over 65 (as of November 2011) online storage pools attached to the system, located at Lunarc, C3SE, NSC, PDC, Uppmax and HPC2N. A file upload usually goes directly from the source to one of the storage pools without passing through the core services, which gives the system a high aggregate transfer performance. All files in the SNIC part of the storage are replicated to a different site for availability reasons. As long as the core services are reachable, an entire site can be offline without any loss of functionality for SweStore.
 
 
 
There are several access protocols for SweStore National Storage, the primary ones being SRM and WebDAV.
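
As a rough illustration of WebDAV access, the sketch below uploads and downloads a file using Python's requests library with a client certificate. The endpoint https://webdav.swestore.se, the project path and the certificate locations are assumptions/placeholders; the actual values depend on your project and on how your certificate is set up (see [[Grid certificates]]).

<pre>
#!/usr/bin/env python
# Minimal sketch of file transfer to/from SweStore over WebDAV.
# The endpoint, project path and certificate paths are placeholders.

import requests

ENDPOINT = "https://webdav.swestore.se"              # assumed WebDAV endpoint
REMOTE_PATH = "/snic/myproject/example.dat"          # hypothetical project path
CERT = ("/path/to/usercert.pem", "/path/to/userkey.pem")  # client cert + (unencrypted) key

# Upload a local file with an HTTP PUT request.
with open("example.dat", "rb") as f:
    r = requests.put(ENDPOINT + REMOTE_PATH, data=f, cert=CERT)
    r.raise_for_status()

# Download it again with an HTTP GET request.
r = requests.get(ENDPOINT + REMOTE_PATH, cert=CERT)
r.raise_for_status()
with open("example-copy.dat", "wb") as f:
    f.write(r.content)
</pre>

Any other HTTP client that supports client certificates (for example curl) can be used in the same way.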
 
 
 
SweStore currently uses certificates for authentication. Please see [[Grid certificates]] for information on how to get and manage certificates.
 
 
 
There are no backups of data on SweStore. Files are replicated to minimize the risk of data loss due to hardware problems, but if the end user deletes a file it is lost.
 
 
 
There is currently no tape backend attached to SweStore, but that may change in the future. A tape backend would be used for archiving data that has not been accessed for a long time.
 
 
 
A very convenient aspect of SweStore access is that staging data in and out of grid clusters with ARC works very well. Instead of copying data to the clusters and then copying the results out again, you can let ARC do that for you: just specify the URLs of the input files you need and let ARC do the work.
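
As an illustration, the xRSL job description sketched below lets ARC stage an input file in from SweStore before the job starts and an output file back out when it finishes. The host srm.swegrid.se and the /snic/myproject path are assumptions; substitute the SRM (or WebDAV) URLs for your own project.

<pre>
&
(executable = "run.sh")
(jobName = "swestore-staging-example")

(* run.sh is uploaded from the submission directory;
   input.dat is staged in by ARC directly from SweStore *)
(inputFiles =
  ("run.sh" "")
  ("input.dat" "srm://srm.swegrid.se/snic/myproject/input.dat"))

(* result.dat is staged back out to SweStore after the job finishes *)
(outputFiles =
  ("result.dat" "srm://srm.swegrid.se/snic/myproject/result.dat"))

(stdout = "stdout.txt")
(stderr = "stderr.txt")
</pre>

Such a description could then be submitted with ARC's arcsub client; ARC downloads the inputs to the cluster before the job starts and uploads the outputs afterwards.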
 
