Swestore-irods

From SNIC Documentation

#REDIRECT [[Swestore iRODS is decommissioned]]

[[Category:Storage]]
[[Category:SweStore]]
 
 
 
'''This is not official yet'''
 
 
 
SNIC is building a storage infrastructure to complement the computational resources.
 
 
 
Many forms of automated measurement can produce large amounts of data. In scientific areas such as high-energy physics (the Large Hadron Collider at CERN), climate modelling, bioinformatics and bioimaging, the demands for storage are increasing dramatically. To serve these and other user communities, SNIC appointed a working group to design a storage strategy that takes needs at many levels into account and defines a unified storage infrastructure, which is now being implemented.
 
 
 
Swestore is run in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], BioImage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG] and [http://www.nrm.se/ Naturhistoriska Riksmuseet].
 
 
 
= National storage =
 
The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable long-term storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache] and [http://www.irods.org iRODS] storage systems.
 
 
 
Swestore is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc], [http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax]. Data is stored in two copies with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues ranging from a simple crash of a storage element to losing an entire site while still providing access to the stored data.
 
 
 
One of the major advantages of the distributed nature of dCache and iRODS is the excellent aggregated transfer rates that are possible. This is achieved by bypassing a central node and routing transfers directly to/from the storage elements when the protocol allows it. The Swestore Nationally Accessible Storage system can achieve aggregated transfer rates in excess of 100 Gigabit per second, although in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or, when moving only a few files, by the per-connection limit (typically at most 1 Gbit/s per file/connection).
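As a back-of-envelope illustration of the per-file limit, moving 1 TB over a single 1 Gbit/s connection takes a little over two hours; the aggregate rates above are only reached by transferring many files in parallel:

```shell
# Back-of-envelope: moving 1 TB over a single ~1 Gbit/s connection.
TB_BITS=8000000000000      # 1 TB = 10^12 bytes = 8*10^12 bits
RATE=1000000000            # 1 Gbit/s in bits per second
SECS=$((TB_BITS / RATE))
echo "1 TB at 1 Gbit/s takes ${SECS} s (~$((SECS / 3600)) h $(( (SECS % 3600) / 60 )) min)"
```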
 
 
 
To protect against silent data corruption, the dCache storage system checksums all stored data and periodically verifies the data against this checksum.
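The principle can be sketched as follows; md5sum here merely stands in for whatever checksum algorithm dCache uses internally:

```shell
# Sketch of checksum-based corruption detection: record a checksum at
# write time, recompute it later and compare.
tmp=$(mktemp)
echo "important data" > "$tmp"
stored=$(md5sum "$tmp" | awk '{print $1}')    # checksum recorded at write time
current=$(md5sum "$tmp" | awk '{print $1}')   # periodic re-verification
if [ "$stored" = "$current" ]; then
    echo "checksum OK"
else
    echo "CORRUPTION DETECTED"
fi
rm -f "$tmp"
```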
 
 
 
The dCache system does NOT yet provide protection against user errors such as inadvertent file deletion. The iRODS system does provide this protection: deleted files are moved to a trash can.
 
 
 
== Getting access ==
 
; Apply for storage
 
: Please follow the instructions on the [[Apply for storage on SweStore]] page.
 
 
 
;Difference between dCache and iRODS user authentication
 
:SweStore's dCache system uses eScience client certificates.
 
:SweStore's iRODS system uses [http://www.yubico.com/products/yubikey-hardware/yubikey/ Yubikey] one-time passwords (OTP). With a simple touch of a button, a 44 character one-time password is generated and sent to the system.
 
 
 
; dCache usage - How to acquire an eScience client certificate
 
: Follow the instructions on [[Grid_certificates#Requesting_a_certificate|Requesting a certificate]] to get your client certificate. This step can be performed while waiting for the storage application to be approved and processed. Of course, if you already have a valid eScience certificate you don't need to acquire another one.
 
:; For Terena certificates
 
:: If intending to access SweStore from a SNIC resource, please make sure you also [[Exporting_a_client_certificate|export the certificate]], transfer it to the intended SNIC resource and [[Preparing_a_client_certificate|prepare it for use with grid tools]] (not necessarily needed with ARC 3.x, see [[Grid_certificates#Creating_a_proxy_certificate_using_the_Firefox.2FThunderbird_credential_store|proxy certificates using Firefox credential store]]).
 
:; For Nordugrid certificates
 
:: Please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].
 
:; Request membership in the SweGrid VO
 
:: Follow the instructions on [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|Requesting membership in the SweGrid VO]] to get added to the SweGrid Virtual Organisation (VO) and request membership to your allocated storage project.
 
 
 
; iRODS usage - How to acquire a SweStore YubiKey
 
 
 
To apply for a SweStore YubiKey, please send an email to [mailto:support@swestore.se?subject=Yubikey support@swestore.se] and provide a shipping address to which the YubiKey should be sent.
 
 
 
== Support ==
 
 
 
If you have any issues using SweStore please do not hesitate to contact [mailto:support@swestore.se support@swestore.se].
 
 
 
== dCache ==
 
 
 
=== Access protocols ===
 
; Currently supported protocols
 
: GridFTP - gsiftp://gsiftp.swestore.se/
 
: Storage Resource Manager - srm://srm.swegrid.se/
 
: Hypertext Transfer Protocol (read-only), Web Distributed Authoring and Versioning - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/
 
: NFS4.1
 
 
 
For authentication, eScience certificates are used, which provide a higher level of security than legacy username/password schemes.
 
 
 
=== Download and upload data ===
 
; Interactive browsing and manipulation of single files
 
: SweStore is accessible in your web browser in two ways: as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. '''Note''' that the interactive file manager has many features and functions that are not supported in SweStore; only the basic file transfer features are supported.
 
: To browse private data you need to have your certificate installed in your browser (default with Terena certificates, see above). Projects are organized under the <code>/snic</code> directory as <code><nowiki>https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/</nowiki></code>.
 
; Upload and delete data interactively or with automation
 
There are several tools capable of using the protocols provided by SweStore national storage.

For interactive usage on SNIC clusters we recommend the ARC tools, which should be installed on all SNIC resources.

As an integration point for building scripts and automated systems we suggest the curl program and library.
 
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]]. '''Recommended''' method when logged in on SNIC resources.
 
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].
 
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]].
 
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].
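The linked pages carry the full instructions. As a rough sketch (not taken from those pages), a scripted WebDAV upload with cURL might look like the following; the certificate paths and project name are placeholders, and the command is only printed as a dry run rather than executed:

```shell
# Hypothetical cURL upload over the WebDAV endpoint (dry run).
# -E/--cert and --key select the client certificate, -T uploads a file.
CERT="$HOME/.globus/usercert.pem"     # placeholder path
KEY="$HOME/.globus/userkey.pem"       # placeholder path
URL="https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/"
echo curl -E "$CERT" --key "$KEY" -T mydata.tar.gz "$URL"
```

Dropping the leading echo would perform the actual transfer, given a valid certificate and project path.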
 
 
 
=== Tools and scripts ===
 
 
 
A number of externally developed tools and utilities can be useful. Here are some links:
 
 
 
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the arc client (Only a recursive rmdir so far).
 
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore supported development).
 
* Transfer script, [[SweStore/swetrans_arc|swetrans_arc]], provided by Adam Peplinski / Philipp Schlatter
 
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]
 
 
 
=== Slides and more ===
 
 
 
[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from seminar for Lund users on April 18th]
 
 
 
=== Usage monitoring ===
 
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]
 
 
 
== iRODS ==
 
 
 
 
 
 
 
== Build and install the SNIC iCAT server ==
 
 
 
 
 
=== Build postgres ===
 
 
 
We use postgres version 9.4 with unixodbc and build it from source.
 
 
 
The binaries will be built under <code>/usr/postgres</code>, since the e-irods install script expects the postgres directory tree to be under <code>/usr</code>. The userid <code>postgres</code> will own the directory tree.
 
 
 
Data will be owned by the <code>postgres</code> userid and placed under <code>/postgres</code>, since that is necessary for peaceful co-existence with unixodbc. We will use xfs for the database file system. The file system is created as a RAID6 slice on an array with 10 x 3 TB spindles. We plan to obtain a mirrored pair of disks for the online logs, to keep them separate.
 
 
 
An account and a group called postgres are created. In its home directory there is an install subdirectory with the command line history and the two scripts, <code>postgres-build.sh</code> and <code>unixodbc-build.sh</code>, which are used to build the software. There is also a tarball of the result of make install, to be used on the iCAT hosts.
 
Use:
 
  cd
 
  cd src
 
  tar -xf ../tar/postgresql-9.2.4.tar.gz
 
  cd postgresql-9.2.4
 
  cp ~/install/scripts/postgres-build.sh .
 
  ./postgres-build.sh 2>&1 | tee -a postgres-build.log
 
  cd
 
  cd src
 
  tar -xf ../tar/unixODBC-2.3.1.tar.gz
 
  cd unixODBC-2.3.1
 
  ./unixodbc-build.sh  2>&1 | tee -a unixodbc-build.log
 
This should create everything under <code>/usr/postgres</code>. A tarball is manually created under <code>~postgres/install/tar</code> as <code>postgres-9.4-snic-build.tar.gz</code>.
 
 
 
The database will run on snic-irods, so the <code>/usr/postgres</code> tree is transplanted from snic-irods-mgmt using the tarball.
 
 
 
 
 
=== Running postgres on the iCAT server ===
 
 
 
The userid <code>postgres</code> is added to the puppet manifests.
 
 
 
Puppet manifests will also need to be amended with mount options for the <code>/postgres</code> file system.
 
 
 
Create the data file system manually as:
 
  mkfs.xfs -b size=4096 -d su=64k,sw=8 -l size=64m,su=64k \
 
        -i maxpct=1 -L /pgsql /dev/xvdc
 
 
 
Logged as:
 
<pre>
 
meta-data=/dev/xvdc              isize=256    agcount=32, agsize=8388592 blks
 
        =                      sectsz=512  attr=2, projid32bit=0
 
data    =                      bsize=4096  blocks=268434944, imaxpct=1
 
        =                      sunit=16    swidth=128 blks
 
naming  =version 2              bsize=4096  ascii-ci=0
 
log      =internal log          bsize=4096  blocks=16384, version=2
 
        =                      sectsz=512  sunit=16 blks, lazy-count=1
 
realtime =none                  extsz=4096  blocks=0, rtextents=0
 
</pre>
 
 
 
Mount the file system with the options:
 
  logbufs=8,logbsize=256k,largeio,inode64
 
so update <code>/etc/fstab</code> accordingly.
 
 
 
Mount it:
 
  mkdir /postgres
 
  mount -a
 
  chown postgres.postgres /postgres
 
  chmod g+w /postgres
 
  chmod g+s /postgres
 
 
 
For unknown reasons the default umask is 002; it might be changed to 022. Create a profile for the postgres user as:
 
 
 
<pre>
 
# Add postgres stuff.
 
export PATH=$PATH:/usr/postgres/bin
 
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/postgres/lib
 
export MANPATH=/usr/share/man:/usr/postgres/share/man
 
 
 
# Fix umask.
 
umask 022
 
 
 
# Simple prompt.
 
PS1="$ "
 
export PS1
 
 
 
# Colourless and odorless.
 
alias ls='ls --color=never'
 
</pre>
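The effect of the umask change can be checked directly: with umask 022, new files are created with mode 644 instead of the group-writable 664 that umask 002 gives:

```shell
# Compare file modes created under the two umasks (GNU stat assumed).
d=$(mktemp -d)
( umask 022; touch "$d/f022" )     # 666 & ~022 = 644
( umask 002; touch "$d/f002" )     # 666 & ~002 = 664
stat -c '%a' "$d/f022" "$d/f002"   # prints 644 then 664
rm -rf "$d"
```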
 
 
 
Change limits in /etc/security/limits.conf as:
 
<pre>
 
postgres        -      core            unlimited
 
postgres        -      data            unlimited
 
postgres        -      fsize          unlimited
 
postgres        -      memlock        unlimited
 
postgres        -      nofile          8192
 
postgres        -      rss            unlimited
 
postgres        -      stack          unlimited
 
postgres        -      cpu            unlimited
 
postgres        -      nproc          2048
 
postgres        -      as              unlimited
 
postgres        -      msgqueue        unlimited
 
 
 
eirods          -      core            unlimited
 
eirods          -      data            unlimited
 
eirods          -      fsize          unlimited
 
eirods          -      memlock        unlimited
 
eirods          -      nofile          8192
 
eirods          -      rss            unlimited
 
eirods          -      stack          unlimited
 
eirods          -      cpu            unlimited
 
eirods          -      nproc          2048
 
eirods          -      as              unlimited
 
eirods          -      msgqueue        unlimited
 
</pre>
 
 
 
 
 
=== xinetd configuration for authd ===
 
 
 
Change <code>/etc/xinetd.d/auth</code> as:
 
<pre>
 
service auth
 
{
 
disable = no
 
        socket_type    = stream
 
        wait            = no
 
        user            = ident
 
        cps            = 4096 10
 
        instances      = UNLIMITED
 
        server          = /usr/sbin/in.authd
 
        server_args    = -t60 --xerror --os
 
}
 
</pre>
 
 
 
 
 
=== Create the database ===
 
 
 
Make sure the locale is set to UTF8.
 
Use:
 
  /bin/su - postgres
 
  mkdir /postgres/icats01 /postgres/icats01/data /postgres/icats01/log
 
  chmod g+w /postgres/icats01 /postgres/icats01/data /postgres/icats01/log
 
  initdb -D /postgres/icats01/data -X /postgres/icats01/log \
 
    -E UTF8 --locale=UTF8 -U postgres
 
 
 
Get the startup script as:
 
  cp ~postgres/install/postgresql /etc/init.d/postgresql
 
 
 
As root:
 
  /etc/init.d/postgresql start
 
 
 
 
 
=== Changes relative to e-iRODS 3.0 ===
 
 
 
A git repository of the original tree for e-iRODS 3.0 had been created by Andreas. It is accessible as <code>ssh://datil.nsc.liu.se/srv/git/e-irods.git</code>. It is accessed via <code>ssh</code>, so first, on the computer which has the ssh keys:
 
  ssh-add
 
  ssh -A snic-irods-mgmt
 
Then
 
  git clone ssh://datil.nsc.liu.se/srv/git/e-irods.git
 
 
 
Various options are changed in <code>./iRODS/config/config.mk</code>. That file is generated, however; the file to change is config.mk.in.
 
  vi ./iRODS/config/config.mk.in
 
 
 
Define <code>POSTGRES_HOME</code> here, as it seems to be an option here as well.
 
 
 
Enable syslog logging (this will also switch off file logging).
 
<pre>
 
IRODS_SYSLOG = 1
 
</pre>
 
 
 
PAM authentication. This is now default, no need to change.
 
<pre>
 
PAM_AUTH = 1
 
</pre>
 
 
 
Location of the executable; it should be read-only for iRODS.
 
<pre>
 
PAM_AUTH_CHECK_PROG=/usr/sbin/PamAuthCheck
 
</pre>
 
 
 
This gives unique .irodsA credential tokens per session, with a TTL defaulting to 8 hours instead of two weeks, making it more difficult for an attacker to use a stolen .irodsA file. Change platform.mk to add a CFLAG for the previous define.
 
<pre>
 
PAM_AUTH_NO_EXTEND = 1
 
</pre>
 
 
 
This enables encryption of the communication channel between client and server when sending the PAM password. Now on by default.
 
<pre>
 
USE_SSL = 1
 
</pre>
 
 
 
Suggestion from Pontus.
 
<pre>
 
UNI_CODE = 1
 
</pre>
 
 
 
Update the version number in the file ./packaging/VERSION, like:
 
<pre>
 
EIRODSVERSION=3.0.1b1-snic2
 
</pre>
 
  vi ./packaging/VERSION
 
  git commit -m "Customized local version" ./packaging/VERSION
 
 
 
  git commit -m "Our config changes" config.mk.in
 
 
 
 
 
We had left this alone since, reviewing the source, we believed it was fixed. But it is not. In the script <code>eirods_setup.pl</code>, in the function definition startIrods, the call to the function "run" that runs a script to start iRODS gets stuck. Replace that call with the "system" call (it does not return diagnostics output, but works).
 
  vi ./iRODS/scripts/perl/eirods_setup.pl
 
  git commit -m "Startup changed in function startIrods" ./iRODS/scripts/perl/eirods_setup.pl
 
 
 
In <code>./iRODS/server/core/include/rodsServer.h</code>, raise the number of connections MAX_SVR_SVR_CONNECT_CNT to some number above a hundred at least. Also raise <code>NUM_READ_WORKER_THR</code>.
 
  vi ./iRODS/server/core/include/rodsServer.h
 
  git commit -m "Raising resource limits" ./iRODS/server/core/include/rodsServer.h
 
 
 
In <code>./iRODS/scripts/perl/irodsctl.pl</code> the iRODS server is changed to always start up in the background.
 
  vi ./iRODS/scripts/perl/irodsctl.pl
 
  git commit -m "Start always in the background and wait" ./iRODS/scripts/perl/irodsctl.pl
 
 
 
More extensive changes are needed in <code>./packaging/postinstall.sh</code>.
 
  vi ./packaging/postinstall.sh
 
Running a command with su requires <code>-</code> to pick up the environment of that user. Put cd inside the -c command string.
    :%s/su --shell/\/bin\/su - --shell/
  git commit -m "Run su with -" ./packaging/postinstall.sh
 
 
 
In <code>./packaging/build.sh</code>, find Postgresql installed under <code>/usr/postgres</code>.
 
  vi ./packaging/build.sh
 
  git commit -m "Find Postgresql under /usr" ./packaging/build.sh
 
 
 
In the template listing of package files, add a profile with postgres access (a copy of the postgres profile can also be used). The profile content is:
 
<pre>
 
export PATH=$PATH:/usr/postgres/bin
 
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/postgres/lib
 
export MANPATH=/usr/share/man:/usr/postgres/share/man
 
umask 022
 
</pre>
 
 
 
Add the file itself like:
 
  cat >packaging/eirods-dot-profile  # paste the profile content above, end with Ctrl-D
 
  git add ./packaging/eirods-dot-profile
 
  git commit -m "New file profile" ./packaging/eirods-dot-profile
 
  vi ./packaging/eirods.list.template
 
  git commit -m "Profile addedd" ./packaging/eirods.list.template
 
 
 
The startup script is broken; fix it.
 
  vi ./packaging/eirods
 
  git commit -m "Startup script fixed" ./packaging/eirods
 
 
 
Finish.
 
  git commit
 
  git push
 
 
 
 
 
=== Building the packages ===
 
 
 
An additional dependency is:
 
  yum install fuse-devel
 
 
 
Run the build shell script to build the iCAT server package.
 
  cd
 
  rm -rf rpmbuild
 
  cd src/e-irods
 
  cd packaging
 
  ./build.sh icat postgres 2>&1 | tee -a eirods-build.log
 
 
 
For the resource server package:
 
  cd
 
  cd src/e-irods
 
  cd packaging
 
  ./build.sh resource 2>&1 | tee -a eirods-build.log
 
 
 
Packages are created in:
 
  cd build
 
 
 
 
 
=== To uninstall ===
 
 
 
Run:
 
  rpm -e eirods-3.0.1b1-0.x86_64
 
  /bin/su - postgres -c 'dropdb EICAT; dropuser eirods;'
 
  /usr/sbin/userdel eirods
 
 
 
 
 
=== Install the eirods packages ===
 
 
 
As root, run rpm -i --nodeps to install the package which was built on the management node.
 
 
 
Get the environment for postgres, since the install script does not do /bin/su - to run .profile.
 
  . ~postgres/.profile
 
  export PS1='# '
 
 
 
Install the package.
 
  cd /root/kits/eirods-3.0.1b1/rpm
 
  rpm -vv -i --nodeps eirods-3.0.1b1-snic2-64bit-icat-postgres-centos6.rpm
 
  rpm -vv -i eirods-dev-3.0.1b1-snic2-64bit-centos6.rpm
 
 
 
Check the output, because the install finishes even when the postinstall script fails.
 
 
 
Check the log:
 
  view /var/lib/eirods/iRODS/installLogs/eirods_setup.log
 
 
 
If the postinstall script fails, re-run it like:
  /var/lib/eirods/packaging/postinstall.sh /var/lib/eirods eirods icat postgres postgres EICAT localhost 5432 eirods lly2YOjjPEHkM0p
 
 
 
 
 
=== Modify the default zone and set password ===
 
 
 
Run:
 
  /bin/su - eirods
 
  iadmin modzone tempZone name snicZone
 
 
 
Confirm local zone change.
 
 
 
Change the iRODS environment file accordingly.
 
  vi .irods/.irodsEnv
 
Replace tempZone with snicZone everywhere.
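The replacement can also be done with a sed one-liner instead of vi. The sketch below operates on a scratch copy, since the real file is ~/.irods/.irodsEnv:

```shell
# Replace every occurrence of tempZone with snicZone (GNU sed assumed).
env_file=$(mktemp)
printf "irodsZone 'tempZone'\nirodsCwd '/tempZone/home/rods'\n" > "$env_file"
sed -i 's/tempZone/snicZone/g' "$env_file"
grep -c snicZone "$env_file"   # both lines now mention snicZone
rm -f "$env_file"
```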
 
 
 
Modify password:
 
  iadmin moduser rods password password
 
  iexit full
 
  iinit
 
 
 
 
 
=== Client install ===
 
 
 
The client has the following prerequisites:
 
  yum -y install openssl098e
 
  yum -y install postgresql-odbc
 
  yum -y install fuse fuse-libs
 
 
 
Install the package:
 
  rpm -vv -i kits/eirods-3.0.1b1/eirods-3.0.1b1-snic2-64bit-resource-centos6.rpm
 
 
 
Run setup:
 
  /bin/su - eirods
 
  ./packaging/setup_resource.sh
 
  snic-irods.nsc.liu.se
 
  1247
 
  snicZone
 
  rods
 
  password
 
 
 
=== Create resources ===
 
 
 
Create directories from admin workstation:
 
 
 
  ssh root@snic-sr-001 mkdir /snic-sr-001/Vault
 
  ssh root@snic-sr-002 mkdir /snic-sr-002/Vault
 
  ssh root@snic-sr-003 mkdir /snic-sr-003/Vault
 
  ssh root@snic-sr-004 mkdir /snic-sr-004/Vault
 
 
 
  ssh root@snic-sr-001 chown eirods.eirods /snic-sr-001/Vault
 
  ssh root@snic-sr-002 chown eirods.eirods /snic-sr-002/Vault
 
  ssh root@snic-sr-003 chown eirods.eirods /snic-sr-003/Vault
 
  ssh root@snic-sr-004 chown eirods.eirods /snic-sr-004/Vault
 
 
 
Create resources on the iCAT server:
 
 
 
  iadmin mkresc snic-sr-001-lfs01 "unix file system" snic-sr-001.nsc.liu.se:/snic-sr-001/Vault
 
  iadmin mkresc snic-sr-002-lfs01 "unix file system" snic-sr-002.nsc.liu.se:/snic-sr-002/Vault
 
  iadmin mkresc snic-sr-003-lfs01 "unix file system" snic-sr-003.nsc.liu.se:/snic-sr-003/Vault
 
  iadmin mkresc snic-sr-004-lfs01 "unix file system" snic-sr-004.nsc.liu.se:/snic-sr-004/Vault
 
 
 
Create directories from the admin workstation:
 
 
 
  irods-do ssh "/bin/su - eirods -c \"mkdir /swestore/eirods\""
 
  irods-do ssh "/bin/su - eirods -c \"mkdir /swestore/eirods/Vault\""
 
 
 
Create nfs resources:
 
 
 
  iadmin mkresc snic-sr-001-nfs01 "unix file system" snic-sr-001.nsc.liu.se:/swestore/eirods/Vault
 
  iadmin mkresc snic-sr-002-nfs01 "unix file system" snic-sr-002.nsc.liu.se:/swestore/eirods/Vault
 
  iadmin mkresc snic-sr-003-nfs01 "unix file system" snic-sr-003.nsc.liu.se:/swestore/eirods/Vault
 
  iadmin mkresc snic-sr-004-nfs01 "unix file system" snic-sr-004.nsc.liu.se:/swestore/eirods/Vault
 
 
 
 
 
=== Test composable resource replication ===
 
 
 
Create composable test resource:
 
  iadmin mkresc tmp1 replication
 
  iadmin mkresc tmp11 "unix file system" snic-sr-001.nsc.liu.se:/tmp/Vault1
 
  iadmin mkresc tmp12 "unix file system" snic-sr-002.nsc.liu.se:/tmp/Vault2
 
  iadmin addchildtoresc tmp1 tmp11
 
  iadmin addchildtoresc tmp1 tmp12
 
 
 
 
 
=== Create default resource ===
 
 
 
  /bin/su - eirods
  iadmin mkresc snicdefResc "unix file system" snic-sr-001.nsc.liu.se:/swestore/eirods/snicdefResc/Vault
 
 
 
 
 
=== Accessing files as another user ===
 
 
 
Set environment variable:
 
  export clientUserName=user
 
 
 
Then you can run icommands as that user.
 
 
 
 
 
=== Setting up project hierarchy ===
 
 
 
As eirods user run:
 
  imkdir /snicZone/proj/p1
 
  iadmin mkgroup p1
 
  iadmin atg p1 u1
 
  ichmod -r read p1 /snicZone/proj/p1
 
  ichmod -r write p1 /snicZone/proj/p1
 
  ichmod -r inherit /snicZone/proj/p1
 
Any member of p1 should be able to read and write everything under the project group directory tree.
 
 
 
 
 
=== Security fix ===
 
 
 
Do:
 
  /bin/su - eirods
 
  cd
 
  cd iRODS/server/bin/cmd
 
  mkdir -p /var/lib/eirods/save/cmd-removed
 
  mv hello test_execstream.py univMSSInterface.sh /var/lib/eirods/save/cmd-removed/
 
 
 
== Supported clients ==
 
 
 
: iDrop web - Point your Web browser to [https://iweb.swestore.se iweb.swestore.se]
 
: E-iRODS iCommands - Command line client [http://eirods.org/download/ Download E-iRODS icommands]
 
 
 
SweStore iRODS uses PAM authentication and SweStore yubikeys. With a simple touch of a button, a 44 character one-time password is generated and sent to the system.
 
 
 
The community iRODS client should also work with PAM authentication, given e.g. the following changes to the Makefile iRODS/config/config.mk and a recompile:
 
<pre>
 
PAM_AUTH = 1
 
PAM_AUTH_NO_EXTEND = 1
 
USE_SSL = 1
 
</pre>
 
 
 
=== SweStore iRODS usage documentation  ===
 
 
 
To use the system you need to have the E-iRODS command line client installed, or you can use iDROP web.
 
 
 
==== Command line client ====
 
 
 
For Linux systems the iRODS command line client is available as an installable package for various Linux platforms from the e-iRODS website downloads section.
 
 
 
The command line client is natural to use for Unix users. There are versions of the usual ls, rm, mv, mkdir, pwd and rsync commands prefixed with an i for iRODS, i.e. irm, imv, imkdir etc.
 
 
 
As expected, iput and iget move files to and from the iRODS system. All these commands print a short help text when given the -h option.
 
 
 
===== iCommands environment file =====
 
 
 
There is an environment file .irodsEnv in the .irods subdirectory of the home directory ($HOME/.irods/.irodsEnv), which contains information about where and how to access the iRODS metadata (iCAT) server.
 
 
 
It looks like (placeholders are in <>):
 
<pre>
 
irodsHost 'irods.swestore.se'
 
irodsPort 1247
 
irodsDefResource 'snicdefResc'
 
irodsHome '/snicZone/home/<email address>'
 
irodsCwd '/snicZone/home/<email address>'
 
irodsUserName '<email address>'
 
irodsZone 'snicZone'
 
irodsAuthScheme 'PAM'
 
</pre>
 
 
 
The iCAT server is irods.swestore.se.
 
The default irods zone name is snicZone.
 
The default resource is snicdefResc.
 
 
 
With the correct environment file, all we need is a Yubikey; we can then run the iinit command to authenticate to the iCAT server. After that we can use the usual iCommands for 8 hours.
 
 
 
More details on the iCommands are available at
 
https://www.irods.org/index.php/icommands
 
 
 
===== Using iCommands on SNIC HPC clusters =====
 
 
 
On SNIC clusters the icommands command line tools are either available in the PATH directly or after adding the irods module, e.g.
: module load irods
We also need to set up the iCommands environment file $HOME/.irods/.irodsEnv.
 
 
 
==== iDROP web client ====
 
 
 
The web client is accessible via the URL https://iweb.swestore.se/. A login screen will be presented first, and your Yubikey should be used to log in.
 
 
 
==== Upstream documentation ====
 
Detailed documentation, papers and resources are available from the [http://www.eirods.org E-iRODS web site].
 
 
 
[http://www.irods.org Community iRODS]
 
 
 
[https://groups.google.com/d/forum/irod-chat User forum]
 
 
 
= Centre storage =
 
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.
 
 
 
== Unified environment ==
 
To make the usage more transparent for SNIC users, a set of environment variables are available on all SNIC resources:
 
 
 
* <code>SNIC_BACKUP</code> – the user's primary directory at the centre<br>(the part of the centre storage that is backed up)
 
* <code>SNIC_NOBACKUP</code> – recommended directory for project storage without backup<br>(also on the centre storage)
 
* <code>SNIC_TMP</code> – recommended directory for best performance during a job<br>(local disk on nodes if applicable)
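A minimal job-script sketch using these variables might look like the following; the fallback values are hypothetical and only let the sketch run outside a SNIC resource:

```shell
# Hypothetical job-script skeleton: compute in fast local scratch,
# keep results on centre storage. Fallbacks are placeholders only.
SNIC_TMP="${SNIC_TMP:-/tmp}"
SNIC_NOBACKUP="${SNIC_NOBACKUP:-$HOME/nobackup}"
WORK="$SNIC_TMP/job.$$"
mkdir -p "$WORK"
echo "computing in $WORK, results would be copied to $SNIC_NOBACKUP"
rm -rf "$WORK"
```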
 

Latest revision as of 13:14, 8 February 2023