<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://snicdocs.nsc.liu.se/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jonas+Lindemann+%28LUNARC%29</id>
	<title>SNIC Documentation - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://snicdocs.nsc.liu.se/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jonas+Lindemann+%28LUNARC%29"/>
	<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/wiki/Special:Contributions/Jonas_Lindemann_(LUNARC)"/>
	<updated>2026-04-14T18:41:57Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.10</generator>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6230</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6230"/>
		<updated>2016-04-04T12:41:40Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Deliverables ==&lt;br /&gt;
&lt;br /&gt;
* '''[http://next-generation-hpc-desktop.readthedocs.org/en/latest/ Report: Next Generation HPC Desktop]'''&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
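&lt;br /&gt;
As an illustration of the changelog-driven approach above, the following minimal Python sketch (illustrative only, not the actual Littlejohn code) shows how an accounting database could be kept up to date from a Lustre changelog. The MDT name lustre-MDT0000, the reader id cl1, the SQLite schema and the record handling are assumptions for illustration; lfs changelog and lfs changelog_clear are standard Lustre client commands.&lt;br /&gt;
&lt;pre&gt;
import sqlite3
import subprocess

MDT = "lustre-MDT0000"  # assumed MDT name
READER = "cl1"          # assumed registered changelog reader id

def read_changelog():
    """Yield parsed changelog records via the standard 'lfs changelog' tool."""
    out = subprocess.run(["lfs", "changelog", MDT],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        parts = line.split()
        if parts:
            yield parts

def update_database(db):
    """Record consumed changelog entries, then acknowledge them.

    A real quota tool would map FIDs to paths, owners and sizes here;
    this sketch only stores the record id and type of each event.
    """
    last_id = 0
    for rec in read_changelog():
        db.execute("INSERT OR IGNORE INTO events (id, type) VALUES (?, ?)",
                   (int(rec[0]), rec[1]))
        last_id = int(rec[0])
    db.commit()
    if last_id:
        # Tell the MDT that this reader has consumed records up to last_id,
        # so Lustre can purge processed changelog entries.
        subprocess.run(["lfs", "changelog_clear", MDT, READER, str(last_id)],
                       check=True)

db = sqlite3.connect("quota.db")
db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, type TEXT)")
update_database(db)
&lt;/pre&gt;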
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions in an effort to give users a richer HPC experience. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily.&lt;br /&gt;
However, the growing user base and the requests for new applications and usages have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks under different usage patterns and desktop configurations, such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware-accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch system, in a transparent way, within the same desktop environment.&lt;br /&gt;
* Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
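&lt;br /&gt;
As a starting point for the monitoring points above, the following minimal Python sketch samples server load and counts active desktop sessions. It assumes VNC-based desktop servers where each session runs an Xvnc process; the process name and the CSV output are illustrative assumptions, while /proc/loadavg and pgrep are standard on Linux.&lt;br /&gt;
&lt;pre&gt;
import csv
import subprocess
import time

def sample():
    """Take one sample of 1-minute load average and active desktop sessions."""
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    # Count running VNC servers; 'Xvnc' is an assumed process name and
    # would differ for other remote-desktop back-ends.
    p = subprocess.run(["pgrep", "-c", "-x", "Xvnc"],
                       capture_output=True, text=True)
    sessions = int(p.stdout or 0)
    return time.time(), load1, sessions

# Append one sample per invocation, e.g. from cron.
with open("desktop_usage.csv", "a", newline="") as out:
    csv.writer(out).writerow(sample())
&lt;/pre&gt;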
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience remains good. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable, OS-independent, hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes (a sketch follows this list).&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
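&lt;br /&gt;
As a minimal sketch of such an on-demand allocation mechanism, the snippet below submits a SLURM job that reserves a GPU-equipped node for a desktop session and reports the allocated node. The partition name gpu, the gres string and the session script vncsession.sh are assumptions for illustration; sbatch, squeue and the flags used are standard SLURM.&lt;br /&gt;
&lt;pre&gt;
import subprocess
import time

def allocate_session(minutes=60):
    """Reserve one GPU-equipped node for a desktop session via SLURM."""
    # Partition name, gres string and session script are assumed values.
    out = subprocess.run(
        ["sbatch", "--parsable", "--partition=gpu", "--gres=gpu:1",
         "--time=%d" % minutes, "vncsession.sh"],
        capture_output=True, text=True, check=True)
    job_id = out.stdout.strip().split(";")[0]
    # Poll until the job is running, then report the allocated node.
    while True:
        q = subprocess.run(["squeue", "-h", "-j", job_id, "-o", "%T %N"],
                           capture_output=True, text=True, check=True)
        state, _, node = q.stdout.strip().partition(" ")
        if state == "RUNNING":
            return job_id, node
        time.sleep(5)

job_id, node = allocate_session()
print("Desktop session job", job_id, "running on node", node)
&lt;/pre&gt;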
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has formed the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* An NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* An Intel Xeon Clearwell system for evaluating graphics support in this architecture. It could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* A commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project, and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs, such as the K40, in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80 system, which will be installed in Erik. (Availability mid-February 2015)&lt;br /&gt;
&lt;br /&gt;
HPC2N is installing Xeon Phis in its existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6229</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6229"/>
		<updated>2016-04-04T12:41:13Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Deliverables ==&lt;br /&gt;
&lt;br /&gt;
* '''[http://next-generation-hpc-desktop.readthedocs.org/en/latest/ Report: Next Generation HPC Desktop]'''&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions in an effort to give users a richer HPC experience. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily.&lt;br /&gt;
However, the growing user base and the requests for new applications and usages have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks under different usage patterns and desktop configurations, such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware-accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch system, in a transparent way, within the same desktop environment.&lt;br /&gt;
* Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience remains good. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable, OS-independent, hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has formed the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* An NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* An Intel Xeon Clearwell system for evaluating graphics support in this architecture. It could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* A commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project, and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs, such as the K40, in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80 system, which will be installed in Erik. (Availability mid-February 2015)&lt;br /&gt;
&lt;br /&gt;
HPC2N is installing Xeon Phis in its existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6228</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6228"/>
		<updated>2016-04-04T12:38:19Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
'''Deliverable (Report): [http://next-generation-hpc-desktop.readthedocs.org/en/latest/ Next Generation HPC Desktop]'''&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions in an effort to give users a richer HPC experience. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily.&lt;br /&gt;
However, the growing user base and the requests for new applications and usages have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks under different usage patterns and desktop configurations, such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware-accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch system, in a transparent way, within the same desktop environment.&lt;br /&gt;
* Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience remains good. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable, OS-independent, hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has formed the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* An NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* An Intel Xeon Clearwell system for evaluating graphics support in this architecture. It could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* A commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project, and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs, such as the K40, in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80 system, which will be installed in Erik. (Availability mid-February 2015)&lt;br /&gt;
&lt;br /&gt;
HPC2N is installing Xeon Phis in its existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6037</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6037"/>
		<updated>2015-02-04T13:14:25Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Future HPC and Accelerators (PDC, Lunarc, HPC2N) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions in an effort to give users a richer HPC experience. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily.&lt;br /&gt;
However, the growing user base and the requests for new applications and usages have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks under different usage patterns and desktop configurations, such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware-accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch system, in a transparent way, within the same desktop environment.&lt;br /&gt;
* Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience remains good. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable, OS-independent, hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has formed the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* An NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* An Intel Xeon Clearwell system for evaluating graphics support in this architecture. It could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* A commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project, and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs, such as the K40, in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80 system, which will be installed in Erik. (Availability mid-February 2015)&lt;br /&gt;
&lt;br /&gt;
HPC2N is installing Xeon Phis in its existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6036</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6036"/>
		<updated>2015-02-04T09:35:42Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Future HPC and Accelerators (PDC, Lunarc, HPC2N) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions in an effort to give users a richer HPC experience. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily.&lt;br /&gt;
However, the growing user base and the requests for new applications and usages have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks under different usage patterns and desktop configurations, such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware-accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch system, in a transparent way, within the same desktop environment.&lt;br /&gt;
* Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience remains good. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable, OS-independent, hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has formed the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* An NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* An Intel Xeon Clearwell system for evaluating graphics support in this architecture. It could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* A commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project, and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs, such as the K40, in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80 system, which will be installed in Erik. (Availability mid-February 2015)&lt;br /&gt;
HPC2N is installing Xeon Phis in its existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6035</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6035"/>
		<updated>2015-02-04T09:32:13Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Future HPC and Accelerators (PDC, Lunarc, HPC2N) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions in an effort to give users a richer HPC experience. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily.&lt;br /&gt;
However, the growing user base and the requests for new applications and usages have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks under different usage patterns and desktop configurations, such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware-accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch system, in a transparent way, within the same desktop environment.&lt;br /&gt;
* Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience remains good. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable, OS-independent, hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has formed the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* An NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* An Intel Xeon Clearwell system for evaluating graphics support in this architecture. It could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* A commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project, and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs, such as the K40, in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80, which will be installed in Erik. (Availability mid-February 2015)&lt;br /&gt;
HPC2N is installing Xeon Phis in its existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6034</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6034"/>
		<updated>2015-02-04T09:27:47Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies, and some also buy hardware for evaluation. However, user involvement in many of these activities is limited, and efforts are often duplicated between centres. This activity aims to coordinate these efforts and this competence within SNIC, and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been identified and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close contact with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to using existing solutions can provide better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D, in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas are provided through Littlejohn, which has been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The tool is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
RobinHood and TSM integration: the changelog-based approach is active, but progress has been limited. This is an ongoing activity on the Lustre mailing list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions to users in an effort to provide a richer user experience in HPC. The service has been a great success in the research groups and the number of users is increasing. Requests for new applications and usage are increasing steadily. &lt;br /&gt;
However, the increasing user base and request for new applications and usages has raised questions on how to efficiently scale these services in many dimensions.&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks using different usage patterns and desktop configurations such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch systems in a transparent way in the same desktop environment.&lt;br /&gt;
&lt;br /&gt;
A further task is identifying bottlenecks in remote desktop architectures; the sketch below illustrates basic load monitoring of a desktop server.&lt;br /&gt;
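&lt;br /&gt;
A minimal load-monitoring sketch, assuming the psutil package is available on the desktop servers; the thresholds and the notion of an overloaded server are illustrative assumptions, not a prescribed policy:&lt;br /&gt;
&lt;br /&gt;
 # Minimal sketch using psutil (an assumption; any monitoring agent would do).&lt;br /&gt;
 import psutil&lt;br /&gt;
 &lt;br /&gt;
 def server_status():&lt;br /&gt;
     # sample CPU utilisation over one second and read memory pressure&lt;br /&gt;
     return {&lt;br /&gt;
         "cpu": psutil.cpu_percent(interval=1),&lt;br /&gt;
         "mem": psutil.virtual_memory().percent,&lt;br /&gt;
         "sessions": len(psutil.users()),  # logged-in desktop sessions&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
 def overloaded(status, cpu_limit=85.0, mem_limit=90.0):&lt;br /&gt;
     # illustrative thresholds for steering new sessions elsewhere&lt;br /&gt;
     return status["cpu"] &amp;gt; cpu_limit or status["mem"] &amp;gt; mem_limit&lt;br /&gt;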
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that a responsive user experience can be maintained. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable and OS-independent hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes (see the sketch after this list).&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
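&lt;br /&gt;
As a first sketch of such an allocation mechanism, the snippet below wraps SLURM directly: salloc holds a per-session allocation and srun starts an interactive application on the allocated node. The partition name and the launched command are assumptions for illustration:&lt;br /&gt;
&lt;br /&gt;
 # Minimal sketch; assumes a 'desktop' partition exists and that the&lt;br /&gt;
 # application command is a placeholder for a session starter.&lt;br /&gt;
 import subprocess&lt;br /&gt;
 &lt;br /&gt;
 def start_session(app_cmd, gres="gpu:1", walltime="02:00:00"):&lt;br /&gt;
     # salloc keeps the allocation for the lifetime of the command;&lt;br /&gt;
     # srun places the application on the allocated node&lt;br /&gt;
     cmd = ["salloc", "--nodes=1", "--gres=" + gres,&lt;br /&gt;
            "--time=" + walltime, "--partition=desktop",&lt;br /&gt;
            "srun", "--pty"] + app_cmd&lt;br /&gt;
     return subprocess.run(cmd)&lt;br /&gt;
 &lt;br /&gt;
 # e.g. start_session(["glxgears"]) as a simple accelerated-graphics test&lt;br /&gt;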
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has been the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* Intel Xeon Clearwell system for evaluating graphics support in this architecture. This could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* Commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs (K40 and others) in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80, which will be installed in Erik. (Availability XX-XX-XX)&lt;br /&gt;
HPC2N is installing Xeon Phis in the existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6033</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6033"/>
		<updated>2015-02-04T09:27:28Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined and are described in the following sections:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies (C3SE, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close collaboration with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to existing solutions can give users better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas through Littlejohn have been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The quota information is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
The RobinHood and TSM integration, based on the changelog mechanism, is active, but progress has been limited. It is tracked as an activity on the Lustre list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation (Lunarc, UPPMAX) ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions to users in an effort to provide a richer user experience in HPC. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily. &lt;br /&gt;
However, the increasing user base and the requests for new applications and usage have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks using different usage patterns and desktop configurations such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch systems in a transparent way in the same desktop environment.&lt;br /&gt;
&lt;br /&gt;
A further task is identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that a responsive user experience can be maintained. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable and OS-independent hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has been the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* Intel Xeon Clearwell system for evaluating graphics support in this architecture. This could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* Commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators (PDC, Lunarc, HPC2N) ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs (K40 and others) in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80, which will be installed in Erik. (Availability XX-XX-XX)&lt;br /&gt;
HPC2N is installing Xeon Phis in the existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6032</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6032"/>
		<updated>2015-02-04T09:22:39Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close collaboration with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to existing solutions can give users better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
User and group quotas through Littlejohn have been tested in operation. The tool scans the file system and then uses changelogs to keep its database up to date. A bug in Lustre's changelog handling has since been discovered. The quota information is to be integrated for better project reporting.&lt;br /&gt;
&lt;br /&gt;
The RobinHood and TSM integration, based on the changelog mechanism, is active, but progress has been limited. It is tracked as an activity on the Lustre list.&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions to users in an effort to provide a richer user experience in HPC. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily. &lt;br /&gt;
However, the increasing user base and the requests for new applications and usage have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks using different usage patterns and desktop configurations such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch systems in a transparent way in the same desktop environment.&lt;br /&gt;
&lt;br /&gt;
A further task is identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that a responsive user experience can be maintained. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
* Implement prototypes for providing a scalable and OS-independent hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has been improving and testing the configuration of the existing Lunarc HPC Desktop. Knowledge from this work has been the basis for the design of the upcoming prototype hardware. Prototype hardware for evaluating future desktop environments has been procured from SouthPole. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* Intel Xeon Clearwell system for evaluating graphics support in this architecture. This could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* Commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Future HPC and Accelerators ==&lt;br /&gt;
&lt;br /&gt;
For future investments in computing resources within SNIC in the coming years (three large and several smaller systems), it is important to maintain up-to-date specialist knowledge about different types of existing and future CPU architectures. To obtain a good basis for future decisions on HPC resources, it is important that the work done at the centres, the SNIC GPU project and the activities within PRACE are continued and developed further.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Evaluate the use of next-generation GPUs (K40 and others) in the context of future SNIC resources. (Lunarc, HPC2N)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Zorn GPU development resource. (PDC)&lt;br /&gt;
* Operate and provide support for SNIC's investment in the Erik GPU development resource. (Lunarc)&lt;br /&gt;
* Evaluate the use of next-generation Xeon Phi accelerators (Knights Landing). (Lunarc, HPC2N)&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Lunarc has procured an NVIDIA K80, which will be installed in Erik. (Availability XX-XX-XX)&lt;br /&gt;
HPC2N is installing Xeon Phis in the existing resources. (Availability XX-XX-XX)&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6031</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6031"/>
		<updated>2015-02-04T08:42:09Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close collaboration with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to existing solutions can give users better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions to users in an effort to provide a richer user experience in HPC. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily. &lt;br /&gt;
However, the increasing user base and the requests for new applications and usage have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks using different usage patterns and desktop configurations such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch systems in a transparent way in the same desktop environment.&lt;br /&gt;
&lt;br /&gt;
A further task is identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that a responsive user experience can be maintained. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
&lt;br /&gt;
Access to hardware accelerated graphics:&lt;br /&gt;
&lt;br /&gt;
* Implement prototypes for providing a scalable and OS-independent hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
&lt;br /&gt;
Interactive use of graphical user interfaces:&lt;br /&gt;
&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing activities'''&lt;br /&gt;
&lt;br /&gt;
Prototype hardware for evaluating future desktop environments has been procured at Lunarc. The hardware consists of:&lt;br /&gt;
&lt;br /&gt;
* NVIDIA K1/K2 evaluation system for investigating an on-demand desktop solution with accelerated graphics support. The solutions evaluated will be hypervisor-based, using SLURM to allocate sessions.&lt;br /&gt;
* Intel Xeon Clearwell system for evaluating graphics support in this architecture. This could be used to provide cost-effective desktop solutions for HPC resources.&lt;br /&gt;
* Commodity graphics card in a server setting.&lt;br /&gt;
&lt;br /&gt;
The prototype system will be available in mid-February.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6030</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6030"/>
		<updated>2015-02-04T08:27:59Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that available storage technologies are evaluated continuously. It is also important that prototype solutions are evaluated in close collaboration with users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be used in new ways, and how new approaches to existing solutions can give users better support for I/O-intensive simulations and workflows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, e.g. for project storage. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of research data is dCache in combination with TSM for archiving to tape. There are several alternatives to using TSM for archiving to tape, which should be studied further from both a performance and a cost-effectiveness perspective. One example is LTFS [3]. Looking at various options for long-term storage also connects well to activities within WLCG and EISCAT_3D in which HPC2N is involved. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions to users in an effort to provide a richer user experience in HPC. The service has been a great success among the research groups, and the number of users is increasing. Requests for new applications and usage are increasing steadily. &lt;br /&gt;
However, the increasing user base and the requests for new applications and usage have raised questions on how to efficiently scale these services in many dimensions:&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks using different usage patterns and desktop configurations such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU-enabled nodes) through the batch systems in a transparent way in the same desktop environment.&lt;br /&gt;
&lt;br /&gt;
A further task is identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
'''Objectives and deliverables'''&lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that a responsive user experience can be maintained. Analyse usage bottlenecks and identify areas for improvement, such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
&lt;br /&gt;
Access to hardware accelerated graphics:&lt;br /&gt;
&lt;br /&gt;
* Implement prototypes for providing a scalable and OS-independent hardware-accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as providers of graphics acceleration.&lt;br /&gt;
&lt;br /&gt;
Interactive use of graphical user interfaces:&lt;br /&gt;
&lt;br /&gt;
* Develop an on-demand hardware allocation mechanism using the queuing system to enable per-session access to specific hardware resources, e.g. CPU- or accelerator-equipped nodes.&lt;br /&gt;
* Define community-specific desktops.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6029</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6029"/>
		<updated>2015-02-04T08:26:34Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres are continuously investigate the market for current developments in emerging technologies and even buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competence within SNIC and provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to see if they have interests in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Storage Technologies ==&lt;br /&gt;
&lt;br /&gt;
There are many storage technologies employed within SNIC. To be able to deploy solutions that are suitable for different usage scenarios, it is important that there is a continuous evaluation of availa-ble storage technologies. It is also important that prototype solutions are evaluated closely to users and facilities. Typical projects within this focus area could be:&lt;br /&gt;
&lt;br /&gt;
* File systems for different I/O patterns.&lt;br /&gt;
* New storage hardware.&lt;br /&gt;
* Higher-level file services.&lt;br /&gt;
* Client tools for accessing available resources.&lt;br /&gt;
* Integration services.&lt;br /&gt;
&lt;br /&gt;
=== Objectives and deliverables === &lt;br /&gt;
&lt;br /&gt;
* Investigate how existing resources can be facilitated in new ways and how new approaches for users on how to use existing solutions can be facilitated to provide better support for I/O-intensive simulations and work-flows. (C3SE)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filestystems can be used in conjunction with Tivoli TSM to speed up incremental backups. (C3SE, HPC2N)&lt;br /&gt;
* Investigate how the RobinHood service of Lustre v2+ filesystems can be used to provide more fine-grained quotas, ex. for project storage etc. (C3SE)&lt;br /&gt;
* One solution that has been used in SNIC for a number of years for long-term storage of re-search data is dCache in combination with TSM for archiving to tape. There are several al-ternatives to using TSM for archiving to tape, which should be further studied both from a performance and cost effectiveness. One example is LTFS [3]. To look at various options for long-term storage is also something that connects well to activities within WLCG and EIS-CAT_3D for which HPC2N are involved in. (HPC2N)&lt;br /&gt;
&lt;br /&gt;
== Access-methods and remote visualisation ==&lt;br /&gt;
&lt;br /&gt;
To improve the usability of our HPC resources, many centres have been providing remote desktop solutions to users in an effort to provide a richer user experience in HPC. The service has been a great success in the research groups and the number of users is increasing. Requests for new applications and usage are increasing steadily. &lt;br /&gt;
However, the increasing user base and request for new applications and usages has raised questions on how to efficiently scale these services in many dimensions.&lt;br /&gt;
&lt;br /&gt;
* Monitoring of usage patterns on the desktop servers.&lt;br /&gt;
* Monitoring and scaling of distribution networks using different usage patterns and desktop configurations such as spatial resolution.&lt;br /&gt;
* How to provide seamless access to hardware accelerated desktop services.&lt;br /&gt;
* How to provide interactive access to applications and specific hardware configurations (e.g. GPU enabled nodes) through the batch-systems in a transparent way in the same desktop environment.&lt;br /&gt;
&lt;br /&gt;
Identifying bottlenecks in remote desktop architectures.&lt;br /&gt;
&lt;br /&gt;
=== Objectives and deliverables === &lt;br /&gt;
&lt;br /&gt;
Remote desktop service aspects:&lt;br /&gt;
&lt;br /&gt;
* Evaluate different remote-access architectures based on target network specifications.&lt;br /&gt;
* Develop best practice guides for configuration and setup of remote desktop services.&lt;br /&gt;
* Evaluate techniques for monitoring the load and usage of the servers providing the desktop services, so that the user experience is good. Analyse usage bottlenecks and identify areas for improvement such as easy and (eventually) integrated SFTP file transfer.&lt;br /&gt;
&lt;br /&gt;
Access to hardware accelerated graphics:&lt;br /&gt;
&lt;br /&gt;
* Implement prototypes for providing a scalable and OS independent hardware accelerated back-end to the desktop interface. This can involve taking advantage of single or multiple GPU nodes as a provider for graphical acceleration.&lt;br /&gt;
&lt;br /&gt;
Interactive use of graphical user interfaces:&lt;br /&gt;
&lt;br /&gt;
* Developing an on-demand hardware allocation mechanism using the queuing system to enable unique per-session access to unique hardware resources e.g. CPU or accelerator equipped nodes.&lt;br /&gt;
* Defining community specific desktops.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=User:Jonas_Lindemann_(LUNARC)&amp;diff=6028</id>
		<title>User:Jonas Lindemann (LUNARC)</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=User:Jonas_Lindemann_(LUNARC)&amp;diff=6028"/>
		<updated>2015-02-04T08:21:43Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{application expert info&lt;br /&gt;
|first name=Jonas&lt;br /&gt;
|last name=Lindemann&lt;br /&gt;
|centre=LUNARC&lt;br /&gt;
|fields=Grid computing;Desktop Environments&lt;br /&gt;
|other activities=Director at LUNARC;Coordinator SNIC Emerging Technologies&lt;br /&gt;
|image=JonasLindemann.png&lt;br /&gt;
|office=John Ericssons väg 1; 221 00 Lund&lt;br /&gt;
|phone=(+46)462228162;(+46)707910118&lt;br /&gt;
|fte=20&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|snic ae financing=&lt;br /&gt;
|other ae financing=LU&lt;br /&gt;
|general activities=Coordinating SNIC Emerging Technologies;Leading the development of ARC Storage UI;Grid user documentation;Developer of ARC Job Submission Tool;Lunarc Box;Lunarc HPC Desktop&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Quick facts ==&lt;br /&gt;
* Or any other heading. &lt;br /&gt;
* This part is clear text and very much up to you.&lt;br /&gt;
* Bullet lists are good &lt;br /&gt;
* for the lazy&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Expertise ==&lt;br /&gt;
* [[expertise::Grid computing]]&lt;br /&gt;
* [[expertise::Python]]&lt;br /&gt;
* [[expertise::SciPy]]&lt;br /&gt;
* [[expertise::Numpy]]&lt;br /&gt;
* [[expertise::Fortran]]&lt;br /&gt;
* [[expertise::C++]]&lt;br /&gt;
* [[expertise::Structural Mechanics]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
* [[project::SNIC Emerging Technologies]]&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6027</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6027"/>
		<updated>2015-02-04T08:20:27Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=Coordinating new and emerging technologies within SNIC.&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
A joint project to coordinate the training offered by the [[Centres|SNIC centres]]. An overview of the training events offered or supported by SNIC is available on the [[Training|training page]] of this wiki.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6026</id>
		<title>SNIC ET</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6026"/>
		<updated>2015-02-04T08:19:30Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6025</id>
		<title>SNIC Emerging Technologies</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_Emerging_Technologies&amp;diff=6025"/>
		<updated>2015-02-04T08:19:06Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): Created page with &amp;quot;{{project info |description=SNIC Emerging Technologies |fields=Emerging Technologies |financing=SNIC |active=yes |start date=2014-05-01 |end date= }}  Currently, many SNIC centre...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=SNIC Emerging Technologies&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
A joint project to coordinate the training offered by the [[Centres|SNIC centres]]. An overview of the training events offered or supported by SNIC is available on the [[Training|training page]] of this wiki.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6024</id>
		<title>SNIC ET</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6024"/>
		<updated>2015-02-04T08:14:59Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:SNIC Emerging Technologies}}&lt;br /&gt;
&lt;br /&gt;
{{project info&lt;br /&gt;
|description=SNIC Emerging Technologies&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
A joint project to coordinate the training offered by the [[Centres|SNIC centres]]. An overview of the training events offered or supported by SNIC is available on the [[Training|training page]] of this wiki.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6023</id>
		<title>SNIC ET</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6023"/>
		<updated>2015-02-04T08:11:41Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=SNIC Emerging Technologies&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
# Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
# Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
# Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
A joint project to coordinate the training offered by the [[Centres|SNIC centres]]. An overview of the training events offered or supported by SNIC is available on the [[Training|training page]] of this wiki.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6022</id>
		<title>SNIC ET</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6022"/>
		<updated>2015-02-04T08:11:07Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=SNIC Emerging Technologies&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
1. Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
2. Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
3. Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
A joint project to coordinate the training offered by the [[Centres|SNIC centres]]. An overview of the training events offered or supported by SNIC is available on the [[Training|training page]] of this wiki.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6021</id>
		<title>SNIC ET</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=SNIC_ET&amp;diff=6021"/>
		<updated>2015-02-04T08:10:50Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): Created page with &amp;quot;{{project info |description=SNIC Emerging Technologies |fields=Emerging Technologies |financing=SNIC |active=yes |start date=2014-05-01 |end date= }}  Currently, many SNIC centre...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{project info&lt;br /&gt;
|description=SNIC Emerging Technologies&lt;br /&gt;
|fields=Emerging Technologies&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|active=yes&lt;br /&gt;
|start date=2014-05-01&lt;br /&gt;
|end date=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Currently, many SNIC centres continuously investigate the market for developments in emerging technologies and also buy hardware for evaluation. However, in many of these activities user involvement is limited and efforts are often duplicated between centres. This activity aims to coordinate these efforts and competencies within SNIC and to provide SNIC users and communities with early access to new technologies. The project will also work closely with the user communities to gauge their interest in upcoming technologies.&lt;br /&gt;
&lt;br /&gt;
Three focus areas have been defined:&lt;br /&gt;
&lt;br /&gt;
 1. Storage Technologies (C3SE, HPC2N)&lt;br /&gt;
 2. Access-methods and remote visualisation (Lunarc, UPPMAX)&lt;br /&gt;
 3. Future HPC and Accelerators (Lunarc, PDC, HPC2N)&lt;br /&gt;
&lt;br /&gt;
A joint project to coordinate the training offered by the [[Centres|SNIC centres]]. An overview of the training events offered or supported by SNIC is available on the [[Training|training page]] of this wiki.&lt;br /&gt;
&lt;br /&gt;
== Members ==&lt;br /&gt;
{{#ask: [[Category:Person]] [[project::{{PAGENAME}}]]&lt;br /&gt;
|?centre&lt;br /&gt;
|?role&lt;br /&gt;
|?field&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=User:Jonas_Lindemann_(LUNARC)&amp;diff=6020</id>
		<title>User:Jonas Lindemann (LUNARC)</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=User:Jonas_Lindemann_(LUNARC)&amp;diff=6020"/>
		<updated>2015-02-04T08:03:26Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{application expert info&lt;br /&gt;
|first name=Jonas&lt;br /&gt;
|last name=Lindemann&lt;br /&gt;
|centre=LUNARC&lt;br /&gt;
|fields=Grid computing;Desktop Environments&lt;br /&gt;
|other activities=Director at LUNARC;Coordinator SNIC Emerging Technologies&lt;br /&gt;
|image=JonasLindemann.png&lt;br /&gt;
|office=John Ericssons väg 1; 221 00 Lund&lt;br /&gt;
|phone=(+46)462228162;(+46)707910118&lt;br /&gt;
|fte=20&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|snic ae financing=&lt;br /&gt;
|other ae financing=LU&lt;br /&gt;
|general activities=Coordinating SNIC Emerging Technologies;Leading the development of ARC Storage UI;Grid user documentation;Developer of ARC Job Submission Tool;Lunarc Box;Lunarc HPC Desktop&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Quick facts ==&lt;br /&gt;
* Or any other heading. &lt;br /&gt;
* This part is clear text and very much up to you.&lt;br /&gt;
* Bullet lists are good &lt;br /&gt;
* for the lazy&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Expertise ==&lt;br /&gt;
* [[expertise::Grid computing]]&lt;br /&gt;
* [[expertise::Python]]&lt;br /&gt;
* [[expertise::SciPy]]&lt;br /&gt;
* [[expertise::Numpy]]&lt;br /&gt;
* [[expertise::Fortran]]&lt;br /&gt;
* [[expertise::C++]]&lt;br /&gt;
* [[expertise::Structural Mechanics]]&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=User:Jonas_Lindemann_(LUNARC)&amp;diff=6019</id>
		<title>User:Jonas Lindemann (LUNARC)</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=User:Jonas_Lindemann_(LUNARC)&amp;diff=6019"/>
		<updated>2015-02-04T08:02:30Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{application expert info&lt;br /&gt;
|first name=Jonas&lt;br /&gt;
|last name=Lindemann&lt;br /&gt;
|centre=LUNARC&lt;br /&gt;
|fields=Grid computing&lt;br /&gt;
|other activities=Director at LUNARC;Coordinator SNIC Emerging Technologies&lt;br /&gt;
|image=JonasLindemann.png&lt;br /&gt;
|office=John Ericssons väg 1; 221 00 Lund&lt;br /&gt;
|phone=(+46)462228162;(+46)707910118&lt;br /&gt;
|fte=20&lt;br /&gt;
|financing=SNIC&lt;br /&gt;
|snic ae financing=&lt;br /&gt;
|other ae financing=LU&lt;br /&gt;
|general activities=Coordinating SNIC Emerging Technologies;Leading the development of ARC Storage UI;Grid user documentation;Developer of ARC Job Submission Tool;Lunarc Box;Lunarc HPC Desktop&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Quick facts ==&lt;br /&gt;
* Or any other heading. &lt;br /&gt;
* This part is clear text and very much up to you.&lt;br /&gt;
* Bullet lists are good &lt;br /&gt;
* for the lazy&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Expertise ==&lt;br /&gt;
* [[expertise::Grid computing]]&lt;br /&gt;
* [[expertise::Python]]&lt;br /&gt;
* [[expertise::SciPy]]&lt;br /&gt;
* [[expertise::Numpy]]&lt;br /&gt;
* [[expertise::Fortran]]&lt;br /&gt;
* [[expertise::C++]]&lt;br /&gt;
* [[expertise::Structural Mechanics]]&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Preparing_a_client_certificate&amp;diff=5253</id>
		<title>Preparing a client certificate</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Preparing_a_client_certificate&amp;diff=5253"/>
		<updated>2013-07-02T09:04:16Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Uploading and conversion of the .p12 for your target machine */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Grid computing]]&lt;br /&gt;
[[Category:SweGrid user guide]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
[[Category:SweStore user guide]]&lt;br /&gt;
&lt;br /&gt;
[[Getting started with SweGrid|&amp;lt; Getting started with SweGrid]]&lt;br /&gt;
&lt;br /&gt;
[[SweStore|&amp;lt; SweStore]]&lt;br /&gt;
&lt;br /&gt;
Most of the standalone third-party tools installed on SNIC resources and on your own machine will not be able to use a &amp;lt;tt&amp;gt;.p12&amp;lt;/tt&amp;gt; certificate bundle (or &amp;lt;tt&amp;gt;.pfx&amp;lt;/tt&amp;gt; if you exported from IE), as that format is intended primarily for secure transport and backup of certificates and their private keys.&lt;br /&gt;
&lt;br /&gt;
Instead of a single &amp;lt;tt&amp;gt;.p12&amp;lt;/tt&amp;gt; file, they expect a pair of files in &amp;lt;tt&amp;gt;.pem&amp;lt;/tt&amp;gt; format, one containing the certificate and the other containing the private key that matches the certificate.&lt;br /&gt;
&lt;br /&gt;
== Uploading and conversion of the .p12 for your target machine ==&lt;br /&gt;
&lt;br /&gt;
As the authentication methods for clusters differ, this section defers to the documentation for your particular site when it comes to transferring files to and from the cluster storage.&lt;br /&gt;
&lt;br /&gt;
The goal is to end up with a &amp;lt;tt&amp;gt;.globus&amp;lt;/tt&amp;gt; directory in your home directory, containing two files named &amp;lt;tt&amp;gt;usercert.pem&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;userkey.pem&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The instructions below assume that your exported certificate file is named &amp;lt;tt&amp;gt;export.p12&amp;lt;/tt&amp;gt; and is located directly in your home directory. If it is a &amp;lt;tt&amp;gt;.pfx&amp;lt;/tt&amp;gt; file or has a different name, change &amp;lt;tt&amp;gt;export.p12&amp;lt;/tt&amp;gt; in the instructions to your actual filename or rename your file to &amp;lt;tt&amp;gt;export.p12&amp;lt;/tt&amp;gt;, as in the example below.&lt;br /&gt;
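&lt;br /&gt;
For example, assuming your browser produced a file called &amp;lt;tt&amp;gt;mycert.pfx&amp;lt;/tt&amp;gt; (a hypothetical name used only for illustration), it could be renamed with:&lt;br /&gt;
  &amp;lt;tt&amp;gt;mv ~/mycert.pfx ~/export.p12&amp;lt;/tt&amp;gt;&lt;br /&gt;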
&lt;br /&gt;
* Transfer the &amp;lt;tt&amp;gt;export.p12&amp;lt;/tt&amp;gt; file to your home directory on the cluster.&lt;br /&gt;
* Get an interactive shell on the login node, via ssh.&lt;br /&gt;
* If a .globus directory already exists, rename it, for example with&lt;br /&gt;
  &amp;lt;tt&amp;gt;mv ~/.globus ~/.globus-old&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Create the directory with&lt;br /&gt;
  &amp;lt;tt&amp;gt;mkdir ~/.globus&amp;lt;/tt&amp;gt;&lt;br /&gt;
  &amp;lt;tt&amp;gt;chmod 0700 ~/.globus&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Extract and protect the private key part of &amp;lt;tt&amp;gt;export.p12&amp;lt;/tt&amp;gt;:&lt;br /&gt;
  openssl pkcs12 -nocerts -in ~/export.p12 -out ~/.globus/userkey.pem&lt;br /&gt;
* When asked for the import password, enter the password you chose when exporting the certificate bundle from your browser. The PEM pass phrase should be a new password that you will need to provide whenever using the certificate for tasks like generating a proxy certificate. The output from this command will be similar to the following:&lt;br /&gt;
  Enter Import Password: *******&lt;br /&gt;
  MAC verified OK&lt;br /&gt;
  Enter PEM pass phrase: *******&lt;br /&gt;
  Verifying - Enter PEM pass phrase: *******&lt;br /&gt;
&lt;br /&gt;
* Extract the public client certificate part of &amp;lt;tt&amp;gt;export.p12&amp;lt;/tt&amp;gt;:&lt;br /&gt;
  openssl pkcs12 -clcerts -nokeys -in ~/export.p12 -out ~/.globus/usercert.pem&lt;br /&gt;
* The output will be similar to the following:&lt;br /&gt;
  Enter Import Password: *******&lt;br /&gt;
  MAC verified OK&lt;br /&gt;
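* Optionally, verify that the extracted key and certificate belong together. This sanity check is not part of the original procedure; it compares the public key moduli of the two files, which should produce identical checksums:&lt;br /&gt;
  openssl x509 -noout -modulus -in ~/.globus/usercert.pem | openssl md5&lt;br /&gt;
  openssl rsa -noout -modulus -in ~/.globus/userkey.pem | openssl md5&lt;br /&gt;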
* Finally, ensure that only your user is allowed to read the private key file. This is important both for security and because some tools refuse to use private keys with insufficiently restrictive permissions.&lt;br /&gt;
  chmod 0400 ~/.globus/userkey.pem&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=ARC_client_installation_Mac_OS_X&amp;diff=5153</id>
		<title>ARC client installation Mac OS X</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=ARC_client_installation_Mac_OS_X&amp;diff=5153"/>
		<updated>2013-05-10T10:33:43Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Installing the ARC Graphical Clients on Mac OS X requires MacPorts. When MacPorts has been installed check out the following 2 repositories to your home directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ svn checkout svn://svn.code.sf.net/p/arc-gui-clients/svn/trunk/macports/globus-ports ./globus-ports&lt;br /&gt;
$ svn checkout svn://svn.code.sf.net/p/arc-gui-clients/svn/trunk/macports/arc-ports ./arc-ports&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, add the local repositories to MacPorts by adding the following lines to /opt/local/etc/macports/sources.conf before the rsync://rsync.macports… line, as shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file:///Users/[user id]/globus-ports [nosync]&lt;br /&gt;
file:///Users/[user id]/arc-ports [nosync]&lt;br /&gt;
rsync://rsync.macports.org/release/tarballs/ports.tar [default]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To make MacPorts aware of the new local repos issue the following commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd globus-ports&lt;br /&gt;
$ portindex&lt;br /&gt;
$ cd ../arc-ports&lt;br /&gt;
$ portindex&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To install the ARC client tools issue the following commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo port install nordugrid-arc-client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
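&lt;br /&gt;
To confirm that the port was installed (an optional sanity check, not part of the original instructions), you can ask MacPorts to list it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ port installed nordugrid-arc-client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;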
&lt;br /&gt;
The graphical clients can then be installed with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo port install arc-gui-clients&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that proxy generation using the Firefox credential store is somewhat buggy. If it doesn't work you can always export your certificate and convert it using the arccert-ui graphical tool. Look in the MacPorts folder in Applications; the other tools can also be found in this folder.&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=ARC_client_installation_Mac_OS_X&amp;diff=5152</id>
		<title>ARC client installation Mac OS X</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=ARC_client_installation_Mac_OS_X&amp;diff=5152"/>
		<updated>2013-05-10T10:32:40Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): Created page with &amp;quot;Installing the ARC Graphical Clients on Mac OS X requires MacPorts. When MacPorts has been installed check out the following 2 repositories to your home directory:  &amp;lt;pre&amp;gt; $ svn c...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Installing the ARC Graphical Clients on Mac OS X requires MacPorts. When MacPorts has been installed check out the following 2 repositories to your home directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ svn checkout svn://svn.code.sf.net/p/arc-gui-clients/svn/trunk/macports/globus-ports ./globus-ports&lt;br /&gt;
$ svn checkout svn://svn.code.sf.net/p/arc-gui-clients/svn/trunk/macports/arc-ports ./arc-ports&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, add the local repositories to MacPorts by adding the following lines to /opt/local/etc/macports/sources.conf before the rsync://rsync.macports… line, as shown below:&lt;br /&gt;
&lt;br /&gt;
file:///Users/[user id]/globus-ports [nosync]&lt;br /&gt;
file:///Users/[user id]/arc-ports [nosync]&lt;br /&gt;
rsync://rsync.macports.org/release/tarballs/ports.tar [default]&lt;br /&gt;
&lt;br /&gt;
To make MacPorts aware of the new local repos issue the following commands:&lt;br /&gt;
&lt;br /&gt;
$ cd globus-ports&lt;br /&gt;
$ portindex&lt;br /&gt;
$ cd ../arc-ports&lt;br /&gt;
$ portindex&lt;br /&gt;
&lt;br /&gt;
To install the ARC client tools issue the following commands:&lt;br /&gt;
&lt;br /&gt;
$ sudo port install nordugrid-arc-client&lt;br /&gt;
&lt;br /&gt;
The graphical clients can then be installed with:&lt;br /&gt;
&lt;br /&gt;
$ sudo port install arc-gui-clients&lt;br /&gt;
&lt;br /&gt;
Please note that proxy generation using the Firefox credential store is somewhat buggy right now. If it doesn't work you can always export your certificate and convert it using the arccert-ui graphical tool. Look in the MacPorts folder in Applications; the other tools can also be found in this folder.&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5151</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5151"/>
		<updated>2013-04-30T14:17:30Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Getting access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore is run in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], Bioimage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG] and [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable long-term&lt;br /&gt;
storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache]&lt;br /&gt;
storage system and is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc],&lt;br /&gt;
[http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax].&lt;br /&gt;
&lt;br /&gt;
Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues, ranging from a simple&lt;br /&gt;
crash of a storage element to the loss of an entire site, while still providing access to the stored data. To protect against silent data corruption the&lt;br /&gt;
dCache storage system checksums all stored data and periodically verifies the data using this checksum.&lt;br /&gt;
&lt;br /&gt;
The system does NOT yet provide protection against user errors like inadvertent file deletions and so on.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of the distributed nature of dCache is the excellent aggregate transfer rates it makes possible. This is achieved by bypassing a central node&lt;br /&gt;
and having transfers go directly to/from the storage elements if the protocol allows it.&lt;br /&gt;
The Swestore Nationally Accessible Storage system can achieve aggregate transfer rates&lt;br /&gt;
in excess of 100 Gigabit per second, but in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or, for a small number of files, by the per-connection limit (typically&lt;br /&gt;
max 1 Gbit/s per file/connection).&lt;br /&gt;
&lt;br /&gt;
==Access protocols==&lt;br /&gt;
; Currently supported protocols&lt;br /&gt;
: GridFTP - gsiftp://gsiftp.swestore.se/&lt;br /&gt;
: Storage Resource Manager - srm://srm.swegrid.se/&lt;br /&gt;
: Hypertext Transfer Protocol (read-only), Web Distributed Authoring and Versioning - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/&lt;br /&gt;
&lt;br /&gt;
; Protocols in evaluation/development&lt;br /&gt;
: NFS4.1, iRODS&lt;br /&gt;
&lt;br /&gt;
For authentication, eScience certificates are used, which provide a higher level of security than legacy username/password schemes.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage&lt;br /&gt;
: Please follow instructions [[Apply for storage on SweStore|here]]&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Exporting_a_client_certificate|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
; Transmit and prepare the certificate.&lt;br /&gt;
: In order to use the client certificate on SNIC resources for generating proxy certificates and using command line tools, the certificate needs to be [[Preparing_a_client_certificate|converted into PEM files]] on the target cluster if not already in that format.&lt;br /&gt;
&lt;br /&gt;
NOTE: If you have installed a Terena certificate in your browser and you have ARC 3.x installed, there is no need to convert or export the certificate from the browser. The arcproxy command can generate a proxy certificate from the certificate stored in the Firefox credential store. See also [[Grid_certificates#Creating_a_proxy_certificate_using_the_Firefox.2FThunderbird_credential_store|proxy certificates]].&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Interactive browsing and manipulation of single files&lt;br /&gt;
: SweStore is accessible in your web browser in two ways, as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. To browse private data you must first install your certificate in your browser (see above). Projects are organized under the &amp;lt;code&amp;gt;/snic&amp;lt;/code&amp;gt; directory as &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data interactively or with automation&lt;br /&gt;
: There are several tools that are capable of using the protocols provided by SweStore national storage.&lt;br /&gt;
: For interactive usage on SNIC clusters we recommend the ARC tools, which should be installed on all SNIC resources.&lt;br /&gt;
: As an integration point for building scripts and automated systems we suggest the curl program and library.&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]].&lt;br /&gt;
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].&lt;br /&gt;
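&lt;br /&gt;
As an illustration only (this example is not from the original page and the project path is a placeholder), a single file could be downloaded over WebDAV with cURL using the certificate files prepared earlier:&lt;br /&gt;
&lt;br /&gt;
 curl --fail -O --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/somefile.dat&lt;br /&gt;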
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
&lt;br /&gt;
If you have any issues using SweStore please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
A number of externally developed tools and utilities can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the arc client (Only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore supported development).&lt;br /&gt;
* Transfer script, [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski / Philipp Schlatter&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
== Slides and more ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from seminar for Lund users on April 18th]&lt;br /&gt;
&lt;br /&gt;
= Centre storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is being replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make the usage more transparent for SNIC users, a set of environment variables are available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5150</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5150"/>
		<updated>2013-04-30T14:16:48Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Getting access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore is run in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], Bioimage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG] and [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable long-term&lt;br /&gt;
storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache]&lt;br /&gt;
storage system and is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc],&lt;br /&gt;
[http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax].&lt;br /&gt;
&lt;br /&gt;
Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues, ranging from a simple&lt;br /&gt;
crash of a storage element to the loss of an entire site, while still providing access to the stored data. To protect against silent data corruption the&lt;br /&gt;
dCache storage system checksums all stored data and periodically verifies the data using this checksum.&lt;br /&gt;
&lt;br /&gt;
The system does NOT yet provide protection against user errors like inadvertent file deletions and so on.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of the distributed nature of dCache is the excellent aggregate transfer rates it makes possible. This is achieved by bypassing a central node&lt;br /&gt;
and having transfers go directly to/from the storage elements if the protocol allows it.&lt;br /&gt;
The Swestore Nationally Accessible Storage system can achieve aggregate transfer rates&lt;br /&gt;
in excess of 100 Gigabit per second, but in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or, for a small number of files, by the per-connection limit (typically&lt;br /&gt;
max 1 Gbit/s per file/connection).&lt;br /&gt;
&lt;br /&gt;
==Access protocols==&lt;br /&gt;
; Currently supported protocols&lt;br /&gt;
: GridFTP - gsiftp://gsiftp.swestore.se/&lt;br /&gt;
: Storage Resource Manager - srm://srm.swegrid.se/&lt;br /&gt;
: Hypertext Transfer Protocol (read-only), Web Distributed Authoring and Versioning - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/&lt;br /&gt;
&lt;br /&gt;
; Protocols in evaluation/development&lt;br /&gt;
: NFS4.1, iRODS&lt;br /&gt;
&lt;br /&gt;
For authentication, eScience certificates are used, which provide a higher level of security than legacy username/password schemes.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage&lt;br /&gt;
: Please follow instructions [[Apply for storage on SweStore|here]]&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Exporting_a_client_certificate|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
; Transmit and prepare the certificate.&lt;br /&gt;
: In order to use the client certificate on SNIC resources for generating proxy certificates and using command line tools, the certificate needs to be [[Preparing_a_client_certificate|converted into PEM files]] on the target cluster if not already in that format.&lt;br /&gt;
&lt;br /&gt;
NOTE: If you have installed a Terena certificate in your browser and you have ARC 3.x installed, there is no need to convert or export the certificate from the browser. The arcproxy command can generate a proxy certificate from the certificate stored in the Firefox credential store. See also [[Grid_certificates#Creating_a_proxy_certificate_using_the_Firefox.2FThunderbird_credential_store|proxy certificates]].&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Interactive browsing and manipulation of single files&lt;br /&gt;
: SweStore is accessible in your web browser in two ways, as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. To browse private data you must first install your certificate in your browser (see above). Projects are organized under the &amp;lt;code&amp;gt;/snic&amp;lt;/code&amp;gt; directory as &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data interactively or with automation&lt;br /&gt;
: There are several tools that are capable of using the protocols provided by SweStore national storage.&lt;br /&gt;
: For interactive usage on SNIC clusters we recommend the ARC tools, which should be installed on all SNIC resources.&lt;br /&gt;
: As an integration point for building scripts and automated systems we suggest the curl program and library.&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]].&lt;br /&gt;
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].&lt;br /&gt;
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
&lt;br /&gt;
If you have any issues using SweStore please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
A number of externally developed tools and utilities can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the arc client (Only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore supported development).&lt;br /&gt;
* Transfer script, [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski / Philipp Schlatter&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
== Slides and more ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from seminar for Lund users on April 18th]&lt;br /&gt;
&lt;br /&gt;
= Centre storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is being replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make the usage more transparent for SNIC users, a set of environment variables are available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5149</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5149"/>
		<updated>2013-04-30T14:16:02Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Getting access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore is run in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], Bioimage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG] and [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable long-term&lt;br /&gt;
storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache]&lt;br /&gt;
storage system and is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc],&lt;br /&gt;
[http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax].&lt;br /&gt;
&lt;br /&gt;
Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues, ranging from a simple&lt;br /&gt;
crash of a storage element to the loss of an entire site, while still providing access to the stored data. To protect against silent data corruption the&lt;br /&gt;
dCache storage system checksums all stored data and periodically verifies the data using this checksum.&lt;br /&gt;
&lt;br /&gt;
The system does NOT yet provide protection against user errors like inadvertent file deletions and so on.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of the distributed nature of dCache is the excellent aggregate transfer rates it makes possible. This is achieved by bypassing a central node&lt;br /&gt;
and having transfers go directly to/from the storage elements if the protocol allows it.&lt;br /&gt;
The Swestore Nationally Accessible Storage system can achieve aggregate transfer rates&lt;br /&gt;
in excess of 100 Gigabit per second, but in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or, for a small number of files, by the per-connection limit (typically&lt;br /&gt;
max 1 Gbit/s per file/connection).&lt;br /&gt;
&lt;br /&gt;
==Access protocols==&lt;br /&gt;
; Currently supported protocols&lt;br /&gt;
: GridFTP - gsiftp://gsiftp.swestore.se/&lt;br /&gt;
: Storage Resource Manager - srm://srm.swegrid.se/&lt;br /&gt;
: Hypertext Transfer Protocol (read-only), Web Distributed Authoring and Versioning - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/&lt;br /&gt;
&lt;br /&gt;
; Protocols in evaluation/development&lt;br /&gt;
: NFS4.1, iRODS&lt;br /&gt;
&lt;br /&gt;
For authentication, eScience certificates are used, which provide a higher level of security than legacy username/password schemes.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage&lt;br /&gt;
: Please follow instructions [[Apply for storage on SweStore|here]]&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Exporting_a_client_certificate|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
; Transmit and prepare the certificate.&lt;br /&gt;
: In order to use the client certificate on SNIC resources for generating proxy certificates and using command line tools, the certificate needs to be [[Preparing_a_client_certificate|converted into PEM files]] on the target cluster if not already in that format.&lt;br /&gt;
&lt;br /&gt;
NOTE: If you have installed a Terena certificate in your browser and you have ARC 3.x installed, there is no need to convert or export the certificate from the browser. The arcproxy command can generate a proxy certificate from the certificate stored in the Firefox credential store. See also [https://docs.snic.se/wiki/Grid_certificates#Creating_a_proxy_certificate_using_the_Firefox.2FThunderbird_credential_store proxy certificates].&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Interactive browsing and manipulation of single files&lt;br /&gt;
: SweStore is accessible in your web browser in two ways, as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. To browse private data you must first install your certificate in your browser (see above). Projects are organized under the &amp;lt;code&amp;gt;/snic&amp;lt;/code&amp;gt; directory as &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data interactively or with automation&lt;br /&gt;
: There are several tools that are capable of using the protocols provided by SweStore national storage.&lt;br /&gt;
: For interactive usage on SNIC clusters we recommend the ARC tools, which should be installed on all SNIC resources.&lt;br /&gt;
: As an integration point for building scripts and automated systems we suggest the curl program and library.&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]].&lt;br /&gt;
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].&lt;br /&gt;
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
&lt;br /&gt;
If you have any issues using SweStore please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
A number of externally developed tools and utilities can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the arc client (Only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore supported development).&lt;br /&gt;
* Transfer script, [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski / Philipp Schlatter&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
== Slides and more ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from seminar for Lund users on April 18th]&lt;br /&gt;
&lt;br /&gt;
= Centre storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is being replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make the usage more transparent for SNIC users, a set of environment variables are available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Grid_certificates&amp;diff=5148</id>
		<title>Grid certificates</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Grid_certificates&amp;diff=5148"/>
		<updated>2013-04-30T14:15:06Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Proxy certificates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Grid computing]]&lt;br /&gt;
[[Category:SweGrid user guide]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
[[Category:SweStore user guide]]&lt;br /&gt;
[[Getting started with SweGrid|&amp;lt; Getting started with SweGrid]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[SweStore|&amp;lt; SweStore]]&lt;br /&gt;
&lt;br /&gt;
=Introduction to certificates=&lt;br /&gt;
&lt;br /&gt;
In order to get access to computer and storage resources on the grid or [[SweStore]] you must have a valid (grid) certificate. This certificate is used instead of a username and password when accessing the resource. The resource has a certificate of its own that tells you that you have contacted the right resource. This is exactly the same mechanism used when your web browser contacts your bank.&lt;br /&gt;
&lt;br /&gt;
A certificate is similar to a passport in real life. Just as you have to prove your identity when you acquire a passport, the same is true for a certificate: a third party that both you and the resource trust, the Certificate Authority or CA, has to vouch for your identity and sign your certificate.&lt;br /&gt;
&lt;br /&gt;
A certificate consists of a public key, some user information and a signature from the CA. In addition to the certificate you have a private key. The private key is secret and should be kept as secure as possible.&lt;br /&gt;
&lt;br /&gt;
For more information regarding certificates and public key cryptography:&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Public-key_cryptography http://en.wikipedia.org/wiki/Public-key_cryptography]&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Public_key_certificate http://en.wikipedia.org/wiki/Public_key_certificate]&lt;br /&gt;
&lt;br /&gt;
[http://www.nordugrid.org/documents/certificate_howto.html http://www.nordugrid.org/documents/certificate_howto.html]&lt;br /&gt;
&lt;br /&gt;
* The grid certificate and the private key are stored in your web browser and/or located in ~/.globus on the host(s) from which you will be accessing the resource:&lt;br /&gt;
      usercert.pem&lt;br /&gt;
      userkey.pem&lt;br /&gt;
* The certificate contains your public key, your name and organization, and a signature by the CA. It does not contain any username.&lt;br /&gt;
* The certificate is valid for 13 months and should be renewed yearly.&lt;br /&gt;
* The private key should be handled with great care. It should only be readable by you and not by the group or others (i.e. &amp;quot;chmod 400 userkey.pem&amp;quot;). Store the key on trusted computers and transfer it between computers using encryption (for example with scp).&lt;br /&gt;
* On shared file systems, make sure that ~/.globus is not readable by everybody:&lt;br /&gt;
 chmod 700 ~/.globus&lt;br /&gt;
and on AFS:&lt;br /&gt;
 fs sa ~/.globus system:anyuser none&lt;br /&gt;
* The private key is encrypted using a passphrase. Anyone that can decrypt the private key will be able to authenticate as you to grid resources. This is similar to the private key in SSH. You must choose a strong passphrase for the private key. This passphrase must not be used anywhere else. You must never ever give away the passphrase to somebody else.&lt;br /&gt;
* You should not share the certificate with anyone. It's personal.&lt;br /&gt;
&lt;br /&gt;
= Requesting a certificate =&lt;br /&gt;
&lt;br /&gt;
Certificates are issued by a Certificate Authority or CA. For Swedish users there are two relevant CAs that can issue grid/eScience certificates: Terena and Nordugrid. The Terena CA is preferred if it is available for your university or research group, but many sites have not enabled this service yet. The Nordugrid CA can also be used but requires more manual work by all parties.&lt;br /&gt;
&lt;br /&gt;
Recommended procedure for each university:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| University&lt;br /&gt;
| CA&lt;br /&gt;
| Specific instructions&lt;br /&gt;
|-&lt;br /&gt;
| LU&lt;br /&gt;
| Terena CA&lt;br /&gt;
| [[LU_Certificate_Information|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| LiU&lt;br /&gt;
| Terena CA&lt;br /&gt;
| [[LiU_Certificate_Instructions|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| CTH&lt;br /&gt;
| NorduGrid CA&lt;br /&gt;
| [[Chalmers_Certificate_Instructions|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| GU&lt;br /&gt;
| NorduGrid CA&lt;br /&gt;
| [[GU_Certificate_Instructions|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| UU&lt;br /&gt;
| Terena CA&lt;br /&gt;
| [[UU_Certificate_Instructions|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| KTH&lt;br /&gt;
| Terena CA&lt;br /&gt;
| [[KTH_Certificate_Information|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| SU&lt;br /&gt;
| NorduGrid CA&lt;br /&gt;
| [[SU_Certificate_Information|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| KI&lt;br /&gt;
| NorduGrid CA&lt;br /&gt;
| [[KI_Certificate_Information|more...]]&lt;br /&gt;
|-&lt;br /&gt;
| UmU&lt;br /&gt;
| Terena CA&lt;br /&gt;
| [[UmU_Certificate_Information|more...]]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Requesting a grid certificate using the Terena eScience Portal|Instructions for the Terena CA]]&lt;br /&gt;
&lt;br /&gt;
[[Requesting a grid certificate from the Nordugrid CA|Instructions for the NorduGrid CA (use only if Terena eScience isn't available at your site)]]&lt;br /&gt;
&lt;br /&gt;
= Requesting membership in the SweGrid VO =&lt;br /&gt;
&lt;br /&gt;
SweGrid and SweStore resources are currently allocated to VOs (virtual organizations) rather than to individual users. A VO is basically just a list of users. To be able to use a SweGrid or SweStore resource, membership in the SweGrid VO and a corresponding subgroup is required. To apply for membership, make sure that the NorduGrid root CA certificate and your personal certificate are installed in the browser. &lt;br /&gt;
&lt;br /&gt;
The NorduGrid CA cert can be installed by clicking on the following link:&lt;br /&gt;
&lt;br /&gt;
 [http://ca.nordugrid.org/cacrt.crt http://ca.nordugrid.org/cacrt.crt]&lt;br /&gt;
&lt;br /&gt;
Make sure you check the &amp;quot;Trust this CA to identify web sites.&amp;quot; boxes in the dialog shown.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:certinstall.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When the certificates have been installed in the browser, go to the following URL:&lt;br /&gt;
&lt;br /&gt;
 [https://voms.ndgf.org:8443/voms/swegrid.se https://voms.ndgf.org:8443/voms/swegrid.se]&lt;br /&gt;
&lt;br /&gt;
and follow the instructions. In a couple of hours you will be added to the SweGrid VO. &lt;br /&gt;
&lt;br /&gt;
To be added to the correct SweGrid project, send a mail to [mailto:support@swegrid.se support@swegrid.se] and specify your DN, as shown in the Terena portal or by the '''arcproxy --info''' command, and which SNIC project you should be added to.&lt;br /&gt;
&lt;br /&gt;
To be added to the correct Swestore allocation, send a mail to [mailto:swestore-support@snic.vr.se swestore-support@snic.vr.se] and specify your DN, as shown in the Terena portal or by the '''arcproxy --info''' command, and which Swestore allocation you should be added to.&lt;br /&gt;
&lt;br /&gt;
= Proxy certificates =&lt;br /&gt;
&lt;br /&gt;
Authentication on the grid is done using special short-lived ''proxy'' certificates. There are several tools available for creating, checking and destroying these proxy certificates.&lt;br /&gt;
 &lt;br /&gt;
== Creating a proxy certificate ==&lt;br /&gt;
&lt;br /&gt;
To create a short-lived proxy that can be used for authentication with grid services, the '''arcproxy''' command can be used. A 12-hour (default) proxy is created in the following example:&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy&lt;br /&gt;
 Your identity: /O=Grid/O=NorduGrid/OU=lunarc.lu.se/CN=Kalle Kula&lt;br /&gt;
 Enter pass phrase for /home/kalle/.globus/userkey.pem:&lt;br /&gt;
 .++++++&lt;br /&gt;
 .....++++++&lt;br /&gt;
 Proxy generation succeeded&lt;br /&gt;
 Your proxy is valid until: 2011-03-11 03:00:14&lt;br /&gt;
&lt;br /&gt;
The proxy file itself will be created in the '''/tmp''' directory with the name format '''x509up_uid''', where uid is the user id number of your account.&lt;br /&gt;
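&lt;br /&gt;
For example (an illustration, not part of the original text), you can locate your proxy file with:&lt;br /&gt;
&lt;br /&gt;
 $ ls -l /tmp/x509up_u$(id -u)&lt;br /&gt;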
&lt;br /&gt;
In some cases a longer-lived proxy will be needed. This is achieved using the '''--constraint''' switch. A 24-hour proxy can be created by issuing the following command:&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy --constraint=&amp;quot;validityPeriod=24H&amp;quot;&lt;br /&gt;
 Your identity: /O=Grid/O=NorduGrid/OU=lunarc.lu.se/CN=Kalle Kula&lt;br /&gt;
 Enter pass phrase for /home/kalle/.globus/userkey.pem:&lt;br /&gt;
 ....++++++&lt;br /&gt;
 .....++++++&lt;br /&gt;
 Proxy generation succeeded&lt;br /&gt;
 Your proxy is valid until: 2011-03-11 15:03:19&lt;br /&gt;
&lt;br /&gt;
== Creating a proxy certificate using the Firefox/Thunderbird credential store ==&lt;br /&gt;
&lt;br /&gt;
Using the ARC 3.x client tools it is now possible to generate a proxy certificate directly from the Firefox or Thunderbird credential stores. To do this the '''-F''' flag is used as shown in the following example:&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy -F&lt;br /&gt;
 There are 2 NSS base directories where the certificate, key, and module datbases live&lt;br /&gt;
 Number 1 is: /Users/lindemann/Library/Application Support/Firefox/Profiles/t22f3aj2.default&lt;br /&gt;
 Number 2 is: /Users/lindemann/Library/Thunderbird/Profiles/7abb733v.default&lt;br /&gt;
 Please choose the NSS database you would use (1-2): 1&lt;br /&gt;
&lt;br /&gt;
Here ARC finds the available Firefox and Thunderbird profiles in which the credential stores are located. Next, the passphrase for the credential store is used to unlock the stored credentials:&lt;br /&gt;
&lt;br /&gt;
 NSS database to be accessed: /Users/lindemann/Library/Application Support/Firefox/Profiles/t22f3aj2.default&lt;br /&gt;
 Enter Password or Pin for &amp;quot;internal (software)&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
If the passphrase was correct, ARC will list the available certificates in the credential store and ask which one you would like to use.&lt;br /&gt;
&lt;br /&gt;
 There are 2 user certificates existing in the NSS database&lt;br /&gt;
 Number 1 is with nickname: Jonas Lindemann xxxxx@lu.se's TERENA ID (Jonas Lindemann xxxxx@lu.se)&lt;br /&gt;
     expiration time: 2013-06-04 01:59:59&lt;br /&gt;
 Number 2 is with nickname: Imported Certificate (Jonas Lindemann)&lt;br /&gt;
     expiration time: 2014-01-18 16:55:52&lt;br /&gt;
 Please choose the one you would use (1-2): 1&lt;br /&gt;
 Certificate to use is: Jonas Lindemann xxxxxx@lu.se's TERENA ID&lt;br /&gt;
 Proxy generation succeeded&lt;br /&gt;
 Your proxy is valid until: 2013-05-01 04:11:37&lt;br /&gt;
&lt;br /&gt;
== Checking proxy lifetime ==&lt;br /&gt;
&lt;br /&gt;
The remaining lifetime of a proxy certificate can be checked using the '''arcproxy''' command with the '''--info''' switch.&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy --info&lt;br /&gt;
 Subject: /O=Grid/O=NorduGrid/OU=lunarc.lu.se/CN=Kalle Kula/CN=1567862803&lt;br /&gt;
 Identity: /O=Grid/O=NorduGrid/OU=lunarc.lu.se/CN=Kalle Kula&lt;br /&gt;
 Time left for proxy: 11 hours 55 minutes&lt;br /&gt;
 Proxy path: /tmp/x509up_u500&lt;br /&gt;
 Proxy type: X.509 Proxy Certificate Profile RFC compliant restricted proxy&lt;br /&gt;
&lt;br /&gt;
In this example the proxy certificate is valid for 11 hours 55 minutes more.&lt;br /&gt;
&lt;br /&gt;
== Destroying a proxy certificate ==&lt;br /&gt;
&lt;br /&gt;
A proxy can be destroyed with the '''-r''' or '''--remove''' switch.&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy -r&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy --remove&lt;br /&gt;
&lt;br /&gt;
= VOMS certificates =&lt;br /&gt;
&lt;br /&gt;
As long as you are a member of only one VO or VO group, you can&lt;br /&gt;
authenticate to a grid service with the regular grid proxy certificate&lt;br /&gt;
as defined in the previous section. If you are a member of more than&lt;br /&gt;
one VO or VO group you may want to select which membership you want to&lt;br /&gt;
be authenticated as. For example, if you are a member of&lt;br /&gt;
''swegrid.se:/swegrid.se/ops'' (operations staff) and&lt;br /&gt;
''swegrid.se:/swegrid.se/bils'' and want to write a file, who should&lt;br /&gt;
be the owner? Ops or bils? You need to provide some additional&lt;br /&gt;
information. In the grid world this is done with a VOMS proxy&lt;br /&gt;
certificate, which basically is a regular proxy certificate but with a&lt;br /&gt;
so-called VOMS extension that contains a list of your VO group&lt;br /&gt;
memberships (and roles and attributes, which we don't use in&lt;br /&gt;
Swegrid/Swestore at the moment).&lt;br /&gt;
&lt;br /&gt;
'''Please note, if you only have one membership you can skip this section!'''&lt;br /&gt;
&lt;br /&gt;
The VOMS extension of the certificate is signed by the virtual&lt;br /&gt;
organization management server, or VOMS server. The same VOMS server&lt;br /&gt;
you used when applying for the swegrid.se VO membership in the first&lt;br /&gt;
place. To enable this signing process you need to add a few&lt;br /&gt;
configuration files to your system. First add this to the file&lt;br /&gt;
'''/etc/vomses''':&lt;br /&gt;
&lt;br /&gt;
   &amp;quot;swegrid.se&amp;quot; &amp;quot;voms.ndgf.org&amp;quot; &amp;quot;15009&amp;quot; &amp;quot;/O=Grid/O=NorduGrid/CN=host/voms.ndgf.org&amp;quot; &amp;quot;swegrid.se&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Next create the necessary directories and the file&lt;br /&gt;
'''/etc/grid-security/vomsdir/swegrid.se/voms.ndgf.org.lsc''' with the&lt;br /&gt;
following contents:&lt;br /&gt;
&lt;br /&gt;
   /O=Grid/O=NorduGrid/CN=host/voms.ndgf.org&lt;br /&gt;
   /O=Grid/O=NorduGrid/CN=NorduGrid Certification Authority&lt;br /&gt;
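&lt;br /&gt;
As a shell sketch (assuming root privileges; this snippet is not part of the original page), the directory can be created before editing the file with your editor of choice:&lt;br /&gt;
&lt;br /&gt;
 $ sudo mkdir -p /etc/grid-security/vomsdir/swegrid.se&lt;br /&gt;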
&lt;br /&gt;
== Creating a VOMS proxy ==&lt;br /&gt;
&lt;br /&gt;
VOMS proxies in ARC1 can be created using the '''arcproxy''' command&lt;br /&gt;
and the '''-S''' or '''--voms''' switches as shown in the following&lt;br /&gt;
example (if you are a member of the /swegrid.se/ops group. Adjust as&lt;br /&gt;
necessary):&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy -S swegrid.se:/swegrid.se/ops&lt;br /&gt;
 Your identity: /O=Grid/O=NorduGrid/OU=lunarc.lu.se/CN=Kalle Kula&lt;br /&gt;
 Enter pass phrase for /home/kalle/.globus/userkey.pem:&lt;br /&gt;
 .....++++++&lt;br /&gt;
 ............++++++&lt;br /&gt;
 Contacting VOMS server (named swegrid.se): voms.ndgf.org on port: 15009&lt;br /&gt;
 Proxy generation succeeded&lt;br /&gt;
 Your proxy is valid until: 2011-03-10 23:33:06&lt;br /&gt;
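&lt;br /&gt;
As an optional check (not part of the original instructions), the resulting VOMS proxy can be inspected with the same command shown earlier:&lt;br /&gt;
&lt;br /&gt;
 $ arcproxy --info&lt;br /&gt;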
&lt;br /&gt;
&lt;br /&gt;
= Signing your e-mail with your certificate =&lt;br /&gt;
&lt;br /&gt;
First, you will need your grid certificate in PKCS12 format:&lt;br /&gt;
== How to transform your certificate from PEM format into PKCS#12 format ==&lt;br /&gt;
&lt;br /&gt;
This is how you transform your certificate into PKCS#12 format so that it can be used in your web browser or email program.&lt;br /&gt;
First change directory to where you created and keep the certificate; historically this is often ~/.globus.&lt;br /&gt;
&lt;br /&gt;
 openssl pkcs12 -export -in usercert.pem -inkey userkey.pem -out cert+key.p12	 &lt;br /&gt;
&lt;br /&gt;
First you will have to enter the password you used for your private key, then you will be asked for a new password to protect the new file. '''cert+key.p12 contains your private key, and is therefore just as sensitive as userkey.pem'''. See also [[#Introduction to certificates]]. Security-wise, the safest approach is to delete the PKCS12 file after having imported it into your mail client or browser. Don't forget this.&lt;br /&gt;
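&lt;br /&gt;
For example (a suggestion not found in the original text; '''shred''' is available on most Linux systems and overwrites the file before unlinking it):&lt;br /&gt;
&lt;br /&gt;
 shred -u cert+key.p12&lt;br /&gt;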
&lt;br /&gt;
Remarks: openssl needs either the variable RANDFILE to be set or ~/.rnd to be writable, so make sure that the current $HOME is yours if you have pagsh'ed away; otherwise the command will fail with &amp;quot;unable to write 'random state'&amp;quot;.&lt;br /&gt;
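&lt;br /&gt;
To check the resulting file without importing it, you can list its contents (a minimal check; openssl prompts for the export password):&lt;br /&gt;
&lt;br /&gt;
 openssl pkcs12 -info -in cert+key.p12 -noout&lt;br /&gt;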
&lt;br /&gt;
&lt;br /&gt;
=== Signing in Mew ===&lt;br /&gt;
&lt;br /&gt;
Mew (an Emacs mail client) uses gpgsm for S/MIME signing. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Import the nordugrid root cert&lt;br /&gt;
&lt;br /&gt;
1.1. Get 1f0e8352.0 (the NorduGrid CA certificate) from the NorduGrid web site&lt;br /&gt;
&lt;br /&gt;
1.2. gpgsm --import 1f0e8352.0&lt;br /&gt;
&lt;br /&gt;
1.3. Make it trusted:&lt;br /&gt;
     gpgsm --list-keys 2&amp;gt;/dev/null | grep fingerprint | awk '{print $2 &amp;quot; S&amp;quot;}' | grep THE-FINGERPRINT-YOU-WANT &amp;gt;&amp;gt; .gnupg/trustlist.txt&lt;br /&gt;
&lt;br /&gt;
2. Add your own key and certificate, in this case from the cert+key.p12 file&lt;br /&gt;
&lt;br /&gt;
2.1. gpgsm --import cert+key.p12&lt;br /&gt;
     (gpgsm can import the PKCS#12 bundle directly and asks for its export&lt;br /&gt;
     password; note that extracting with &amp;quot;openssl pkcs12 ... -nokeys&amp;quot; would&lt;br /&gt;
     import only the certificate, not the private key needed for signing)&lt;br /&gt;
&lt;br /&gt;
2.2. Tell gpgsm not to use revocation lists (bad for security)&lt;br /&gt;
     echo disable-crl-checks &amp;gt;&amp;gt; .gnupg/gpgsm.conf&lt;br /&gt;
&lt;br /&gt;
3. Test&lt;br /&gt;
   gpgsm --detach-sign file &amp;gt; sign  # should ask for the passphrase and produce a detached signature file&lt;br /&gt;
&lt;br /&gt;
4. Use:&lt;br /&gt;
   C-uC-cC-s  then enter your email address (must match the email address in the cert) and the passphrase&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Signing in thunderbird ===&lt;br /&gt;
In Thunderbird: Options/Security/Digitally sign this message.&lt;br /&gt;
&lt;br /&gt;
If you do this for the first time and have not yet selected a certificate to sign with, Thunderbird will open the corresponding preferences [Account Settings/Security], where you can choose between your imported PKCS#12 certificates.&lt;br /&gt;
&lt;br /&gt;
If you have not imported any certificates yet, click [View Certificates] on the same preferences tab that opened; in the new window you can import the certificate.&lt;br /&gt;
&lt;br /&gt;
Afterwards you can choose this certificate to be used for signing and encryption for this e-mail account.&lt;br /&gt;
&lt;br /&gt;
Finally, check that the corresponding mail is actually signed when you send it.&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5147</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=5147"/>
		<updated>2013-04-30T14:07:48Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Getting access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore is run in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], Bioimage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG], [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable&lt;br /&gt;
long-term storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache]&lt;br /&gt;
storage system and is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc],&lt;br /&gt;
[http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax].&lt;br /&gt;
&lt;br /&gt;
Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues, ranging from a simple&lt;br /&gt;
crash of a storage element to the loss of an entire site, while still providing access to the stored data. To protect against silent data corruption the&lt;br /&gt;
dCache storage system checksums all stored data and periodically verifies the data using this checksum.&lt;br /&gt;
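&lt;br /&gt;
If you record a checksum of a file before uploading, you can later compare it against the value reported by your client tool (dCache commonly uses Adler32). A minimal sketch in Python; the chunked-read helper and the file name are illustrative only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import zlib&lt;br /&gt;
&lt;br /&gt;
def adler32_of_file(path, chunk_size=1024 * 1024):&lt;br /&gt;
    # Compute a running Adler32 over the file in chunks so that&lt;br /&gt;
    # large files are not read into memory at once.&lt;br /&gt;
    value = 1  # Adler32 initial value&lt;br /&gt;
    with open(path, 'rb') as f:&lt;br /&gt;
        for chunk in iter(lambda: f.read(chunk_size), b''):&lt;br /&gt;
            value = zlib.adler32(chunk, value)&lt;br /&gt;
    return format(value &amp;amp; 0xffffffff, '08x')&lt;br /&gt;
&lt;br /&gt;
print(adler32_of_file('mydata.tar'))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;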
&lt;br /&gt;
The system does NOT yet provide protection against user errors such as inadvertent file deletions.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of the distributed nature of dCache is the excellent aggregated transfer rates that are possible. This is achieved by bypassing any central node&lt;br /&gt;
and having transfers go directly to/from the storage elements when the protocol allows it.&lt;br /&gt;
The Swestore Nationally Accessible Storage system can achieve aggregated transfer rates&lt;br /&gt;
in excess of 100 Gigabit per second, but in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or, for a small number of files,&lt;br /&gt;
by the per-connection rate (typically max 1 Gbit/s per file/connection).&lt;br /&gt;
&lt;br /&gt;
==Access protocols==&lt;br /&gt;
; Currently supported protocols&lt;br /&gt;
: GridFTP - gsiftp://gsiftp.swestore.se/&lt;br /&gt;
: Storage Resource Manager - srm://srm.swegrid.se/&lt;br /&gt;
: Hypertext Transfer Protocol (read-only), Web Distributed Authoring and Versioning - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/&lt;br /&gt;
&lt;br /&gt;
; Protocols in evaluation/development&lt;br /&gt;
: NFS4.1, iRODS&lt;br /&gt;
&lt;br /&gt;
For authentication, eScience certificates are used, which provide a higher level of security than legacy username/password schemes.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage&lt;br /&gt;
: Please follow instructions [[Apply for storage on SweStore|here]]&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Exporting_a_client_certificate|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
; Transmit and prepare the certificate.&lt;br /&gt;
: In order to use the client certificate on SNIC resources for generating proxy certificates and using command line tools, the certificate needs to be [[Preparing_a_client_certificate|converted into PEM files]] on the target cluster if not already in that format.&lt;br /&gt;
&lt;br /&gt;
NOTE: If you have installed a Terena certificate in your browser and have ARC 3.x installed, there is no need to convert or export the certificate from the browser. The arcproxy command can generate a proxy certificate from the certificate stored in the Firefox credential store.&lt;br /&gt;
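&lt;br /&gt;
For reference, a typical conversion from PKCS#12 to the PEM files expected by the grid tools looks like this (a minimal sketch; cert.p12 is an example file name and openssl prompts for the relevant passwords):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Extract the client certificate and the private key separately,&lt;br /&gt;
# then restrict the key's permissions as the grid tools require.&lt;br /&gt;
openssl pkcs12 -in cert.p12 -clcerts -nokeys -out $HOME/.globus/usercert.pem&lt;br /&gt;
openssl pkcs12 -in cert.p12 -nocerts -out $HOME/.globus/userkey.pem&lt;br /&gt;
chmod 400 $HOME/.globus/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;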
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Interactive browsing and manipulation of single files&lt;br /&gt;
: SweStore is accessible in your web browser in two ways, as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. To browse private data you must first install your certificate in your browser (see above). Projects are organized under the &amp;lt;code&amp;gt;/snic&amp;lt;/code&amp;gt; directory as &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data interactively or with automation&lt;br /&gt;
There are several tools that are capable of using the protocols provided by SweStore national storage.&lt;br /&gt;
For interactive usage on SNIC clusters we recommend using the ARC tools which should be installed on all SNIC resources.&lt;br /&gt;
As an integration point for building scripts and automated systems we suggest using the curl program and library.&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]] (a minimal example follows this list).&lt;br /&gt;
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].&lt;br /&gt;
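&lt;br /&gt;
As a quick illustration of scripted WebDAV access with curl and a client certificate in PEM format (a minimal sketch; the project path and file names are examples):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List a project directory&lt;br /&gt;
curl --cert $HOME/.globus/usercert.pem --key $HOME/.globus/userkey.pem \&lt;br /&gt;
     https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/&lt;br /&gt;
&lt;br /&gt;
# Upload a file with HTTP PUT&lt;br /&gt;
curl --cert $HOME/.globus/usercert.pem --key $HOME/.globus/userkey.pem \&lt;br /&gt;
     -T output.dat https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/output.dat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;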
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
&lt;br /&gt;
If you have any issues using SweStore please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
There are a number of externally developed tools and utilities that can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the arc client (Only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore supported development).&lt;br /&gt;
* Transfer script, [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski / Philipp Schlatter&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
== Slides and more ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from seminar for Lund users on April 18th]&lt;br /&gt;
&lt;br /&gt;
= Center storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem the same way on all computational resources at a centre, and a unified structure and nomenclature across all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make usage more transparent for SNIC users, the following environment variables are available on all SNIC resources (a job-script sketch using them follows the list):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;br /&gt;
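&lt;br /&gt;
A minimal job-script sketch using these variables (the program and file names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Stage input to node-local scratch for best I/O performance.&lt;br /&gt;
cp &amp;quot;$SNIC_NOBACKUP/input.dat&amp;quot; &amp;quot;$SNIC_TMP/&amp;quot;&lt;br /&gt;
cd &amp;quot;$SNIC_TMP&amp;quot;&lt;br /&gt;
&lt;br /&gt;
./my_program input.dat &amp;gt; output.dat&lt;br /&gt;
&lt;br /&gt;
# Copy the results back to project storage before the job ends&lt;br /&gt;
# (node-local scratch is typically cleaned after the job).&lt;br /&gt;
cp output.dat &amp;quot;$SNIC_NOBACKUP/&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>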
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4900</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4900"/>
		<updated>2013-04-19T07:41:14Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore is run in collaboration with [http://www.ecds.se/ ECDS], [http://snd.gu.se/ SND], Bioimage Sweden, [http://www.bils.se/ BILS], [http://www.uppnex.uu.se/ UPPNEX], [http://wlcg.web.cern.ch/ WLCG], [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The Swestore Nationally Accessible Storage, commonly called just Swestore, is a robust, flexible and expandable&lt;br /&gt;
long-term storage system aimed at storing large amounts of data produced by various Swedish research projects. It is based on the [http://www.dcache.org dCache]&lt;br /&gt;
storage system and is distributed across the SNIC centres [http://www.c3se.chalmers.se/ C3SE], [http://www.hpc2n.umu.se/ HPC2N], [http://www.lunarc.lu.se/ Lunarc],&lt;br /&gt;
[http://www.nsc.liu.se/ NSC], [http://www.pdc.kth.se PDC] and [http://www.uppmax.uu.se Uppmax].&lt;br /&gt;
&lt;br /&gt;
Data is stored in two copies, with each copy at a different SNIC centre. This enables the system to cope with a multitude of issues, ranging from a simple&lt;br /&gt;
crash of a storage element to the loss of an entire site, while still providing access to the stored data. To protect against silent data corruption the&lt;br /&gt;
dCache storage system checksums all stored data and periodically verifies the data using this checksum.&lt;br /&gt;
&lt;br /&gt;
The system does NOT yet provide protection against user errors such as inadvertent file deletions.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of the distributed nature of dCache is the excellent aggregated transfer rates that are possible. This is achieved by bypassing any central node&lt;br /&gt;
and having transfers go directly to/from the storage elements when the protocol allows it.&lt;br /&gt;
The Swestore Nationally Accessible Storage system can achieve aggregated transfer rates&lt;br /&gt;
in excess of 100 Gigabit per second, but in practice transfers are limited by the connectivity of each university (usually 10 Gbit/s) or, for a small number of files,&lt;br /&gt;
by the per-connection rate (typically max 1 Gbit/s per file/connection).&lt;br /&gt;
&lt;br /&gt;
==Access protocols==&lt;br /&gt;
; Currently supported protocols&lt;br /&gt;
: GridFTP - gsiftp://gsiftp.swestore.se/&lt;br /&gt;
: Storage Resource Manager - srm://srm.swegrid.se/&lt;br /&gt;
: Hypertext Transfer Protocol (read-only), Web Distributed Authoring and Versioning - http://webdav.swestore.se/ (unauthenticated), https://webdav.swestore.se/&lt;br /&gt;
&lt;br /&gt;
; Protocols in evaluation/development&lt;br /&gt;
: NFS4.1, iRODS&lt;br /&gt;
&lt;br /&gt;
For most of the access protocols, authentication is not by username/password but by X.509 client certificates, typically acquired from TCS eScience.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage&lt;br /&gt;
: Please follow instructions [[Apply for storage on SweStore|here]]&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Exporting_a_client_certificate|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
; Transmit and prepare the certificate.&lt;br /&gt;
: In order to use the client certificate on SNIC resources for generating proxy certificates and using command line tools, the certificate needs to be [[Preparing_a_client_certificate|converted into PEM files]] on the target cluster if not already in that format.&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Interactive browsing and manipulation of single files&lt;br /&gt;
: SweStore is accessible in your web browser in two ways, as a directory index interface at https://webdav.swestore.se/ and with an interactive file manager at https://webdav.swestore.se/browser/. To browse private data you must first install your certificate in your browser (see above). Projects are organized under the &amp;lt;code&amp;gt;/snic&amp;lt;/code&amp;gt; directory as &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://webdav.swestore.se/snic/YOUR_PROJECT_NAME/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data interactively or with automation&lt;br /&gt;
There are several tools that are capable of using the protocols provided by SweStore national storage.&lt;br /&gt;
For interactive usage on SNIC clusters we recommend using the ARC tools which should be installed on all SNIC resources.&lt;br /&gt;
As an integration point for building scripts and automated systems we suggest using the curl program and library.&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with cURL]].&lt;br /&gt;
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].&lt;br /&gt;
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
* [[Accessing SweStore national storage with the ARC client]]&lt;br /&gt;
&amp;lt;!-- * [[Mounting SweStore national storage via WebDAV|Mounting SweStore national storage via WebDAV (Not recommended at the moment)]] --&amp;gt;&lt;br /&gt;
If you have any issues using SweStore please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
There are a number of externally developed tools and utilities that can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the arc client (Only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore supported development).&lt;br /&gt;
* Transfer script, [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski / Philipp Schlatter&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
== Slides and more ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore/Lund_Seminar_Apr18 Slides and material from seminar for Lund users on April 18th]&lt;br /&gt;
&lt;br /&gt;
= Center storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem the same way on all computational resources at a centre, and a unified structure and nomenclature across all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make usage more transparent for SNIC users, the following environment variables are available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4897</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4897"/>
		<updated>2013-04-18T21:37:29Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
[http://youtu.be/I-tNFvSIaRU Video - Generating a Terena certificate]&lt;br /&gt;
&lt;br /&gt;
[http://youtu.be/E9D24PZDK_k Video – Exporting Terena Certificate - Part 1]&lt;br /&gt;
&lt;br /&gt;
[http://youtu.be/jDq774WeF_Y Video – Exporting Terena Certificate - Part 2]&lt;br /&gt;
&lt;br /&gt;
[http://arc-gui-clients.sourceforge.net/ ARC Storage Explorer]&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer scripts ==&lt;br /&gt;
&lt;br /&gt;
The following script is used in conjunction with the proxy_use script already installed on Platon and Alarik.&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_upload [hostname] [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates a proxy certificate on your local machine and transfers it to the temp directory on the remote resource under a unique filename.&lt;br /&gt;
&lt;br /&gt;
On the remote machine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_use&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command looks in /tmp for proxy certificates uploaded for your username and renames the newest one to the standard ARC proxy filename.&lt;br /&gt;
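&lt;br /&gt;
For example, to upload a proxy from your local machine and activate it on the remote resource (host and user names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
local$ proxy_upload alarik.lunarc.lu.se kalle&lt;br /&gt;
local$ ssh kalle@alarik.lunarc.lu.se&lt;br /&gt;
alarik$ proxy_use&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;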
&lt;br /&gt;
== proxy_upload (local machine) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
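# proxy_upload: generate a proxy certificate locally and copy it&lt;br /&gt;
# to a uniquely named file in /tmp on the remote host.&lt;br /&gt;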
&lt;br /&gt;
if [ $# -ne 2 ]&lt;br /&gt;
then&lt;br /&gt;
  echo &amp;quot;Usage: `basename $0` hostname username&amp;quot;&lt;br /&gt;
  exit 1  # bad arguments&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
proxyPath=/tmp/x509up_u$UID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Generating proxy certificate.&amp;quot;&lt;br /&gt;
arcproxy --proxy=$proxyPath&lt;br /&gt;
echo&lt;br /&gt;
&lt;br /&gt;
if [ -e $proxyPath ] ; then&lt;br /&gt;
        echo &amp;quot;Found generated proxy certificate : $proxyPath&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
        echo &amp;quot;Could not find any proxy certificate.&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
uuid=`uuidgen`&lt;br /&gt;
&lt;br /&gt;
remoteProxyPath=/tmp/x509_up_$2_$uuid&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;Uploading proxy certificate to $1.&amp;quot;&lt;br /&gt;
scp -p -q $proxyPath $2@$1:$remoteProxyPath&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;To use the uploaded proxy on $1, issue the&amp;quot;&lt;br /&gt;
echo &amp;quot;following command:&amp;quot;&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;proxy_use&amp;quot;&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== proxy_use (on remote machine) ==&lt;br /&gt;
&lt;br /&gt;
Available on Platon and Alarik&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python&lt;br /&gt;
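# proxy_use: find proxy files uploaded by proxy_upload in /tmp,&lt;br /&gt;
# promote the newest one to the standard ARC proxy location and&lt;br /&gt;
# remove any older uploads.&lt;br /&gt;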
&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
tempDir = &amp;quot;/tmp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def findProxyFiles():&lt;br /&gt;
        userName = os.environ[&amp;quot;LOGNAME&amp;quot;]&lt;br /&gt;
        allFiles = os.listdir(tempDir)&lt;br /&gt;
        proxyFiles = []&lt;br /&gt;
&lt;br /&gt;
        for dirEntry in allFiles:&lt;br /&gt;
                fullPath = os.path.join(tempDir, dirEntry)&lt;br /&gt;
                if os.path.isfile(fullPath):&lt;br /&gt;
                        if fullPath.find(&amp;quot;x509_up_%s&amp;quot; % userName)!=-1:&lt;br /&gt;
                                proxyFiles.append(fullPath)&lt;br /&gt;
&lt;br /&gt;
        return proxyFiles&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
        stdProxyFilename = os.path.join(tempDir, &amp;quot;x509up_u%s&amp;quot; % os.getuid())&lt;br /&gt;
        proxyCertExists = False&lt;br /&gt;
&lt;br /&gt;
        if os.path.isfile(stdProxyFilename):&lt;br /&gt;
                print(&amp;quot;Proxy certificate %s exists.&amp;quot; % stdProxyFilename)&lt;br /&gt;
                proxyCertExists = True&lt;br /&gt;
        else:&lt;br /&gt;
                print(&amp;quot;No existing proxy certificate %s found. &amp;quot; % stdProxyFilename)&lt;br /&gt;
&lt;br /&gt;
        proxyFiles = findProxyFiles()&lt;br /&gt;
        proxyDict = {}&lt;br /&gt;
&lt;br /&gt;
        for proxyFilename in proxyFiles:&lt;br /&gt;
                info = os.stat(proxyFilename)&lt;br /&gt;
                proxyDict[info.st_ctime] = proxyFilename&lt;br /&gt;
&lt;br /&gt;
        sortedProxyKeys = sorted(proxyDict.keys(), reverse=True)  # newest first&lt;br /&gt;
&lt;br /&gt;
        if proxyCertExists:&lt;br /&gt;
                proxyCount = 1&lt;br /&gt;
        else:&lt;br /&gt;
                proxyCount = 0&lt;br /&gt;
&lt;br /&gt;
        for timeStamp in sortedProxyKeys:&lt;br /&gt;
                if (proxyCount == 0):&lt;br /&gt;
                        # Newest upload: promote it to the standard proxy location.&lt;br /&gt;
                        os.rename(proxyDict[timeStamp], stdProxyFilename)&lt;br /&gt;
                        print(&amp;quot;Created %s from uploaded proxy %s.&amp;quot; % (stdProxyFilename, proxyDict[timeStamp]))&lt;br /&gt;
                else:&lt;br /&gt;
                        print(&amp;quot;Removing old uploaded proxy %s.&amp;quot; % proxyDict[timeStamp])&lt;br /&gt;
                        os.remove(proxyDict[timeStamp])&lt;br /&gt;
                proxyCount += 1&lt;br /&gt;
&lt;br /&gt;
        if len(proxyFiles) == 0:&lt;br /&gt;
                print(&amp;quot;No uploaded proxy files found. Please upload proxy files using proxy_upload.&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4896</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4896"/>
		<updated>2013-04-18T21:35:44Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* proxy_use */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer scripts ==&lt;br /&gt;
&lt;br /&gt;
The following script is used in conjunction with the proxy_use script already installed on Platon and Alarik.&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_upload [hostname] [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates a proxy certificate on your local machine and transfers it to the temp directory on the remote resource under a unique filename.&lt;br /&gt;
&lt;br /&gt;
On the remote machine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_use&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command looks in /tmp for proxy certificates uploaded for your username and renames the newest one to the standard ARC proxy filename.&lt;br /&gt;
&lt;br /&gt;
== proxy_upload (local machine) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
if [ $# -ne 2 ]&lt;br /&gt;
then&lt;br /&gt;
  echo &amp;quot;Usage: `basename $0` hostname username&amp;quot;&lt;br /&gt;
  exit 1  # bad arguments&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
proxyPath=/tmp/x509up_u$UID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Generating proxy certificate.&amp;quot;&lt;br /&gt;
arcproxy --proxy=$proxyPath&lt;br /&gt;
echo&lt;br /&gt;
&lt;br /&gt;
if [ -e $proxyPath ] ; then&lt;br /&gt;
        echo &amp;quot;Found generated proxy certificate : $proxyPath&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
        echo &amp;quot;Could not find any proxy certificate.&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
uuid=`uuidgen`&lt;br /&gt;
&lt;br /&gt;
remoteProxyPath=/tmp/x509_up_$2_$uuid&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;Uploading proxy certificate to $1.&amp;quot;&lt;br /&gt;
scp -p -q $proxyPath $2@$1:$remoteProxyPath&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;To use the uploaded proxy on $1, issue the&amp;quot;&lt;br /&gt;
echo &amp;quot;following command:&amp;quot;&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;proxy_use&amp;quot;&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== proxy_use (on remote machine) ==&lt;br /&gt;
&lt;br /&gt;
Available on Platon and Alarik&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python&lt;br /&gt;
&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
tempDir = &amp;quot;/tmp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def findProxyFiles():&lt;br /&gt;
        userName = os.environ[&amp;quot;LOGNAME&amp;quot;]&lt;br /&gt;
        allFiles = os.listdir(tempDir)&lt;br /&gt;
        proxyFiles = []&lt;br /&gt;
&lt;br /&gt;
        for dirEntry in allFiles:&lt;br /&gt;
                fullPath = os.path.join(tempDir, dirEntry)&lt;br /&gt;
                if os.path.isfile(fullPath):&lt;br /&gt;
                        if fullPath.find(&amp;quot;x509_up_%s&amp;quot; % userName)!=-1:&lt;br /&gt;
                                proxyFiles.append(fullPath)&lt;br /&gt;
&lt;br /&gt;
        return proxyFiles&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
        stdProxyFilename = os.path.join(tempDir, &amp;quot;x509up_u%s&amp;quot; % os.getuid())&lt;br /&gt;
        proxyCertExists = False&lt;br /&gt;
&lt;br /&gt;
        if os.path.isfile(stdProxyFilename):&lt;br /&gt;
                print(&amp;quot;Proxy certificate %s exists.&amp;quot; % stdProxyFilename)&lt;br /&gt;
                proxyCertExists = True&lt;br /&gt;
        else:&lt;br /&gt;
                print(&amp;quot;No existing proxy certificate %s found. &amp;quot; % stdProxyFilename)&lt;br /&gt;
&lt;br /&gt;
        proxyFiles = findProxyFiles()&lt;br /&gt;
        proxyDict = {}&lt;br /&gt;
&lt;br /&gt;
        for proxyFilename in proxyFiles:&lt;br /&gt;
                info = os.stat(proxyFilename)&lt;br /&gt;
                proxyDict[info.st_ctime] = proxyFilename&lt;br /&gt;
&lt;br /&gt;
        sortedProxyKeys = sorted(proxyDict.keys(), reverse=True)  # newest first&lt;br /&gt;
&lt;br /&gt;
        if proxyCertExists:&lt;br /&gt;
                proxyCount = 1&lt;br /&gt;
        else:&lt;br /&gt;
                proxyCount = 0&lt;br /&gt;
&lt;br /&gt;
        for timeStamp in sortedProxyKeys:&lt;br /&gt;
                if (proxyCount == 0):&lt;br /&gt;
                        # Newest upload: promote it to the standard proxy location.&lt;br /&gt;
                        os.rename(proxyDict[timeStamp], stdProxyFilename)&lt;br /&gt;
                        print(&amp;quot;Created %s from uploaded proxy %s.&amp;quot; % (stdProxyFilename, proxyDict[timeStamp]))&lt;br /&gt;
                else:&lt;br /&gt;
                        print(&amp;quot;Removing old uploaded proxy %s.&amp;quot; % proxyDict[timeStamp])&lt;br /&gt;
                        os.remove(proxyDict[timeStamp])&lt;br /&gt;
                proxyCount += 1&lt;br /&gt;
&lt;br /&gt;
        if len(proxyFiles) == 0:&lt;br /&gt;
                print(&amp;quot;No uploaded proxy files found. Please upload proxy files using proxy_upload.&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4895</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4895"/>
		<updated>2013-04-18T21:35:20Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* proxy_upload */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer scripts ==&lt;br /&gt;
&lt;br /&gt;
The following script is used in conjunction with the proxy_use script already installed on Platon and Alarik.&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_upload [hostname] [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates a proxy certificate on your local machine and transfers it to the temp directory on the remote resource under a unique filename.&lt;br /&gt;
&lt;br /&gt;
On the remote machine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_use&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command looks in /tmp for proxy certificates uploaded for your username and renames the newest one to the standard ARC proxy filename.&lt;br /&gt;
&lt;br /&gt;
== proxy_upload (local machine) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
if [ $# -ne 2 ]&lt;br /&gt;
then&lt;br /&gt;
  echo &amp;quot;Usage: `basename $0` hostname username&amp;quot;&lt;br /&gt;
  exit 1  # bad arguments&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
proxyPath=/tmp/x509up_u$UID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Generating proxy certificate.&amp;quot;&lt;br /&gt;
arcproxy --proxy=$proxyPath&lt;br /&gt;
echo&lt;br /&gt;
&lt;br /&gt;
if [ -e $proxyPath ] ; then&lt;br /&gt;
        echo &amp;quot;Found generated proxy certificate : $proxyPath&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
        echo &amp;quot;Could not find any proxy certificate.&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
uuid=`uuidgen`&lt;br /&gt;
&lt;br /&gt;
remoteProxyPath=/tmp/x509_up_$2_$uuid&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;Uploading proxy certificate to $1.&amp;quot;&lt;br /&gt;
scp -p -q $proxyPath $2@$1:$remoteProxyPath&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;To use the uploaded proxy on $1, issue the&amp;quot;&lt;br /&gt;
echo &amp;quot;following command:&amp;quot;&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;proxy_use&amp;quot;&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== proxy_use ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python&lt;br /&gt;
&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
tempDir = &amp;quot;/tmp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def findProxyFiles():&lt;br /&gt;
        userName = os.environ[&amp;quot;LOGNAME&amp;quot;]&lt;br /&gt;
        allFiles = os.listdir(tempDir)&lt;br /&gt;
        proxyFiles = []&lt;br /&gt;
&lt;br /&gt;
        for dirEntry in allFiles:&lt;br /&gt;
                fullPath = os.path.join(tempDir, dirEntry)&lt;br /&gt;
                if os.path.isfile(fullPath):&lt;br /&gt;
                        if fullPath.find(&amp;quot;x509_up_%s&amp;quot; % userName)!=-1:&lt;br /&gt;
                                proxyFiles.append(fullPath)&lt;br /&gt;
&lt;br /&gt;
        return proxyFiles&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
        stdProxyFilename = os.path.join(tempDir, &amp;quot;x509up_u%s&amp;quot; % os.getuid())&lt;br /&gt;
        proxyCertExists = False&lt;br /&gt;
&lt;br /&gt;
        if os.path.isfile(stdProxyFilename):&lt;br /&gt;
                print(&amp;quot;Proxy certificate %s exists.&amp;quot; % stdProxyFilename)&lt;br /&gt;
                proxyCertExists = True&lt;br /&gt;
        else:&lt;br /&gt;
                print(&amp;quot;No existing proxy certificate %s found. &amp;quot; % stdProxyFilename)&lt;br /&gt;
&lt;br /&gt;
        proxyFiles = findProxyFiles()&lt;br /&gt;
        proxyDict = {}&lt;br /&gt;
&lt;br /&gt;
        for proxyFilename in proxyFiles:&lt;br /&gt;
                info = os.stat(proxyFilename)&lt;br /&gt;
                proxyDict[info.st_ctime] = proxyFilename&lt;br /&gt;
&lt;br /&gt;
        sortedProxyKeys = sorted(proxyDict.keys(), reverse=True)  # newest first&lt;br /&gt;
&lt;br /&gt;
        if proxyCertExists:&lt;br /&gt;
                proxyCount = 1&lt;br /&gt;
        else:&lt;br /&gt;
                proxyCount = 0&lt;br /&gt;
&lt;br /&gt;
        for timeStamp in sortedProxyKeys:&lt;br /&gt;
                if (proxyCount == 0):&lt;br /&gt;
                        # Newest upload: promote it to the standard proxy location.&lt;br /&gt;
                        os.rename(proxyDict[timeStamp], stdProxyFilename)&lt;br /&gt;
                        print(&amp;quot;Created %s from uploaded proxy %s.&amp;quot; % (stdProxyFilename, proxyDict[timeStamp]))&lt;br /&gt;
                else:&lt;br /&gt;
                        print(&amp;quot;Removing old uploaded proxy %s.&amp;quot; % proxyDict[timeStamp])&lt;br /&gt;
                        os.remove(proxyDict[timeStamp])&lt;br /&gt;
                proxyCount += 1&lt;br /&gt;
&lt;br /&gt;
        if len(proxyFiles) == 0:&lt;br /&gt;
                print(&amp;quot;No uploaded proxy files found. Please upload proxy files using proxy_upload.&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4894</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4894"/>
		<updated>2013-04-18T21:34:49Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* proxy_upload script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer scripts ==&lt;br /&gt;
&lt;br /&gt;
The following script is used in conjunction with the proxy_use script already installed on Platon and Alarik.&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_upload [hostname] [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates a proxy certificate on your local machine and transfers it to the temp directory on the remote resource under a unique filename.&lt;br /&gt;
&lt;br /&gt;
On the remote machine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
proxy_use&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command looks in /tmp for proxy certificates uploaded for your username and renames the newest one to the standard ARC proxy filename.&lt;br /&gt;
&lt;br /&gt;
== proxy_upload ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
if [ $# -ne 2 ]&lt;br /&gt;
then&lt;br /&gt;
  echo &amp;quot;Usage: `basename $0` hostname username&amp;quot;&lt;br /&gt;
  exit 1  # bad arguments&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
proxyPath=/tmp/x509up_u$UID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Generating proxy certificate.&amp;quot;&lt;br /&gt;
arcproxy --proxy=$proxyPath&lt;br /&gt;
echo&lt;br /&gt;
&lt;br /&gt;
if [ -e $proxyPath ] ; then&lt;br /&gt;
        echo &amp;quot;Found generated proxy certificate : $proxyPath&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
        echo &amp;quot;Could not find any proxy certificate.&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
uuid=`uuidgen`&lt;br /&gt;
&lt;br /&gt;
remoteProxyPath=/tmp/x509_up_$2_$uuid&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;Uploading proxy certificate to $1.&amp;quot;&lt;br /&gt;
scp -p -q $proxyPath $2@$1:$remoteProxyPath&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;To use the uploaded proxy on $1, issue the&amp;quot;&lt;br /&gt;
echo &amp;quot;following command:&amp;quot;&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;proxy_use&amp;quot;&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== proxy_use ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python&lt;br /&gt;
&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
tempDir = &amp;quot;/tmp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def findProxyFiles():&lt;br /&gt;
        userName = os.environ[&amp;quot;LOGNAME&amp;quot;]&lt;br /&gt;
        allFiles = os.listdir(tempDir)&lt;br /&gt;
        proxyFiles = []&lt;br /&gt;
&lt;br /&gt;
        for dirEntry in allFiles:&lt;br /&gt;
                fullPath = os.path.join(tempDir, dirEntry)&lt;br /&gt;
                if os.path.isfile(fullPath):&lt;br /&gt;
                        if fullPath.find(&amp;quot;x509_up_%s&amp;quot; % userName)!=-1:&lt;br /&gt;
                                proxyFiles.append(fullPath)&lt;br /&gt;
&lt;br /&gt;
        return proxyFiles&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
        stdProxyFilename = os.path.join(tempDir, &amp;quot;x509up_u%s&amp;quot; % os.getuid())&lt;br /&gt;
        proxyCertExists = False&lt;br /&gt;
&lt;br /&gt;
        if os.path.isfile(stdProxyFilename):&lt;br /&gt;
                print(&amp;quot;Proxy certificate %s exists.&amp;quot; % stdProxyFilename)&lt;br /&gt;
                proxyCertExists = True&lt;br /&gt;
        else:&lt;br /&gt;
                print(&amp;quot;No existing proxy certificate %s found. &amp;quot; % stdProxyFilename)&lt;br /&gt;
&lt;br /&gt;
        proxyFiles = findProxyFiles()&lt;br /&gt;
        proxyDict = {}&lt;br /&gt;
&lt;br /&gt;
        for proxyFilename in proxyFiles:&lt;br /&gt;
                info = os.stat(proxyFilename)&lt;br /&gt;
                proxyDict[info.st_ctime] = proxyFilename&lt;br /&gt;
&lt;br /&gt;
        sortedProxyKeys = sorted(proxyDict.keys(), reverse=True)  # newest first&lt;br /&gt;
&lt;br /&gt;
        if proxyCertExists:&lt;br /&gt;
                proxyCount = 1&lt;br /&gt;
        else:&lt;br /&gt;
                proxyCount = 0&lt;br /&gt;
&lt;br /&gt;
        for timeStamp in sortedProxyKeys:&lt;br /&gt;
                if (proxyCount == 0):&lt;br /&gt;
                        # Newest upload: promote it to the standard proxy location.&lt;br /&gt;
                        os.rename(proxyDict[timeStamp], stdProxyFilename)&lt;br /&gt;
                        print(&amp;quot;Created %s from uploaded proxy %s.&amp;quot; % (stdProxyFilename, proxyDict[timeStamp]))&lt;br /&gt;
                else:&lt;br /&gt;
                        print(&amp;quot;Removing old uploaded proxy %s.&amp;quot; % proxyDict[timeStamp])&lt;br /&gt;
                        os.remove(proxyDict[timeStamp])&lt;br /&gt;
                proxyCount += 1&lt;br /&gt;
&lt;br /&gt;
        if len(proxyFiles) == 0:&lt;br /&gt;
                print(&amp;quot;No uploaded proxy files found. Please upload proxy files using proxy_upload.&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4893</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4893"/>
		<updated>2013-04-18T21:29:21Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* proxy_upload script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_upload script ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
if [ $# -ne 2 ]&lt;br /&gt;
then&lt;br /&gt;
  echo &amp;quot;Usage: `basename $0` hostname username&amp;quot;&lt;br /&gt;
  exit 1  # bad arguments&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
proxyPath=/tmp/x509up_u$UID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Generating proxy certificate.&amp;quot;&lt;br /&gt;
arcproxy --proxy=$proxyPath&lt;br /&gt;
echo&lt;br /&gt;
&lt;br /&gt;
if [ -e $proxyPath ] ; then&lt;br /&gt;
        echo &amp;quot;Found generated proxy certificate : $proxyPath&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
        echo &amp;quot;Could not find any proxy certificate.&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
uuid=`uuidgen`&lt;br /&gt;
&lt;br /&gt;
remoteProxyPath=/tmp/x509_up_$2_$uuid&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;Uploading proxy certificate to $1.&amp;quot;&lt;br /&gt;
scp -p -q $proxyPath $2@$1:$remoteProxyPath&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;To use the uploaded proxy on $1, issue the&amp;quot;&lt;br /&gt;
echo &amp;quot;following command:&amp;quot;&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;proxy_use&amp;quot;&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4892</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4892"/>
		<updated>2013-04-18T21:29:01Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* proxy_transfer package */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_upload script ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
if [ $# -ne 2 ]&lt;br /&gt;
then&lt;br /&gt;
  echo &amp;quot;Usage: `basename $0` hostname username&amp;quot;&lt;br /&gt;
  exit 1  # bad arguments&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
proxyPath=/tmp/x509up_u$UID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Generating proxy certificate.&amp;quot;&lt;br /&gt;
arcproxy --proxy=$proxyPath&lt;br /&gt;
echo&lt;br /&gt;
&lt;br /&gt;
if [ -e $proxyPath ] ; then&lt;br /&gt;
        echo &amp;quot;Found generated proxy certificate : $proxyPath&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
        echo &amp;quot;Could not find any proxy certificate.&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
uuid=`uuidgen`&lt;br /&gt;
&lt;br /&gt;
remoteProxyPath=/tmp/x509_up_$2_$uuid&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;Uploading proxy certificate to $1.&amp;quot;&lt;br /&gt;
scp -p -q $proxyPath $2@$1:$remoteProxyPath&lt;br /&gt;
&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo &amp;quot;To use the uploaded proxy on $1, issue the&amp;quot;&lt;br /&gt;
echo &amp;quot;following command:&amp;quot;&lt;br /&gt;
echo&lt;br /&gt;
echo &amp;quot;proxy_use&amp;quot;&lt;br /&gt;
echo &amp;quot;-------------------------------------------------------------&amp;quot;&lt;br /&gt;
echo&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4891</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4891"/>
		<updated>2013-04-18T21:25:29Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer package ==&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4890</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4890"/>
		<updated>2013-04-18T21:25:17Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://docs.snic.se/wiki/Swestore General information on SweStore]&lt;br /&gt;
[http://docs.snic.se/wiki/Apply_for_storage_on_SweStore Applying for storage]&lt;br /&gt;
[http://docs.snic.se/wiki/Grid_certificates#Requesting_a_certificate Applying for certificate]&lt;br /&gt;
[http://download.nordugrid.org/repos-13.02.html Installing NorduGrid Client]&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer package ==&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4889</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4889"/>
		<updated>2013-04-18T21:23:09Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Slides */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Swestore_slides_sem_apr15.pdf]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer package ==&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4888</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4888"/>
		<updated>2013-04-18T21:21:21Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
[[File:Example.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer package ==&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=File:Swestore_slides_sem_apr15.pdf&amp;diff=4887</id>
		<title>File:Swestore slides sem apr15.pdf</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=File:Swestore_slides_sem_apr15.pdf&amp;diff=4887"/>
		<updated>2013-04-18T21:20:00Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4886</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4886"/>
		<updated>2013-04-18T21:15:04Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Slides ==&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer package ==&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4885</id>
		<title>Swestore/Lund Seminar Apr18</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore/Lund_Seminar_Apr18&amp;diff=4885"/>
		<updated>2013-04-18T21:14:49Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): Created page with &amp;quot;= Seminar: Using the national storage system (SweStore) =  == Slides ==  == Links ==  == proxy_transfer package ==&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Seminar: Using the national storage system (SweStore) =&lt;br /&gt;
&lt;br /&gt;
== Slides ==&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
== proxy_transfer package ==&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4275</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4275"/>
		<updated>2012-07-30T12:41:41Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Download and upload data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics and bioimaging, the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore collaborates with [http://www.ecds.se ECDS], [http://snd.gu.se SND], Bioimage Sweden, [http://www.bils.se BILS], [http://www.uppnex.uu.se UPPNEX], [http://lcg.web.cern.ch/lcg/public/ WLCG] and [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The aim of the nationally accessible storage is to build a robust, flexible and expandable system that can&lt;br /&gt;
be used in most cases where access to large scale storage is needed. To the user it should appear as a single large system,&lt;br /&gt;
while it is desirable that some parts of the system are distributed across all SNIC centres to benefit from the advantages&lt;br /&gt;
of, among other things, locality and cache effects. The system is intended as a versatile long-term storage system.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage.&lt;br /&gt;
: Please follow the instructions [[Apply for storage on SweStore|here]].&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Requesting_a_grid_certificate_using_the_Terena_eScience_Portal#Exporting Terena certificate for use with Grid tools|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Browse and download data&lt;br /&gt;
: SweStore is accessible from your web browser at http://webdav.swegrid.se/. To browse private data you must first install your certificate in your browser (see above). Your data is available at &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;http://webdav.swegrid.se/snic/YOUR_PROJECT_NAME&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with the cURL|Accessing SweStore national storage with cURL]]; a minimal example is shown below.&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
: Use globus-url-copy. Please see the instructions for [[Accessing SweStore national storage with globus-url-copy]].&lt;br /&gt;
&lt;br /&gt;
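A minimal cURL sketch, assuming a valid proxy certificate at &amp;lt;code&amp;gt;/tmp/x509up_u1234&amp;lt;/code&amp;gt; (the project and file names are placeholders; see the linked guides for details):&lt;br /&gt;
&lt;br /&gt;
 curl --location --capath /etc/grid-security/certificates --cert /tmp/x509up_u1234 -O https://webdav.swegrid.se/snic/YOUR_PROJECT_NAME/myfile&lt;br /&gt;
&lt;br /&gt;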
== Examples of storage projects ==&lt;br /&gt;
Below are some examples of projects that are using SweStore today.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; style=&amp;quot;text-align:left; border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000&amp;quot; class=&amp;quot;wikitable sortable&amp;quot;  valign=top&lt;br /&gt;
!Allocation name&lt;br /&gt;
!Size in TB&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Project full name&lt;br /&gt;
|-&lt;br /&gt;
|alice&lt;br /&gt;
|400&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|uppnex&lt;br /&gt;
|140&lt;br /&gt;
|[https://www.uppnex.uu.se UPPmax NExt Generation Sequencing Cluster &amp;amp; Storage]&lt;br /&gt;
|-&lt;br /&gt;
|brain_protein_atlas&lt;br /&gt;
|10&lt;br /&gt;
|Mouse brain protein atlas project&lt;br /&gt;
|-&lt;br /&gt;
| scims2lab&lt;br /&gt;
|20&lt;br /&gt;
| Identification of novel gene models by matching mass spectrometry data against 6-frame translations of the human genome&lt;br /&gt;
|-&lt;br /&gt;
|subatom&lt;br /&gt;
|&lt;br /&gt;
|Low-energy nuclear theory and experiment&lt;br /&gt;
|-&lt;br /&gt;
|genomics-gu&lt;br /&gt;
|10&lt;br /&gt;
|Genomics Core Facility, Sahlgrenska Academy at the University of Gothenburg&lt;br /&gt;
|-&lt;br /&gt;
|Chemo&lt;br /&gt;
|5&lt;br /&gt;
|Genetic interaction networks in human disease&lt;br /&gt;
|-&lt;br /&gt;
|cesm1_holocene&lt;br /&gt;
|30&lt;br /&gt;
|Arctic sea ice in warm climates&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [[SweStore introduction]]&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
* [[Accessing SweStore national storage with the ARC client]]&lt;br /&gt;
&amp;lt;!-- * [[Mounting SweStore national storage via WebDAV|Mounting SweStore national storage via WebDAV (Not recommended at the moment)]] --&amp;gt;&lt;br /&gt;
If you have any issues using SweStore, please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
There are a number of externally developed tools and utilities that can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the ARC client (only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore-supported development).&lt;br /&gt;
* Transfer script [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski and Philipp Schlatter.&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
= Centre storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;br /&gt;
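&lt;br /&gt;
A minimal sketch of how these variables might fit together in a batch job (the solver and file names are placeholders):&lt;br /&gt;
&lt;br /&gt;
 # stage input on fast node-local disk, run there, then save the result&lt;br /&gt;
 cp $SNIC_NOBACKUP/input.dat $SNIC_TMP&lt;br /&gt;
 cd $SNIC_TMP&lt;br /&gt;
 ./my_solver input.dat &gt; output.dat&lt;br /&gt;
 cp output.dat $SNIC_NOBACKUP/&lt;/div&gt;</summary>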
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Accessing_Swestore_with_globus-url-copy&amp;diff=4274</id>
		<title>Accessing Swestore with globus-url-copy</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Accessing_Swestore_with_globus-url-copy&amp;diff=4274"/>
		<updated>2012-07-30T12:41:09Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Copying a single file to a local file:&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy gsiftp://srm.swegrid.se/snic/myproject/myfile file:///home/myname/myfile&lt;br /&gt;
&lt;br /&gt;
Copying a single file to another directory on SweStore:&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy gsiftp://srm.swegrid.se/snic/myproject/myfile gsiftp://srm.swegrid.se/snic/myproject/mydir/&lt;br /&gt;
&lt;br /&gt;
Copying a directory recursively to a local directory (verbose -v):&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy -v -cd -r gsiftp://srm.swegrid.se/snic/myproject/test2/ file:///home/myname/test2/&lt;br /&gt;
&lt;br /&gt;
Listing files in a directory:&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy -list gsiftp://srm.swegrid.se/snic/myproject/&lt;br /&gt;
&lt;br /&gt;
Note: the trailing slashes above are required for directories.&lt;br /&gt;
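&lt;br /&gt;
These commands assume a valid grid proxy; with the ARC client installed, one way to create one is:&lt;br /&gt;
&lt;br /&gt;
 arcproxy&lt;/div&gt;</summary>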
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Accessing_Swestore_with_globus-url-copy&amp;diff=4273</id>
		<title>Accessing Swestore with globus-url-copy</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Accessing_Swestore_with_globus-url-copy&amp;diff=4273"/>
		<updated>2012-07-30T12:39:04Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): Created page with &amp;quot;Copying a single file to a local file:  globus-url-copy gsiftp://srm.swegrid.se/ops/jonas/arc-gui-clients-0.2.1.1.tar.gz file:///home/jonas/test.tar.gz  Copying a single file to ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Copying a single file to a local file:&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy gsiftp://srm.swegrid.se/ops/jonas/arc-gui-clients-0.2.1.1.tar.gz file:///home/jonas/test.tar.gz&lt;br /&gt;
&lt;br /&gt;
Copying a single file to another directory on SweStore:&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy gsiftp://srm.swegrid.se/ops/jonas/arc-gui-clients-0.2.1.1.tar.gz gsiftp://srm.swegrid.se/ops/jonas/test2/&lt;br /&gt;
&lt;br /&gt;
Copying a directory recursively to a local directory (verbose -v):&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy -v -cd -r gsiftp://srm.swegrid.se/ops/jonas/test2/ file:///home/jonas/test2/&lt;br /&gt;
&lt;br /&gt;
Listing files in a directory:&lt;br /&gt;
&lt;br /&gt;
 globus-url-copy -list gsiftp://srm.swegrid.se/ops/jonas/&lt;br /&gt;
&lt;br /&gt;
Note: the trailing slashes above are required for directories.&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4272</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4272"/>
		<updated>2012-07-30T12:37:57Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Download and upload data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics and bioimaging, the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore collaborates with [http://www.ecds.se ECDS], [http://snd.gu.se SND], Bioimage Sweden, [http://www.bils.se BILS], [http://www.uppnex.uu.se UPPNEX], [http://lcg.web.cern.ch/lcg/public/ WLCG] and [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The aim of the nationally accessible storage is to build a robust, flexible and expandable system that can&lt;br /&gt;
be used in most cases where access to large scale storage is needed. To the user it should appear as a single large system,&lt;br /&gt;
while it is desirable that some parts of the system are distributed across all SNIC centres to benefit from the advantages&lt;br /&gt;
of, among other things, locality and cache effects. The system is intended as a versatile long-term storage system.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage.&lt;br /&gt;
: Please follow the instructions [[Apply for storage on SweStore|here]].&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Requesting_a_grid_certificate_using_the_Terena_eScience_Portal#Exporting Terena certificate for use with Grid tools|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Browse and download data&lt;br /&gt;
: SweStore is accessible from your web browser at http://webdav.swegrid.se/. To browse private data you must first install your certificate in your browser (see above). Your data is available at &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;http://webdav.swegrid.se/snic/YOUR_PROJECT_NAME&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with the cURL|Accessing SweStore national storage with cURL]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
&lt;br /&gt;
== Examples of storage projects ==&lt;br /&gt;
Below are some examples of projects that are using SweStore today.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; style=&amp;quot;text-align:left; border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000&amp;quot; class=&amp;quot;wikitable sortable&amp;quot;  valign=top&lt;br /&gt;
!Allocation name&lt;br /&gt;
!Size in TB&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Project full name&lt;br /&gt;
|-&lt;br /&gt;
|alice&lt;br /&gt;
|400&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|uppnex&lt;br /&gt;
|140&lt;br /&gt;
|[https://www.uppnex.uu.se UPPmax NExt Generation Sequencing Cluster &amp;amp; Storage]&lt;br /&gt;
|-&lt;br /&gt;
|brain_protein_atlas&lt;br /&gt;
|10&lt;br /&gt;
|Mouse brain protein atlas project&lt;br /&gt;
|-&lt;br /&gt;
| scims2lab&lt;br /&gt;
|20&lt;br /&gt;
| Identification of novel gene models by matching mass spectrometry data against 6-frame translations of the human genome&lt;br /&gt;
|-&lt;br /&gt;
|subatom&lt;br /&gt;
|&lt;br /&gt;
|Low-energy nuclear theory and experiment&lt;br /&gt;
|-&lt;br /&gt;
|genomics-gu&lt;br /&gt;
|10&lt;br /&gt;
|Genomics Core Facility, Sahlgrenska Academy at the University of Gothenburg&lt;br /&gt;
|-&lt;br /&gt;
|Chemo&lt;br /&gt;
|5&lt;br /&gt;
|Genetic interaction networks in human disease&lt;br /&gt;
|-&lt;br /&gt;
|cesm1_holocene&lt;br /&gt;
|30&lt;br /&gt;
|Arctic sea ice in warm climates&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [[SweStore introduction]]&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
* [[Accessing SweStore national storage with the ARC client]]&lt;br /&gt;
&amp;lt;!-- * [[Mounting SweStore national storage via WebDAV|Mounting SweStore national storage via WebDAV (Not recommended at the moment)]] --&amp;gt;&lt;br /&gt;
If you have any issues using SweStore, please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
There are a number of externally developed tools and utilities that can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the ARC client (only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore-supported development).&lt;br /&gt;
* Transfer script [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski and Philipp Schlatter.&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
= Centre storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4271</id>
		<title>Swestore-dCache</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Swestore-dCache&amp;diff=4271"/>
		<updated>2012-07-30T12:37:11Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): /* Download and upload data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Storage]]&lt;br /&gt;
[[Category:SweStore]]&lt;br /&gt;
SNIC is building a storage infrastructure to complement the computational resources.&lt;br /&gt;
&lt;br /&gt;
Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics and bioimaging, the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.&lt;br /&gt;
&lt;br /&gt;
Swestore collaborates with [http://www.ecds.se ECDS], [http://snd.gu.se SND], Bioimage Sweden, [http://www.bils.se BILS], [http://www.uppnex.uu.se UPPNEX], [http://lcg.web.cern.ch/lcg/public/ WLCG] and [http://www.nrm.se/ NaturHistoriska RiksMuseet].&lt;br /&gt;
&lt;br /&gt;
= National storage =&lt;br /&gt;
The aim of the nationally accessible storage is to build a robust, flexible and expandable system that can&lt;br /&gt;
be used in most cases where access to large scale storage is needed. To the user it should appear as a single large system,&lt;br /&gt;
while it is desirable that some parts of the system are distributed across all SNIC centres to benefit from the advantages&lt;br /&gt;
of, among other things, locality and cache effects. The system is intended as a versatile long-term storage system.&lt;br /&gt;
&lt;br /&gt;
== Getting access ==&lt;br /&gt;
; Apply for storage.&lt;br /&gt;
: Please follow the instructions [[Apply for storage on SweStore|here]].&lt;br /&gt;
; Get a client certificate.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_a_certificate|here]] to get your client certificate. For Terena certificates, please make sure you also [[Requesting_a_grid_certificate_using_the_Terena_eScience_Portal#Exporting Terena certificate for use with Grid tools|export the certificate for use with grid tools]]. For Nordugrid certificates, please make sure to also [[Requesting_a_grid_certificate_from_the_Nordugrid_CA#Installing_the_certificate_in_your_browser|install your client certificate in your browser]].&lt;br /&gt;
; Request membership in the SweGrid VO.&lt;br /&gt;
: Follow the instructions [[Grid_certificates#Requesting_membership_in_the_SweGrid_VO|here]] to get added to the SweGrid virtual organisation.&lt;br /&gt;
&lt;br /&gt;
== Download and upload data ==&lt;br /&gt;
; Browse and download data&lt;br /&gt;
: SweStore is accessible from your web browser at http://webdav.swegrid.se/. To browse private data you must first install your certificate in your browser (see above). Your data is available at &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;http://webdav.swegrid.se/snic/YOUR_PROJECT_NAME&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
; Upload and delete data&lt;br /&gt;
: Use the ARC client. Please see the instructions for [[Accessing SweStore national storage with the ARC client]].&lt;br /&gt;
: Use cURL. Please see the instructions for [[Accessing SweStore national storage with the cURL|Accessing SweStore national storage with cURL]].&lt;br /&gt;
: Use lftp. Please see the instructions for [[Accessing SweStore national storage with lftp]].&lt;br /&gt;
&lt;br /&gt;
== Examples of storage projects ==&lt;br /&gt;
Below are some examples of projects that are using SweStore today.&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; style=&amp;quot;text-align:left; border-collapse: collapse; border-width: 1px; border-style: solid; border-color: #000&amp;quot; class=&amp;quot;wikitable sortable&amp;quot;  valign=top&lt;br /&gt;
!Allocation name&lt;br /&gt;
!Size in TB&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Project full name&lt;br /&gt;
|-&lt;br /&gt;
|alice&lt;br /&gt;
|400&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|uppnex&lt;br /&gt;
|140&lt;br /&gt;
|[https://www.uppnex.uu.se UPPmax NExt Generation Sequencing Cluster &amp;amp; Storage]&lt;br /&gt;
|-&lt;br /&gt;
|brain_protein_atlas&lt;br /&gt;
|10&lt;br /&gt;
|Mouse brain protein atlas project&lt;br /&gt;
|-&lt;br /&gt;
| scims2lab&lt;br /&gt;
|20&lt;br /&gt;
| Identification of novel gene models by matching mass spectrometry data against 6-frame translations of the human genome&lt;br /&gt;
|-&lt;br /&gt;
|subatom&lt;br /&gt;
|&lt;br /&gt;
|Low-energy nuclear theory and experiment&lt;br /&gt;
|-&lt;br /&gt;
|genomics-gu&lt;br /&gt;
|10&lt;br /&gt;
|Genomics Core Facility, Sahlgrenska Academy at the University of Gothenburg&lt;br /&gt;
|-&lt;br /&gt;
|Chemo&lt;br /&gt;
|5&lt;br /&gt;
|Genetic interaction networks in human disease&lt;br /&gt;
|-&lt;br /&gt;
|cesm1_holocene&lt;br /&gt;
|30&lt;br /&gt;
|Arctic sea ice in warm climates&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== More information ==&lt;br /&gt;
* [[SweStore introduction]]&lt;br /&gt;
* [http://status.swestore.se/munin/monitor/monitor/ Per Project Monitoring of Swestore usage]&lt;br /&gt;
* [[Accessing SweStore national storage with the ARC client]]&lt;br /&gt;
&amp;lt;!-- * [[Mounting SweStore national storage via WebDAV|Mounting SweStore national storage via WebDAV (Not recommended at the moment)]] --&amp;gt;&lt;br /&gt;
If you have any issues using SweStore, please do not hesitate to contact [mailto:swestore-support@snic.vr.se swestore-support].&lt;br /&gt;
&lt;br /&gt;
== Tools and scripts ==&lt;br /&gt;
&lt;br /&gt;
There are a number of externally developed tools and utilities that can be useful. Here are some links:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/samuell/arc_tools ARC_Tools] - Convenience scripts for the ARC client (only a recursive rmdir so far).&lt;br /&gt;
* [http://sourceforge.net/projects/arc-gui-clients ARC Graphical Clients] - Contains the ARC Storage Explorer (SweStore-supported development).&lt;br /&gt;
* Transfer script [http://snicdocs.nsc.liu.se/wiki/SweStore/swstrans_arc swetrans_arc], provided by Adam Peplinski and Philipp Schlatter.&lt;br /&gt;
* [http://www.nordugrid.org/documents/SWIG-wrapped-ARC-Python-API.pdf Documentation of the ARC Python API (PDF)]&lt;br /&gt;
&lt;br /&gt;
= Centre storage =&lt;br /&gt;
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature for all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.&lt;br /&gt;
&lt;br /&gt;
== Unified environment ==&lt;br /&gt;
To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_BACKUP&amp;lt;/code&amp;gt; – the user's primary directory at the centre&amp;lt;br&amp;gt;(the part of the centre storage that is backed up)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_NOBACKUP&amp;lt;/code&amp;gt; – recommended directory for project storage without backup&amp;lt;br&amp;gt;(also on the centre storage)&lt;br /&gt;
* &amp;lt;code&amp;gt;SNIC_TMP&amp;lt;/code&amp;gt; – recommended directory for best performance during a job&amp;lt;br&amp;gt;(local disk on nodes if applicable)&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
	<entry>
		<id>https://snicdocs.nsc.liu.se/w/index.php?title=Accessing_Swestore_with_cURL&amp;diff=4270</id>
		<title>Accessing Swestore with cURL</title>
		<link rel="alternate" type="text/html" href="https://snicdocs.nsc.liu.se/w/index.php?title=Accessing_Swestore_with_cURL&amp;diff=4270"/>
		<updated>2012-07-30T12:31:52Z</updated>

		<summary type="html">&lt;p&gt;Jonas Lindemann (LUNARC): &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide outlines the procedure for using cURL to access files through the WebDAV door of dCache.&lt;br /&gt;
&lt;br /&gt;
== Essential parameters ==&lt;br /&gt;
&lt;br /&gt;
 --capath /etc/grid-security/certificates&lt;br /&gt;
The certificate bundle provided through --capath is required in order for cURL to accept the server certificates the door presents. If the certificate bundle is not available, the -k flag may be passed to allow untrusted server certificates.&lt;br /&gt;
&lt;br /&gt;
 --cert /tmp/x509up_u1234&lt;br /&gt;
--cert (or -E) names the proxy certificate generated by arcproxy or similar tools, which is a single PEM file consisting of the client certificate, the proxy key and the proxy certificate. The name will vary based on the user issuing it.&lt;br /&gt;
grid-proxy-init (and thus arcproxy) will put the certificate in /tmp by default and name it according to the pattern x509up_u&amp;lt;NumericUID&amp;gt;. The -out parameter to grid-proxy-init takes a location to store the certificate in if the default is not sufficient.&lt;br /&gt;
&lt;br /&gt;
 --location&lt;br /&gt;
--location (or -L) instructs cURL to follow HTTP redirects, in this case the 302 redirects that the dCache door uses to direct clients to different storage nodes.&lt;br /&gt;
&lt;br /&gt;
== Sample invocations ==&lt;br /&gt;
&lt;br /&gt;
Download the file 'file-to-download.ext':&lt;br /&gt;
 curl --location --capath /etc/grid-security/certificates --cert /tmp/x509up_u1234 -O https://webdav.swegrid.se/target/path/file-to-download.ext&lt;br /&gt;
&lt;br /&gt;
Upload the file 'source.file' as 'uploaded.ext':&lt;br /&gt;
 curl --location --capath /etc/grid-security/certificates --cert /tmp/x509up_u1234 -T ~/source.file https://webdav.swegrid.se/target/path/uploaded.ext&lt;br /&gt;
&lt;br /&gt;
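Listing the contents of a directory should also be possible with a standard WebDAV PROPFIND request (a sketch; the target path is illustrative):&lt;br /&gt;
 curl --location --capath /etc/grid-security/certificates --cert /tmp/x509up_u1234 -X PROPFIND -H &amp;quot;Depth: 1&amp;quot; https://webdav.swegrid.se/target/path/&lt;br /&gt;
&lt;br /&gt;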
= Credits =&lt;br /&gt;
&lt;br /&gt;
This guide was written by Lars Viklund.&lt;/div&gt;</summary>
		<author><name>Jonas Lindemann (LUNARC)</name></author>
		
	</entry>
</feed>