Search by property
This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.
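The "ask query builder" mentioned above builds inline queries of the kind this page evaluates. As a minimal sketch, on a wiki running Semantic MediaWiki, an equivalent property search can be written as an inline #ask query; note that the property name Description is an assumption here, since this page does not show which property actually holds these summaries:

```
{{#ask: [[Description::Introduction to PDC]]
 |format=ul
}}
```

Such a query would list every page whose assumed Description property equals "Introduction to PDC", mirroring one group of entries in the results list below.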
List of results
- DDT (PDC September 2014) + (Debugging applications using DDT)
- Debugging of HPC applications, September 2019 + (Debugging of HPC applications)
- ENCCS/PDC VeloxChem Workshop: Quantum Chemistry Towards Pre-exascale and Beyond (March 2022) + (ENCCS/PDC VeloxChem Workshop: Quantum Chemistry Towards Pre-exascale and Beyond)
- ENCCS/PDC VeloxChem Workshop: Quantum Chemistry from Laptop to HPC (May 2021) + (ENCCS/PDC VeloxChem Workshop: Quantum Chemistry from Laptop to HPC)
- Fido + (Easy access to safe and reliable hosting and computation for Swedish bioinformatics.)
- Course: Efficient MD simulations at HPC2N (February 2019) + (Efficient MD simulations at HPC2N)
- Course: Efficient MD simulations at HPC2N (February 2020) + (Efficient MD simulations at HPC2N)
- Matlab in an HPC environment (Lunarc May 2016) + (Efficient use of Matlab in an HPC environment)
- Electronic Structure Workshop (Linköping, March 2017) + (Electronic Structure / Seminars and discussion sessions)
- Improved FFT and I/O for the Pencil code + (Enabling support as part of the PRACE DECI-8 program)
- Essense Code Optimisation + (Essense Code Analysis and Optimisation)
- Infrastructure for the European Network for Earth System modelling - Phase 2 + (European network of distributed e-infrastructure to support Earth system modelling.)
- EMTOx + (Exact Muffin-Tin Orbitals method (x), an electronic structure code based on the Green's function technique)
- EMTO + (Exact Muffin-Tin Orbitals method, an electronic structure code based on the Green's function technique)
- Develop multi-category LIM3 sea-ice capabilities in EC-Earth 3 + (Feature request to support multi-category sea-ice in the IFS component of EC-Earth 3.)
- MSC Nastran + (Finite Element Analysis (FEA) solver)
- FFTW + (Freely available high performance library to perform fast Fourier transformations)
- ANNOVAR + (Functional annotation of genetic variants from high-throughput sequencing data)
- Zorn + (GPU cluster)
- NSC GPU and Accelerator Pilot + (GPU/Accelerator Pilot Project at NSC)
- Gaussian Workshop, HPC2N, 14-15 May 2018 + (Gaussian Workshop at HPC2N in Umeå)
- HPC Tools for the Modern Era (PDC, October 2018) + (HPC Tools for the Modern Era)
- HPC2N storage + (HPC2N Swestore storage node of 400 TB)
- Handling large data within SNIC, using Swestore - 15 March 2022 + (Handling large data within SNIC, using Swestore)
- Heterogeneous computing with performance modelling, Umeå, 2020-11-(4-5) + (Heterogeneous computing with performance modelling)
- Hierarchical modules and software selection (Lund, October 2018) + (Hierarchical modules and software selection)
- Hierarchical modules and software selection (Lund, March 2018) + (Hierarchical modules and software selection)
- Hierarchical modules and software selection (Lund, December 2017) + (Hierarchical modules and software selection)
- How to work effectively on Tetralith / Sigma (Linköping Nov 2018) + (How to work effectively on Tetralith)
- How to work effectively on Tetralith (Stockholm Dec 2018) + (How to work effectively on Tetralith)
- CASAVA + (Illumina's Consensus Assessment of Sequence and Variation (CASAVA) software)
- Improving MPI communication latency on euroben kernels + (Improving the MPI collective performance by network aware communication)
- Intel Cluster Studio/HPC Training (HPC2N, November 2015) + (Intel Cluster Studio/HPC Training at HPC2N)
- Intel development / HPC tools (HPC2N, May 2016) + (Intel development / HPC tools)
- Intel oneAPI webinar (Mar 2020) + (Intel oneAPI Overview Webinar)
- Intermediate Topics in MPI (June 2022) + (Intermediate Topics in MPI)
- Introduction to HPC2N (September 2018) + (Introduction course for (new) users of HPC2N's systems)
- Uppmax Intro Course + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (January 2019) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (August 2016) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (October 2014) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (August 2019) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (January 2017) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (January 2015) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (August 2017) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (August 2015) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (January 2018) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (October 2015) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (August 2018) + (Introduction course for new users of Uppmax systems)
- Uppmax Intro Course (January 2016) + (Introduction course for new users of Uppmax systems)
- Introduction to HPC (Lunarc November 2015) + (Introduction course for new users of high performance computing)
- Introduction to HPC (Lunarc May 2014) + (Introduction course for new users of high performance computing)
- Introduction to HPC (Lunarc October 2014) + (Introduction course for new users of high performance computing)
- Introduction to HPC (Lunarc May 2015) + (Introduction course for new users of high performance computing)
- SNIC Science Cloud Computing Workshop (May 2016) + (Introduction seminar and workshop to SNIC Cloud resources)
- Introduction to Deep Learning, Umeå (22-23 November 2018) + (Introduction to Deep Learning)
- Introduction to Distributed Memory Programming and MPI (HPC2N, April 23, 2015) + (Introduction to Distributed Memory Programming and MPI)
- Introduction to GPU programming with CUDA (PDC, May 2015) + (Introduction to GPU programming with CUDA)
- Introduction to GPU programming: When and how to use GPU-acceleration?, HPC2N, 5 November 2019 + (Introduction to GPU programming: When and how to use GPU-acceleration?)
- PDC/Introduction to GROMACS Workshop (Sept 2020) + (Introduction to GROMACS Workshop)
- Introduction to Git, Umeå (2020-09-30) + (Introduction to Git)
- Introduction to Git, HPC2N, 2021-11-(9-12) + (Introduction to Git)
- Introduction to Git, Umeå, 2022-11-(14-18) + (Introduction to Git)
- Introduction to HPC and Kebnekaise, HPC2N, 2021-01-21 + (Introduction to HPC and Kebnekaise)
- Introduction to HPC (HPC2N, October 2015) + (Introduction to HPC at HPC2N)
- Introduction to HPC2N (January 2020) + (Introduction to HPC2N)
- Introduction to HPC2N (September 2020) + (Introduction to HPC2N)
- Introduction to HPC2N (September 2017) + (Introduction to HPC2N)
- Introduction to HPC2N (Umeå, January 2018) + (Introduction to HPC2N)
- Introduction to HPC2N (January 2019) + (Introduction to HPC2N)
- Introduction to HPC2N (September 2019) + (Introduction to HPC2N)
- Introduction to HPC2N and Kebnekaise (November 2016) + (Introduction to HPC2N and Kebnekaise)
- Introduction to HPC2N and Kebnekaise (February 2017) + (Introduction to HPC2N and Kebnekaise)
- Introduction to Programming the Xeon Phi Processor (November 2015) + (Introduction to Intel's Xeon Phi processor for scientific computing)
- Introduction to Kebnekaise (HPC2N), 2022-01-19 + (Introduction to Kebnekaise)
- Introduction to Kebnekaise, 2022-09-15, HPC2N/UmU + (Introduction to Kebnekaise)
- Introduction to Kebnekaise (HPC2N), 2021-09-08 + (Introduction to Kebnekaise)
- Introduction to Linux and Abisko (HPC2N, April 22) + (Introduction to Linux and Abisko)
- Introduction to Linux and Abisko (HPC2N, May 27) + (Introduction to Linux and Abisko)
- Introduction to NSC (Nov 2021) + (Introduction to NSC)
- Introduction to NSC (March 2021) + (Introduction to NSC)
- Introduction to OpenMP and MPI (HPC2N, December 2016) + (Introduction to OpenMP and MPI)
- Introduction to PDC (Sept 2016) + (Introduction to PDC)
- Introduction to PDC (Oct 2019) + (Introduction to PDC)
- Introduction to PDC (Feb 2017) + (Introduction to PDC)
- Introduction to PDC (Feb 2020) + (Introduction to PDC)
- Introduction to PDC (Oct 2017) + (Introduction to PDC)
- Introduction to PDC (September 2014) + (Introduction to PDC)
- ARM HPC hands on workshop (Feb 2020) + (Introduction to PDC)
- Introduction to PDC (Feb 2018) + (Introduction to PDC)
- Introduction to PDC (September 2015) + (Introduction to PDC)
- Introduction to PDC (Nov 2020) + (Introduction to PDC)
- Introduction to PDC (Oct 2018) + (Introduction to PDC)
- Introduction to PDC (February 2016) + (Introduction to PDC)
- Introduction to PDC (Feb 2019) + (Introduction to PDC)
- Introduction to PDC (October 2022) + (Introduction to PDC Systems)
- Introduction to PDC (March 2022) + (Introduction to PDC Systems)
- Introduction to Shared Memory Programming and OpenMP (HPC2N, May 28, 2015) + (Introduction to Shared Memory Programming and OpenMP)
- Unix for new users of HPC (Lunarc May 2016) + (Introduction to Unix for new users of HPC)
- Introduction to WIEN2k (Nov 2021) + (Introduction to WIEN2k)
- Awk workshop (UPPMAX, January 2016) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, January 2019) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, August 2016) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, August 2019) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, January 2017) + (Introduction to awk and sed)
- Awk workshop (Karolinska, January 2020) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, August 2017) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, January 2018) + (Introduction to awk and sed)
- To awk or not (UPPMAX, October 2015) + (Introduction to awk and sed)
- Awk workshop (UPPMAX, August 2018) + (Introduction to awk and sed)
- Datahandling using R tidyverse (Lund, November 2019) + (Introduction to data handling using R tidyverse in a modern software environment)
- Introduction to PDC (Dec 2021) + (Introduction to the new Dardel system at PDC)
- NSC introduction day (Linköping, October 2017) + (Introduction to using NSC resources)
- Applied Cloud Computing Workshop (UPPMAX, October 2015) + (Introduction to using SNIC Cloud resources)
- Applied Cloud Computing Workshop (March 2016) + (Introduction to using SNIC Cloud resources)
- Working with Python on Tetralith (Stockholm, October 2019) + (Introduction to working with Python at NSC)
- LUNARC storage + (LUNARC Swestore storage node of 400 TB)
- Allinea Performance and Debugging Tools Workshop (C3SE January 2016) + (Learn to use Allinea’s highly scalable and easy-to-use HPC development tools)
- Allinea Performance and Debugging Tools Workshop (HPC2N January 2016) + (Learn to use Allinea’s highly scalable and easy-to-use HPC development tools)
- Matlab (LiU, October '16) + (MATLAB Programming Techniques)
- MD simulations with a focus on NAMD (HPC2N, UmU), 2022-04-(07-08) + (MD simulations with a focus on NAMD)
- Trace analyzer and collector + (MPI job analyser tool)
- Intel MPI + (MPI library)
- MVAPICH2 + (MPI library)
- Machine Learning with R, HPC2N, 3 December 2019 + (Machine Learning with R)
- Research data for open science (Lund, November 2018) + (Making your research data fit for a future of open science and open data)
- Research data for open science (Lund, April 2019) + (Making your research data fit for a future of open science and open data)
- SNIC Science Cloud Workshop Material + (Material for technical training and workshops hosted by SNIC Science Cloud.)
- Matlab HPC training (Linköping, Oct 2018) + (Matlab HPC training)
- NEC SX-Aurora TSUBASA Webinar (Feb 2020) + (NEC SX-Aurora TSUBASA Webinar)
- NMRPipe + (NMR spectroscopy data analysis suite)
- NSC storage + (NSC Swestore storage node of 200 TB)
- NSC introduction to Tetralith/Sigma (Apr 2022) + (NSC introduction to Tetralith/Sigma)
- NSC introduction to Tetralith/Sigma (Nov 2022) + (NSC introduction to Tetralith/Sigma)
- NVIDIA GPU Boot Camp and DLI (Sept 2019) + (NVIDIA GPU Boot Camp and Deep Learning workshop)
- Nordic collaboration on e-infrastructures for Earth System Modeling + (NeIC climate and environment collaborative project.)
- Nek5000 OpenACC + (Nek5000 with OpenACC)
- PRACE WP12: Network topology analysis and efficient collective design + (Network topology analysis and efficient collective design)
- Category:Molecular dynamics + (Newtonian motion simulation in systems with hundreds to millions of particles)
- OpenFOAM Training Workshop (Dec 2021) + (OpenFOAM Training Workshop offered by ENCCS and PDC)
- Online training materials + (Overview page on online training materials freely available)
- PDC storage + (PDC Swestore storage node of 200 TB)
- PDC/PRACE Online Course: Writing Parallel Applications Using MPI (May 2020) + (PDC/PRACE Online Course: Writing Parallel Applications Using MPI)
- PRACE/BioExcel Seasonal School HPC for Life Sciences (June 2019) + (PRACE/BioExcel Seasonal School: HPC for Life Sciences)
- Matlab (C3SE, October '16) + (Parallel Computing in MATLAB and Scaling to SNIC HPC Clusters)
- Matlab (Uppmax, October '16) + (Parallel Computing in MATLAB and Scaling to SNIC HPC Clusters)
- Matlab (HPC2N, October '16) + (Parallel Computing in MATLAB and Scaling to SNIC HPC Clusters)
- Matlab (PDC, October '16) + (Parallel Computing in MATLAB and Scaling to SNIC HPC Clusters)
- Matlab (Lunarc, October '16) + (Parallel Computing in MATLAB and Scaling to SNIC HPC Clusters)
- Enabling Xnavis for Massively Parallel Simulations of Wind Farms + (Parallel I/O Implementation and Communication Optimization on Xnavis Wind Farm Simulation Code)
- Parallel I/O Implementation on the Multiple Sequence Alignment Software ClustalW-MPI + (Parallel I/O Implementation on the Multiple Sequence Alignment Software ClustalW-MPI)
- Parallel Programming with Open Standards (Sept 2016) + (Parallel Programming seminar provided by PDC, PGI and NVIDIA)
- LES Code Parallelization + (Parallelization of a Large Eddy Simulation Code)
- Dalton CPP-LR parallelization + (Parallelization of the coupled cluster complex polarization propagator module in the Dalton program)
- Parallelization of a materials science code + (Parallelization request for a materials science code)
- Patchwork + (Patchwork: Bioinformatic tool for allele-specific copynumber analysis of tumor samples)
- HYPE Code Parallelisation + (Performance Analysis and Parallelisation of SMHI's HYPE Code)
- Performance Analysis of ad OSS Program + (Performance Analysis of ad_OSS Program for Modeling Water Molecules)
- Performance Tools Course at HPC2N (14 March 2017) + (Performance Tools (Paraver, Extrae, Scalasca) Course)
- Parallel FFTs in Molsim + (Performance improvement of the SPME in Molsim)
- Scalasca + (Performance profiler for parallel applications)
- GARLI + (Performs heuristic phylogenetic searches ...)
- Petascaling enabling and support for EC-EARTH3 + (Petascaling of high resolution EC-EARTH on PRACE Tier-0 Curie System)
- EC-Earth compilation and performance analysis: Beskow + (Port EC-Earth to Beskow and complete performance/scaling tests.)
- Porting Earth system models to triolith + (Port three commonly used Earth system models to Triolith.)
- Portability performance analysis and improvement of ESM + (Porting and performance analysis of earth system models (ESM) on different architectures)
- Erik + (Prototype system featuring 68 Nvidia Tesla K20m GPU cards and 2 Xeon Phi cards)
- Course: Python for Scientific Computing + (Python for Scientific Computing)
- CSA + (Python implementation of the Connection-set Algebra (Djurfeldt 2012))
- QM/MM best practices, HPC2N, 2021-12-9 + (QM/MM best practices)
- R in an HPC environment, Umeå, 2022-12-(14-15) + (R in an HPC environment)
- Integration of OpenIFS into EC-Earth3 + (Replace the atmospheric component of EC-Earth (IFS) with the OpenIFS model (c38r1).)
- Running MD applications efficiently in HPC, HPC2N, 26-27 April 2021 + (Running MD applications efficiently in HPC)
- SSC training workshop, HPC2N, 10 October 2017 + (SNIC Cloud Computing Workshop)
- SNIC Science Cloud Workshop (C3SE November 2016) + (SNIC Science Cloud Workshop)
- Linux For Beginners (C3SE November 2016) + (SNIC Science Cloud Workshop)
- SNIC Science Cloud Workshop, Mittuniversitetet, Sundsvall (2018-08-31) + (SNIC Science Cloud Workshop)
- SNIC Science Cloud + (SNIC Science Cloud is a cloud computing infrastructure run by SNIC.)
- SNIC coordinated training + (SNIC coordinated training)
- Schrödinger Molecular Modelling Workshop at HPC2N (29 March 2017) + (Schrödinger Molecular Modelling / Drug Discovery Workshop)
- Scientific Visualisation (Uppsala, Nov 2018) + (Scientific Visualisation Workshop)
- Scientific Visualisation Workshop (UPPMAX, January 2016) + (Scientific Visualisation Workshop 2014)
- Scientific Visualisation Workshop (UPPMAX, November 2014) + (Scientific Visualisation Workshop 2014)
- Scientific Visualisation Workshop (UPPMAX, November 2016) + (Scientific Visualisation Workshop Autumn 2016)
- C3SE Debugging Seminar April 29 2015 + (Seminar for all users at C3SE, covering software debugging on C3SE systems.)
- C3SE Software Development Seminar April 14 2015 + (Seminar for all users at C3SE, covering software debugging on C3SE systems.)
- C3SE Scheduling Seminar April 1 2015 + (Seminar for all users at C3SE, covering the queuing system used at our systems.)
- C3SE Linux for Beginners Seminar February 24 2015 + (Seminar for all users at C3SE, covering fundamental Linux skills)
- C3SE Environment Seminar March 12 2015 + (Seminar for all users at C3SE, describing the C3SE hardware and software environment.)
- C3SE Introductory Seminar October 18 2017 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar May 18 2016 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar September 20 2019 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar October 17 2018 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar November 21 2017 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar February 14 2017 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar October 23 2019 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar February 10 2015 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar November 20 2018 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar February 20 2018 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar April 19 2017 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar November 25 2019 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar May 20 2015 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar January 30 2019 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar March 21 2018 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar May 17 2017 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar March 25 2015 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar March 26 2019 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar May 16 2018 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar September 20 2017 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar April 13 2016 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar May 8 2019 + (Seminar for new users at C3SE, describing how to use our systems)
- C3SE Introductory Seminar September 19 2018 + (Seminar for new users at C3SE, describing how to use our systems)
- VASP best practices (NSC-UPPMAX January 2015) + (Seminar on running VASP efficiently at Triolith and Beskow (the new Cray XC-40 at PDC))
- VASP best practices (LiU February 2015) + (Seminar on running VASP efficiently at Triolith and Beskow (the new Cray XC-40 at PDC))
- Optimization of a lagrangian cloud parcel model for use in the global climate model ECHAM6.1-HAM2.2 + (Serial optimization of a cloud parcel model for embedding in the ECHAM-HAM global climate model.)
- Bolin Centre software management through local modules - trial evaluation. + (Set up and evaluate a local module system on Triolith for Bolin Centre software installations.)
- Shared memory programming with OpenMP (NSC May 2013) + (Shared memory parallel programming using OpenMP)
- Software Carpentry Workshop Lund (March 2018) + (Software Carpentry Workshop)
- Software Carpentry Workshop in R, Umeå (October 2018) + (Software Carpentry Workshop in R)
- Software Carpentry Stockholm (June 2019) + (Software Carpentry workshop)
- Software Carpentry Stockholm (March 2018) + (Software Carpentry workshop)
- Running parallel jobs in Matlab (Lunarc Sept 2015) + (Solving large problems efficiently through parallel computing in Matlab)
- An introduction to solving partial differential equations in Python with FEniCS (Lunarc June 2015) + (Solving partial differential equations in Python with FEniCS)
- MATLAB using SNIC clusters (HPC2N September 2014) + (Speeding up MATLAB Computations using SNIC Clusters)
- MATLAB using SNIC clusters (UPPMAX September 2014) + (Speeding up MATLAB Computations using SNIC Clusters)
- MATLAB using SNIC clusters (Lunarc May 2014) + (Speeding up MATLAB Computations using SNIC Clusters)
- MATLAB using SNIC clusters (C3SE June 2014) + (Speeding up MATLAB Computations using SNIC Clusters)
- Using the DDT debugger (Lunarc, October 2015) + (Speeding up code modernisation and bug resolution with Allinea DDT)
- GATK + (Structured software library for writing analysis tools for next-generation sequencing data.)
- Synthetic Benchmark on Curie + (Synthetic Benchmark for PRACE Tier-0 Curie System)
- Task-based parallelism in scientific computing (March 2020) + (Task-based parallelism in scientific computing)
- Task-based parallelism in scientific computing (HPC2N/PRACE, May 2021) + (Task-based parallelism in scientific computing)
- CodeRefinery Workshop on Sustainable Scientific Software Development (February 2017) + (Teaching researchers in sustainable software development)
- CodeRefinery Workshop on Sustainable Scientific Software Development (November 2017) + (Teaching researchers in sustainable software development)
- Tensorflow and Deep Learning, HPC2N, 8-9 May 2019 + (Tensorflow and Deep Learning)
- Test suite for VASP + (Test suite for VASP)
- The Effective Use of the Kebnekaise Accelerators (HPC2N, December 2017) + (The Effective Use of the Kebnekaise Accelerators)
- Pencil + (The Pencil Code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields)
- The future of HPC programming - a Modern Fortran workshop, Umeå, 2022-11-(24-25) + (The future of HPC programming - a Modern Fortran workshop)
- Parallel Programming Education + (Training in parallel programming)
- Transfering data (Lund, Oct 2018) + (Transferring data to and from an HPC system)
- Transfering data (Lund, February 2019) + (Transferring data to and from an HPC system)
- Transfering data (Lund, Nov 2017) + (Transferring data to and from an HPC system)
- UPPMAX Introductory Course, August 16-19, 2022 + (UPPMAX Introductory Course)
- UPPMAX storage + (UPPMAX Swestore storage node of 200 TB)
- MDR model library update + (Updating the automated classification system for MDR proteins based on new data.)
- UppASD Autumn School (October 2022) + (UppASD Autumn School)
- XC-40 Architecture (PDC February 2015) + (Using Cray XC-40 Machines)
- Using Matlab in an HPC environment (Lunarc, November 2018) + (Using Matlab in an HPC environment)
- Using Matlab in an HPC environment (Lunarc, 2017) + (Using Matlab in an HPC environment)
- Using Matlab in an HPC environment (Lunarc, October 2017) + (Using Matlab in an HPC environment)
- Using Matlab in an HPC environment (Lunarc, April 2018) + (Using Matlab in an HPC environment)
- Using Python in an HPC environment, September 2022, UPPMAX/HPC2N + (Using Python in an HPC environment)
- Using Python in an HPC environment, May 2023, UPPMAX/HPC2N + (Using Python in an HPC environment)
- Using R in an HPC environment, HPC2N, 2021-02-(25-26) + (Using R in an HPC environment)
- Commercial engineering software (Lund, Nov 2017) + (Using commercial engineering software in an HPC environment)
- Intel Compiler (Lunarc November 2016) + (Using the Intel® compiler and performance tools)
- Utilising a modern HPC environment (Lunarc, May 2016) + (Utilising a modern HPC environment)
- Vasp best practices (Stockholm, May 2019) + (VASP best practices)
- Vasp best practices (Uppsala, June 2019) + (VASP best practices)
- Vasp best practices (Linköping, June 2019) + (VASP best practices)
- VASP best practices workshop (NSC, Feb 2022) + (VASP best practices workshop)
- VASP best practices workshop (NSC, Oct 2020) + (VASP best practices workshop)
- Vasp - Basic Theory and Best Practices, HPC2N, October 2019 + (Vasp - Basic Theory and Best Practices)
- Version Control Workshop, HPC2N + (Version Control Workshop)
- Visualisation and interactivity in HPC (LUNARC, March 2019) + (Visualisation and interactivity in HPC - The LUNARC HPC Desktop)
- Vapor + (Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers)
- Grace + (WYSIWYG tool to make two-dimensional plots of scientific data)
- PconsC for Fido + (Web hosting for PconsC)
- Working effectively with HPC systems (NSC, April 2021) + (Working effectively with HPC systems)
- Schrödinger materials science suite workshop (Linköping, November 2017) + (Workshop on using the Schrödinger materials science suite with Quantum Espresso)
- Writing Parallel Applications Using MPI (Stockholm, December 2019) + (Writing Parallel Applications Using MPI)
- MPI (PDC December 2015) + (Writing parallel applications using MPI)
- MPI (PDC December 2017) + (Writing parallel applications using MPI)
- MPI (PDC November 2014) + (Writing parallel applications using MPI)
- Xds + (X-ray Detector Software for processing single-crystal monochromatic diffraction data recorded by the rotation method.)
- FEFF + (a real-space full multiple scattering (RSFMS) Green's function method)
- MPQC + (ab initio quantum chemistry)
- GAMESS + (ab initio quantum chemistry)
- Jaguar + (ab initio quantum mechanics)
- Elk + (all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features)
- RSPt + (all-electron full-potential linearised muffin-tin orbital (FP-LMTO) code with many features. Dynamic mean field capabilities are included in the code.)
- BLAT + (an alignment tool like BLAST, but it is structured differently.)
- NCL + (analysis and visualization)
- Category:Computational electromagnetics + (application of computer science methods to solve and model electromagnetic fields)
- Category:Solid mechanics + (application of computer science methods to solve continuum solid mechanics problems)
- Category:Computational chemistry + (application of computer science methods to solve chemical problems)
- Category:Climate research + (application of computer science methods to study the Earth's climate)
- Category:Computational materials science + (application of computer science methods to study the properties of materials)
- CP2K + (atomistic and molecular simulations code)
- Charmm + (atomistic and molecular simulations code)
- Cyana + (biological macromolecule structure calculation based on NMR conformational constraints)
- Akka + (capability cluster resource of 54 TFLOPS with infiniband interconnect)
- Neolith + (capability cluster resource of 60 TFLOPS with full bisection infiniband interconnect)
- Abisko + (capability resource of 153 TFLOPS with full bisectional infiniband interconnect)
- LAMMPS + (classical molecular dynamics code)
- NCAR diagnostic packages + (climate model diagnostics)
- Matter + (cluster resource of 37 TFLOPS dedicated to materials science)
- Kalkyl + (cluster resource of about 21 TFLOPS)
- Grad + (cluster resource primarily used for SweGrid)
- Category:Grid computing + (combines computers from multiple administrative domains)
- DIANA + (commercial FEM package)
- PowerFLOW + (commercial computational fluid dynamics package)
- StarCCM + (commercial computational fluid dynamics package)
- Fluent + (commercial computational fluid dynamics package)
- Fire + (commercial computational fluid dynamics package)
- STAR-CD + (commercial computational fluid dynamics package)
- GNU compiler collection + (compiler collection for a number of languages including C, C++ and Fortran)
- PGI + (compiler suite)
- PathScale + (compiler suite)
- Intel compiler suite + (compilers for C, C++ and Fortran)
- Mathematica + (computational software for technical computing)
- NCO + (data analysis)
- CDO + (data analysis)
- CS-Rosetta + (de novo protein structure generation)
- ABINIT + (density-functional theory code)
- OpenMX + (density-functional theory code)
- SIESTA + (density-functional theory code for very large systems)
- Ccp4 + (determining macromolecular structures by X-ray crystallography)
- MOLDEN + (display ab initio molecular densities)
- Ruby + (dynamic, reflective, general-purpose object-oriented programming language)
- MKL + (efficient mathematics library)
- ACML + (efficient mathematics library)
- VASP + (electronic structure calculation)
- CASTEP + (electronic structure calculation)
- Greens + (electronic structure codes based on the KKR-ASA Green's function technique)
- Muscle + (fast, high-quality multiple sequence alignment)
- Meep + (finite-difference time-domain simulation software package)
- Abaqus + (finite-element package)
- ASE + (framework for setting up and analyzing atomistic simulations)
- OpenFOAM + (free, open source CFD software package by OpenCFD Ltd)
- Exciting-code + (full-potential all-electron density-functional-theory (DFT) package based on the linearized augmented plane-wave (LAPW) method)
- EC-Earth + (global climate model)
- CESM1 + (global climate model)
- NorESM + (global climate model)
- SAM + (hidden Markov model analysis of biological sequences)
- Category:Performance optimisation + (improving the computational efficiency of an application)
- Category:Bioinformatics + (information handling in biology)
- Shake n bake + (a computer program based on Shake-and-Bake, a dual-space direct-methods procedure for determining crystal structures from X-ray diffraction data.)
- Hkl2map + (a graphical user interface for macromolecular phasing)
- Pymol + (a molecular visualization system.)
- Shelx + (a set of programs for the determination of small (SM) and macromolecular (MM) crystal structures by single crystal X-ray and neutron diffraction.)
- Phenix + (a software suite for the automated determination of macromolecular structures using X-ray crystallography and other methods)
- ANSYS + (large modeling suite)
- Yambo + (many-body calculations in solid state and molecular physics)
- Inspector + (memory error and thread checker)
- Amber + (molecular dynamics)
- ESPResSo + (molecular dynamics of soft matter systems)
- Desmond + (molecular dynamics package)
- Mafft + (multiple sequence alignment program)
- Octave + (numerical computation and visualisation language)
- Category:HPC training + (offering training and education to the SNIC communities in HPC-related matters.)
- Open MPI + (open source [[MPI]] library)
- SciPy + (open-source software for mathematics, science, and engineering.)
- Category:Performance tuning + (optimisation of simulation applications to make the best use of hardware features.)
- Category:Neuroinformatics + (organization of neuroscience data)
- BLAST + (package for aligning nucleotide or amino acid sequences)
- FASTA + (package for aligning nucleotide or amino acid sequences)
- PHYLIP + (package for inference of phylogenies)
- Simson + (package for solving the Navier-Stokes equations for incompressible channel and boundary layer flows)
- HMMER + (package for working with profile hidden Markov models (HMM))
- Environment modules + (package to manage the system and application software environment)
- NAMD + (parallel molecular dynamics code)
- Test training 2014 + (parallel performance optimization tools)
- Test training 2012 + (parallel performance optimization tools)
- Dacapo + (plane-wave DFT)
- CPMD + (plane-wave DFT)
- GENE + (plasma microturbulence code)
- ClustalW + (popular multiple sequence aligner)
- Gnuplot + (portable command-line driven graphing utility)
- Dalton + (powerful molecular electronic structure program.)
- TAU + (profiling and tracing tool-kit for performance analysis of parallel programs)
- Matlab + (programming language with extensive plotting and graphics functionalities)
- Category:Parallel programming + (programming with multiple threads or processes)
- Rosetta + (protein structure prediction suite)
- GPAW + (real-space DFT)
- POV-Ray + (render high-quality images of three dimensional objects)
- Halvan + (shared-memory computer with 64 cores and 2 TB of memory)
- Efield + (simulation environment for electromagnetic simulations)
- Category:Visualisation + (software for graphical representation of data)
- Coot + (software for macromolecular model building, model completion and validation, particularly suitable for protein modelling using X-ray data)
- Molsim + (software for molecular dynamic, Monte Carlo, and Brownian dynamics simulation)
- Totalview + (source code defect analysis tool)
- R + (statistical computing and visualisation language.)
- Category:Structural biology + (structural and functional analysis of proteins and their biomolecular complexes)
- NumPy + (the fundamental package needed for scientific computing with Python)
- Bioscope + (the pipeline stack that comes with the SOLiD sequencing platform)
- VTune Amplifier + (threading and performance optimization tool)
- Hebbe + (throughput cluster resource)
- Beda + (throughput cluster resource)
- Glenn + (throughput cluster resource)
- Kappa + (throughput cluster resource of 26 TFLOPS)
- Platon + (throughput cluster resource of 26 TFLOPS)
- Alarik + (throughput cluster resource of 40 TFLOPS)
- Ferlin + (throughput cluster resource of 58 TFLOPS)
- Siri + (throughput resource for SweGrid)
- Aurora + (throughput/general purpose cluster resource)
- Octopus + (time dependent density-functional theory code)