Category:Parallel programming

From SNIC Documentation

Parallel programming entails programming with multiple threads or processes.

In a parallel program, the computational work to be performed by the application is divided into a number of work packages. These work packages can then be assigned to a number of processing elements (e.g. the cores of a modern multi-core processor or a GPU) and executed independently. This should deliver a faster time to solution than using a single processing element. By deploying several hundreds or thousands of processing elements, calculations which would otherwise take many years to complete can be finished in months or even weeks.
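
As an illustration (a minimal sketch, not part of the original page), the C/OpenMP fragment below treats the iterations of a loop as work packages and lets the OpenMP runtime assign them to the cores of a multi-core processor:

 /* Hypothetical example: loop iterations act as work packages that the
    OpenMP runtime distributes over the available cores. */
 #include <stdio.h>
 #include <omp.h>
 
 #define N 1000000
 
 int main(void)
 {
     static double x[N];
     double sum = 0.0;
 
     /* Each thread executes a share of the iterations; the partial sums
        are combined by the reduction clause. */
     #pragma omp parallel for reduction(+:sum)
     for (int i = 0; i < N; i++) {
         x[i] = (double)i;
         sum += x[i];
     }
 
     printf("sum = %f, using up to %d threads\n", sum, omp_get_max_threads());
     return 0;
 }

Compiled with e.g. gcc -fopenmp, the number of threads can be controlled through the OMP_NUM_THREADS environment variable.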

In a typical parallel program, however, the work packages are not fully independent. They frequently require access to data generated or modified on other processing elements, so this data needs to be communicated. On a distributed memory system, communication is typically facilitated by some form of message passing. On a shared memory system (e.g. a multi-core system) one has a choice between message passing and shared memory programming techniques. With shared memory programming one spawns a number of threads, which all have access to a common shared memory space. The threads communicate by writing data to and reading data from this shared space.
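
To illustrate the message passing case, here is a minimal C sketch (again only an illustration, assuming an MPI library such as Open MPI or MPICH is available) with two processes: rank 0 sends a small array to rank 1, which receives it and sums the values.

 /* Hypothetical example of explicit message passing between two MPI processes. */
 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char **argv)
 {
     int rank;
     double data[4] = {1.0, 2.0, 3.0, 4.0};
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     if (rank == 0) {
         /* Rank 0 owns the data and communicates it explicitly. */
         MPI_Send(data, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
     } else if (rank == 1) {
         double recv[4], sum = 0.0;
         MPI_Recv(recv, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
         for (int i = 0; i < 4; i++)
             sum += recv[i];
         printf("Rank 1 received data, sum = %f\n", sum);
     }
 
     MPI_Finalize();
     return 0;
 }

Built with e.g. mpicc and launched with mpirun -np 2, each rank runs in its own address space, so all data exchange happens through the send and receive calls.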

Experts

Name | Field | AE FTE | General activities
Birgitte Brydsö (HPC2N) | Parallel programming, HPC | | Training, general support
Jerry Eriksson (HPC2N) | Parallel programming, HPC | | HPC, Parallel programming
Joachim Hein (LUNARC) | Parallel programming, Performance optimisation | 85 | Parallel programming support, Performance optimisation, HPC training
Marcus Lundberg (UPPMAX) | Computational science, Parallel programming, Performance tuning, Sensitive data | 100 | I help users with productivity, program performance, and parallelisation. I also work with allocations and with sensitive data questions.
Mirko Myllykoski (HPC2N) | Parallel programming, GPU computing | | Parallel programming, HPC, GPU programming, advanced support
Wei Zhang (NSC) | Computational science, Parallel programming, Performance optimisation | | Code optimization, parallelization