Parallel programming entails programming with multiple threads or processes.
In a parallel program, the computational work to be performed by the application is divided into a number of work packages. These work packages can then be assigned to a number of processing elements (e.g. the cores of a modern multi-core processor, or a GPU) and executed independently. This should deliver a faster time to solution than utilising a single processing element only. By deploying several hundred or thousand processing elements, calculations which would otherwise take many years can be completed in months or even weeks.
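As a sketch of such a decomposition, the following C program splits a summation into independent work packages, one per POSIX thread. The array size, thread count, and names such as `sum_slice` are illustrative assumptions, not SNIC-provided code:

```c
/* Minimal sketch: divide N array elements into NTHREADS independent
   work packages; each thread sums its own slice. Illustrative only. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];

typedef struct {
    int start, end;      /* half-open range [start, end) of this work package */
    double partial_sum;  /* result produced independently by this thread */
} work_package_t;

static void *sum_slice(void *arg)
{
    work_package_t *wp = arg;
    double s = 0.0;
    for (int i = wp->start; i < wp->end; i++)
        s += data[i];
    wp->partial_sum = s;
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    work_package_t wp[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Divide the N elements into NTHREADS roughly equal work packages. */
    int chunk = N / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        wp[t].start = t * chunk;
        wp[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, sum_slice, &wp[t]);
    }

    /* Combine the independently computed partial results. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += wp[t].partial_sum;
    }
    printf("total = %f\n", total);
    return 0;
}
```

Compiled with `gcc -pthread`, each thread works on its own slice with no coordination needed until the partial sums are combined at the end.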
In a typical parallel program, however, the work packages are not fully independent: they frequently require access to data generated or modified on other processing elements, and this data needs to be communicated. On a distributed-memory system, communication is typically achieved through some form of message passing. On a shared-memory system (e.g. a multi-core system) one can choose between message passing and shared-memory programming techniques. With shared-memory programming one spawns a number of threads which all have access to a common shared memory space; the threads communicate by writing data to and reading data from this space.
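As a minimal message-passing sketch, the following C program assumes MPI, the usual message-passing choice on distributed-memory clusters; the variable `boundary` and the choice of ranks 0 and 1 are illustrative assumptions:

```c
/* Minimal MPI sketch: data generated on one process is needed by
   another, so it is communicated explicitly. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double boundary = 3.14;  /* data produced on rank 0 ... */
        MPI_Send(&boundary, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double boundary;
        /* ... required by rank 1, so it arrives as a message. */
        MPI_Recv(&boundary, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f\n", boundary);
    }

    MPI_Finalize();
    return 0;
}
```

Run with e.g. `mpirun -np 2 ./a.out`. On a shared-memory system the same exchange could instead happen implicitly, through a variable visible to all threads.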
|Name|Centre|Field|AE FTE|General activities|
|---|---|---|---|---|
|Birgitte Brydsö (HPC2N)|HPC2N|HPC| |Training, general support|
|Jerry Eriksson (HPC2N)|HPC2N|HPC| |HPC, parallel programming|
|Marcus Lundberg (UPPMAX)|UPPMAX|Performance tuning|100|I help users with productivity, program performance, and parallelisation.|
|Mirko Myllykoski (HPC2N)|HPC2N|Parallel programming| |Parallel programming, HPC, GPU programming, advanced support|
|Wei Zhang (NSC)|NSC|Computational science| |Code optimization, parallelization|
Pages in category "Parallel programming"
The following 29 pages are in this category, out of 29 total.