Shared memory programming
Shared memory programming is a form of parallel programming. A shared memory program typically achieves its parallelism by spawning threads. The threads can be distributed onto more than one processing element (e.g. the cores of a multi-core processor) to gain a parallel speed-up for the process. As the name suggests, all threads have access to a large shared memory area and can read from and/or write to it. When accessing the shared memory from different threads, care must be taken that these accesses happen in the right order to avoid data races.
To write shared memory programs for a multi-core system, popular choices include the pthreads library for C or C++ programs, OpenMP for Fortran, C or C++ programs, or a language with built-in threading such as Java. Many shared memory programs for a GPU are written in OpenCL or CUDA.
As implied above, executing a shared memory program requires specialist hardware. Besides being capable of progressing more than one thread simultaneously, it needs to provide efficient access to the shared memory space from all these threads. Fortunately, such hardware is no longer expensive these days. A simple multi-core system or a single GPU can be used if the requirements for parallel speed-up are modest.