Parallelization of a materials science code

From SNIC Documentation
Revision as of 08:55, 10 June 2013 by Chandan Basu (NSC)
Name: Parallelization of a materials science code
Description: Parallelization request for a materials science code
Project financing: SNIC
Is active: yes
Start date: 2013-05-01
End date:

This project is the result of a request from a research group at LiU. The code is embarrassingly parallel in nature; the parallelization task is to send and receive an array of a user-defined, complicated data type.

Requestors and collaborators:

  • Olle Hellman @ IFM, LiU


Description

We have implemented an easy-to-use interface for MPI send/receive of derived data types. The send/recv routines "know" the data structure: internally they pack and unpack the structured data on a character buffer. We have taken a data structure similar to the one described by the user and implemented it in a template format. For the actual data structure, the user needs to modify the relevant places in the pack and unpack routines; the data-structure-dependent parts are placed in a separate module. In the main code the user calls MPI_Send_point / MPI_Recv_point to send and receive the derived data type.

The routines can be used as a stand-alone library to be linked at run time. The interface can be made more user-friendly by "overloading" the MPI_Send_point / MPI_Recv_point routines as MPI_Send / MPI_Recv respectively.


The code


Members

Name                Centre  Role                Field
Chandan Basu (NSC)  NSC     Application expert  Computational science