Parallelization of a materials science code

Name: Parallelization of a materials science code
Description: Parallelization request for a materials science code
Project financing: SNIC
Is active: yes
Start date: 2013-05-01
End date:

This project is the result of a request from a research group at LiU. The code is embarrassingly parallel in nature. The parallelization task is to send and receive an array of a complicated user-defined data type.

Requestors and collaborators:

  • Olle Hellman @ IFM, LiU


Description

We have implemented a simple interface for MPI send/receive of derived data. The send/recv routines "know" the data structure: internally they pack and unpack the structure into a character buffer. We have taken a data structure similar to the one described by the user and implemented it as a template. For the actual data structure, the user only needs to modify the relevant places in the pack and unpack routines; the data-structure-dependent parts are kept in a separate module. In the main code the user calls MPI_Send_point / MPI_Recv_point to send and receive the derived data type.
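
The following is a minimal sketch of this approach, not the project's actual implementation: the "point" structure, its members, and the exact argument lists of MPI_Send_point / MPI_Recv_point are illustrative assumptions. Only the technique itself, packing the structure into a character buffer with MPI_Pack / MPI_Unpack and transferring it as MPI_PACKED, follows the description above.

 /* Sketch only: a hypothetical "point" structure with a fixed-size part and
  * a variable-length array, sent by packing into a character buffer.       */
 #include <stdlib.h>
 #include <mpi.h>
 
 typedef struct {
     int     id;        /* fixed-size member            */
     double  weight;    /* fixed-size member            */
     int     n;         /* length of the dynamic array  */
     double *values;    /* dynamic, data-dependent part */
 } point;
 
 /* Pack the structure into a character buffer and send it as MPI_PACKED. */
 void MPI_Send_point(const point *p, int dest, int tag, MPI_Comm comm)
 {
     int size = 0, pos = 0, s;
     char *buf;
 
     /* Upper bound on the packed size: 2 ints + (1 + n) doubles. */
     MPI_Pack_size(2, MPI_INT, comm, &s);             size += s;
     MPI_Pack_size(1 + p->n, MPI_DOUBLE, comm, &s);   size += s;
     buf = malloc(size);
 
     MPI_Pack(&p->id,     1,     MPI_INT,    buf, size, &pos, comm);
     MPI_Pack(&p->weight, 1,     MPI_DOUBLE, buf, size, &pos, comm);
     MPI_Pack(&p->n,      1,     MPI_INT,    buf, size, &pos, comm);
     MPI_Pack(p->values,  p->n,  MPI_DOUBLE, buf, size, &pos, comm);
 
     MPI_Send(buf, pos, MPI_PACKED, dest, tag, comm);
     free(buf);
 }
 
 /* Receive the character buffer and unpack it back into the structure. */
 void MPI_Recv_point(point *p, int source, int tag, MPI_Comm comm)
 {
     MPI_Status st;
     int size, pos = 0;
     char *buf;
 
     /* Probe first so the buffer can be sized to the incoming message. */
     MPI_Probe(source, tag, comm, &st);
     MPI_Get_count(&st, MPI_PACKED, &size);
     buf = malloc(size);
     MPI_Recv(buf, size, MPI_PACKED, source, tag, comm, &st);
 
     MPI_Unpack(buf, size, &pos, &p->id,     1, MPI_INT,    comm);
     MPI_Unpack(buf, size, &pos, &p->weight, 1, MPI_DOUBLE, comm);
     MPI_Unpack(buf, size, &pos, &p->n,      1, MPI_INT,    comm);
     p->values = malloc(p->n * sizeof(double));
     MPI_Unpack(buf, size, &pos, p->values,  p->n, MPI_DOUBLE, comm);
 
     free(buf);
 }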

The routines can be used as a stand-alone library that is linked at run time. The interface can be made more user-friendly by "overloading" the MPI_Send_point / MPI_Recv_point routines as MPI_Send / MPI_Recv, respectively.
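
A hypothetical usage example, again assuming the point type and routine signatures from the sketch above, could look as follows: rank 0 sends one point to rank 1.

 /* Illustrative only: assumes the point type and the MPI_Send_point /
  * MPI_Recv_point routines sketched in the previous listing.           */
 #include <stdio.h>
 #include <stdlib.h>
 #include <mpi.h>
 
 int main(int argc, char **argv)
 {
     int rank;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     if (rank == 0) {
         double v[3] = { 1.0, 2.0, 3.0 };
         point p = { 42, 0.5, 3, v };          /* id, weight, n, values */
         MPI_Send_point(&p, 1, 0, MPI_COMM_WORLD);
     } else if (rank == 1) {
         point p;
         MPI_Recv_point(&p, 0, 0, MPI_COMM_WORLD);
         printf("received point %d with %d values\n", p.id, p.n);
         free(p.values);                       /* allocated by the recv routine */
     }
 
     MPI_Finalize();
     return 0;
 }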


The code


Members

 Member              Centre  Role                Field
 Chandan Basu (NSC)  NSC     Application expert  Computational science