LES Code Parallelization

{{project info
|description=Parallelization of a Large Eddy Simulation Code
|fields=Computational fluid dynamics
|financing=SNIC
|active=No
|start date=2011-03-01
|end date=2012-02-29
}}
  
 
This is an NSC-promoted project supporting code parallelization for prominent Swedish scientists. A serial Large Eddy Simulation (LES) code by Dr. L. Davidson at Chalmers University has been selected as the candidate. We provide:
  
* A standalone, light-weight partitioning code for 3-D structured geometries (a minimal sketch follows this list)
* Inter-processor and global communicators for halo information exchange in the baseline convection-diffusion code
* Parallelization of the 1-D multigrid pressure Poisson solver
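
As an illustration of the kind of block decomposition such a partitioning code performs, here is a minimal sketch in C. It is not taken from the actual NSC/Chalmers code: the function name split_1d, the grid dimensions, and the choice of splitting only the k-direction are assumptions made for the example.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Hypothetical 1-D block split of the k-direction of an ni x nj x nk
 * structured grid across nprocs ranks: each rank receives a contiguous
 * slab [k_lo, k_hi], with any remainder spread over the first ranks.  */
static void split_1d(int nk, int nprocs, int rank, int *k_lo, int *k_hi)
{
    int base = nk / nprocs;      /* minimum slab thickness          */
    int rem  = nk % nprocs;      /* leftover planes to distribute   */
    *k_lo = rank * base + (rank < rem ? rank : rem);
    *k_hi = *k_lo + base + (rank < rem ? 1 : 0) - 1;
}

int main(void)
{
    int ni = 128, nj = 128, nk = 128;   /* assumed global grid size */
    int nprocs = 8;                     /* assumed number of ranks  */

    for (int rank = 0; rank < nprocs; ++rank) {
        int k_lo, k_hi;
        split_1d(nk, nprocs, rank, &k_lo, &k_hi);
        printf("rank %d: k-planes [%d, %d] -> local block %d x %d x %d\n",
               rank, k_lo, k_hi, ni, nj, k_hi - k_lo + 1);
    }
    return 0;
}
</syntaxhighlight>

A partitioner for general 3-D structured geometries would typically split in all three index directions and write the resulting block extents to file for the solver to read, but the bookkeeping follows the same pattern as this one-direction sketch.
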
== Abstract ==

Prof. Lars Davidson's LES (Large Eddy Simulation) fluid dynamics code has been chosen as a pilot project for NSC's code parallelisation service. We developed a standalone domain partitioning code that decomposes the computational domain among the cores. We implemented MPI communication for the halo cell exchange in several forms, so that the communication routines fit both the regular 3-D data structure and the converted 1-D data structure used in the multigrid implementation. Parallel performance shows linear speed-up on a small number of processors, up to 20 cores. We do not observe further speed-up as the number of processors increases in this strong-scaling measurement, because the original domain was deliberately kept small (around 2 million mesh points) so that performance could also be measured on a single core. Nevertheless, we expect the code to scale well on a larger number of cores in a weak-scaling test. Furthermore, we emphasize that this parallelisation effort enables more detailed flow simulations in complex geometries, whose mesh systems require far more mesh points than a single-core program can handle. We find that a change of time integration scheme would further improve performance by providing better convergence behaviour, which will be one of the main objectives of a follow-up project.
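
A minimal sketch, in C with MPI, of the kind of halo cell exchange described above. It assumes a 1-D slab decomposition and a flat array layout, so the same call pattern covers both a 3-D indexed field and the converted 1-D storage; the routine name halo_exchange, the buffer layout, and the block sizes are illustrative assumptions, not the actual routines of the LES code.

<syntaxhighlight lang="c">
#include <mpi.h>
#include <stdlib.h>

/* Exchange one plane of ghost (halo) cells with the k-direction
 * neighbours.  Each rank stores nloc interior planes of ni*nj cells
 * plus one ghost plane on each side in a flat array:
 *   phi[k*ni*nj + j*ni + i],  k = 0 .. nloc+1                        */
static void halo_exchange(double *phi, int ni, int nj, int nloc,
                          MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    int plane = ni * nj;
    int lower = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int upper = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    /* send first interior plane down, receive the upper ghost plane */
    MPI_Sendrecv(&phi[1 * plane],          plane, MPI_DOUBLE, lower, 0,
                 &phi[(nloc + 1) * plane], plane, MPI_DOUBLE, upper, 0,
                 comm, MPI_STATUS_IGNORE);

    /* send last interior plane up, receive the lower ghost plane */
    MPI_Sendrecv(&phi[nloc * plane], plane, MPI_DOUBLE, upper, 1,
                 &phi[0],            plane, MPI_DOUBLE, lower, 1,
                 comm, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int ni = 64, nj = 64, nloc = 16;    /* assumed local block size */
    double *phi = calloc((size_t)(nloc + 2) * ni * nj, sizeof(double));

    halo_exchange(phi, ni, nj, nloc, MPI_COMM_WORLD);

    free(phi);
    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

At 20 cores, the roughly 2 million mesh points of the pilot case leave only about 100,000 points per core, so the halo surface becomes large relative to the interior work; this is consistent with the flattening of the strong-scaling curve noted above.
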
Full details will be published [http://www.nsc.liu.se/~sko/Projects/LES_Parallel here].
  
 
== Members ==

{| class="wikitable"
! Name !! Centre !! Role !! Field
|-
| Soon-Heum Ko (NSC) || NSC || Application expert || Computational fluid dynamics
|}