exercise: replicated data

initial situation : data is replicated within one node

Rank 0 in MPI_COMM_WORLD broadcasts an array to all processes in MPI_COMM_WORLD. All ranks then compute the sum of the elements of that array.

code : MPI_Bcast to all ranks within MPI_COMM_WORLD

C: ~/MPI/course/C/1sided/data-rep_base.c

Fortran: ~/MPI/course/F_30/1sided/data-rep_base_30.f90
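
For orientation, here is a minimal sketch of the baseline pattern (the actual course files may differ in details such as the array length, data type, and initialization; N is a hypothetical value):

#include <mpi.h>
#include <stdio.h>

#define N 1000                    /* hypothetical array length */

int main(int argc, char *argv[])
{
  double arr[N], sum = 0.0;
  int rank, i;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0)                  /* only rank 0 initializes the data */
    for (i = 0; i < N; i++) arr[i] = (double)i;

  /* every rank receives a full copy of the array, i.e., replicated data */
  MPI_Bcast(arr, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  for (i = 0; i < N; i++) sum += arr[i];
  printf("rank %d: sum = %f\n", rank, sum);

  MPI_Finalize();
  return 0;
}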

task : avoiding replicated data within one node

To reduce memory consumption, the array is stored in a shared memory window only once per node (physical shared memory island). The array is therefore broadcast to only one rank per node.

needed :

communicators : 

+ for the shared memory islands: comm_shm, rank_shm, size_shm (MPI_Comm_split_type / MPI_COMM_TYPE_SHARED)

+ across the nodes, containing all ranks with rank_shm=0: comm_head (MPI_Comm_split with color = 0 for all ranks with rank_shm=0 and color = MPI_UNDEFINED for all other ranks); see the sketch below this list
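
A sketch of both communicator constructions, using the variable names from the exercise (to be placed after MPI_Init):

MPI_Comm comm_shm, comm_head;
int rank_shm, size_shm, color;

/* one sub-communicator per shared memory island (node) */
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &comm_shm);
MPI_Comm_rank(comm_shm, &rank_shm);
MPI_Comm_size(comm_shm, &size_shm);

/* communicator across the nodes that contains only the heads (rank_shm=0);
   all other ranks pass MPI_UNDEFINED and get comm_head == MPI_COMM_NULL */
color = (rank_shm == 0) ? 0 : MPI_UNDEFINED;
MPI_Comm_split(MPI_COMM_WORLD, color, 0, &comm_head);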

shared memory :

+ allocate the shared memory on rank_shm = 0 --> with the size of the array (MPI_Win_allocate_shared)

+ all other ranks allocate shared memory of length zero. In that case the returned pointer to the shared memory is not defined, therefore call MPI_Win_shared_query to obtain the starting address of the shared array. (Only when all ranks allocate non-zero-sized shared memory are the individual shared memory portions contiguous, so that the pointers to the shared memory portions of the other ranks could be computed from local information only.)
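
A sketch of the allocation and the query, assuming comm_shm and rank_shm from the communicator sketch above and N as the (hypothetical) array length:

MPI_Win win;
double *arr;                      /* pointer into the node-local shared array */
MPI_Aint win_size, sz;
int disp_unit;

/* only the head of each island allocates the full array, all others length 0 */
win_size = (rank_shm == 0) ? (MPI_Aint)(N * sizeof(double)) : 0;
MPI_Win_allocate_shared(win_size, sizeof(double), MPI_INFO_NULL,
                        comm_shm, &arr, &win);

/* for the zero-sized segments the returned pointer is not usable;
   query the starting address of rank 0's segment instead */
if (rank_shm != 0)
  MPI_Win_shared_query(win, 0, &sz, &disp_unit, &arr);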

broadcast between the heads :

MPI_Bcast can be called only by the ranks that are inside comm_head; on all other ranks comm_head is not defined (MPI_Comm_split returned MPI_COMM_NULL there).
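
A minimal sketch of the guarded broadcast, assuming arr, N, and comm_head from the sketches above:

/* only the island heads take part in the inter-node broadcast;
   on all other ranks comm_head is MPI_COMM_NULL and MPI_Bcast is skipped */
if (comm_head != MPI_COMM_NULL)
  MPI_Bcast(arr, N, MPI_DOUBLE, 0, comm_head);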

use of the shared memory :

take care where to put the memory fences (i.e., the synchronization of the shared memory window)
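
One possible placement, using MPI_Win_fence on the shared window and assuming win, arr, comm_head, N, sum, and i from the earlier sketches; the course solution may use a different synchronization scheme (e.g. MPI_Win_sync combined with a barrier). The essential point is that a synchronization must separate the head's write of the shared array from the reads by the other ranks:

MPI_Win_fence(0, win);            /* start of the epoch in which the heads write  */
if (comm_head != MPI_COMM_NULL)
  MPI_Bcast(arr, N, MPI_DOUBLE, 0, comm_head);    /* head fills the shared array  */
MPI_Win_fence(0, win);            /* make the written data visible to all ranks   */

for (i = 0; i < N; i++)
  sum += arr[i];                  /* every rank reads the single node-local copy  */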

use the following program as the baseline for your my_shared_exa3.c/_30.f90 :

C: ~/MPI/course/C/1sided/data-rep_exercise.c

Fortran: ~/MPI/course/F_30/1sided/data-rep_exercise_30.f90



