Topic outline

  • General

    Most HPC systems are clusters of shared memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Hybrid programming therefore combines distributed memory parallelization on the node interconnect (e.g., with MPI) with shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 has introduced a new shared memory programming interface that can be combined with inter-node MPI communication. It can be used for direct neighbor accesses, similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
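
    As a first flavour of the MPI-3.0 shared memory interface discussed in the course, the minimal C sketch below (variable names and buffer sizes are illustrative, not taken from the course material) splits MPI_COMM_WORLD into one communicator per shared memory node, allocates a shared window, and queries a neighbor's segment for direct load/store access:

        /* Minimal sketch of MPI-3.0 shared memory within one node;
           names and sizes are illustrative only. */
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            /* One communicator per shared memory node. */
            MPI_Comm node_comm;
            MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                                MPI_INFO_NULL, &node_comm);

            int node_rank;
            MPI_Comm_rank(node_comm, &node_rank);

            /* Each process contributes 1000 doubles to a shared window. */
            double *my_segment;
            MPI_Win win;
            MPI_Win_allocate_shared(1000 * sizeof(double), sizeof(double),
                                    MPI_INFO_NULL, node_comm, &my_segment, &win);

            /* Query the base address of the left neighbor's segment
               for direct (load/store) access instead of message passing. */
            if (node_rank > 0) {
                MPI_Aint size;
                int disp_unit;
                double *left_segment;
                MPI_Win_shared_query(win, node_rank - 1, &size, &disp_unit,
                                     (void *)&left_segment);
                /* left_segment now points into the neighbor's memory. */
            }

            /* Synchronize before/after direct accesses; the appropriate
               memory model and synchronization calls are covered on day 1. */
            MPI_Win_fence(0, win);

            MPI_Win_free(&win);
            MPI_Comm_free(&node_comm);
            MPI_Finalize();
            return 0;
        }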

    Hands-on sessions are included on both days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. This course is organized by VSC (Vienna Scientific Cluster) in cooperation with HLRS and RRZE.


    Agenda & Content (preliminary)

    1st day:

    08:45   Registration
    09:00      Welcome
    09:05      Motivation
    09:15      Introduction
    09:45      Programming Models
    09:45       - Pure MPI
    10:05   Coffee Break
    10:25       - Topology Optimization
    11:05         Practical (application-aware Cartesian topology)
    11:45       - Topology Optimization (Wrap up)
    12:00   Lunch
    13:00       - MPI + MPI-3.0 Shared Memory
    13:30         Practical (replicated data)
    14:00   Coffee break
    14:20       - MPI Memory Models and Synchronization
    15:00         Practical (substituting pt-to-pt by shared memory)
    15:45   Coffee break
    16:00         Practical (substituting barrier synchronization by pt-to-pt)
    17:00   End

    2nd day:

    09:00      Programming Models (continued)
    09:00       - MPI + OpenMP
    10:30   Coffee Break
    10:50         Practical (how to compile and start)
    11:30         Practical (hybrid through OpenMP parallelization)
    13:00   Lunch
    14:00       - Overlapping Communication and Computation
    14:20         Practical (taskloops)
    15:00   Coffee Break
    15:20       - MPI + OpenMP Conclusions
    15:30       - MPI + Accelerators
    15:45      Tools
    16:00      Conclusions
    16:15      Q & A
    16:30   End

    Date: Wednesday, June 12, 2019, 08:45 - Thursday, June 13, 2019, 16:30
    Location:  FH Internet-Raum FH1 (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area)

    Lecturers:

    Rolf Rabenseifner (HLRS), Claudia Blaas-Schenner and Irene Reichl (VSC Team, TU Wien), Georg Hager (RRZE)


    Course material (here ☺):

    http://tiny.cc/MPIX-VSC