Topic outline

  • General

    Important note: The course agenda and materials shown below are those of the 2020 edition of the course. All materials will be updated right before the next course takes place at HLRS.

    Most HPC systems are clusters of shared memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Therefore, hybrid programming may combine distributed memory parallelization across the node interconnect (e.g., with MPI) with shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a new shared memory programming interface that can be combined with inter-node MPI communication. It can be used for direct neighbor accesses, similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
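
    For a first orientation, a minimal hybrid MPI+OpenMP sketch (illustrative only, not part of the official course material) could look as follows; it requests MPI_THREAD_FUNNELED, meaning only the master thread makes MPI calls:

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int provided, rank;

            /* MPI_THREAD_FUNNELED: only the master thread calls MPI */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* shared memory parallelization inside the node */
            #pragma omp parallel
            printf("MPI rank %d, OpenMP thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());

            MPI_Finalize();
            return 0;
        }

    Such a program is typically built with the MPI compiler wrapper plus the compiler's OpenMP flag (e.g., mpicc -fopenmp) and started with one MPI process per node or per socket, each spawning several threads.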

    Hands-on sessions are included on both days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. The course provides scientific training in Computational Science and also fosters scientific exchange among the participants. It is organized by HLRS in cooperation with RRZE and VSC (Vienna Scientific Cluster).
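
    Likewise, the MPI-3.0 shared memory model covered in the course is built on splitting MPI_COMM_WORLD into per-node communicators and allocating shared windows. A minimal sketch (again illustrative, not course material; the one-double-per-rank layout is an assumption for brevity):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            MPI_Comm nodecomm;
            MPI_Win  win;
            MPI_Aint size;
            double  *mem, *base0;
            int      noderank, disp_unit;

            MPI_Init(&argc, &argv);

            /* one communicator per shared memory node */
            MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                                MPI_INFO_NULL, &nodecomm);
            MPI_Comm_rank(nodecomm, &noderank);

            /* each rank contributes one double to a node-wide window */
            MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                    MPI_INFO_NULL, nodecomm, &mem, &win);

            /* direct load/store access to rank 0's segment */
            MPI_Win_shared_query(win, 0, &size, &disp_unit, (void **)&base0);

            mem[0] = (double)noderank;   /* write own segment      */
            MPI_Win_fence(0, win);       /* simple synchronization */
            printf("rank %d sees rank 0's value %g\n", noderank, base0[0]);

            MPI_Win_free(&win);
            MPI_Comm_free(&nodecomm);
            MPI_Finalize();
            return 0;
        }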


    Agenda & Content

    1st day (Hybrid programming, part 1)

    08:45   Registration
    09:00      Welcome
    09:05      Motivation
    09:15      Introduction
    09:45      Programming Models
    09:50       - MPI + OpenMP
    10:30   Coffee Break
    10:50       - MPI + OpenMP (continued)
    11:40         Practical (how to compile and start)
    12:30         Practical (hybrid through OpenMP parallelization)
    13:00   Lunch
    14:00         Practical (continued)
    15:00   Coffee Break
    15:20       - Overlapping Communication and Computation
    15:40         Practical (taskloops)
    16:20       - MPI + OpenMP Conclusions
    16:30       - MPI + Accelerators
    16:45      Tools
    17:00   End of first day

    2nd day (Hybrid programming, part 2)

    09:00      Programming Models (continued)
    09:05       - MPI + MPI-3.0 Shared Memory
    09:45         Practical (replicated data)
    10:30   Coffee break
    10:50         Practical (replicated data, continued)
    11:50       - MPI Memory Models and Synchronization
    12:30   Lunch
    13:30       - Pure MPI
    13:50       - Topology Optimization
    14:30   Coffee Break
    14:50         Practical (application-aware Cartesian topology)
    15:45       - Topology Optimization (Wrap up)
    16:00      Conclusions
    16:15      Q & A
    16:30   End of second day (course)


    Date: Monday, January 27, 2020, 08:45 - Tuesday, January 28, 2020, 16:30
    Location:  HLRS, Room 0.439 / Rühle Saal, University of Stuttgart, Nobelstr. 19, D-70569 Stuttgart, Germany

    Lecturers:

    Rolf Rabenseifner (HLRS), Claudia Blaas-Schenner and Irene Reichl (VSC Team, TU Wien), Georg Hager (RRZE)


    Course material (here ☺):

    http://tiny.cc/MPIX-HLRS