This course covers performance engineering approaches on the compute node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance their code could achieve at best.
This is because parallelism takes us only halfway to good performance.
Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the knowledge required to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code that does the actual computational work is executed. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes.
Pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc., are covered. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science. We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimization with a scientific process.
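To give a flavor of the kind of estimate the Roofline model provides (a minimal sketch; the bandwidth figure is hypothetical, not a measurement from this course): the model bounds the performance of a loop by

P = min(P_peak, I * b_S)

where P_peak is the peak arithmetic performance of the chip, b_S the attainable memory bandwidth, and I the computational intensity of the loop in flops per byte. A STREAM-triad-like kernel, with about 2 flops per 24 bytes of memory traffic (ignoring the write-allocate transfer), running on a node with b_S = 100 GB/s would therefore be limited to roughly 8 Gflop/s, no matter how large P_peak is.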
This course is a PRACE training event.
09:15 Welcome – Intro – Computer architecture (1)
10:45 Welcome – Intro – Computer architecture (2)
11:45 Tools: topology, affinity, clock speed
13:15 Microbenchmarking for architectural exploration
14:15 The Roofline performance model: basics (1)
15:30 The Roofline performance model: basics (2)
17:00 End of day 1
09:00 Tools: hardware performance counters
09:45 Optimal use of parallel resources: SIMD, ccNUMA, (SMT) (1)
10:45 Optimal use of parallel resources: SIMD, ccNUMA, (SMT) (2)
11:30 Performance Engineering with patterns
13:15 Roofline case study: Jacobi smoother
14:15 Roofline case study: sparse matrix-vector multiplication
15:30 Case study: tall & skinny matrix-matrix multiplication
16:30 Optional: The ECM performance model
17:00 End of day 2
For publications of the RRZE HPC group, see https://hpc.fau.de/research/publications/
LIKWID tool suite: https://github.com/RRZE-HPC/likwid
LIKWID documentation Wiki: http://tiny.cc/LIKWID
LIKWID quick reference sheet
Kerncraft automatic Roofline/ECM modeling tool: https://github.com/RRZE-HPC/kerncraft
Online layer condition calculator: https://rrze-hpc.github.io/layer-condition/#calculator
GHOST sparse building blocks library: http://tiny.cc/GHOST
PHIST, a Pipelined Hybrid Parallel Iterative Solver Toolkit: https://bitbucket.org/essex/phist
STREAM source code: stream.c
Compile with, e.g.:
icc -Ofast -xHost -qopenmp -fno-alias -nolib-inline -qopt-streaming-stores never|always -o stream.exe stream.c
likwid-pin -c <pin_mask> ./stream.exe
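A concrete pinning example (a sketch assuming socket 0 has at least ten physical cores; adapt the affinity expression to your machine):

OMP_NUM_THREADS=10 likwid-pin -c S0:0-9 ./stream.exe

This runs ten OpenMP threads, pinned one per core to the first ten physical cores of socket 0.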
LIKWID-instrumented STREAM source code: stream-mapi.c
Compile with, e.g.:
icc <options-from-above> -DLIKWID_PERFMON -I<path_to_likwid_inc> stream-mapi.c -o stream-mapi.exe -L<path_to_likwid_lib> -llikwid
likwid-perfctr -C <pin_mask> -m -g <perf_group> ./stream-mapi.exe
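The instrumentation in stream-mapi.c follows the standard LIKWID marker API pattern. Below is a generic, self-contained sketch of that pattern (not the literal contents of stream-mapi.c; the region name and array size are made up, and older LIKWID versions ship the macros in likwid.h rather than likwid-marker.h):

/* Minimal marker-API sketch; compile with -DLIKWID_PERFMON and link -llikwid as shown above. */
#include <stdio.h>
#include <likwid-marker.h>      /* older LIKWID versions: #include <likwid.h> */

#define N 2000000
static double a[N], b[N], c[N];

int main(void)
{
    LIKWID_MARKER_INIT;                  /* initialize the marker API once */

    #pragma omp parallel
    {
        LIKWID_MARKER_START("triad");    /* start the named region on every thread */
        #pragma omp for
        for (long i = 0; i < N; ++i)
            a[i] = b[i] + 3.0 * c[i];    /* STREAM-triad-like kernel */
        LIKWID_MARKER_STOP("triad");     /* stop the named region */
    }

    LIKWID_MARKER_CLOSE;                 /* finalize; results are read by likwid-perfctr -m */
    printf("a[0] = %f\n", a[0]);         /* keep the compiler from optimizing the loop away */
    return 0;
}

The region name ("triad") then shows up in the likwid-perfctr output when the binary is run with the -m option as shown above.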
Vector Triad throughput benchmark: triad-throughput.tar.gz
icc -c timing.c
icc -c dummy.c
ifort -Ofast -xHost -qopenmp -fno-alias -fno-inline triad-tp.f90 dummy.o timing.o -o triad.exe
echo <size> | likwid-pin <PIN_OPTIONS> ./triad.exe
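For example (the array size and pinning are arbitrary placeholders; adapt them to your machine):

echo 10000000 | likwid-pin -c S0:0-9 ./triad.exe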
Dense matrix-vector multiplication code (with LIKWID markers): dmvm-plain.tar.gz
icc -c timing.c
ifort -Ofast -xHost -qopenmp -I<path_to_likwid_inc> dmvm.f90 timing.o -L<path_to_likwid_lib> -llikwid -o dmvm-plain.exe
echo <NUM_ROWS> <TOTAL_MATRIX_ELEMENTS> | likwid-perfctr -g <METRIC_GROUP> -C <PIN_EXPR> -m ./dmvm-plain.exe
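For example (the sizes and the MEM group are only placeholders; pick a group that exists on your architecture, e.g. from the list printed by likwid-perfctr -a):

echo 10000 100000000 | likwid-perfctr -g MEM -C S0:0-9 -m ./dmvm-plain.exe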
Jacobi 3D stencil code: j3d_with_likwid.tar_.gz
Build with the supplied Makefile (you may need to adapt it to your LIKWID installation).
likwid-perfctr -C <pin_mask> -m -g <perf_group> ./J3D.exe <size>
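For example (hypothetical problem size, pinning, and group):

likwid-perfctr -C S0:0-9 -m -g MEM ./J3D.exe 400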
Sparse matrix benchmark code (CSR/SELL-C-sigma): sparsematrixbench.tar.gz