Hands-On: likwid-topology and likwid-pin

In this hands-on exercise you will compile and run a main memory bandwidth benchmark. You will learn how to explore node properties and topology with likwid-topology and how to use likwid-pin to explicitly control thread affinity.

Finally, you will learn how to determine the maximum sustained memory bandwidth for one socket and for a complete node.

Time to finish: around 15 minutes.

Preparation

You can find the benchmark code in the BWBENCH folder of the teaching account.

  • Get the source from the teaching account:

    • $ cp -a ~ghager/BWBENCH ~

  • Load Intel compiler and LIKWID modules:

    • $ module load intel likwid

Explore node topology

Execute likwid-topology:

$ likwid-topology -g

Answer the following questions:

  1. How many cores are available in one socket and in the whole node?
  2. Is SMT enabled?
  3. What is the aggregate size of the last level cache in MB per socket?
  4. How many ccNUMA memory domains are there?
  5. What is the total installed memory capacity?
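
If you want to cross-check your answers, standard Linux tools report similar information (a sketch; numactl may not be installed on every system):

$ lscpu              # sockets, cores per socket, threads per core (SMT), cache sizes
$ numactl --hardware # ccNUMA domains and installed memory per domain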

Compile benchmark

Compile a threaded OpenMP binary with optimization flags:

$ icc -Ofast -xHost -std=c99 -qopenmp -o bwBench-ICC bwBench.c
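
If the Intel compiler is not available, a roughly equivalent GCC build should also work (a sketch; the flags are an assumption and performance may differ from the icc build):

$ gcc -Ofast -march=native -std=c99 -fopenmp -o bwBench-GCC bwBench.c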

Run benchmark

Execute with 12 threads without explicit pinning:

$ env OMP_NUM_THREADS=12 ./bwBench-ICC

Perform multiple (about 10) runs.
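
A simple shell loop can automate the repeated runs (a sketch, assuming a bash-like shell):

$ for i in $(seq 10); do env OMP_NUM_THREADS=12 ./bwBench-ICC; done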

  1. Do the results fluctuate?
  2. If so, by how much?

Run again with explicit pinning, again using 12 threads but now pinned to the 12 physical cores of socket 0:

$ likwid-pin -c S0:0-11 ./bwBench-ICC

  1. Is the result different?
  2. If yes, why is it different?
  3. Can you recover the previous result?
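
To see which affinity domains (e.g., N for the node, S0/S1 for the sockets, M0/M1 for the ccNUMA domains) are available for pinning expressions, likwid-pin can list them (a sketch; consult likwid-pin -h if your LIKWID version behaves differently):

$ likwid-pin -p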

Benchmark the memory bandwidth scaling within one socket (in steps of one thread):

  1. What is the maximum memory bandwidth in GB/s?
  2. Which benchmark case reaches the highest bandwidth?
  3. At which core count can you saturate the main memory bandwidth?
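
A shell loop over the thread count is a convenient way to run the scaling series (a sketch, assuming 12 physical cores in socket 0 as on the example node; adjust the upper bound to your machine):

$ for n in $(seq 1 12); do likwid-pin -c S0:0-$((n-1)) ./bwBench-ICC; done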

Measure the maximum memory bandwidth using all cores in the node (single measurement).

What is the maximum bandwidth in GB/s?
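
One way to use all physical cores of the node is to pin across both sockets (a sketch, assuming two sockets with 12 physical cores each; adjust to the numbers reported by likwid-topology):

$ likwid-pin -c S0:0-11@S1:0-11 ./bwBench-ICC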


