PHYS52015: Introduction to High Performance Computing #
This is the course webpage for the High Performance Computing part of PHYS52015. It collects the exercises, syllabus, and notes. The source repository is hosted on GitHub.
Course organisation #
The course will run over four weeks starting on 9th November 2021. Each week there will be two sessions, scheduled at 4pm UK time on Tuesdays and Fridays in TLC025.
You can attend remotely over Zoom; you will need to be authenticated with your Durham account.
Meeting ID: 979 3263 5844
Passcode: 371456
The sessions will be a combination of short lectures, discussion, and practical work on the exercises in small groups, with tutors to help. Please bring a laptop along to the sessions if you can.
We will use the same Slack channel that you have been using for the scientific computing part of the course to facilitate discussion.
The notes contain exercises; please do attempt them as you go through.
Exercise
Exercises look like this.
Slides/recordings #
2021-11-30 #
We looked at collectives, and built a simple performance model for the reduction we coded in the ring reduce exercise. This motivated better algorithms and MPI’s various collective operations.
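For reference, here is a minimal sketch (the names and setup are illustrative, not the exercise code) of handing the same reduction over to the library collective MPI_Allreduce, which implementations are free to realise with a tree-like algorithm taking roughly log₂ p steps, rather than the p − 1 steps on the critical path of our ring version:

```c
/* Minimal sketch: sum one int across all ranks with a collective.
   The library chooses the algorithm (often tree-based), so this
   typically scales better than a hand-coded ring reduction. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int local = rank;   /* each process contributes its own rank */
  int total = 0;
  /* Combine `local` from every rank with MPI_SUM; all ranks get `total`. */
  MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

  printf("[rank %d] sum of ranks = %d\n", rank, total);
  MPI_Finalize();
  return 0;
}
```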
No session on Friday 3rd December due to UCU strike action. Some Durham-specific comments on the action are available on their website.
2021-11-26 #
We had an impromptu online session due to Storm Arwen. We continued with point-to-point messaging, and discussed in a bit more detail how messages traverse the network. Then we looked at nonblocking messages. I fixed some of the typos from the live slides.
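To illustrate the nonblocking pattern, here is a minimal sketch (illustrative only, not the slide code): each rank posts a receive and a send, then waits for both, avoiding the deadlock that matched blocking calls can cause on a ring.

```c
/* Minimal sketch: nonblocking shift around a ring.
   Each rank sends its rank to the right neighbour and
   receives from the left, with no risk of deadlock. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int right = (rank + 1) % size;
  int left = (rank + size - 1) % size;
  int sendbuf = rank, recvbuf = -1;
  MPI_Request requests[2];

  /* Post both operations; neither call blocks. */
  MPI_Irecv(&recvbuf, 1, MPI_INT, left, 0, MPI_COMM_WORLD, &requests[0]);
  MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &requests[1]);
  /* Only touch the buffers after both operations complete. */
  MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);

  printf("[rank %d] received %d from rank %d\n", rank, recvbuf, left);
  MPI_Finalize();
  return 0;
}
```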
2021-11-23 #
We started looking at programming with distributed memory parallelism, and introduced the MPI library.
The MPI library has a lot of functions and can be a bit overwhelming, but please read through the overview and the point-to-point messaging notes.
Do have a go at the exercises linked at the end of the point-to-point slides before the next session. I will go through some solutions then.
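If you want a concrete starting point before the exercises, here is a minimal sketch of the basic matched send/receive pattern (illustrative, not the exercise code); run it with at least two processes:

```c
/* Minimal sketch: blocking point-to-point message from rank 0 to rank 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    int value = 42;
    /* Send one int to rank 1 with message tag 0. */
    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    int value;
    /* Matching receive: same datatype, source rank 0, tag 0. */
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("[rank 1] received %d from rank 0\n", value);
  }
  MPI_Finalize();
  return 0;
}
```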
2021-11-19 #
We introduced the concept of parallel scaling, and looked at some examples of Amdahl’s law.
Hopefully you were able to produce some plots of the parallel performance of the example code. What did you observe?
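As a reminder of the key formula: if a fraction $f$ of the work parallelises perfectly over $p$ processes, Amdahl's law gives the speedup

$$ S(p) = \frac{1}{(1 - f) + f/p} \le \frac{1}{1 - f}, $$

so with $f = 0.9$, for example, the speedup can never exceed 10, however many processes you use.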
2021-11-16 #
We talked about collectives, and in particular reductions. We also touched on data races, and synchronisation constructs you can use to avoid them.
Understanding data races, and how to rework code to avoid them, is of critical importance for writing correct OpenMP code, so I recommend working through the notes and exercises to check that you really understand what is going on.
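To make the race concrete, here is a minimal sketch (my own illustrative example, not the course exercise): accumulating into a shared variable from a parallel loop is a data race unless you ask OpenMP to reduce.

```c
/* Minimal sketch: race-free parallel sum using a reduction clause.
   Without reduction(+:total), concurrent updates to `total`
   would be a data race and the result would be unpredictable. */
#include <stdio.h>

int main(void)
{
  const int n = 1000000;
  double total = 0.0;

  /* Each thread sums into a private copy of `total`;
     OpenMP combines the copies safely at the end of the loop. */
  #pragma omp parallel for reduction(+:total)
  for (int i = 0; i < n; i++) {
    total += 1.0 / (i + 1.0);
  }

  printf("total = %f\n", total);
  return 0;
}
```

(Compile with your compiler's OpenMP flag, e.g. -fopenmp for GCC.)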
2021-11-12 #
We continued with OpenMP, starting to introduce the concepts of parallel regions and loop parallelism. We saw how to control the number of threads in a parallel region using the OMP_NUM_THREADS environment variable, as well as runtime control with clauses on the directives.
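Here is a minimal sketch tying these together (illustrative, not the lecture code):

```c
/* Minimal sketch: a parallel region with loop parallelism.
   Set OMP_NUM_THREADS in the environment, or override it
   per-region with the num_threads clause as shown. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
  /* The num_threads clause overrides OMP_NUM_THREADS for this region. */
  #pragma omp parallel num_threads(4)
  {
    printf("Hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());

    /* Split the iterations of this loop across the thread team. */
    #pragma omp for
    for (int i = 0; i < 8; i++) {
      printf("thread %d got iteration %d\n", omp_get_thread_num(), i);
    }
  }
  return 0;
}
```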
We then did some more OpenMP exercises.
A reminder that to transfer data to Hamilton, your best bet is to use scp or similar (rather than trying to copy and paste into a terminal editor). It is also worthwhile getting remote editing set up.
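For example (the hostname and filenames here are placeholders; use your own username and the Hamilton login address from the course instructions):

```sh
# Copy a local file into your home directory on Hamilton.
scp hello.c yourusername@hamilton.dur.ac.uk:~/
```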
2021-11-09 #
We briefly introduced the course as a whole, then Holger provided an introduction to the concepts of shared memory, and the OpenMP programming model.
We then spent the second half of the session starting to run some things on Hamilton (with varying success). Most of you were able to log in and run a very simple hello world example. Some people's accounts had not been set up (sorry!), and there are instructions in the Slack channel about what to do in this case.
Syllabus #
- A brief introduction to parallel computing, and its necessity
- Scaling laws
- Available parallelism in modern supercomputers
- Shared memory parallelism, with OpenMP
- Distributed memory parallelism, with MPI
Assessment #
Via a single piece of summative coursework.