Prof. Dr. Felix Naumann

Distributed Data Management

Lecturer: Dr. Thorsten Papenbrock

IMPORTANT NOTE 1: This lecture is a catch-up course for HPI students in the "Data Engineering" track who start their studies in SS 2020. The lecture is given annually, but because we switched from winter to summer terms, this iteration of the lecture is offered only for these students. It is one of their core modules, and its next regular iteration in one year would be too late; hence the extra iteration this semester.

IMPORTANT NOTE 2: Due to the current COVID-19 situation, we need to start this lecture in online mode. This means that we use tele-TASK recordings for the teaching sessions and Jitsi web meetings for onboarding, question, hands-on, and homework sessions. Please find the detailed description of our schedule below.


The free lunch is over! Until the turn of the century, computer systems became constantly faster without any particular software effort, simply because the hardware they were running on increased its clock speed with every new release. This trend has ended, and today's CPUs plateau at around 3 GHz. The size of modern computer systems in terms of parallel processing elements (cores per CPU/GPU, CPUs/GPUs per compute node, compute nodes per cluster), however, still increases constantly. This has caused a paradigm shift in writing software: instead of optimizing code for a single thread, applications now need to solve their tasks in parallel in order to achieve noticeable performance gains. Distributed computing, i.e., the distribution of work across (potentially) physically isolated compute nodes, is the most extreme form of parallelization.
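The impact of this shift is visible even on a single machine. As a minimal sketch in Java (the class and method names here are illustrative, not part of the course material), the same aggregation can run on one thread or, via a parallel stream, across all available cores, yielding the same result:

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // Sum of squares 1..n, optionally spread across all available cores.
    static long sumSquares(long n, boolean parallel) {
        LongStream s = LongStream.rangeClosed(1, n);
        if (parallel) s = s.parallel();       // fork-join over all cores
        return s.map(x -> x * x).sum();
    }

    public static void main(String[] args) {
        long seq = sumSquares(1_000_000, false);
        long par = sumSquares(1_000_000, true);
        System.out.println(seq == par);       // same result, computed in parallel
    }
}
```

On a multi-core machine, the parallel variant typically finishes noticeably faster for large n, which is exactly the scaling described above; correctness relies on the per-element work being independent, a theme that recurs throughout distributed data processing.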

Big data analytics and, hence, data management are a multi-million dollar market that grows constantly! Data, and the ability to control and use it, is the most valuable asset of today's computer systems. Because data volumes grow rapidly, and with them the complexity of the questions they are expected to answer, data analytics, i.e., extracting any kind of information from the data, becomes increasingly difficult. As data analytics systems cannot hope for their hardware to become any faster, they need to embrace new software trends that let their performance scale with the still-increasing number of processing elements.

In this lecture, we take a look at various technologies involved in building distributed, data-intensive systems. We discuss theoretical concepts (data models, encoding, replication, ...) as well as some of their practical implementations (Akka, MapReduce, Spark, ...).
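As a small taste of the programming models listed above, here is a word-count sketch of the map/reduce idea in plain Java streams. This is illustrative only (hypothetical class and method names), not the actual Hadoop or Spark API covered in the lecture:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Map phase: split text into words (conceptually, (word, 1) pairs).
    // Reduce phase: group by word and count per key.
    static Map<String, Long> wordCount(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .parallel()   // map tasks run concurrently
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCount("to be or not to be"));
    }
}
```

The same two phases, mapping records to key-value pairs and reducing per key, reappear in MapReduce and Spark, only scaled out from threads on one machine to tasks on a cluster.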


All Data Engineering students of 2020 who have not already taken the Distributed Data Management lecture need to register by Wednesday, 22.04.2020. On Monday, 27.04.2020, I will post a Jitsi Meet link on this website that all participants should use to attend our first session on Monday, 27.04.2020 at 1:30 PM. In this online session, we answer first questions and discuss the organization of the lectures, the hands-on sessions, and the homework, depending on the current situation. Please check prior to our first meeting that you can run Jitsi in your web browser and that you have a working microphone. It would also be great if you have a webcam so that we can see each other at least once. It is important to attend the online meetings, because they are not recorded! A recording of the lecture is available on tele-TASK.

Note on the schedule: The dates in the following table are the dates on which we will have Jitsi meetings. We assume that you have watched all topics up to and including that date prior to the Jitsi meeting so that we can discuss them. So on 04.05.2020, we assume that you have seen the Introduction, Foundations, and Encoding and Communication lectures. You are, of course, free to watch the lectures faster. Please make sure that you meet the deadlines for the homework and that you attend the homework presentation sessions, because every team needs to present its solution. We send the Jitsi meeting links via email.

27.04.2020  Jitsi Meet link via email
04.05.2020  Encoding and Communication
11.05.2020  Hands-on Akka Actor Programming
            Data Models and Query Languages
18.05.2020  Storage and Retrieval
            Distributed Systems
            Consistency and Consensus
            Batch Processing
22.06.2020  Hands-on Spark Batch Processing (tpch.zip) (DEADLINE Akka Homework)
            Stream Processing
06.07.2020  Distributed DBMSs + Akka Homework Discussion (DEADLINE Spark Homework)
13.07.2020  Distributed Query Optimization + Spark Homework Discussion
20.07.2020  Lecture Summary


The grade will be determined in an oral exam. The prerequisite for admission to the exam is the successful completion of the exercises.

To practice for the exam, you can use the exams of WS 2017/18 (solution), WS 2018/19 (solution), and WS 2019/20 (solution). You can also find check-yourself questions in the slides (solution).


Course book:

  • Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems, Martin Kleppmann, 2017, 978-1449373320

Further reading:

  • Distributed Systems, Maarten van Steen and Andrew S. Tanenbaum, 2017, 978-1543057386
  • Web-Scale Data Management for the Cloud, Wolfgang Lehner and Kai-Uwe Sattler, 2013, 1489997717
  • Introduction to Parallel Computing, Zbigniew J. Czech, 2017, 978-1107174399
  • Principles of Distributed Database Systems, M. Tamer Özsu and Patrick Valduriez, 2011, 978-1441988331
  • Designing Distributed Systems: Patterns and Paradigms for Scalable, Reliable Services, Brendan Burns, 2017, 978-1491983645
  • Spark: Big Data Cluster Computing in Production, Ilya Ganelin, Ema Orhian, Kai Sasaki, and Brennon York, 2016, 978-1119254010
  • Reactive Messaging Patterns with the Actor Model, Vaughn Vernon, 2015, 978-0133846836
  • Mining Massive Datasets, Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman, 2014, 978-1107077232
  • Algorithmische Geometrie, Rolf Klein, 2005, 978-3540209560