Distributed Data Management (Sommersemester 2020)
Lecturer: Dr. Thorsten Papenbrock
- Weekly Hours: 4
- Credits: 6
- Enrolment Deadline: 01.04.2020 - 30.04.2020
- Teaching Form: Lecture / Exercise
- Enrolment Type: Compulsory Elective Module
- Course Language: English
Programs & Modules
IMPORTANT NOTE: This lecture is a catch-up course for HPI students in the "Data Engineering" track who start their studies in SS2020. The lecture is given annually, but because we switched from winter to summer terms, this iteration of the lecture is intended only for these students. It is one of their core modules, and waiting a year for the next iteration would be too late; hence the extra iteration this semester.
The free lunch is over! Until the turn of the century, computer systems became faster without any particular effort, simply because the hardware they ran on increased its clock speed with every new release. This trend has changed, and today's CPUs stall at around 3 GHz. The size of modern computer systems in terms of contained transistors (cores in CPUs/GPUs, CPUs/GPUs in compute nodes, compute nodes in clusters), however, still increases constantly. This has caused a paradigm shift in writing software: instead of optimizing code for a single thread, applications now need to solve their tasks in parallel in order to achieve noticeable performance gains. Distributed computing, i.e., the distribution of work across (potentially) physically isolated compute nodes, is the most extreme form of parallelization.
Big data analytics and, hence, data management are multi-million dollar markets that grow constantly! Data, and the ability to control and use it, is one of the most valuable assets of today's computer systems. Because data volumes grow rapidly, and with them the complexity of the questions they should answer, data analytics, i.e., extracting any kind of information from the data, becomes increasingly difficult. As data analytics systems cannot hope for their hardware to get any faster to cope with performance problems, they need to embrace new software trends that let their performance scale with the still-increasing number of processing elements.
In this lecture, we take a look at various technologies involved in building distributed, data-intensive systems. We discuss theoretical concepts (data models, encoding, replication, ...) as well as some of their practical implementations (Akka, MapReduce, Spark, ...).
- Encoding & Communication
- Akka Actor Programming
- Data Models & Query Languages
- Storage & Retrieval
- Distributed Systems
- Consistency & Consensus
- Batch Processing
- Spark Batch Processing
- Stream Processing
- Distributed DBMS
- Distributed Query Optimization
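As a small taste of the batch-processing topics above, the MapReduce model can be sketched in a few lines of plain Python. This is an illustration of the concept only; the function names are ours, and no actual framework from the lecture (Akka, Hadoop MapReduce, Spark) is used:

```python
# Minimal sketch of the MapReduce model: map, shuffle, reduce.
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key (done by the framework in practice)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the list of values for each key."""
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data", "big distributed data"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle_phase(mapped))
print(counts)  # {'big': 2, 'data': 2, 'distributed': 1}
```

In a real distributed system, the map and reduce calls run in parallel on many machines, and the shuffle moves data between them over the network; the lecture discusses how frameworks such as Spark implement and optimize exactly these phases.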
Literature
- Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems, Martin Kleppmann, 2017, 978-1449373320
- Distributed Systems, Maarten van Steen and Andrew S. Tanenbaum, 2017, 978-1543057386
- Web-Scale Data Management for the Cloud, Wolfgang Lehner and Kai-Uwe Sattler, 2013, 1489997717
- Introduction to Parallel Computing, Zbigniew J. Czech, 2017, 978-1107174399
- Principles of Distributed Database Systems, M. Tamer Özsu and Patrick Valduriez, 2011, 978-1441988331
- Designing Distributed Systems: Patterns and Paradigms for Scalable, Reliable Services, Brendan Burns, 2017, 978-1491983645
- Spark: Big Data Cluster Computing in Production, Ilya Ganelin and Ema Orhian and Kai Sasaki and Brennon York, 2016, 978-1119254010
- Reactive Messaging Patterns with the Actor Model, Vaughn Vernon, 2015, 978-0133846836
- Mining Massive Datasets, Jure Leskovec and Anand Rajaraman and Jeffrey David Ullman, 2014, 978-1107077232
- Algorithmische Geometrie, Rolf Klein, 2005, 978-3540209560
Lectures and Exercises
The grade will be determined in an oral exam. The prerequisite for admission to the exam is the successful completion of the exercises.
Please see the web page.