GPU-Accelerated Data Processing (Winter Term 2020/21)
Lecturers:
Prof. Dr. Tilmann Rabl (Data Engineering Systems), Ilin Tolovski (Data Engineering Systems)
Course website:
https://hpi.de/rabl/teaching/winter-term-2020-21/gpu-accelerated-data-processing.html
General Information
- Weekly hours (SWS): 4
- ECTS: 6
- Graded: Yes
- Enrollment deadline: send us an email by 06.11.2020, 23:59
- Course format: Seminar
- Course type: Compulsory elective module
- Teaching language: English
Study Programs, Module Groups & Modules
- OSIS: Operating Systems & Information Systems Technology
  - HPI-OSIS-K Konzepte und Methoden
  - HPI-OSIS-T Techniken und Werkzeuge
  - HPI-OSIS-S Spezialisierung
- IT-Systems Engineering
  - IT-Systems Engineering
- SCAL: Scalable Data Systems
  - HPI-SCAL-K Konzepte und Methoden
  - HPI-SCAL-T Techniken und Werkzeuge
  - HPI-SCAL-S Spezialisierung
- DATA: Data Analytics
  - HPI-DATA-K Konzepte und Methoden
  - HPI-DATA-T Techniken und Werkzeuge
  - HPI-DATA-S Spezialisierung
- SECA: Security Analytics
  - HPI-SECA-K Konzepte und Methoden
  - HPI-SECA-T Techniken und Werkzeuge
  - HPI-SECA-S Spezialisierung
Description
As processor clock frequencies have reached their peak, many computationally heavy tasks are offloaded to other hardware components. The parallel architecture of graphics processing units (GPUs), originally used for efficient scene rendering, has been redesigned and specialized to accelerate data processing and data management tasks in database systems and machine learning. State-of-the-art research in GPU computation focuses on efficient workload distribution across multiple GPUs, using fast interconnects for local peer-to-peer communication between GPUs, or InfiniBand for direct memory access between nodes in a cluster setup. These developments allow us to rethink and distribute fundamental operations (joins, aggregation, sorting) that were long tied to the CPU. In this course, we will gain a deeper understanding of developments in general-purpose GPU (GPGPU) computation. Our aim is to introduce and evaluate new solutions for workloads that make the most of the vast capabilities of modern GPU architectures.
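As an illustration of the kind of offloading discussed above, the following is a minimal sketch (not part of the course material) of a SUM aggregation executed on the GPU with CUDA. The kernel name, data size, and launch configuration are assumptions chosen for illustration; real GPU database operators use considerably more elaborate techniques such as shared-memory reductions and multi-GPU partitioning over fast interconnects.

#include <cstdio>
#include <cuda_runtime.h>

// Grid-stride SUM aggregation: every thread accumulates a private partial sum
// over its strided slice of the input, then adds it to the global result once.
__global__ void sum_kernel(const int *in, unsigned long long *out, int n) {
    unsigned long long local = 0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        local += in[i];
    }
    atomicAdd(out, local);  // one atomic per thread keeps contention low
}

int main() {
    const int n = 1 << 20;                     // 1M values (illustrative size)
    int *h_in = new int[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1;   // trivial data: the sum equals n

    int *d_in;
    unsigned long long *d_out;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, sizeof(unsigned long long));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_out, 0, sizeof(unsigned long long));

    sum_kernel<<<256, 256>>>(d_in, d_out, n);  // launch configuration is an assumption

    unsigned long long h_out = 0;
    cudaMemcpy(&h_out, d_out, sizeof(unsigned long long), cudaMemcpyDeviceToHost);
    printf("GPU sum = %llu (expected %d)\n", h_out, n);

    cudaFree(d_in);
    cudaFree(d_out);
    delete[] h_in;
    return 0;
}

Compiled with nvcc, the explicit cudaMemcpy calls and the kernel launch make the CPU/GPU division of labor visible; the fast-interconnect techniques mentioned above target exactly this transfer cost between host, device, and other GPUs.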