Hasso-Plattner-Institut

Machine Intelligence with Deep Learning (Winter Semester 2019/2020)

Lecturers: Dr. Haojin Yang (Internet Technologies and Systems), Joseph Bethge (Internet Technologies and Systems), Ting Hu (Internet Technologies and Systems), Goncalo Filipe Torcato Mordido (Internet Technologies and Systems)

General Information

  • Weekly hours: 4
  • ECTS: 6
  • Graded: Yes
  • Enrollment period: 01.10.-30.10.2019
  • Course format: Seminar / Project
  • Module type: Compulsory elective module
  • Language of instruction: English
  • Maximum number of participants: 15

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • IT-Systems Engineering
    • HPI-ITSE-A Analyse
    • HPI-ITSE-E Entwurf
    • HPI-ITSE-K Konstruktion
    • HPI-ITSE-M Maintenance
  • ISAE: Internet, Security & Algorithm Engineering
    • HPI-ISAE-K Konzepte und Methoden
    • HPI-ISAE-T Techniken und Werkzeuge
    • HPI-ISAE-S Spezialisierung
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-K Konzepte und Methoden
    • HPI-OSIS-T Techniken und Werkzeuge
    • HPI-OSIS-S Spezialisierung
Data Engineering MA
Digital Health MA
Cybersecurity MA


Artificial intelligence (AI) is intelligence exhibited by computers. The term is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Researchers and developers in this field are currently working on AI and machine learning algorithms that train computers to mimic human skills such as "reading", "listening", "writing" and "making inferences". Since 2006, "Deep Learning" (DL) has attracted increasing attention in both academia and industry. Deep learning, or deep neural networks, is a branch of machine learning based on a set of algorithms that attempt to learn representations of data and model their high-level abstractions. In a deep network there are multiple so-called "neural layers" between the input and output, which the algorithm uses to learn higher abstractions composed of multiple linear and non-linear transformations. Recently, DL has produced record-breaking results in many novel areas, e.g., beating humans in strategic games such as Go (Google's AlphaGo), self-driving cars, and achieving dermatologist-level classification of skin cancer. In our current research we focus on video analysis and multimedia information retrieval (MIR) using deep learning techniques.

Course language: German and English

Topics in this seminar:

  • Importance batching for improved training of neural networks: Present-day neural networks are trained with stochastic learning, which consists of splitting the (usually large) training data into multiple batches, called mini-batches. This is desirable since it speeds up training, and the added noise helps the network's convergence [1, 2, 3]. However, the samples that form such mini-batches are usually chosen randomly throughout training, which might not be ideal for learning. The goal of this topic is to study the effects of constructing each mini-batch using importance sampling techniques [4] based on the network's loss. (references: [1] [2] [3] [4])
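    The core idea can be sketched in a few lines: instead of drawing mini-batch indices uniformly, draw them with probability proportional to each sample's most recent loss. This is a minimal, framework-free illustration (the `losses` values are hypothetical, and in practice the gradient of each drawn sample would be reweighted by 1 / (N * p_i) to keep the estimate unbiased):

    ```python
    import random

    def sample_batch(losses, batch_size, rng):
        """Draw distinct sample indices with probability proportional
        to each sample's most recent loss (importance sampling)."""
        batch = []
        remaining = dict(enumerate(losses))
        for _ in range(batch_size):
            total = sum(remaining.values())
            r = rng.random() * total
            acc = 0.0
            for i, loss in remaining.items():
                acc += loss
                if r <= acc:
                    batch.append(i)
                    del remaining[i]   # sample without replacement
                    break
        return batch

    rng = random.Random(0)
    losses = [0.1, 0.1, 5.0, 0.2, 4.0, 0.1]  # hypothetical per-sample losses
    batch = sample_batch(losses, batch_size=2, rng=rng)
    ```

    Hard examples (indices 2 and 4 above) are drawn far more often than well-learned ones, which is exactly the effect this topic sets out to study.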
  • Natural Language Generation: Natural Language Processing (NLP) is a subfield of artificial intelligence that deals with the interaction between computers and humans using natural language. The recent emergence of several pre-trained models, such as BERT and XLNet, has substantially advanced the state of the art across various NLP tasks. One of these tasks is Natural Language Generation (NLG), which transforms structured data into natural language. NLG plays a critical role in more complex tasks, such as dialogue systems and machine translation. In this seminar topic we are going to apply large pre-trained models like BERT to the E2E NLG challenge and make comprehensive comparisons with other NLG methods.
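    To make the task concrete: the E2E NLG challenge maps a "meaning representation" (a list of slot[value] pairs) to a natural-language sentence. The sketch below parses such an input and realizes it with a trivial template; a neural generator (e.g., a BERT-based model, as in this topic) would replace the template step. The example MR is illustrative:

    ```python
    def parse_mr(mr):
        """Parse an E2E-style meaning representation such as
        'name[The Eagle], eatType[coffee shop]' into a dict."""
        slots = {}
        for part in mr.split(", "):
            key, _, rest = part.partition("[")
            slots[key] = rest.rstrip("]")
        return slots

    def template_realize(slots):
        # Trivial template baseline; the seminar replaces this step
        # with a learned neural generator.
        return f"{slots['name']} is a {slots['eatType']}."

    mr = "name[The Eagle], eatType[coffee shop]"
    sentence = template_realize(parse_mr(mr))  # "The Eagle is a coffee shop."
    ```

    Comparing such simple baselines against pre-trained generators is part of the planned evaluation.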

  • Text Detection with Reinforcement Learning: AI systems based on reinforcement learning model the behaviour and interactions of an agent with a given but unknown environment. Popular examples of recent breakthroughs with deep reinforcement learning are Google's AlphaGo system and the recent advances in playing StarCraft with a deep model. In this seminar topic we want to use deep reinforcement learning to localize text in images. We will develop and train an agent that predicts a series of transformations, which are applied to a given image in order to extract text lines from it. This is a follow-up to an earlier topic, where we want to further push the quality of the results.
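    The "series of transformations" idea can be illustrated with a toy environment: the agent adjusts a bounding box through discrete actions, and the reward is the gain in IoU with the ground-truth text line. The greedy rollout below stands in for the learned policy (all coordinates are hypothetical):

    ```python
    def iou(a, b):
        """Intersection over union of two (x, y, w, h) boxes."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        ih = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = iw * ih
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    ACTIONS = {"left": (-1, 0, 0, 0), "right": (1, 0, 0, 0),
               "up": (0, -1, 0, 0), "down": (0, 1, 0, 0),
               "narrow": (0, 0, -1, 0), "shorten": (0, 0, 0, -1)}

    def step(box, action):
        dx, dy, dw, dh = ACTIONS[action]
        x, y, w, h = box
        return (x + dx, y + dy, max(1, w + dw), max(1, h + dh))

    target = (0, 0, 6, 3)   # hypothetical ground-truth text line
    box = (0, 0, 12, 10)    # initial proposal covering most of the image
    for _ in range(30):     # greedy rollout instead of a trained agent
        best = max(ACTIONS, key=lambda a: iou(step(box, a), target))
        if iou(step(box, best), target) <= iou(box, target):
            break           # no action improves the overlap any further
        box = step(box, best)
    ```

    A deep RL agent learns which action to take from image features rather than from the (unknown at test time) target box, but the transformation loop is the same.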

  • Content Analysis on Archival Data: Large archives contain a wealth of information. This information is encoded in each page of the archive and cannot be used to its full potential unless the archive is available in digitized form. Digitizing alone still does not unlock this potential, since each page is then only available as pixel data with no semantic information. That is why it is necessary to use clever processing techniques that allow us to extract as much semantic information as possible. A well-known (and very well working) method is optical character recognition for printed text. In this seminar topic we want to use machine learning to extract the structure of a given digitized page and determine what can be seen in the document. Does it contain only handwriting? Are there images? Is it a book cover? Is it something completely different?
    We want to apply the gathered knowledge to a dataset of real archival scans that we received from the Wildenstein Plattner Institute (WPI) and help the art historians of the WPI with their research.

  • Optimizing Inference of Binary Neural Networks: Convolutional neural networks have achieved astonishing results in different application areas. Various methods have been proposed that allow us to use these models on mobile and embedded devices. Binary Neural Networks (BNNs) in particular seem to be a promising approach for devices with low computational power or applications with real-time requirements. In this topic you are going to optimize the inference of BNNs with BMXNet 2, based on advances in other frameworks. The goal is to run a real-time machine learning demo application on a Raspberry Pi (provided by us) without relying on a network connection.
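    The trick that makes BNN inference fast on low-power CPUs is replacing floating-point dot products with XNOR and popcount on bit-packed {-1, +1} vectors. A minimal sketch of that identity (the vectors are illustrative):

    ```python
    def pack(vec):
        """Pack a list of +/-1 values into an integer bit mask
        (bit = 1 encodes +1, bit = 0 encodes -1)."""
        bits = 0
        for i, v in enumerate(vec):
            if v > 0:
                bits |= 1 << i
        return bits

    def binary_dot(a_bits, w_bits, n):
        """Dot product of two {-1,+1} vectors of length n, computed
        with XNOR + popcount instead of multiplications."""
        xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 where signs agree
        matches = bin(xnor).count("1")              # popcount
        return 2 * matches - n                      # agreements - disagreements

    a = [1, -1, 1, 1, -1, -1, 1, -1]
    w = [1, 1, 1, -1, -1, 1, -1, -1]
    binary_dot(pack(a), pack(w), len(a))  # -> 0, same as sum(x * y)
    ```

    On real hardware the popcount maps to a single instruction over 32 or 64 weights at a time, which is where the speedup over floating-point convolutions comes from.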


Requirements:

  • Strong interest in video/image processing, machine learning (Deep Learning) and/or computer vision

  • Software development in C/C++ or Python

  • Experience with OpenCV and machine learning applications is a plus



Online courses:

  • cs231n tutorials: Convolutional Neural Networks for Visual Recognition
  • Deep Learning courses at Coursera

Deep Learning frameworks:


The final evaluation will be based on:

  • Initial implementation / idea presentation, 10%

  • Final presentation, 20%

  • Report/Documentation, 12-18 pages, 30%

  • Implementation, 40%

  • Participation in the seminar (bonus points)


Monday, 15:15-16:45

Room H-E.51


Presentation of the seminar topics (PDF)

14.10. - 20.10.2019

Topic selection (send your preferred and secondary topics by email: christian.bartz(at)hpi.de)


Announcement of topic and group assignments


Individual meetings with the supervisor


Technology talks and guided discussion (15+5 min each)


Presentation of final results (15+5 min each)

By the end of February 2020

Submission of implementation and documentation (LaTeX template)

By the end of March 2020

Grading