Machine Intelligence with Deep Learning (Winter Semester 2017/2018)

Lecturer: Dr. Haojin Yang (Internet Technologies and Systems)
Tutor: Goncalo Filipe Torcato Mordido

General Information

  • Weekly hours: 4
  • ECTS: 6
  • Graded: Yes
  • Enrollment deadline: 27.10.2017
  • Course format: Seminar / Project
  • Course type: Compulsory elective module
  • Maximum number of participants: 12

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • IT-Systems Engineering
    • HPI-ITSE-A Analyse
  • IT-Systems Engineering
    • HPI-ITSE-E Entwurf
  • IT-Systems Engineering
    • HPI-ITSE-K Konstruktion
  • IT-Systems Engineering
    • HPI-ITSE-M Maintenance
  • ISAE: Internet, Security & Algorithm Engineering
    • HPI-ISAE-S Spezialisierung
  • ISAE: Internet, Security & Algorithm Engineering
    • HPI-ISAE-T Techniken und Werkzeuge
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-K Konzepte und Methoden
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-S Spezialisierung
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-T Techniken und Werkzeuge

Description

Artificial intelligence (AI) is intelligence exhibited by computers. The term is applied when a machine mimics "cognitive" functions that humans associate with human minds, such as "learning" and "problem solving". Researchers and developers in this field are currently working on AI and machine learning algorithms that aim to teach computers human-like skills such as "reading", "listening", "writing" and "drawing inferences". Since 2006, "Deep Learning" (DL) has attracted more and more attention in both academia and industry. Deep learning, or deep neural networks, is a branch of machine learning based on a set of algorithms that attempt to learn representations of data and model their high-level abstractions. In a deep network there are multiple so-called "neural layers" between the input and the output, and the algorithm uses these layers to learn higher abstractions composed of multiple linear and non-linear transformations. Recently, DL has produced record-breaking results in many areas, e.g. beating humans at the strategy game Go (Google's AlphaGo), self-driving cars, and dermatologist-level classification of skin cancer. In our current research we focus on video analysis and multimedia information retrieval (MIR) using deep learning techniques.
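To make the idea of stacked linear and non-linear transformations concrete, here is a minimal NumPy sketch (illustrative only, not part of the course material) of a forward pass through a small deep network; the layer sizes and the initialization are arbitrary assumptions.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    np.random.seed(0)

    # Three hidden "neural layers" between a 4-dimensional input and a 2-dimensional output.
    layer_sizes = [4, 8, 8, 8, 2]
    weights = [0.1 * np.random.randn(m, n) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        h = x
        for W, b in zip(weights[:-1], biases[:-1]):
            h = relu(h.dot(W) + b)              # linear transformation followed by a non-linearity
        return h.dot(weights[-1]) + biases[-1]  # final linear layer producing the output

    print(forward(np.random.randn(4)))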

Topics in this seminar:

  • Exploring the power of pre-trained word vectors in NLP models: With this topic we plan to "explore" the power of pre-trained word vectors in various NLP (Natural Language Processing) applications, such as machine translation, dialog generation, and sentiment analysis. In recent years the combination of deep learning models (DNN, CNN, RNN, GAN, ...) and distributed word representations (word vectors) has achieved remarkable success in NLP. However, there is no consensus on whether a model should use pre-trained word vectors or initialize them randomly. In this topic you are therefore asked to run several existing NLP models with both pre-trained and randomly initialized word vectors and to determine how much the pre-training helps (see the first sketch after this list). By investing effort in this topic you will first become familiar with popular theories and state-of-the-art techniques in DL and NLP, and then learn how to conduct a preliminary piece of scientific research.
  • Learning how to play games with Deep Genetic Algorithms: Genetic algorithms mimic the mechanisms of biological evolution. The initial population consists of several randomly initialized individuals; in our case each individual is a deep neural network initialized with random weights. After all individuals of a generation have finished playing the game, a set of the top players (i.e. those who achieved a better score or a faster time) is kept for the next generation, while the lowest-ranked individuals are discarded from the population. To enhance the potential of the top players, new individuals (i.e. children) are added to each new generation by combining the weights of two top-player parents. Mutation also comes into play by randomly and slightly changing some of the weights of the children (see the genetic-algorithm sketch after this list). After several generations one can expect to obtain individuals that play the game well. Challenges arise in the choice of input, network structure, and fitness function, since these depend on the type of game chosen. Practical example: ML for Flappy Bird
  • Meta-Learning for Place Recognition: Deep learning algorithms work because a huge amount of data is available. This data is used to train large models, and more data nearly always means better results. But what happens if there is not enough data, or if it is not feasible to obtain all the data you might need? Take, for example, an intelligent travel guide that is supposed to recognise a building from an image taken by a tourist. With a standard deep learning approach you would need at least 1000 labelled images of that building to achieve decent results.
    In this seminar topic we want to look at meta-learning for place recognition. With meta-learning it might be possible to need only about 10 images per building to recognise it. We will therefore try to build a system that is (1) able to detect buildings and (2) able to recognise them, using a meta-learning approach.
    Further information: http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/
  • Deformable convolutional networks for medical image registration: In recent years machine learning and deep learning have attracted a great deal of attention by showing promising results on state-of-the-art computer vision tasks. Image registration is an application of transformation modeling: the process of transforming different sets of data into one coordinate system (a minimal sketch of this operation follows the list). It is a frequently used technique in medical image processing, for example for diagnosing diseases such as Alzheimer's or for tumor monitoring. In this topic we want to explore the acceleration of geometric transformations, especially image registration, in the context of a neuroimaging application for diagnosing three stages of Alzheimer's disease.
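For the first topic, the following hedged Python sketch illustrates the two embedding-initialization strategies to be compared; the toy vocabulary and the GloVe file name are assumptions, and the downstream NLP model itself is left out.

    import numpy as np

    vocab = ["the", "movie", "was", "great"]   # toy vocabulary (assumption)
    dim = 300                                  # typical word-vector dimensionality

    # Strategy 1: randomly initialized word vectors, to be learned from scratch.
    random_table = np.random.uniform(-0.05, 0.05, size=(len(vocab), dim))

    # Strategy 2: word vectors initialized from a pre-trained file (GloVe-style format:
    # one word followed by its vector per line). Unknown words keep a random vector.
    def load_pretrained(path, vocab, dim):
        table = np.random.uniform(-0.05, 0.05, size=(len(vocab), dim))
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                if parts[0] in vocab:
                    table[vocab.index(parts[0])] = np.asarray(parts[1:], dtype=np.float32)
        return table

    # pretrained_table = load_pretrained("glove.840B.300d.txt", vocab, dim)
    # The same NLP model is then trained once with random_table and once with
    # pretrained_table as its embedding layer, and the downstream scores are compared.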
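For the genetic-algorithm topic, this is a minimal sketch of one possible evolution loop; play_game is a hypothetical stub standing in for an actual game environment, and all hyperparameters are arbitrary assumptions.

    import numpy as np

    POP_SIZE, N_WEIGHTS, N_TOP, MUT_STD, GENERATIONS = 50, 1000, 10, 0.02, 100

    def play_game(weights):
        # Stub: run one game episode with a network using these weights and
        # return the achieved score (fitness). Placeholder fitness below.
        return -np.sum((weights - 0.5) ** 2)

    def crossover(parent_a, parent_b):
        mask = np.random.rand(N_WEIGHTS) < 0.5       # take each weight from either parent
        return np.where(mask, parent_a, parent_b)

    def mutate(child):
        return child + MUT_STD * np.random.randn(N_WEIGHTS)  # small random weight changes

    population = [np.random.randn(N_WEIGHTS) for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        scores = [play_game(individual) for individual in population]
        ranked = [population[i] for i in np.argsort(scores)[::-1]]
        top = ranked[:N_TOP]                         # the best players survive
        children = []
        while len(children) < POP_SIZE - N_TOP:
            a, b = np.random.choice(N_TOP, 2, replace=False)
            children.append(mutate(crossover(top[a], top[b])))
        population = top + children                  # next generation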
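For the image-registration topic, the sketch below only shows the core operation of resampling a "moving" image into the coordinate system of a "fixed" image under a known rigid transform, using scipy.ndimage.affine_transform; estimating that transform, e.g. with a deformable convolutional network, is the actual research task, and the images and transform parameters here are toy assumptions.

    import numpy as np
    from scipy.ndimage import affine_transform

    fixed = np.zeros((128, 128)); fixed[42:82, 38:78] = 1.0    # toy reference image
    moving = np.zeros((128, 128)); moving[40:80, 40:80] = 1.0  # toy image to be aligned

    theta = np.deg2rad(5.0)                                    # assumed rotation
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    offset = np.array([3.0, -2.0])                             # assumed translation (pixels)

    # affine_transform maps output coordinates to input coordinates (in = R @ out + offset),
    # i.e. it resamples `moving` on the pixel grid of `fixed`.
    registered = affine_transform(moving, rotation, offset=offset, order=1)
    print("mean absolute difference:", np.abs(registered - fixed).mean())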

Prerequisites

  • Strong interests in video/image processing, machine learning (Deep Learning) and/or computer vision

  • Software development in C/C++ or Python

  • Experience with OpenCV and machine learning applications is a plus

Literature

  • [1] Yoshua Bengio, Ian J. Goodfellow, and Aaron Courville, "Deep Learning", online version: http://www.deeplearningbook.org/
  • [2] cs231n tutorials: Convolutional Neural Networks for Visual Recognition
  • [3] Deep Learning courses at Coursera
  • [4] Caffe: Deep learning framework by the BVLC
  • [5] Chainer: A flexible framework of neural networks
  • [6] MXNet: A Flexible and Efficient Library for Deep Learning

Grading

The final evaluation will be based on:

  • Initial implementation / idea presentation, 10%

  • Final presentation, 20%

  • Report/Documentation, 12-18 pages, 30%

  • Implementation, 40%

  • Participation in the seminar (bonus points)

Dates

Monday, 15:15-16:45

Room A-2.1

  • 16.10.2017, 15:15-16:45: Presentation of the topics (PDF)
  • by 23.10.2017: Choice of topics (sign up via Doodle)
  • 24.10.2017: Announcement of the topic and group assignments
  • weekly: Individual meetings with the supervisor
  • 04.12.2017: Technology talks and guided discussion (15+5 min each)
  • 05.02.2018: Presentation of the final results (15+5 min each)
  • by end of February: Submission of implementation and documentation
  • by end of March: Grading of the submitted work
