Machine Intelligence with Deep Learning (Winter Semester 2021/2022)

Lecturer: Dr. Haojin Yang (Internet-Technologien und -Systeme), Joseph Bethge (Internet-Technologien und -Systeme), Ting Hu (Internet-Technologien und -Systeme)

General Information

  • Weekly Hours: 4
  • Credits: 6
  • Graded: yes
  • Enrolment Deadline: 01.10.2021 - 22.10.2021
  • Teaching Form: Seminar / Project
  • Enrolment Type: Compulsory Elective Module
  • Course Language: English
  • Maximum number of participants: 12

Programs, Module Groups & Modules

IT-Systems Engineering MA
  • ISAE: Internet, Security & Algorithm Engineering
    • HPI-ISAE-K Konzepte und Methoden
  • ISAE: Internet, Security & Algorithm Engineering
    • HPI-ISAE-T Techniken und Werkzeuge
  • ISAE: Internet, Security & Algorithm Engineering
    • HPI-ISAE-S Spezialisierung
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-K Konzepte und Methoden
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-T Techniken und Werkzeuge
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-S Spezialisierung
  • IT-Systems Engineering
    • HPI-ITSE-E Entwurf
  • IT-Systems Engineering
    • HPI-ITSE-K Konstruktion
Data Engineering MA
Digital Health MA
Cybersecurity MA

Description

Artificial intelligence (AI) is the intelligence exhibited by computers. The term is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Currently, researchers and developers in this field are working on AI and machine learning algorithms that train computers to mimic human skills such as "reading", "listening", "writing" and "making inferences". Since 2006, "Deep Learning" (DL) has attracted more and more attention in both academia and industry. Deep learning, or deep neural networks, is a branch of machine learning based on a set of algorithms that attempt to learn representations of data and model their high-level abstractions. In a deep network, there are multiple so-called "neural layers" between the input and output. The algorithm uses these layers to learn higher abstractions composed of multiple linear and non-linear transformations. Recently, DL has produced record-breaking results in many novel areas, e.g., beating humans in strategic games such as Go (Google’s AlphaGo), self-driving cars, and achieving dermatologist-level classification of skin cancer. In our current research we focus on video analysis and multimedia information retrieval (MIR) using Deep Learning techniques.
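As a minimal illustration (not part of the course material), a deep network in PyTorch is simply a stack of linear and non-linear layers between input and output; the layer sizes below are illustrative only:

    import torch
    import torch.nn as nn

    # A small feed-forward network: alternating linear and non-linear layers.
    model = nn.Sequential(
        nn.Linear(784, 256),   # linear transformation (e.g. flattened 28x28 input)
        nn.ReLU(),             # non-linear transformation
        nn.Linear(256, 64),
        nn.ReLU(),
        nn.Linear(64, 10),     # output layer, e.g. 10 classes
    )

    x = torch.randn(32, 784)   # a batch of 32 dummy inputs
    logits = model(x)          # shape: (32, 10)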

Course language: English and German

Topics in this seminar:

  • Binary Neural Networks: Binary Neural Networks (BNNs) are an energy-efficient approach to running neural networks on devices with low computational power or in applications with real-time requirements. They have been applied to traditional Convolutional Neural Networks to convert them into efficient, but slightly less accurate, alternatives. In this topic, you are going to extend the application areas of BNNs by diving into deep learning ranking models (for example, based on the approach of FAIR), which are the basis for many systems used in real-world applications and in industry. The goal is to train and evaluate a BNN model for various ranking tasks, based on BITorch, a framework for BNNs developed by our chair and written in PyTorch (see the binarization sketch after this list).
  • Efficient Image Super-Resolution: Image super-resolution (ISR) is a classic computer vision task: reconstructing a high-resolution image from its low-resolution counterpart. At present, deep learning-based SR methods have achieved noteworthy improvements in precision, but their high computational complexity makes them hard to deploy on mobile devices. Client-side ISR is the basis of many applications, such as image or video transmission under low bandwidth. In this project, we will try to design a client-side ISR method with both high precision and high efficiency (see the upscaling sketch after this list).
  • Natural Language Generation: The development of large pre-trained language models has facilitated numerous Natural Language Processing applications. The typical paradigm is fine-tuning pre-trained language models on different downstream tasks. However, the fine-tuning process is time-consuming and requires many resources, especially on large datasets. Moreover, the one-model-per-task setting limits the deployment of these models in practical applications. A more recent approach obtains a single general model for multiple downstream tasks via prompt tuning. Prompts, a sequence of tokens prepended to the input text, possibly including task descriptions and several examples, are the key to this paradigm. In this topic, we will conduct prompt tuning on Natural Language Generation tasks, generating natural sentences from structured data such as Knowledge Graphs. We aim to obtain general prompts, enabling a single pre-trained language model to serve different NLG tasks (see the prompt-tuning sketch after this list).
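
Binarization sketch: the following is a minimal plain-PyTorch sketch of weight binarization with a straight-through estimator, not the BITorch API; class and function names are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BinarizeSTE(torch.autograd.Function):
        """Sign binarization with a straight-through estimator for the gradient."""
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.sign(x)

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # pass gradients through only where |x| <= 1 (clipped STE)
            return grad_output * (x.abs() <= 1).float()

    class BinaryLinear(nn.Linear):
        """Linear layer whose weights are binarized to {-1, +1} in the forward pass."""
        def forward(self, x):
            w_bin = BinarizeSTE.apply(self.weight)   # bias is left full-precision here
            return F.linear(x, w_bin, self.bias)

    layer = BinaryLinear(128, 64)
    out = layer(torch.randn(8, 128))   # shape: (8, 64)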
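
Upscaling sketch: a minimal ESPCN-style layout with sub-pixel convolution in plain PyTorch; layer widths and the scale factor are illustrative, not a prescribed architecture for the project.

    import torch
    import torch.nn as nn

    class TinySR(nn.Module):
        """Convolutions in low resolution, then PixelShuffle upscaling."""
        def __init__(self, scale=2, channels=3):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, channels * scale ** 2, kernel_size=3, padding=1),
            )
            # rearranges channel blocks into spatial resolution (sub-pixel convolution)
            self.upscale = nn.PixelShuffle(scale)

        def forward(self, x):
            return self.upscale(self.body(x))

    lr = torch.randn(1, 3, 64, 64)   # low-resolution input
    sr = TinySR(scale=2)(lr)         # shape: (1, 3, 128, 128)

Keeping all convolutions in the low-resolution space is what makes this layout cheap enough for client-side use.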
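
Prompt-tuning sketch: a minimal illustration assuming the Hugging Face transformers library with GPT-2 standing in for the frozen pre-trained model; the prompt length and the helper function are assumptions for illustration only.

    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    for p in model.parameters():       # the pre-trained model stays frozen
        p.requires_grad = False

    n_prompt = 20
    emb_dim = model.get_input_embeddings().embedding_dim
    # the soft prompt is the only trainable parameter
    prompt = nn.Parameter(torch.randn(n_prompt, emb_dim) * 0.02)

    def forward_with_prompt(text):
        ids = tokenizer(text, return_tensors="pt").input_ids
        tok_emb = model.get_input_embeddings()(ids)                  # (1, seq, dim)
        inputs = torch.cat([prompt.unsqueeze(0), tok_emb], dim=1)    # prepend soft prompt
        return model(inputs_embeds=inputs).logits

    logits = forward_with_prompt("Berlin is the capital of")

During training, only the prompt would be passed to the optimizer, e.g. torch.optim.Adam([prompt]), while the language model itself remains untouched.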

Requirements

  • Strong interest in video/image processing, machine learning/Deep Learning and/or computer vision

  • Software development in C/C++ or Python

  • Experience with OpenCV and machine learning applications is a plus

Literature

Books

Deep Learning frameworks:

Examination

The final evaluation will be based on:

  • Initial implementation / idea presentation, 10%

  • Final presentation, 20%

  • Report (research paper), 12-18 pages, 30%

  • Implementation, 40%

  • Participation in the seminar (bonus points)

Dates

(apart from the presentations, there will be no regular meetings in our seminar room!)

25.10.2021 15:15-16:15

Zoom link here, code: 818665

Introduction and Q&A session for seminar topics.

[Slides]

until 31.10.2021

Enrolment in the seminar via the Studienreferat (Studienreferat(at)hpi.uni-potsdam.de)

(Send your preferred and secondary topic to: haojin.yang@hpi.de)

until 03.11.2021

Topics and Teams finalized

weekly

Individual meeting with your tutor

13.12.2021 (15:15)

Zoom link here, code: 446987

Mid-term presentation (15+5 min)

14.02.2022 (15:15)

Zoom link, code: 141586 

Final presentation (15+5 min)

15.03.2022

Hand in code and paper (LaTeX template)

31.03.2022

Grading finished
