Safe Reinforcement Learning for Self-Adaptive Systems (Sommersemester 2023)

Lecturers: Prof. Dr. Holger Giese, Christian Medeiros Adriano, He Xu, Sona Ghahremani, Iqra Zafar, Mustafa Ghani (all: Systemanalyse und Modellierung)

General Information

  • Weekly hours: 4
  • ECTS credits: 6
  • Graded: Yes
  • Enrollment period: 01.04.2023 - 07.05.2023
  • Course format: Project seminar
  • Module type: Compulsory elective module
  • Language of instruction: English

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-K Concepts and Methods
    • HPI-OSIS-S Specialization
    • HPI-OSIS-T Techniques and Tools
  • SAMT: Software Architecture & Modeling Technology
    • HPI-SAMT-K Concepts and Methods
    • HPI-SAMT-T Techniques and Tools
    • HPI-SAMT-S Specialization
Data Engineering MA
Digital Health MA
Software Systems Engineering MA

Description

Motivation

From AlphaGo to, more recently, ChatGPT, reinforcement learning (RL) has demonstrated one impressive feat after another: an agent interacts with an environment to learn a policy that maximizes a reward function. The main benefit of RL is that it avoids both the expensive data labeling of supervised learning and the explicit constraint definitions of convex optimization. Instead, RL relies on systematically balancing exploration (trying new solutions) against exploitation (maximizing reward). Current RL methods comprise a rich ensemble of model-based and model-free approaches and corresponding strategies such as regularization, pre-training, meta-learning, multi-task learning, and goal- or context-based learning.
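
As a toy illustration of this exploration-exploitation balance (a sketch for intuition, not part of the course materials), the following Python snippet runs an epsilon-greedy agent on a hypothetical three-armed bandit: with probability epsilon it explores a random arm, and otherwise it exploits the arm with the best reward estimate so far.

    import random

    # Hypothetical reward probabilities of three bandit arms (made up for illustration).
    TRUE_MEANS = [0.2, 0.5, 0.8]
    EPSILON = 0.1  # fraction of steps spent exploring

    estimates = [0.0] * len(TRUE_MEANS)  # running estimate of each arm's value
    counts = [0] * len(TRUE_MEANS)       # how often each arm was pulled

    for step in range(10_000):
        if random.random() < EPSILON:
            arm = random.randrange(len(TRUE_MEANS))  # explore: pick a random arm
        else:
            arm = max(range(len(TRUE_MEANS)), key=lambda a: estimates[a])  # exploit: best arm so far
        reward = 1.0 if random.random() < TRUE_MEANS[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean update

    print("estimated arm values:", [round(v, 2) for v in estimates])

With enough steps the estimates converge toward the true means while the agent still spends a small, fixed budget on exploration.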

However, safety-critical systems such as robotics, digital health, and autonomous vehicles remain beyond the reach of these RL methods. The reason is that the exploration-and-exploitation approach provides no guarantee that, during training, the RL algorithm will avoid actions that can harm the equipment, the environment, or people. The common solution has been to use simulators such as OpenAI Gym and Safety Gym to provide realistic scenarios. In this project seminar, we will explore how to use these environments to train RL agents safely.
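
To make the simulator-based workflow concrete, here is a minimal interaction-loop sketch. It assumes the classic OpenAI Gym API (newer gymnasium releases return five values from step()); CartPole-v1 merely stands in for a Safety Gym task so the snippet stays self-contained, and reading a per-step safety cost from info follows the Safety Gym convention (the key is simply absent in CartPole, so the cost stays zero).

    import gym  # classic OpenAI Gym; installing Safety Gym registers additional environments

    env = gym.make("CartPole-v1")  # stand-in for a Safety Gym task

    obs = env.reset()
    total_reward, total_cost = 0.0, 0.0
    done = False
    while not done:
        action = env.action_space.sample()  # random policy: pure exploration, no safety awareness
        obs, reward, done, info = env.step(action)
        total_reward += reward
        total_cost += info.get("cost", 0.0)  # Safety Gym-style safety cost; 0.0 if not reported

    print(f"episode return={total_reward:.1f}, accumulated safety cost={total_cost:.1f}")
    env.close()

Safe-RL methods then constrain or penalize the accumulated cost during training rather than only maximizing the return.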

This project seminar will include lectures on a set of topics focused on drawing practical lessons and guidance for the future of engineering intelligent safety-critical systems.

Topics

Foundations

  • Introduction to RL (Model-Free, Model-Based)
  • OpenAI Gym, Safety Gym, and third-party Gym environments
  • Safe-RL Methods
  • Evaluation methods in RL (sensitivity analysis, ablation studies, threats to validity)

Advanced Topics

  • Large Language Models for RL
  • Neurosymbolic Methods for RL
  • Causal Machine Learning and Counterfactual Reasoning
  • Foundation Models for Decision-Making

Literature

  • Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
  • Gu, Shangding, et al. "A review of safe reinforcement learning: Methods, theory and applications." arXiv preprint arXiv:2205.10330 (2022).
  • Du, Yuqing, et al. "Guiding Pretraining in Reinforcement Learning with Large Language Models." arXiv preprint arXiv:2302.06692 (2023).
  • Yang, Sherry, et al. "Foundation Models for Decision Making: Problems, Methods, and Opportunities." arXiv preprint arXiv:2303.04129 (2023).
  • Turchetta, Matteo, et al. "Safe reinforcement learning via curriculum induction." Advances in Neural Information Processing Systems 33 (2020): 12151-12162.
  • García, Javier, and Fernando Fernández. "A comprehensive survey on safe reinforcement learning." Journal of Machine Learning Research 16.1 (2015): 1437-1480.
  • Brunke, Lukas, et al. "Safe learning in robotics: From learning-based control to safe reinforcement learning." Annual Review of Control, Robotics, and Autonomous Systems 5 (2022): 411-444.
  • Amodei, Dario, et al. "Concrete problems in AI safety." arXiv preprint arXiv:1606.06565 (2016).
  • Yuan, Zhaocong, et al. "safe-control-gym: A unified benchmark suite for safe learning-based control and reinforcement learning in robotics." IEEE Robotics and Automation Letters 7.4 (2022): 11142-11149.
  • Liu, Zuxin, et al. "On the robustness of safe reinforcement learning under observational perturbations." arXiv preprint arXiv:2205.14691 (2022).

Teaching and Learning Formats

The course is a project seminar. It begins with an introductory phase of short lectures that present the basic concepts of the theme, including the necessary foundations. After that, the students will work in groups on jointly identified experiments, applying specific solutions to given problems, and will finally prepare a presentation and write a report on their findings.

Lectures will be held via Zoom from our seminar room. Interested students can also attend in person in the seminar room.

Assessment

We will grade the groups' reports (80%) and presentations (20%). Note that the report includes documentation of the experiments and of the obtained results; the grading of the report therefore covers the experiments as well. During the project phase, we will also require participation in the meetings and in the other groups' presentations, in the form of questions and feedback to peers.

Dates

The first lecture will take place on April 24, 2023 (Monday). The lectures will take place in rooms A-1.1 and A-2.2 and remotely via Zoom (credentials)*.

We will follow the recurrent schedule of:

  • Mondays from 11:00-12:30 in room A-1.1
  • Tuesdays from 09:15-10:45 in room A-1.1 

* In case you do not have access to GitLab, please email christian.adriano [at] hpi.de
