Hasso-Plattner-Institut

Responsible Artificial Intelligence (Summer Semester 2023)

Lecturers: Prof. Dr. Holger Giese (System Analysis and Modeling), Christian Medeiros Adriano (System Analysis and Modeling), Christian Zöllner (System Analysis and Modeling), Matthias Barkowsky (System Analysis and Modeling)

General Information

  • Weekly hours (SWS): 4
  • ECTS: 6
  • Graded: Yes
  • Enrollment period: 01.04.2023 - 07.05.2023
  • Course format: Project seminar
  • Course type: Compulsory elective module
  • Language of instruction: English

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-K Concepts and Methods
    • HPI-OSIS-S Specialization
    • HPI-OSIS-T Techniques and Tools
  • SAMT: Software Architecture & Modeling Technology
    • HPI-SAMT-K Concepts and Methods
    • HPI-SAMT-S Specialization
    • HPI-SAMT-T Techniques and Tools
Data Engineering MA
Digital Health MA
Software Systems Engineering MA



“Machine learning is an ostensibly technical field, crashing increasingly on human questions. Our human, social, and civic dilemmas are becoming technical. And our technical dilemmas are becoming human, social, and civic. Our successes and failures alike in getting these systems to do ‘what we want’, it turns out, offers an unflinching, revelatory mirror.” - Brian Christian [1]

While machine learning-based systems have achieved impressive feats, industry and governments still face difficult dilemmas in deploying them. From the engineering perspective, two imperatives help guide design decisions: the ethical and the explanatory. Although machines are not humans, they are expected to obey the ethical norms that regulate human society. Ethics is a necessary condition for responsible behavior, but it is not sufficient: autonomous agents also need to explain their actions, or the lack thereof.

This project seminar will comprise lectures on a set of topics aimed at drawing practical lessons and guidance for the future engineering of intelligent safety-critical systems. Topics include:



  • Schools of Ethics
  • Knowledge and Truth (Gettier cases)
  • Ethical dilemmas


  • AI Alignment, Human-Centered AI
  • Fairness, Safety, Robustness, Explainability, Accountability
  • Safety-Critical Systems


  • Large Language Models (ChatGPT, GPT4)
  • Reinforcement Learning (Safety, Reward modeling, Human-in-the-loop)
  • Causal Machine Learning and Counterfactual Reasoning
  • Neurosymbolic methods
  • Evaluation methods (sensitivity analysis, generalization, transportability, ablation studies, threats to validity)


References

  1. Christian, B., 2021, The Alignment Problem: How Can Machines Learn Human Values? Atlantic Books.
  2. Bubeck, S., et al., 2023, Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712
  3. Mialon, G., et al., 2023, Augmented language models: a survey. arXiv:2302.07842
  4. Jojic, A., Wang, Z., & Jojic, N., 2023, GPT is becoming a Turing machine: Here are some ways to program it. arXiv preprint arXiv:2303.14310.
  5. Mökander, J., et al., 2023, Auditing large language models: a three-layered approach. arXiv:2302.08500
  6. Mitchell, M., 2021, Why AI is harder than we think. arXiv preprint arXiv:2104.12871
  7. Brundage, M., et al., 2020, Toward trustworthy AI development: mechanisms for supporting verifiable claims
  8. Morley, J., et al., 2021, Ethics as a service: a pragmatic operationalisation of AI Ethics. Minds and Machines.
  9. Xiong, P., et al., 2021, Towards a Robust and Trustworthy Machine Learning System Development.
  10. Cammarota, R., et al., 2020, Trustworthy AI Inference Systems: An Industry Research View.
  11. Coeckelbergh, M., 2020, AI Ethics. MIT Press.
  12. Rudin, C., et al., 2021, Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
  13. Karimi, et al., 2021, A survey of algorithmic recourse: contrastive explanations and consequential recommendations.
  14. Bommasani, R., et al., 2021, On the Opportunities and Risks of Foundation Models.
  15. Wing, J. M., 2021, Trustworthy AI. Communications of the ACM (CACM).
  16. IEEE, 2019, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition.

Learning and Teaching Formats

The course is a project seminar. It begins with an introductory phase of short lectures; afterwards, students work in groups on jointly identified experiments, applying specific solutions to given problems, and finally prepare a presentation and write a report on their findings from the experiments.

The introductory phase will present the basic concepts of the topic, including the necessary foundations.

Lectures will be held via Zoom from our seminar room. Interested students can also attend in person in the seminar room.


Grading

We will grade the group reports (80%) and presentations (20%). Note that the report includes documentation of the experiments and the obtained results; therefore, grading the report also covers the experiments. During the project phase, we require participation in meetings and in other groups' presentations, in the form of questions and feedback to peers.


Dates

The first lecture will take place on Monday, April 24, 2023. Lectures will take place in room A-1.1 and remotely via Zoom (credentials)*

We will follow a recurring weekly schedule:

  • Mondays from 9:15-10:45 in room A-1.1
  • Tuesdays from 17:00-18:30 in room A-1.1

* If you do not have access to GitLab, please email christian.adriano [at] hpi.de