Hasso-Plattner-Institut

Advanced Topics in Self-Adaptive Systems: Online Machine Learning (Winter semester 2019/2020)

Lecturers: Prof. Dr. Holger Giese, Christian Medeiros Adriano, Sona Ghahremani, Joachim Hänsel, He Xu (all Systemanalyse und Modellierung)

General Information

  • Semester hours per week: 2
  • ECTS: 3
  • Graded: Yes
  • Enrollment period: 01.10.–30.10.2019
  • Course format: Seminar
  • Module type: Compulsory elective module
  • Language of instruction: English

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • OSIS: Operating Systems & Information Systems Technology
    • HPI-OSIS-K Konzepte und Methoden
    • HPI-OSIS-S Spezialisierung
    • HPI-OSIS-T Techniken und Werkzeuge
  • SAMT: Software Architecture & Modeling Technology
    • HPI-SAMT-K Konzepte und Methoden
    • HPI-SAMT-S Spezialisierung
    • HPI-SAMT-T Techniken und Werkzeuge



An important concern for modern software systems is to become more cost-effective while being versatile, flexible, resilient, self-healing, energy-efficient, customizable, configurable, and self-optimizing when reacting to runtime changes. Self-adaptation as a means to achieve such properties has become one of the most promising directions. In particular, self-adaptive systems should be able to modify their behavior and/or structure in response to their perception of the environment and of themselves, and to their goals (cf. [1]).

A self-adaptive system (SAS) has the ability to observe changes in the environment and in the system itself at runtime, reason about itself, and adapt accordingly [2].

After a machine learning model is deployed as part of a SAS, the model needs to be constantly monitored with respect to its ability to make correct predictions. This implies manual work that is expensive and error-prone: selecting data for retraining the model and tuning the model's parameters. This is very challenging for two reasons: (1) data is dynamically produced by the SAS in large amounts, and (2) there are no obvious criteria to determine how to sample this data for training and testing.
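The monitoring concern described above can be illustrated with a minimal sketch: track the model's recent prediction accuracy in a sliding window and signal retraining when it drops noticeably below its deployment-time level. The window size, baseline accuracy, and tolerance below are illustrative choices, not values prescribed by any of the cited works.

```python
import random
from collections import deque

def accuracy(window):
    """Fraction of correct predictions in the sliding window."""
    return sum(window) / len(window)

def monitor(outcomes, window_size=50, baseline=0.9, tolerance=0.1):
    """Yield the indices at which a retraining signal would be raised.

    Hypothetical drift monitor: once the window is full and its accuracy
    falls below (baseline - tolerance), record a retraining point and
    reset the window, as if the model had just been retrained.
    """
    window = deque(maxlen=window_size)
    retrain_points = []
    for i, correct in enumerate(outcomes):
        window.append(correct)
        if len(window) == window_size and accuracy(window) < baseline - tolerance:
            retrain_points.append(i)   # signal: select fresh data, retrain
            window.clear()             # start monitoring the "new" model
    return retrain_points

random.seed(0)
# Simulated prediction outcomes: accurate at first, then degraded
# after an (unannounced) change in the environment at index 200.
outcomes = [random.random() < 0.95 for _ in range(200)] + \
           [random.random() < 0.60 for _ in range(200)]
print(monitor(outcomes))
```

The sketch sidesteps the hard part named in the text: it only detects *when* to retrain, while deciding *which* data to sample for retraining and testing remains open.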

Several approaches have been developed to keep machine learning models up-to-date with unforeseen changes in the execution environment. Next we present two general approaches: lifelong machine learning and collective learning.

Lifelong Machine Learning (LL)

LL tackles the fundamental problem of evolving machine learning models by relying on four system requirements [3]: learn on the job (online), discover new prediction tasks, update the knowledge base (strengthen or remove beliefs), and apply current knowledge to new tasks (transfer learning).
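Two of these requirements, updating the knowledge base and transferring current knowledge to new tasks, can be sketched with a toy knowledge base. All names and the confidence-update rule are illustrative assumptions, not the mechanism of [3].

```python
class KnowledgeBase:
    """Toy lifelong-learning knowledge base: beliefs carry a confidence
    that is strengthened by confirming evidence and weakened by
    contradicting evidence; discredited beliefs are removed."""

    def __init__(self, drop_below=0.2):
        self.beliefs = {}            # belief text -> confidence in [0, 1]
        self.drop_below = drop_below

    def update(self, belief, confirmed, step=0.2):
        conf = self.beliefs.get(belief, 0.5)   # new beliefs start neutral
        conf = min(1.0, conf + step) if confirmed else max(0.0, conf - step)
        if conf < self.drop_below:
            self.beliefs.pop(belief, None)     # remove discredited belief
        else:
            self.beliefs[belief] = conf

    def transfer(self, task_keywords):
        # Apply current knowledge to a new task: reuse beliefs mentioning
        # any keyword of the new task (a crude stand-in for transfer).
        return {b: c for b, c in self.beliefs.items()
                if any(k in b for k in task_keywords)}

kb = KnowledgeBase()
kb.update("battery drains fast at low temperature", confirmed=True)
kb.update("battery drains fast at low temperature", confirmed=True)
kb.update("wifi is always available", confirmed=False)
kb.update("wifi is always available", confirmed=False)  # drops below 0.2
print(kb.transfer(["battery"]))
```

A real LL system would of course represent knowledge far more richly; the sketch only mirrors the strengthen/remove dynamics named in the requirements.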

LL has been successfully applied in combination with various machine learning algorithms [3], for instance reinforcement learning, neural networks, topic modeling, and sentiment analysis. Different optimization frameworks can also steer LL, for instance Bayesian optimization [4,5], interactive optimization [6,7], and interactive machine learning [8].

Notwithstanding these concrete solutions, there are still interesting open challenges in LL, for instance: How to manage the knowledge base in terms of correctness and applicability? When to learn new tasks (self-motivation)? What to learn (multiple tasks and domains)? How to learn (self-supervision)?

We will use these questions as guidance to survey LL methods. We will also keep a systems engineering evolution perspective, which means that we will discuss architectural and process development concerns. To strengthen our understanding, we will rely on concrete descriptions of LL application scenarios.

Collective Learning in Self-adaptive Systems (CSAS)

Collective self-adaptive systems (CSAS) are distributed and interconnected systems composed of multiple agents that can perform complex tasks [9]. Such systems are characterized by a high degree of adaptation, giving them resilience in the face of perturbations.

Achieving effective self-adaptation in CSAS is complex for several reasons. First, the environment in which each agent operates is often highly dynamic, and significantly more complex than that of systems with centralized control. As a result, it is impossible to predict at design time all possible scenarios that can occur at runtime and to provide CSAS agents with pre-specified adaptation plans. Second, system-wide knowledge is distributed among the agents, entailing that advanced mechanisms are needed for efficient knowledge sharing and acquisition between agents [10,11].

To address these issues, CSAS agents must be enhanced with learning capabilities, allowing them both to instantiate learning models using the knowledge acquired from observing the environment and their peers, and to refine these models by assessing the outcomes of their actions [12].
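A minimal sketch of such collective refinement: each agent maintains a local estimate of some shared quantity, refines it from its own noisy observations, and periodically averages it with its peers so that knowledge spreads through the collective. This gossip-style averaging is an illustrative assumption, not a specific method from the cited papers.

```python
import random

def local_update(estimate, observation, lr=0.1):
    """Refine a local model from the agent's own observation."""
    return estimate + lr * (observation - estimate)

def gossip_round(estimates):
    """Knowledge sharing: every agent adopts the collective average."""
    avg = sum(estimates) / len(estimates)
    return [avg] * len(estimates)

random.seed(1)
true_value = 10.0                # the environment property to be learned
estimates = [0.0] * 5           # five agents, uninformed at the start
for step in range(200):
    # Each agent observes the environment with independent noise.
    estimates = [local_update(e, true_value + random.gauss(0, 1))
                 for e in estimates]
    if step % 20 == 19:         # share knowledge every 20 steps
        estimates = gossip_round(estimates)

print(estimates)
```

Averaging over peers both spreads knowledge and cancels individual observation noise, which is one reason collective estimates can outperform any single agent's.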

In this seminar we are interested in studying common practices and the learning techniques that best fit learning-enabled collective self-adaptive systems, as well as the variety of their applications.


  1. N. Villegas, G. Tamura, H. Müller, L. Duchien, and R. Casallas, “DYNAMICO: A Reference Model for Governing Control Objectives and Context Relevance in Self-Adaptive Software Systems,” in Software Engineering for Self-Adaptive Systems II, ser. LNCS, R. de Lemos, H. Giese, H. Müller, and M. Shaw, Eds. Springer, January 2013, vol. 7475, pp. 265–293.
  2. J. O. Kephart and D. Chess, “The Vision of Autonomic Computing,” Computer, vol. 36, no. 1, pp. 41–50, January 2003. [Online]. Available: http://portal.acm.org/citation.cfm?id=642200
  3. Z. Chen and B. Liu, “Lifelong machine learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 10, no. 3, pp. 1–145, 2016.
  4. P. I. Frazier, “A tutorial on Bayesian optimization,” arXiv preprint arXiv:1807.02811, 2018.
  5. B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas, “Taking the human out of the loop: A review of Bayesian optimization,” Proceedings of the IEEE, vol. 104, no. 1, pp. 148–175, 2015.
  6. A. Ramirez, J. R. Romero, and C. Simons, “A systematic review of interaction in search-based software engineering,” IEEE Transactions on Software Engineering, 2018.
  7. D. Meignan, S. Knust, J.-M. Frayret, G. Pesant, and N. Gaud, “A review and taxonomy of interactive optimization methods in operations research,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 5, no. 3, p. 17, 2015.
  8. J. J. Dudley and P. O. Kristensson, “A Review of User Interface Design for Interactive Machine Learning,” ACM Trans. Interact. Intell. Syst., vol. 8, no. 2, pp. 8:1–8:37, June 2018. [Online]. Available: http://doi.acm.org/10.1145/3185517
  9. M. Mitchell, “Self-awareness and control in decentralized systems.” in AAAI’05, 2005, pp. 80–85.
  10. E. Pournaras, P. Pilgerstorfer, and T. Asikis, “Decentralized collective learning for self-managed sharing economies,” ACM Transactions on Autonomous and Adaptive Systems (TAAS), vol. 13, no. 2, p. 10, 2018.
  11. J. O. Kephart, A. Diaconescu, H. Giese, A. Robertsson et al., “Self-adaptation in collective self-aware computing systems,” in Self-Aware Computing Systems. Springer, 2017, pp. 401–435.
  12. A. Rodrigues, R. D. Caldas, G. N. Rodrigues, T. Vogel, and P. Pelliccione, “A learning approach to enhance assurances for real-time self-adaptive systems,” in SEAMS’18. ACM, 2018, pp. 206–216.

Learning and Teaching Formats

The course is a seminar that starts with two introductory sessions: an initial lecture on foundations and one on the proposed focus areas.
After these introductory sessions on basic concepts and seminar topics, the students will pick a topic of interest and will be assigned literature to work on. Finally, each student is required to write a report about their findings and give a presentation.


For the final mark of the seminar, the report contributes 50% and the presentation of the findings contributes the remaining 50%. Please note that students are required to participate in the introductory sessions as well as in the presentations of the other students by asking questions and giving feedback.


After the introductory lectures we will identify and agree on a topic for each student; from then on there will be no regular meetings. Individual meetings between students and their supervisor(s) will be scheduled to provide intermediate feedback. All presentations will be given on the same day (date to be determined), usually near the end of the lecture period of the semester.

The lectures will take place on 15.10.2019 and 22.10.2019 at 9:15-10:45 in room A-2.2.

  • Link to the slides of the first lecture 
  • Link to the slides of the second lecture (seminar topics)