Hasso-Plattner-Institut

Runtime Monitoring (Summer Semester 2018)

Lecturers: Prof. Dr. Holger Giese (Systemanalyse und Modellierung), Lucas Sakizloglou (Systemanalyse und Modellierung)

General Information

  • Weekly hours: 2
  • ECTS: 3
  • Graded: Yes
  • Enrollment deadline: 20.04.2018
  • Course form: Seminar
  • Course type: Compulsory elective module

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • SAMT: Software Architecture & Modeling Technology
    • HPI-SAMT-K Konzepte und Methoden
    • HPI-SAMT-S Spezialisierung
    • HPI-SAMT-T Techniken und Werkzeuge


The correct and predictable behavior of software is crucial to many key aspects of society, for instance the banking sector, healthcare, or transportation. Formal verification (referred to in the following as verification) is the discipline of software engineering that aims to prove the correctness of software with respect to a specification of its behavior. In a nutshell, verification entails a) the modeling of a system as an abstract mathematical object, b) the capturing of desired behavior as properties, and c) the automated proving (or disproving) that the system model satisfies said properties.
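The three steps can be sketched on a toy example. The model below is a made-up two-process mutual-exclusion protocol (invented for illustration, not taken from the seminar material); its state space is explored exhaustively, so the check covers all possible executions:

```python
from collections import deque

# a) Model: a toy two-process mutual-exclusion protocol (invented for
#    illustration). A state is (loc0, loc1, turn), where each process is
#    "idle", "wait"ing, or in its "crit"ical section.
def successors(state):
    loc0, loc1, turn = state
    loc = [loc0, loc1]
    result = []
    for i in (0, 1):
        nxt = list(loc)
        if loc[i] == "idle":                  # request the critical section
            nxt[i] = "wait"
            result.append((nxt[0], nxt[1], turn))
        elif loc[i] == "wait" and turn == i:  # enter when it is our turn
            nxt[i] = "crit"
            result.append((nxt[0], nxt[1], turn))
        elif loc[i] == "crit":                # leave and pass the turn
            nxt[i] = "idle"
            result.append((nxt[0], nxt[1], 1 - i))
    return result

# b) Property: mutual exclusion -- never both processes in "crit".
def invariant(state):
    return not (state[0] == "crit" and state[1] == "crit")

# c) Proof: breadth-first exploration of *all* reachable states.
def check(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state  # counterexample found
        for succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return True, None  # property holds in every reachable state

ok, counterexample = check(("idle", "idle", 0))
print(ok)  # True: mutual exclusion holds in this model
```

Because the toy model has finitely many states, the exhaustive search terminates; the next paragraph explains why this is often not feasible for real software.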

Verification takes place at design time and aims to prove that all possible software executions satisfy a given property. There are various reasons why in practice this goal is often intractable. Firstly, examining all possible software executions requires either immense resources or is outright impossible [4]. Secondly, some information is available only at runtime or is more conveniently checked at runtime, e.g. when third-party library code is part of the software [3]. Finally, in modern software, behavior may depend heavily on the system's environment. For example, in a hospital where medical staff routinely interact with software, only assumptions about this interaction can be made at design time. Verification relies on these assumptions even though they might be falsified at runtime.

Runtime monitoring (sometimes also called runtime verification) [1] is an automated technique that has emerged to complement verification and address the limitations mentioned above. Runtime monitoring aims to (dis)prove that the actual execution (rather than all possible ones) satisfies a given property. Compared to verification, runtime monitoring is therefore lightweight [2,4]; it also conveniently checks the assumptions made at design time, e.g. with respect to the functionality of third-party code [3], and covers expectations about how the environment interacts with the software [2]. Runtime monitoring is increasingly popular and is used either stand-alone or in combination with verification techniques, especially in safety-critical systems.

In this seminar, we will study the basics of runtime monitoring: methods to model software execution, property languages, and, of course, tools and techniques for the monitoring itself. The goal is to attain a general understanding of, and hands-on experience with, runtime monitoring.
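By contrast with exhaustive verification, a runtime monitor observes only the single execution at hand. A minimal sketch, assuming a made-up safety property ("a file is never written after it has been closed") with invented event names:

```python
# A minimal online monitor, written as a finite-state machine for the
# safety property "no write after close" (property and event names are
# invented for illustration, not from the seminar material).
class Monitor:
    def __init__(self):
        self.state = "closed"
        self.verdict = None  # None = no violation observed so far

    def step(self, event):
        # Process one observed event; a violation verdict is final.
        if self.verdict == "violation":
            return self.verdict
        if self.state == "closed":
            if event == "open":
                self.state = "open"
            elif event == "write":      # writing while closed: violation
                self.verdict = "violation"
        elif self.state == "open":
            if event == "close":
                self.state = "closed"
        return self.verdict

# Monitoring the actual execution: feed the observed trace event by event.
monitor = Monitor()
trace = ["open", "write", "write", "close", "write"]
for event in trace:
    verdict = monitor.step(event)
print(verdict)  # "violation": the last write happened after close
```

Note that the monitor only ever sees the one trace it is attached to; it makes no claim about other possible executions of the program.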


[1] Havelund et al., Verify Your Runs, Working Conference on Verified Software: Theories, Tools, and Experiments, 2005

[2] Leucker, Teaching Runtime Verification, International Conference on Runtime Verification, 2011

[3] Leucker et al., A Brief Account of Runtime Verification, The Journal of Logic and Algebraic Programming, 2009

[4] Sokolsky et al., Introduction to the Special Issue on Runtime Verification, Formal Methods in System Design, 2012

Learning and Teaching Methods

The course is a seminar. There will be a few introductory meetings presenting basic concepts of runtime monitoring, including demonstrations and hands-on exercises with available tools. Afterwards, students will be given a selection of techniques/papers, which will be briefly introduced; each student chooses one, writes a short review of it, and holds a presentation on it. Students should develop examples to demonstrate their selected techniques/papers and may optionally implement these examples.


We will grade the students' reports (50%) and presentations (50%). Participation during other students' presentations, in the form of questions and feedback, is also required.


Besides individual feedback meetings with supervisors and the introductory meetings, there will be no other regular meetings during the semester. All presentations will be given on the same day (date to be determined), typically near the end of the semester's lecture period.

The first meeting will be on Tuesday the 17th of April at 13:30 in room A-2.2.