Runtime Verification: Online Testing Powered by Formal Methods (Winter Semester 2018/2019)

Lecturers: Prof. Dr. Holger Giese (Systemanalyse und Modellierung), Lucas Sakizloglou (Systemanalyse und Modellierung), Christian Zöllner (Systemanalyse und Modellierung)

General Information

  • Weekly hours: 4
  • ECTS: 6
  • Graded: Yes
  • Enrollment deadline: 26.10.2018
  • Teaching format: Seminar / Project
  • Enrollment type: Compulsory module
  • Language of instruction: German

Degree Programs, Module Groups & Modules

IT-Systems Engineering MA
  • IT-Systems Engineering
    • HPI-ITSE-A Analyse
    • HPI-ITSE-E Entwurf
    • HPI-ITSE-K Konstruktion
    • HPI-ITSE-M Maintenance
  • SAMT: Software Architecture & Modeling Technology
    • HPI-SAMT-K Konzepte und Methoden
    • HPI-SAMT-S Spezialisierung
    • HPI-SAMT-T Techniken und Werkzeuge
  • BPET: Business Process & Enterprise Technologies
    • HPI-BPET-K Konzepte und Methoden
    • HPI-BPET-T Techniken und Werkzeuge

Description

Verification comprises all techniques that aim to show that software satisfies its intended behavior, also known as its specification [Ieee15]. Since the beginnings of computer science, verification has been an active and challenging research field, and it is all the more relevant now that software is used in domains that are key to the functioning of society, e.g. healthcare, banking, and transportation. In this project seminar, we consider such critical software and focus on two traditional verification techniques, namely formal methods and testing.

Formal methods are based on the fundamentals of theoretical computer science. In a nutshell, they are mathematics-based techniques that entail a) modeling a system as an abstract mathematical object, b) capturing the desired behavior in a formal specification, and c) automatically proving (or disproving) that the system model satisfies said specification. Formal methods are applied at design time and aim to prove that every possible execution satisfies a given specification, thus allowing for complete coverage. Testing, on the other hand, is of an applied nature. Typically, it requires that code is actually executed with a given set of manually created test cases. It thus cannot establish that the software under test satisfies the specification in every execution, only that certain states of a single execution, the current one, will (or will not) be reached.
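
To make the contrast concrete, the following minimal Python sketch (not part of the course material; the toy transition system and state names are ours) illustrates the design-time, exhaustive flavor of formal methods: it explores every reachable state of a small abstract model and either proves an invariant or returns a counterexample path, something a single test run cannot do.

    from collections import deque

    # Toy transition system: states are integers, transitions map each state
    # to its possible successors (non-determinism is allowed).
    transitions = {0: [1, 2], 1: [0, 3], 2: [2], 3: [3]}

    def violates(state):
        """Safety property (invariant): the 'bad' state 3 is never reached."""
        return state == 3

    def check_invariant(initial, transitions, violates):
        """Exhaustively explore all reachable states, breadth-first.
        Unlike a test, which observes one execution, this covers every
        execution of the abstract model and yields either a proof or a
        counterexample path."""
        visited = {initial}
        queue = deque([(initial, [initial])])
        while queue:
            state, path = queue.popleft()
            if violates(state):
                return False, path      # counterexample trace
            for succ in transitions.get(state, []):
                if succ not in visited:
                    visited.add(succ)
                    queue.append((succ, path + [succ]))
        return True, None               # invariant holds on the model

    print(check_invariant(0, transitions, violates))  # (False, [0, 1, 3])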

Each technique has its own merits and aims to address different concerns. In the context of critical application domains, like the ones mentioned above, the ideal goal would be to conclusively prove that certain bad things will never happen during an execution. The most suitable verification method would therefore be formal methods. However, there are various reasons why applying formal methods to complex systems is often infeasible. First, examining all possible software executions either requires immense resources or is sometimes impossible [Sokolsky12]. Second, some information about the software is only available at runtime, e.g. when third-party library code is used [Leucker09]. Finally, formal methods rely on a model describing the system behavior. All models make assumptions that, especially in modern software and the complex environments in which it is deployed, may turn out to be false. Testing, on the other hand, is a practical and potentially inexpensive technique. It is, however, dependent on human capabilities and only allows for limited coverage, which is obviously (and, unfortunately, as demonstrated in the past [Tan16]) inadequate for critical domains.

A variant of testing, called model-based testing, aims to address the coverage shortcoming: like formal methods, it employs a system model to create a comprehensive set of test cases. However, the problem of modeling assumptions persists. Another variant of testing, which aims to resolve the problem of assumptions, is called online testing. Online testing is conducted while the system is running, hence all assumptions made at design time can be checked. Runtime Verification (RV) [Havelund05] is an automated technique that has emerged as a form of formal yet scalable online testing. RV aims to prove that the current execution, in every possible version of it, satisfies a specification. It is a feasible combination of formal methods and testing that provides sufficient coverage and a means to check the assumptions made at design time. RV is increasingly popular and, in safety-critical systems, is often used in combination with formal methods and testing. In this seminar, we will study the basics of RV, that is, methods to model software execution, specification languages, and, of course, tools and techniques for performing the verification. The goal is to attain a general understanding of verification requirements, techniques, and tools for critical systems.
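
As an illustration of the runtime side, here is a minimal, hypothetical monitor sketch in Python (the property and event names are ours, not taken from a seminar benchmark): it consumes the events of a single running execution one by one and reports a verdict as soon as the safety property "after close, no further write occurs" is violated.

    class NoWriteAfterCloseMonitor:
        """Online monitor for the safety property
        'after a close event, no write event occurs' on one execution."""

        def __init__(self):
            self.closed = False
            self.verdict = None            # None = no violation observed so far

        def observe(self, event):
            """Consume one event of the running execution, update the verdict."""
            if self.verdict is not None:
                return self.verdict        # a violation verdict is final
            if event == "close":
                self.closed = True
            elif event == "write" and self.closed:
                self.verdict = False       # property violated on this execution
            return self.verdict

    # Events are fed to the monitor as the monitored program produces them.
    monitor = NoWriteAfterCloseMonitor()
    for event in ["open", "write", "close", "write"]:
        verdict = monitor.observe(event)
    print(verdict)                         # False: a write occurred after close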

Literature

[Ieee15] IEEE Std 1012-2004, IEEE Standard for Software Verification and Validation, 2005.

[Havelund05] Havelund et al., Verify your Runs, Working Conference on Verified Software: Theories, Tools, and Experiments, 2005.

[Leucker09] Leucker et al., A Brief Account of Runtime Verification, The Journal of Logic and Algebraic Programming, 2009.

[Sokolsky12] Sokolsky et al., Introduction to the Special Issue on Runtime Verification, Formal Methods in System Design, 2012.

[Tan16] Tan, A Collection of Well-Known Software Failures, http://www.cse.psu.edu/~gxt29/bug/softwarebug.html, accessed 2018-09-28.

Learning and Teaching Methods

The course is a project seminar. There will be a few mandatory introductory meetings covering the basic concepts of RV. Afterwards, students will be given a selection of tools and standard benchmarks*; the tools and benchmarks will also be introduced. Students are then required to form teams and a) use the tools to check the benchmarks, b) compare the solutions with respect to usability and performance, c) document their findings in the form of a short report and a presentation, and d) present their findings in a final meeting. Throughout the seminar, further meetings can be arranged in order to answer questions or cover cross-cutting topics. For the supervision of topics there will be some mandatory milestone meetings; however, students have the option to be supervised by the teaching assistant (TA) in an agile, Scrum-like fashion: in that case, the TA will organize regular meetings, discuss Scrum roles within the team, and split the project into development tasks.

*Benchmarks comprise input execution traces and a specification (correctness properties).
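
For illustration only, a benchmark entry might look like the following sketch (a hypothetical format, not the one handed out in the seminar): a finite execution trace paired with one correctness property, here evaluated with a simple finite-trace check.

    # Hypothetical benchmark: a finite, timestamped execution trace plus
    # the correctness property the tools are expected to check against it.
    trace = [
        (0, "request", 1),
        (3, "response", 1),
        (7, "request", 2),
    ]

    def every_request_answered(trace):
        """Finite-trace check of 'every request is eventually followed by a
        matching response'; requests still open at the end are violations."""
        pending = set()
        for _timestamp, kind, request_id in trace:
            if kind == "request":
                pending.add(request_id)
            elif kind == "response":
                pending.discard(request_id)
        return not pending

    print(every_request_answered(trace))   # False: request 2 is never answered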

Assessment

We will grade the student projects (50%), the reports (40%), and the presentations (10%). Participation during other students' presentations, in the form of questions and feedback, is also required.

Dates

Besides individual feedback meetings with supervisors and the introductory meetings, there will not be any other regular meetings during the semester. Presentations will be given on a single day (date to be determined), usually near the end of the lecture period of the semester.

The first meeting will be on Tuesday, October 16, at 09:15 in room A-2.2.
