Hasso-Plattner-Institut

Winter Semester 2018/2019

24.10.2018 - Christopher Weyand

Analysis and Engineering of Scale-Free Network Models

Many real-world networks, although from fundamentally different domains, exhibit common properties such as a high clustering coefficient, large hubs, a power-law degree distribution (also referred to as scale-freeness), a giant component, small average distances, and a small diameter. Numerous models were proposed to produce a subset or all of these properties. Unfortunately, models that generate all properties lack a convincing explanation and do not relate to their observed emergence in the real world. On the other hand, the intuitive explanatory power inherent in the field of Game Theory inspired recent research in this direction. The main questions of my research are: 1. How do formal models provide insights and explanations about real-world networks? 2. Is there a mechanism that guides the creation of real-world networks and is easily expressible by game-theoretic means? 3. To what extent does such a model help in the creation of tools or concepts to further improve real networks?
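
As a minimal illustration of how scale-freeness can emerge from a simple growth mechanism, the following Python sketch implements preferential attachment (the classic Barabási–Albert model), in which each new node connects to existing nodes with probability proportional to their degree. It is a generic textbook example, not one of the game-theoretic models from the talk, and all names and parameters are illustrative.

    import random
    from collections import Counter

    def preferential_attachment(n, m=2):
        """Grow a graph where each new node attaches to m existing nodes,
        chosen proportionally to their current degree."""
        edges = [(0, 1), (0, 2), (1, 2)]  # small seed clique
        # A node appears in this list once per incident edge, so sampling
        # uniformly from it realizes degree-proportional attachment.
        stubs = [u for e in edges for u in e]
        for v in range(3, n):
            targets = set()
            while len(targets) < m:
                targets.add(random.choice(stubs))
            for t in targets:
                edges.append((v, t))
                stubs += [v, t]
        return edges

    edges = preferential_attachment(10_000)
    degrees = Counter(u for e in edges for u in e)
    histogram = Counter(degrees.values())
    for d in sorted(histogram)[:10]:
        print(d, histogram[d])  # counts fall off roughly as a power law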

31.10.2018 - no meeting (Reformationstag)

07.11.2018 - Francesco Quinzan

Escaping Large Deceptive Basins of Attraction with Heavy Mutation Operators

In many Evolutionary Algorithms (EAs), a parameter that needs to be tuned is the mutation rate, which determines the probability for each decision variable to be mutated. Typically, this rate is set to 1/n for the duration of the optimization, where n is the number of decision variables. This setting has the appeal that the expected number of mutated variables per iteration is one. A recent theoretical study proposed to sample the number of mutated variables from a power-law distribution. This results in a significantly higher probability of larger numbers of mutations, so that escaping local optima becomes more likely. In this paper, we propose another class of non-uniform mutation rates. We study the benefits of this operator in terms of average-case black-box complexity analysis and experimental comparison. We consider both pseudo-Boolean artificial landscapes and combinatorial problems (Minimum Vertex Cover and Maximum Cut). We observe that our non-uniform mutation rates significantly outperform the standard choices when dealing with landscapes that exhibit large deceptive basins of attraction.
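
As a point of reference, here is a minimal Python sketch contrasting standard bit mutation (each bit flipped independently with probability 1/n) with a heavy-tailed variant that samples the number of flipped bits from a power-law distribution, as in the cited theoretical study. The exponent beta and all function names are illustrative; this is not the new operator class proposed in the talk.

    import random

    def powerlaw_k(n, beta=1.5):
        """Sample a mutation strength k in {1, ..., n//2} with Pr[k] ~ k^-beta."""
        ks = list(range(1, n // 2 + 1))
        weights = [k ** -beta for k in ks]
        return random.choices(ks, weights=weights)[0]

    def mutate_heavy(x, beta=1.5):
        """Flip k uniformly chosen bits, with heavy-tailed k."""
        y = x[:]
        for i in random.sample(range(len(x)), powerlaw_k(len(x), beta)):
            y[i] ^= 1
        return y

    def mutate_standard(x):
        """Standard bit mutation: flip each bit independently with prob 1/n."""
        n = len(x)
        return [b ^ (random.random() < 1 / n) for b in x]

    x = [0] * 50
    print(sum(mutate_heavy(x)), sum(mutate_standard(x)))  # bits flipped per call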

14.11.2018 - no meeting (SAP / Gebäudesperrung)

21.11.2018 - Johannes Wolf

Techniques for Multi-source Data Visualization and Analysis in Geospatial Applications

Cadastral offices and companies responsible for infrastructure such as roads or railway networks usually deal with many different data sources for planning and preservation purposes. Ground-penetrating 2D radar scans are captured in road environments for examination of pavement condition and below-ground variations such as lowerings and developing potholes. 3D point clouds captured above ground provide a precise digital representation of the road’s surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization and analysis can be used for infrastructure maintenance tasks. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by giving context information, e.g., about manholes in the street, that can be directly accessed during manual and automated data evaluation. This talk will highlight individual use cases for the combined data analysis of the aforementioned sources.

28.11.2018 - Lan Jiang / Christiane Hagedorn

Suggesting useful data preparation steps with Data-Knoller

Data preparation is an essential prerequisite of any data-driven application. However, this process is reported to account for 50% to 80% of the time spent across a data analysis lifecycle. Typical data preparation practice requires a user to conduct several trials and errors to figure out which preparation step to perform, which can consume a lot of time.

Suggesting the ideal preparation step to users would reduce the time spent on these trials. We address the data preparation step suggestion problem within the Data-Knoller framework. In this talk, we will describe our initial ideas on methodology, metrics, and evaluation. We will also present the challenges identified at this stage.

Using Gameful Learning in Scalable E-Learning Environments

Do you know what gameful learning is and how 'gamification' and 'game-based learning' are connected to it? Lots of people mix up both terms and simply call all kinds of gameful learning experiences gamification. Let's find out together how well you can distinguish between these two!

In this interactive talk, differences and similarities between game-based learning and gamification are presented. Furthermore, findings on how to use both in scalable e-learning environments will be discussed. As a research basis, multiple prototypes for games potentially used in such environments were created. One of these prototypes was programmed in a university seminar. Later on, this game prototype was used in a programming MOOC to teach Boolean logic. Closing the talk, we will get the chance to discuss the most important findings of the seminar as well as first insights from the course evaluation.

05.12.2018 - Johannes Henning

Improving the development of data-intensive applications in high-level programming languages

Databases are highly optimized systems, as are runtime environments for programming languages. Yet, when we process large amounts of data stored inside a database with algorithms written in a high-level programming language, we lose many of the optimizations on both sides. The reason for this loss is the information hiding in both directions. We believe that it is possible to utilize more of the optimizations already in place by redesigning the interface over which database and programming language exchange data to expose more of the underlying implementation. In this talk we will motivate this approach and its relation to existing solutions.
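
As a small sketch of the boundary cost described above, using Python's built-in sqlite3 module purely for illustration: pulling every row across the database/language interface forfeits the database's optimized execution, whereas pushing the computation down lets the database apply it and moves only the result across the interface.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE measurements (value REAL)")
    conn.executemany("INSERT INTO measurements VALUES (?)",
                     [(float(i),) for i in range(100_000)])

    # Variant 1: pull every row across the interface and aggregate in
    # Python -- the database's execution engine is bypassed.
    total = sum(v for (v,) in conn.execute("SELECT value FROM measurements"))

    # Variant 2: push the computation into the database; only the final
    # result crosses the interface.
    (total2,) = conn.execute("SELECT SUM(value) FROM measurements").fetchone()

    assert total == total2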

12.12.2018 - Konrad Felix Krentz / Kiarash Diba

A Denial-of-Sleep-Resilient Medium Access Control Layer for IEEE 802.15.4 Networks

With the emergence of the Internet of Things (IoT), plenty of battery-powered and energy-harvesting devices are being deployed to fulfill sensing and actuation tasks in a variety of application areas, such as smart homes, precision agriculture, smart cities, and industrial automation. In this context, a critical issue is that of denial-of-sleep attacks. Such attacks deprive low-power devices of entering energy-saving sleep modes, thereby draining their charge. At the very least, a successful denial-of-sleep attack causes a long outage of the victim device. Moreover, for battery-powered devices, successful denial-of-sleep attacks necessitate replacing batteries, which is tedious and sometimes even infeasible if a device is deployed at an inaccessible location. While the research community came up with numerous defenses against denial-of-sleep attacks, most present-day IoT protocols include no denial-of-sleep defenses at all, presumably due to a lack of awareness and unsolved integration problems. Moreover, although many denial-of-sleep defenses exist, effective defenses against certain kinds of denial-of-sleep attacks are yet to be found. The overall contribution of my PhD research, which I outline in this talk, is a denial-of-sleep-resilient medium access control (MAC) layer for IoT devices that communicate over IEEE 802.15.4 links. Internally, my MAC layer comprises two main components. The first component is a denial-of-sleep-resilient protocol for establishing session keys among adjacent IEEE 802.15.4 nodes. The established session keys serve the dual purpose of implementing (i) basic wireless security and (ii) complementary denial-of-sleep defenses that belong to the second component. The second component is a denial-of-sleep-resilient MAC protocol. Notably, this MAC protocol not only incorporates novel denial-of-sleep defenses, but also state-of-the-art mechanisms for achieving low energy consumption, high throughput, and high delivery ratios. Altogether, my MAC layer resists, or at least greatly mitigates, all denial-of-sleep attacks against it that we are aware of. Furthermore, my MAC layer is self-contained and can thus act as a drop-in replacement for IEEE 802.15.4-compliant, yet insecure, MAC layers.

Toward a comprehensive framework for process mining

Process mining enables the extraction of knowledge about underlying processes from event data recorded in various information systems. In order to apply process mining techniques, the data needs to be in the form of a so-called event log. Although process mining is a relatively young research field, many techniques have been developed during the past few years and have proven able to extract useful insights across various domains. However, the research community has mostly concentrated on the development of new and more efficient techniques, while other important aspects of the whole knowledge discovery process, such as event log extraction and transformation, have often been neglected. In this talk, we motivate the benefits of developing a comprehensive framework covering the full spectrum of process mining activities. Furthermore, I position my future research plan in order to collect feedback from fellow Research School members.
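
For readers unfamiliar with the term, the sketch below shows the minimal shape of an event log and the basic transformation into per-case traces that most process mining techniques expect; the column names and example events are purely illustrative.

    from collections import defaultdict

    # A minimal event log: each event carries at least a case id,
    # an activity name, and a timestamp.
    events = [
        ("order-1", "create order", "2018-12-12T09:00"),
        ("order-1", "check stock",  "2018-12-12T09:05"),
        ("order-2", "create order", "2018-12-12T09:07"),
        ("order-2", "cancel order", "2018-12-12T10:00"),
        ("order-1", "ship order",   "2018-12-12T11:30"),
    ]

    # Group events into traces: one timestamp-ordered sequence of
    # activities per case.
    traces = defaultdict(list)
    for case, activity, ts in sorted(events, key=lambda e: e[2]):
        traces[case].append(activity)

    for case, trace in sorted(traces.items()):
        print(case, "->", trace)
    # order-1 -> ['create order', 'check stock', 'ship order']
    # order-2 -> ['create order', 'cancel order']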

19.12.2018 - Shahryar Khorasani / Christian Adriano

Neuroimaging Biomarker Prediction with Convolutional Neural Networks

Studying neurological diseases requires multimodal large-scale data analysis. Here, I propose a project to develop a platform for integrating high-dimensional phenotypes of brain structures and genetic data. In this project, a convolutional neural network (CNN) model will be used to extract structural biomarkers represented in magnetic resonance imaging (MRI) scans. Additionally, another model will be developed to analyze genetic data and perform Genome-Wide Association Studies (GWAS), incorporating latent representations of the high-dimensional structural phenotypes. Subsequently, the two models will be combined into a pipeline connecting the genome and phenome of each subject. This prototype can open new avenues for epidemiological research and personalized medicine.
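
As a rough, hypothetical sketch of the first pipeline stage, the following PyTorch snippet shows a tiny 3D CNN encoder that maps an MRI volume to a latent phenotype vector for downstream association analysis; the architecture and all sizes are illustrative, not the model developed in this project.

    import torch
    import torch.nn as nn

    class MRIEncoder(nn.Module):
        """Tiny 3D CNN mapping an MRI volume to a latent phenotype vector."""
        def __init__(self, latent_dim=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(16, latent_dim)

        def forward(self, x):  # x: (batch, 1, depth, height, width)
            z = self.features(x).flatten(1)
            return self.head(z)  # latent representation, e.g. as GWAS input

    volume = torch.randn(2, 1, 32, 32, 32)  # two dummy MRI volumes
    print(MRIEncoder()(volume).shape)       # torch.Size([2, 64])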

Investigating the Fault Understanding Assumption

Industry surveys estimate that programmers spend 20% to 40% of their time debugging software, which corresponds to $59 to $312 billion a year. Studies have investigated why debugging is so time-consuming, and various debugging techniques have been developed. However, these techniques rely on the programmers' ability to recognize the software fault among a list of suspicious program statements. This was named the "perfect fault understanding assumption". Proposed solutions mitigated this problem by applying human judgment to inspect suspicious program statements interactively. Although these solutions were effective in locating faults, they cannot predict the accuracy of fault understanding. Predicting accuracy is important for making decisions about matching programmers with tasks and about the need to replicate tasks to collect multiple opinions on a suspicious program statement. In my talk, I will discuss predictors for the accuracy and the replication of fault understanding tasks. I will present some preliminary results and the implications for my future work.
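
For context, the ranked lists of suspicious statements mentioned above are typically produced by spectrum-based fault localization. The sketch below computes the standard Ochiai suspiciousness score from per-test coverage; it is one common formula, not necessarily the one used in this work.

    import math

    def ochiai(passed, failed):
        """Rank statements by suspiciousness.
        passed/failed: lists of sets of statement ids covered per test."""
        stmts = set().union(*passed, *failed)
        scores = {}
        for s in stmts:
            ef = sum(s in cov for cov in failed)  # failing tests covering s
            ep = sum(s in cov for cov in passed)  # passing tests covering s
            denom = math.sqrt(len(failed) * (ef + ep))
            scores[s] = ef / denom if denom else 0.0
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # Toy example: statement 3 is executed only by the failing test.
    passed = [{1, 2}, {1, 2, 4}]
    failed = [{1, 2, 3}]
    print(ochiai(passed, failed))  # statement 3 ranks first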

09.01.2019 - Lukas Pirl / Sumit Shekhar

Title tbd

We evaluate trade-offs and best practices for experimental dependability assessments in the emerging area of the Internet of Things (IoT). Our investigations are conducted in different environments (i.e., simulation, lab, and field experiments) and leverage the concept of software fault injection (SFI). Of particular interest is how to reduce discrepancies between the three environments and thereby increase the representativeness of the respective experiments. Additionally, this research will contribute concrete results from assessments of IoT technologies. In Rail2X – a joint project with rail services industries – the wireless network standard IEEE 802.11p for intelligent transportation systems is being investigated. Experiments in the HPI IoT Lab quantified and confirmed 802.11p's design goal of low latency in comparison to traditional standards. The lower latency could also be confirmed in early SFI experiments (i.e., wireless "jamming"). In first field experiments with moving stations, however, we observed a noticeably high standard deviation of the latency (up to four times the mean). In simultaneous efforts, we analyze the 802.11p network stack implemented in the network simulator ns-3 and how to tune its parameters to match the aforementioned observations.

Multi-Dimensional-Image Abstraction

Image abstraction is an inherent part of many image processing and computer vision pipelines and has applications in the fields of visualization, aesthetics, and non-photorealism. So far, researchers have mainly focused on using the color channel information for the purpose of image abstraction. However, it would be interesting to analyze how we can also use data such as depth, lighting, and shape for image abstraction. In the near future, advancements in mobile devices (e.g., stereo cameras in recent iOS devices) and commodity hardware (e.g., Microsoft Kinect) with sophisticated sensors will provide us with more such information. In my talk, I will give a brief overview of how we can use multi-dimensional image data for novel or potentially better image abstraction techniques.
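
As a hypothetical example of using depth for abstraction, the sketch below smooths a color image only across pixels of similar depth (a simplified joint/cross-bilateral filter whose range term is computed on the depth channel), so that object boundaries implied by depth stay crisp. Parameters and the approach are illustrative, not the techniques from the talk.

    import numpy as np

    def depth_aware_abstraction(img, depth, sigma=0.1, radius=2):
        """Smooth img only across pixels of similar depth.
        img: (H, W, 3) floats in [0, 1]; depth: (H, W) floats in [0, 1]."""
        h, w, _ = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                patch = img[y0:y1, x0:x1]
                dpatch = depth[y0:y1, x0:x1]
                # Weight neighbours by depth similarity only (no spatial
                # term), so smoothing stops at depth discontinuities.
                wgt = np.exp(-((dpatch - depth[y, x]) ** 2) / (2 * sigma ** 2))
                out[y, x] = (patch * wgt[..., None]).sum((0, 1)) / wgt.sum()
        return out

    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))
    depth = np.tile(np.linspace(0, 1, 32), (32, 1))  # simple depth ramp
    print(depth_aware_abstraction(img, depth).shape)  # (32, 32, 3)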

16.01.2019 - Norman Kluge / He Xu

Optimising Bundled-Data Balsa Circuits

Balsa provides an open-source design flow where asynchronous circuits are created from high-level specifications, but the syntax-driven translation often results in performance overhead. The talk presents an automated design flow tackling core drawbacks of the original Balsa design flow: To improve the performance of the circuits, the fact that bundled-data circuits can be divided into a data and a control path is exploited. Hence, tailored optimisation techniques can be applied to both paths separately. For control path optimisation, STG-based resynthesis has been used (applying logic minimisation). To optimise the data path, a standard combinatorial optimiser is used. However, this removes the matched delays needed for a properly working bundled-data circuit. Therefore, two algorithms to automatically insert proper matched delays are used. Additionally, transistor sizing is applied to improve performance and/or energy consumption. Circuit latency improvements of up to 48%, average/peak power improvements of up to 75% and 61%, respectively, and energy consumption improvements of up to 63% compared to the original Balsa implementation can be shown.

Run-time Verification and Validation for Self-adaptive Systems

Software is now the backbone of human activity. Software systems play important roles in industrial facilities, automobiles, aircraft, etc. In self-adaptive systems, the software has to deal with rapidly changing environmental conditions and failures of its own system. How to guarantee the functional and non-functional requirements of the system during and after the adaptation process is a crucial problem. Verification and validation (V&V) theory is widely adopted throughout the software development cycle. Extending these techniques into run-time verification and validation for self-adaptive systems is a great challenge. Run-time V&V can ensure that, during or after the adaptation, the system’s requirements and its core qualities will not be compromised and, at the same time, that the goals of the adaptation process will be satisfied. Run-time V&V methods and tools are critical for the success of autonomous, autonomic, smart, self-adaptive, and self-managing systems. In this talk, I will introduce general runtime verification and validation concepts and then the more specific “runtime model checking” technique, which is the topic I want to pursue. After that, I will introduce a special concept, namely the "envelope", which is mostly used in aviation system design and may also help with self-adaptive system and runtime V&V design.
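
To make the run-time V&V idea concrete, here is a minimal, hypothetical Python monitor that checks after every adaptation step whether an observed quality metric stays inside a predefined safe envelope. Real run-time V&V would check richer (e.g., temporal) properties and trigger countermeasures rather than merely record violations.

    class EnvelopeMonitor:
        """Check that an observed quantity stays inside a safe envelope."""
        def __init__(self, low, high):
            self.low, self.high = low, high
            self.violations = []

        def check(self, step, value):
            ok = self.low <= value <= self.high
            if not ok:
                # A real self-adaptive system would trigger a rollback or
                # a fallback configuration here, not just log the event.
                self.violations.append((step, value))
            return ok

    monitor = EnvelopeMonitor(low=0.0, high=100.0)  # e.g., response time in ms
    for step, latency in enumerate([42.0, 95.0, 130.0, 60.0]):
        if not monitor.check(step, latency):
            print(f"step {step}: latency {latency} ms outside envelope")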

23.01.2019 - Fabio Niephaus / Bjarne Pfitzner (preliminary)

Title tbd

abstract tbd

Title tbd

abstract tbd

30.01.2019 - Robert Kovács / Robert Schmid (preliminary)

Title tbd

abstract tbd

Title tbd

abstract tbd

06.02.2019 - Sven Köhler / Lin Zhou

Title tbd

abstract tbd

Title tbd

abstract tbd