
Winter Semester 2018/2019

24.10.2018 - Christopher Weyand

Analysis and Engineering of Scale-Free Network Models

Many real-world networks, although from fundamentally different domains, exhibit common properties such as a high clustering coefficient, large hubs, a power law degree distribution (also referred to as scale-freeness), a giant component, small average distances, and a small diameter. Numerous models have been proposed to produce a subset or all of the properties of real-world networks. Unfortunately, models that generate all properties lack a convincing explanation and do not relate to their observed emergence in the real world. On the other hand, the intuitive explanatory power inherent in the field of Game Theory inspired recent research in this direction. The main questions of my research are: 1. How do formal models provide insights and explanations about real-world networks? 2. Is there a mechanism that guides the creation of real-world networks and is easily expressible by game-theoretic means? 3. To what extent does such a model help in the creation of tools or concepts to further improve real networks?
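As a point of reference for the model landscape discussed above, a minimal sketch (in Python, using networkx; not the speaker's model) of the classic Barabási-Albert preferential-attachment process, one of the best-known generators of power-law degree distributions:

    import networkx as nx

    # Barabasi-Albert preferential attachment: each new node attaches to
    # m existing nodes with probability proportional to their degree.
    G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

    degrees = [d for _, d in G.degree()]
    print("max degree (hub size):", max(degrees))
    print("average clustering:", nx.average_clustering(G))
    print("connected (one giant component):", nx.is_connected(G))

Notably, Barabási-Albert graphs reproduce the power law and a giant component but exhibit low clustering, illustrating the abstract's point that simple models rarely capture all observed properties at once.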

31.10.2018 - no meeting (Reformation Day)

07.11.2018 - Francesco Quinzan

Escaping Large Deceptive Basins of Attraction with Heavy Mutation Operators

In many Evolutionary Algorithms (EAs), a parameter that needs to be tuned is the mutation rate, which determines the probability for each decision variable to be mutated. Typically, this rate is set to 1/n for the duration of the optimization, where n is the number of decision variables. This setting has the appeal that the expected number of mutated variables per iteration is one. In a recent theoretical study, it was proposed to sample the number of mutated variables from a power-law distribution. This results in a significantly higher probability of larger numbers of mutations, so that escaping local optima becomes more probable. In this paper, we propose another class of non-uniform mutation rates. We study the benefits of this operator in terms of average-case black-box complexity analysis and experimental comparison. We consider both pseudo-Boolean artificial landscapes and combinatorial problems (Minimum Vertex Cover and Maximum Cut). We observe that our non-uniform mutation rates significantly outperform the standard choices when dealing with landscapes that exhibit large deceptive basins of attraction.
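A minimal sketch of the heavy-tailed idea (assuming a (1+1) EA on the OneMax toy landscape; the exponent and cap are illustrative choices, not the paper's operator): sample the number of flipped bits from a power law instead of flipping each bit independently with probability 1/n.

    import numpy as np

    rng = np.random.default_rng(0)

    def powerlaw_k(n, beta=1.5):
        """Sample mutation strength k in {1,...,n//2} with P(k) ~ k^(-beta)."""
        ks = np.arange(1, n // 2 + 1)
        p = ks ** (-beta)
        return rng.choice(ks, p=p / p.sum())

    def heavy_tailed_mutation(x, beta=1.5):
        k = powerlaw_k(len(x), beta)
        flip = rng.choice(len(x), size=k, replace=False)
        y = x.copy()
        y[flip] ^= 1  # flip exactly k bits
        return y

    # (1+1) EA: keep the offspring if it is at least as good.
    n = 100
    x = rng.integers(0, 2, n)
    for _ in range(10_000):
        y = heavy_tailed_mutation(x)
        if y.sum() >= x.sum():
            x = y
    print("OneMax value:", x.sum(), "of", n)

The occasional large k is what lets such operators jump across wide basins of attraction that 1/n-style mutation would almost never escape.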

14.11.2018 - no meeting (SAP / building closure)

21.11.2018 - Johannes Wolf

Techniques for Multi-source Data Visualization and Analysis in Geospatial Applications

Cadastral offices and companies responsible for infrastructure such as roads or railway networks usually deal with many different data sources for planning and preservation purposes. Ground-penetrating 2D radar scans are captured in road environments to examine pavement condition and below-ground variations such as lowerings and developing potholes. 3D point clouds captured above ground provide a precise digital representation of the road's surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization and analysis can be used for infrastructure maintenance tasks. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by giving context information, e.g., about manholes in the street, that can be directly accessed during manual and automated data evaluation. This talk will highlight individual use cases for the combined data analysis of the aforementioned sources.

28.11.2018 - Lan Jiang / Christiane Hagedorn

Suggesting useful data preparation steps with Data-Knoller

Data preparation is an essential prerequisite of any data-driven application. However, this process is reported to account for 50% to 80% of the time expended across a data analysis lifecycle. The typical data preparation practice requires a user to conduct several trial-and-error rounds to figure out which preparation step to perform, which can take a considerable amount of time.

Suggesting ideal preparation steps to users should reduce the time spent on these trials. We address the data preparation step suggestion problem within the Data-Knoller framework. In this talk, we will describe our initial ideas on methodology, metrics, and evaluation. We will also present the challenges identified at this stage.
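To make the setting concrete, a hypothetical sketch of the trial-and-error loop such a suggester would automate; the candidate step functions and the quality metric below are illustrative assumptions, not Data-Knoller's actual API:

    import pandas as pd

    def quality(df: pd.DataFrame) -> float:
        """Toy data-quality metric: fraction of non-null cells."""
        return float(df.notna().to_numpy().mean())

    # Hypothetical candidate preparation steps.
    candidate_steps = {
        "drop_duplicates": lambda df: df.drop_duplicates(),
        "fill_missing":    lambda df: df.ffill(),
        "trim_whitespace": lambda df: df.apply(
            lambda col: col.str.strip() if col.dtype == object else col),
    }

    def suggest(df: pd.DataFrame):
        """Apply each candidate step to a sample and rank by quality gain."""
        scored = {name: quality(step(df)) for name, step in candidate_steps.items()}
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)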

Using Gameful Learning in Scalable E-Learning Environments

Do you know what gameful learning is and how 'gamification' and 'game-based learning' are connected to it? Lots of people mix up both terms and simply call all kinds of gameful learning experiences gamification. Let's find out together how well you can distinguish between these two!

In this interactive talk, differences and similarities between game-based learning and gamification are presented. Furthermore, findings on how to use both in scalable e-learning environments will be discussed. As a research basis, multiple prototypes for games potentially used in such environments were created. One of these prototypes was programmed in a university seminar. Later on, this game prototype was used in a programming MOOC to teach Boolean logic. Closing the talk, we will get the chance to discuss the most important findings of the seminar as well as first insights from the course evaluation.

05.12.2018 - Johannes Henning

Improving the development of data-intensive applications in high-level programming languages

Databases are highly optimized systems, as are runtime environments for programming languages. Yet, when we process large amounts of data stored inside a database with algorithms written in a high-level programming language, we lose many of the optimizations on both sides. The reason for this loss is the information hiding in both directions. We believe that it is possible to utilize more of the optimizations already in place by redesigning the interface over which database and programming language exchange data to expose more of the underlying implementation. In this talk we will motivate this approach and its relation to existing solutions.
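A small self-contained illustration of the cost of that information hiding, using Python's built-in sqlite3 as a stand-in (not one of the systems studied in the talk): the same aggregation either runs inside the database or crosses the interface row by row.

    import sqlite3, time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x REAL)")
    con.executemany("INSERT INTO t VALUES (?)",
                    ((float(i),) for i in range(1_000_000)))

    t0 = time.perf_counter()
    (total_db,) = con.execute("SELECT SUM(x) FROM t").fetchone()  # stays inside the DB
    t1 = time.perf_counter()
    total_py = sum(x for (x,) in con.execute("SELECT x FROM t"))  # one crossing per row
    t2 = time.perf_counter()
    print(f"in-database: {t1 - t0:.3f}s, row-at-a-time: {t2 - t1:.3f}s")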

12.12.2018 - Konrad Felix Krentz / Kiarash Diba

A Denial-of-Sleep-Resilient Medium Access Control Layer for IEEE 802.15.4 Networks

With the emergence of the Internet of Things (IoT), plenty of battery-powered and energy-harvesting devices are being deployed to fulfill sensing and actuation tasks in a variety of application areas, such as smart homes, precision agriculture, smart cities, and industrial automation. In this context, a critical issue is that of denial-of-sleep attacks. Such attacks prevent low-power devices from entering energy-saving sleep modes, thereby draining their charge. At the very least, a successful denial-of-sleep attack causes a long outage of the victim device. Moreover, as for battery-powered devices, successful denial-of-sleep attacks necessitate replacing batteries, which is tedious and sometimes even infeasible if a battery-powered device is deployed at an inaccessible location. While the research community came up with numerous defenses against denial-of-sleep attacks, most present-day IoT protocols include no denial-of-sleep defenses at all, presumably due to a lack of awareness and unsolved integration problems. Besides, although many denial-of-sleep defenses exist, effective defenses against certain kinds of denial-of-sleep attacks are yet to be found. The overall contribution of my PhD research, which I outline in this talk, is a denial-of-sleep-resilient medium access control (MAC) layer for IoT devices that communicate over IEEE 802.15.4 links. Internally, my MAC layer comprises two main components. The first component is a denial-of-sleep-resilient protocol for establishing session keys among adjacent IEEE 802.15.4 nodes. The established session keys serve the dual purpose of implementing (i) basic wireless security and (ii) complementary denial-of-sleep defenses that belong to the second component. The second component is a denial-of-sleep-resilient MAC protocol. Notably, this MAC protocol not only incorporates novel denial-of-sleep defenses, but also state-of-the-art mechanisms for achieving low energy consumption, high throughput, and high delivery ratios. Altogether, my MAC layer resists, or at least greatly mitigates, all denial-of-sleep attacks against it that we are aware of. Furthermore, my MAC layer is self-contained and thus can act as a drop-in replacement for IEEE 802.15.4-compliant, yet insecure, MAC layers.

Toward a comprehensive framework for process mining

Process mining enables the extraction of knowledge about underlying processes from event data recorded in various information systems. In order to apply process mining techniques, the data needs to be in the form of a so-called event log. Although it is a relatively young research field, many techniques have been developed during the past few years and have proven able to extract useful insights across various domains. However, the research community has concentrated mostly on the development of new and more efficient techniques, while other important aspects of the whole knowledge discovery process, such as event log extraction and transformation, have often been neglected. In this talk, we motivate the benefits of developing a comprehensive framework covering the full spectrum of process mining activities. Furthermore, I position my future research plan in order to collect feedback from fellow research school members.
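For concreteness, a minimal sketch of the event-log form that process mining techniques expect: each event carries at least a case identifier, an activity name, and a timestamp (the example events are invented).

    import pandas as pd

    log = pd.DataFrame([
        ("order-1", "create order", "2018-12-01 09:00"),
        ("order-1", "check credit", "2018-12-01 09:05"),
        ("order-1", "ship goods",   "2018-12-02 14:30"),
        ("order-2", "create order", "2018-12-01 10:00"),
        ("order-2", "cancel order", "2018-12-01 11:15"),
    ], columns=["case_id", "activity", "timestamp"])
    log["timestamp"] = pd.to_datetime(log["timestamp"])

    # Grouping events by case yields the traces that discovery algorithms consume.
    traces = log.sort_values("timestamp").groupby("case_id")["activity"].apply(list)
    print(traces)

Extracting and transforming raw system data into exactly this shape is the often-neglected framework step the talk argues for.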

19.12.2018 - Shahryar Khorasani / Christian Adriano

Neuroimaging Biomarker Prediction with Convolutional Neural Networks

Studying neurological diseases requires multimodal large-scale data analysis. Here, I propose a project to develop a platform for integrating high-dimensional phenotypes of brain structures and genetic data. In this project, a convolutional neural network (CNN) model will be used to extract structural biomarkers represented in magnetic resonance imaging (MRI) scans. Additionally, another model will be developed to analyze genetic data and perform Genome-Wide Association Studies (GWAS), incorporating latent representations of high-dimensional structural phenotypes. Subsequently, the two models will be put inside a pipeline, connecting the genome and phenome of each subject. This prototype can open new avenues for epidemiological research and personalized medicine.
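A minimal sketch (an illustrative assumption, not the project's actual architecture) of a 3D CNN in PyTorch that maps an MRI volume to a low-dimensional latent representation usable as a structural phenotype:

    import torch
    import torch.nn as nn

    class MRIEncoder(nn.Module):
        def __init__(self, latent_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),          # global pooling over the volume
            )
            self.fc = nn.Linear(16, latent_dim)

        def forward(self, x):                     # x: (batch, 1, D, H, W)
            h = self.conv(x).flatten(1)           # (batch, 16)
            return self.fc(h)                     # latent structural phenotype

    z = MRIEncoder()(torch.randn(2, 1, 64, 64, 64))
    print(z.shape)  # torch.Size([2, 64])

Such latent vectors are what a downstream GWAS-style model could then associate with genetic variants.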

Investigating the Fault Understanding Assumption

Industry surveys estimated that programmers spend 20% to 40% of their time debugging software, which corresponds to $59 to $312 billion a year. Studies have investigated why debugging is so time-consuming, and various debugging techniques have been developed. However, these techniques rely on the programmers' ability to recognize the software fault among a list of suspicious program statements. This was named the "perfect fault understanding assumption". Proposed solutions mitigated this problem by applying human judgment to inspect suspicious program statements interactively. Although these solutions were effective in locating faults, they cannot predict the accuracy of fault understanding. Predicting accuracy is important for making decisions about matching programmers with tasks and about the need to replicate tasks to collect multiple opinions on a suspicious program statement. In my talk, I will discuss predictors for the accuracy and the replication of fault understanding tasks. I will present some preliminary results and the implications for my future work.

09.01.2019 - Lukas Pirl / Sumit Shekhar

Title tbd

We evaluate trade-offs and best practices for experimental dependability assessments in the emerging area of the Internet of Things (IoT). Our investigations are conducted in different environments (i.e., simulation, lab, and field experiments) and leverage the concept of software fault injection (SFI). Of particular interest is how to reduce discrepancies between the three environments and thereby increase the representativeness of the respective experiments. Additionally, this research will contribute concrete results from assessments of IoT technologies. In Rail2X – a joint project with rail services industries – the wireless network standard IEEE 802.11p for intelligent transportation systems is being investigated. Experiments in the HPI IoT Lab quantified and confirmed 802.11p's design goal of low latency in comparison to traditional standards. The lower latency could also be confirmed in early SFI experiments (i.e., wireless "jamming"). In first field experiments with moving stations, however, we observed a noticeably high standard deviation of the latency (up to four times the mean). In simultaneous efforts, we analyze the 802.11p network stack implemented in the network simulator ns-3 and how to tune its parameters to match the aforementioned observations.

Multi-Dimensional-Image Abstraction

Image abstraction is an inherent part of many image processing and computer vision pipelines and has applications in the fields of visualization, aesthetics, and non-photorealism. As of now, researchers have mainly focused on using the color channel information for the purpose of image abstraction. However, it would be interesting to analyze how we can also use data such as depth, lighting, and shape for image abstraction. In the coming years, advancements in mobile devices (e.g., stereo cameras in recent iOS devices) and commodity hardware (e.g., Microsoft Kinect) with sophisticated sensors will provide us with more such information. In my talk, I will give a brief overview of how we can use multi-dimensional image data for novel or potentially better image abstraction techniques.
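One toy sketch of how an extra channel could drive abstraction (plain NumPy, deliberately unoptimised; not a technique from the talk): a bilateral-style smoothing filter whose range term combines colour and depth, so smoothing stops at depth discontinuities even where colours are similar.

    import numpy as np

    def depth_aware_smooth(img, depth, radius=3, sigma_s=2.0, sigma_c=0.1, sigma_d=0.05):
        h, w, _ = img.shape
        out = np.zeros_like(img)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        for y in range(h):
            for x in range(w):
                y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
                x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
                patch = img[y0:y1, x0:x1]
                dpatch = depth[y0:y1, x0:x1]
                s = spatial[y0 - y + radius:y1 - y + radius,
                            x0 - x + radius:x1 - x + radius]
                # Range weight = colour similarity * depth similarity.
                wgt = s * np.exp(-((patch - img[y, x])**2).sum(-1) / (2 * sigma_c**2)) \
                        * np.exp(-(dpatch - depth[y, x])**2 / (2 * sigma_d**2))
                out[y, x] = (wgt[..., None] * patch).sum((0, 1)) / wgt.sum()
        return out

    img = np.random.rand(32, 32, 3).astype(np.float32)
    depth = np.random.rand(32, 32).astype(np.float32)
    smoothed = depth_aware_smooth(img, depth)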

16.01.2019 - Norman Kluge / He Xu

Optimising Bundled-Data Balsa Circuits

Balsa provides an open-source design flow where asynchronous circuits are created from high-level specifications, but the syntax-driven translation often results in performance overhead. The talk presents an automated design flow tackling core drawbacks of the original Balsa design flow: To improve the performance of the circuits, the fact that bundled-data circuits can be divided into data and control path is exploited. Hence, tailored optimisation techniques can be applied to both paths separately. For control path optimisation, STG-based resynthesis has been used (applying logic minimisation). To optimise the data path a standard combinatorial optimiser is used. However, this removes the matched delays needed for a properly working bundled-data circuit. Therefore, two algorithms to automatically insert proper matched delays are used. Additionally, transistor sizing is applied to improve performance and/or energy consumption. Circuit latency improvements of up to 48%, average/peak power improvements of up to 75%/61% resp., and energy consumption improvements of up to 63% compared to the original Balsa implementation can be shown.

Run-time Verification and Validation for Self-adaptive Systems

Software is now the backbone of human activity. Software systems play important roles in industrial facilities, automobiles, aircraft, and more. In self-adaptive systems, the software has to deal with rapidly changing environmental conditions and with failures of its own system. How to guarantee the functional and non-functional requirements of the system during and after the adaptation process is a crucial problem. Verification and validation (V&V) theory is widely adopted throughout the software development lifecycle. Expanding these techniques into run-time verification and validation for self-adaptive systems is a great challenge. Run-time V&V can ensure that, during or after the adaptation, the system's requirements and its core qualities will not be compromised and that, at the same time, the goals of the adaptation process will be satisfied. Run-time V&V methods and tools are critical for the success of autonomous, autonomic, smart, self-adaptive, and self-managing systems. In this talk, I will introduce the general runtime verification and validation concepts and then the more specific "runtime model checking" technique, which is the topic I want to pursue. After that, I will introduce a special concept, the "envelope", which is mostly used in aviation system design and may also help with self-adaptive system and runtime V&V design.
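A minimal runtime-monitoring sketch of the envelope idea (hypothetical names, not a concrete V&V tool): declared operating bounds are checked against the live state on every observation, and a violation triggers adaptation logic rather than a crash.

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        name: str
        low: float
        high: float

        def check(self, value: float) -> bool:
            return self.low <= value <= self.high

    class Monitor:
        def __init__(self, envelopes, on_violation):
            self.envelopes = {e.name: e for e in envelopes}
            self.on_violation = on_violation  # hook into the adaptation logic

        def observe(self, name: str, value: float):
            env = self.envelopes[name]
            if not env.check(value):
                self.on_violation(env, value)

    monitor = Monitor(
        [Envelope("temperature", low=0.0, high=85.0)],
        on_violation=lambda e, v: print(f"adapt: {e.name}={v} outside [{e.low}, {e.high}]"),
    )
    monitor.observe("temperature", 91.3)  # -> adapt: temperature=91.3 outside [0.0, 85.0]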

23.01.2019 - Fabio Niephaus / Bjarne Pfitzner

Seminar on Polyglot Programming

Polyglot runtime environments such as GraalVM allow developers to build and extend applications using multiple languages, which gives them a much broader choice in terms of frameworks and libraries they can use. However, there are still many open questions on how to actually use these systems to combine libraries and frameworks written in different programming languages. In this talk, we will present our ideas for a seminar on polyglot programming. As part of the seminar, we want students not only to build polyglot apps, but more importantly, we want them to provide feedback we can use to advance the polyglot programming experience.
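For a taste of the programming model, a minimal polyglot sketch; it runs only on GraalVM's Python implementation, where the polyglot module bridges into co-hosted languages such as JavaScript (a sketch of the interop idea, not material from the seminar):

    import polyglot  # available only on GraalVM's Python

    # Evaluate JavaScript from Python and use the result as a Python object.
    js_array = polyglot.eval(language="js", string="[1, 2, 3].map(x => x * 2)")
    print(list(js_array))  # [2, 4, 6]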

Non-Invasive Detection of Hypertension & Federated Learning

High blood pressure, or hypertension, is a common chronic cardiovascular disease that can lead to heart attack or stroke. In 2010, hypertension was the leading cause of death and disability-adjusted life years worldwide. Early detection is crucial in order to reduce the risk of further complications. There are a number of biomarkers measurable with non-invasive devices that can give insight into the blood pressure level without having to actually measure the blood pressure invasively. In my talk, I will explain our data collection protocol, hypotheses, and processing plan for training a machine learning system for hypertension detection. In the second part of my talk, I want to present the federated learning paradigm and my plans to investigate its capabilities and use for medical applications in my PhD. Federated learning is the idea of training a machine learning system in a distributed way, without centralised data storage. The global model is trained by receiving only weight updates from all end-users, which is essential for data privacy and may also be quicker in training than a fully centralised system.
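A minimal sketch of the federated averaging idea on a toy linear model (NumPy; illustrative, not the planned medical system): clients compute weight updates locally, and only those updates ever reach the server.

    import numpy as np

    rng = np.random.default_rng(1)

    def local_update(global_w, X, y, lr=0.1, epochs=5):
        """One client's gradient descent; raw data never leaves the client."""
        w = global_w.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w - global_w  # only the delta is communicated

    # Toy setup: 3 clients, each holding private data for the same target.
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

    w = np.zeros(2)
    for _ in range(20):
        deltas = [local_update(w, X, y) for X, y in clients]
        w += np.mean(deltas, axis=0)  # server aggregates updates only
    print("learned weights:", w.round(2))  # approx. [ 2. -1.]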

30.01.2019 - Robert Kovács / Robert Schmid (preliminary)

Energy-efficient Large-Scale Devices (WIP)

When creating large-scale moving devices, one of the main challenges is to provide sufficient energy for their movement. One way to reduce energy use is to optimize the structure for the motion it is designed for. When the goal is to create repetitive motion, energy can be reused by incorporating springs into the design. Other scenarios may require removing energy from the system, for example when stopping the motion. For these purposes, damping elements are used that convert the energy of the movement into heat. Designing such devices, however, requires substantial engineering skills. Our goal is to provide non-engineers with a software system that helps them design energy-efficient devices. We achieve this with an interactive editor that offers predefined spring-damper components for specific tasks and helps users verify their designs through an integrated physical simulation.
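The physics underneath such an editor can be sketched in a few lines (a toy simulation, not the editor's actual engine): a mass on a spring-damper integrated with semi-implicit Euler, where the spring stores and returns energy while the damper dissipates it as heat.

    k, c, m = 50.0, 0.8, 1.0   # spring stiffness, damping coefficient, mass
    x, v = 0.2, 0.0            # initial displacement [m] and velocity [m/s]
    dt = 0.001

    for step in range(5000):
        f = -k * x - c * v     # restoring spring force plus damping force
        v += (f / m) * dt      # semi-implicit Euler: update velocity first
        x += v * dt
        if step % 1000 == 0:
            energy = 0.5 * m * v**2 + 0.5 * k * x**2
            print(f"t={step * dt:.1f}s  x={x:+.4f}  E={energy:.5f}")

The printed mechanical energy decays over time, which is exactly the damper converting motion into heat; with c = 0 it would stay constant, modelling the energy-reusing spring case.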

Metal FS: File System and Data Stream Operators for FPGAs

As the transistor densities on general-purpose processors increase with new hardware generations, processor manufacturers are approaching physical limits that make it more and more difficult to improve on the performance of traditional processors. Instead, performance improvements in computer systems are expected to come mostly from increasing parallel processing capabilities as well as accelerators that are specialized for certain computation tasks. Consequently, aside from GPGPU computing, interest in FPGAs has been growing because of their energy efficiency and tight integration possibilities into existing system architectures (imagine 'computing wires' in between the CPU, memories, and the network interface). This change in system architecture raises the question of to what degree traditional operating system concepts can be applied to systems with such heterogeneous processing resources. To this end, we developed Metal FS, which combines a data-stream-oriented programming model for FPGA accelerators with a new representation of FPGA processing resources in Linux and a file system implementation for NVMe storage that is attached to the FPGA.
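A toy sketch of the data-stream programming model (plain Python generators; not Metal FS's actual API): operators are composable stages over a stream of chunks, analogous to piping data through files that stand for FPGA operators.

    def source(path):
        """Read a file as a stream of chunks."""
        with open(path, "rb") as f:
            while chunk := f.read(4096):
                yield chunk

    def to_upper(stream):
        """Stands in for a stream operator that an FPGA would implement."""
        for chunk in stream:
            yield chunk.upper()

    def sink(stream, path):
        """Write the transformed stream back to storage."""
        with open(path, "wb") as f:
            for chunk in stream:
                f.write(chunk)

    # Composition mirrors a shell pipeline: source | operator | sink.
    # sink(to_upper(source("in.txt")), "out.txt")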

06.02.2019 - Sven Köhler / Lin Zhou

Programming Models for FPGAs with Coherent Memory Access

Field Programmable Gate Arrays (FPGAs) are more power-efficient hardware accelerators than GPU cards, while at the same time, unlike on-chip accelerators, they are highly versatile due to their reconfigurability. In recent years, integrated systems emerged consisting of FPGAs equipped with non-volatile memory, Ethernet connectivity, and/or simple processors. Yet, FPGAs' mainstream adoption has been hindered by their inherently different programming process and the employed hardware description languages. Furthermore, accelerators remain second-class citizens in terms of integration into existing operating systems, runtime environments, or software architectures in general. We compare existing programming models for FPGAs in terms of their integration into host software. We argue that the Actor model is an apt abstraction for heterogeneous computing and storage environments. We showcase an exemplary integration of an FPGA into an Erlang cluster and cover implementation details in coherently shared memory systems, such as POWER CAPI.
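A minimal actor sketch in plain Python (not the Erlang/CAPI setup from the talk): an actor owns a mailbox and processes messages sequentially, which is the abstraction under which an FPGA can appear as just another actor in the system.

    import queue, threading, time

    class Actor:
        def __init__(self, handler):
            self.mailbox = queue.Queue()
            self.handler = handler
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)  # asynchronous, fire-and-forget

        def _run(self):
            while True:            # one message at a time, no shared state
                self.handler(self.mailbox.get())

    # An "accelerator" actor; its handler could just as well dispatch to hardware.
    acc = Actor(lambda msg: print("processed:", msg))
    acc.send({"op": "checksum", "data": b"\x01\x02\x03"})
    time.sleep(0.1)  # let the daemon thread drain the mailbox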

Training recurrent neural networks for reaching tasks and personalized rehabilitation

In the first part of this presentation, I will talk about my master's thesis project. As a prevalent daily task in primates, goal-directed reaching movement requires effective integration of spatial and contextual information, which involves interactions of multiple brain areas, and the exact role of information processing in these areas remains unclear. In this study, a recurrent neural network (RNN) was implemented and trained to simulate brain regions that are involved in reaching movement planning. The goal of the project was to build an artificial neural network model of these brain regions and to test experiment designs on this model. In the second part, I will talk about my research ideas on developing a personalized rehabilitation support system. Rehabilitation exercises are essential for alleviating patient symptoms; however, home-based exercises lack the supervision needed for a more effective rehabilitation process. A personalized rehabilitation support system would be able to monitor exercises and give feedback in real time and, based on the medical history of the patient as well as information from electronic health records, provide suggestions for rehabilitation exercise plans.
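As a rough illustration of the first part, a minimal sketch (an assumption, not the thesis model) of training an RNN in PyTorch to end a 2D reach on a cued target position:

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=2, hidden_size=64, batch_first=True)
    readout = nn.Linear(64, 2)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

    T = 20  # timesteps per reach
    for step in range(500):
        target = torch.rand(32, 2) * 2 - 1        # random targets in [-1, 1]^2
        inp = target[:, None, :].repeat(1, T, 1)  # target cue held over time
        out, _ = rnn(inp)
        pos = readout(out)                        # predicted hand position per step
        loss = ((pos[:, -1, :] - target) ** 2).mean()  # end the movement on target
        opt.zero_grad(); loss.backward(); opt.step()
    print("final loss:", float(loss))

In such studies, the interest usually lies less in the task performance itself and more in analysing the trained network's hidden dynamics as a stand-in for the brain regions being modelled.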