Summer Semester 2018

25.04.2018 - Ankit Chauhan

Efficiency of local search on real-world networks

Many real-world networks exhibit structural properties such as a power-law degree distribution, a high clustering coefficient, and the small-world property. It has been observed that very simple local search algorithms based on searching the k-exchange neighborhood perform very well on real-world instances. In this work, we attempt to understand this good behavior of local search algorithms on real-world networks.
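
As a minimal illustration of this class of algorithms, the sketch below runs a plain 1-exchange local search for minimum vertex cover on a synthetic power-law-like graph; the problem, the networkx generator, and the parameters are illustrative choices, not taken from the talk.

```python
# A minimal sketch of local search in the 1-exchange neighborhood for
# minimum vertex cover; the instance below is an illustrative assumption.
import networkx as nx

def is_cover(G, cover):
    """Check that every edge has at least one endpoint in `cover`."""
    return all(u in cover or v in cover for u, v in G.edges())

def local_search_vertex_cover(G):
    cover = set(G.nodes())          # trivial starting solution
    improved = True
    while improved:                 # repeat until locally optimal
        improved = False
        for v in list(cover):
            # 1-exchange move: try dropping a single vertex
            if is_cover(G, cover - {v}):
                cover.remove(v)
                improved = True
    return cover

# Power-law-like instance, as often seen in real-world networks
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)
print(len(local_search_vertex_cover(G)))
```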

02.05.2018 - Oliver Schneider

DualPanto: a Haptic Device that Enables Blind Users to Continuously Interact with Virtual Worlds

We have been developing a new haptic device that enables blind users to interact with spatial virtual environments that contain objects moving in real time, as is the case in sports or shooter games. Users interact with DualPanto by operating its "me" handle with one hand and by holding on to its "it" handle with the other hand; these two handles are spatially registered with respect to each other. I previously presented our user study, in which blind participants reported very high enjoyment when using the device to play (6.5/7). We have since developed three new prototypes, two large DualPantos for higher-fidelity rendering and one mobile form factor, and developed a new software stack to support more complex applications.

09.05.2018 - Andreas Fricke

Servicification – trend or paradigm shift in geospatial data processing?

Currently, we are witnessing profound changes in the geospatial domain. Driven by recent ICT developments, such as web services, service-oriented computing, and open-source software, by an explosion of geodata and geospatial applications, and by rapidly growing communities of non-specialist users, the crucial issue is the provision and integration of geospatial intelligence into these rapidly changing, heterogeneous developments.

In this talk I will introduce the concept of Servicification into geospatial data processing. Its core idea is the provision of expertise through a flexible number of web-based software service modules. Selection and linkage of these services to user profiles, application tasks, data resources, or additional software allow for the compilation of flexible, time-sensitive geospatial data handling processes. Encapsulated in a string of discrete services, the approach presented here aims to provide non-specialist users with geospatial expertise required for the effective, professional solution of a defined application problem.
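
As a rough sketch of this idea, the snippet below chains a few web-based processing services into one geospatial workflow; all endpoint URLs, service names, and parameters are hypothetical placeholders, not an existing API.

```python
# A minimal sketch of servicification: composing a geospatial workflow
# from discrete web services. Every endpoint and parameter name here is
# a hypothetical placeholder, not a real service.
import requests

BASE = "https://example.org/geo-services"   # hypothetical service host

def call_service(name, payload):
    """Invoke one encapsulated processing service and return its result."""
    resp = requests.post(f"{BASE}/{name}", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# A chain of discrete services: each step consumes the previous output.
raw      = call_service("fetch-geodata", {"region": "Potsdam"})
cleaned  = call_service("validate", {"data": raw})
enriched = call_service("enrich", {"data": cleaned, "profile": "non-specialist"})
result   = call_service("render-map", {"data": enriched, "format": "geojson"})
```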

Providing users with geospatial intelligence in the form of web-based, modular services is a completely different approach to geospatial data processing. This novel concept puts geospatial intelligence, made available through services encapsulating rule bases and algorithms, at the centre and at the disposal of users regardless of their expertise.

16.05.2018 - Lung-Pan Cheng

Human Actuation

My research is about advancing immersion. Today users see and hear virtual worlds; I want users to also feel virtual worlds. Researchers have sought to better convey not only visual and auditory but also haptic feedback to enhance immersion in virtual worlds. One approach is to use mechanical equipment to provide haptic feedback, e.g. robotic arms, exoskeletons and motion platforms.

However, the size and weight of such mechanical equipment tend to be proportional to the size and weight of its target, i.e., providing human-scale haptic feedback requires human-scale equipment, often constraining it to arcades and lab environments.

The key idea behind my research is to bypass mechanical equipment by instead leveraging human power. Humans are more generic, flexible, and versatile. I thus create software systems that orchestrate humans in doing such mechanical labor; this is what I call human actuation.

23.05.2018 - Mina Rezai

Multi-Agent Generative Adversarial Domain Adaptation for Learning Multiple Clinical Tasks

In this work, we propose a novel adversarial network called Radiomic-GANs for learning a joint distribution of multi-domain clinical data. We introduce multi-agent generative adversarial networks to address multiple clinical tasks end-to-end. Radiomic-GANs comprises four components, two generators and two discriminators, and we consider two adaptation policies for communicating and sharing radiomic data between agents, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Firstly, we fix the discriminators' domain adaptation and only the generators communicate and share intelligence. Secondly, the generators are fixed and intelligence is shared only between the discriminators. One generator is trained on sequential multi-modal magnetic resonance images (MRI) to learn statistical and quantitative features, which results in the conversion of images into mineable data and radiomic features. The second generator combines the radiomic data from the first generator with other patient data to develop models for multiple clinical routine practices. Meanwhile, the discriminators are trained to distinguish whether the generators' output comes from the ground truth or from the generator network. Our framework is generalized in the sense that it can be used for different types of clinical tasks, such as the BraTS-2017 benchmark for the tasks of heterogeneous tumor segmentation and prediction of patient overall survival (OS), and ACDC-2017 for the tasks of cardiac MRI semantic segmentation and cardiac disease diagnosis.
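
A minimal sketch of the alternating training policies described above, with two generators and two discriminators; the toy MLPs and random tensors merely stand in for MRI and radiomic data, and this is not the authors' Radiomic-GANs implementation.

```python
# Sketch of two alternating phases: in phase one the discriminators are
# held fixed (only the generator optimizer steps); in phase two the
# generators are held fixed. Architectures and data are toy placeholders.
import torch
import torch.nn as nn

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

G1, G2 = mlp(16, 32), mlp(32, 32)      # generator chain: G1 feeds G2
D1, D2 = mlp(32, 1), mlp(32, 1)        # one discriminator per agent
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(G1.parameters()) + list(G2.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()), lr=1e-3)

real = torch.randn(8, 32)              # stand-in for real multi-modal data
noise = torch.randn(8, 16)

for step in range(100):
    # Phase 1: discriminators fixed, generators communicate and update.
    fake1 = G1(noise)
    fake2 = G2(fake1)                  # second agent builds on the first
    loss_g = bce(D1(fake1), torch.ones(8, 1)) + bce(D2(fake2), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Phase 2: generators fixed, discriminators update.
    with torch.no_grad():
        fake1 = G1(noise)
        fake2 = G2(fake1)
    loss_d = (bce(D1(real), torch.ones(8, 1)) + bce(D1(fake1), torch.zeros(8, 1))
              + bce(D2(real), torch.ones(8, 1)) + bce(D2(fake2), torch.zeros(8, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```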

30.05.2018 - Erik Scharwächter

Low redundancy estimation of correlation matrices for time series using triangular bounds

The dramatic increase in the availability of large collections of time series requires new approaches for scalable time series analysis. Correlation analysis for all pairs of time series is a fundamental first step in the analysis of such data but is particularly hard for large collections of time series due to its quadratic complexity. State-of-the-art approaches focus on efficiently approximating correlations larger than a hard threshold or on compressing fully computed correlation matrices in hindsight. In contrast, we aim at estimates of the full pairwise correlation structure without computing and storing all pairwise correlations. We introduce the novel problem of low redundancy estimation for correlation matrices to capture the complete correlation structure with as few parameters and correlation computations as possible. We propose a novel estimation algorithm that is very efficient and comes with formal approximation guarantees. Our algorithm avoids the computation of redundant blocks in the correlation matrix to drastically reduce the time and space complexity of estimation. We perform an extensive empirical evaluation of our approach and show that we obtain high-quality estimates with drastically reduced space requirements on a large variety of datasets.
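
The following sketch illustrates the underlying geometric idea under simplified assumptions (it is not the paper's algorithm): since the Pearson correlation of z-normalized series is the cosine of the angle between them, the spherical triangle inequality bounds an unknown correlation from two known ones without computing it.

```python
# Triangular bounds on correlations: rho = cos(theta) for z-normalized
# series, and angles obey the triangle inequality on the unit sphere,
# so rho(x, z) can be bounded from rho(x, y) and rho(y, z) alone.
import numpy as np

def corr_bounds(r_xy, r_yz):
    """Lower/upper bound on rho(x, z) given rho(x, y) and rho(y, z)."""
    a, b = np.arccos(r_xy), np.arccos(r_yz)
    lower = np.cos(min(a + b, np.pi))    # angles sum to at most pi
    upper = np.cos(abs(a - b))
    return lower, upper

rng = np.random.default_rng(0)
x, y = rng.standard_normal(1000), rng.standard_normal(1000)
z = 0.7 * x + 0.3 * rng.standard_normal(1000)
r_xy, r_yz = np.corrcoef(x, y)[0, 1], np.corrcoef(y, z)[0, 1]
lo, hi = corr_bounds(r_xy, r_yz)
print(lo, np.corrcoef(x, z)[0, 1], hi)   # true value lies inside the bounds
```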

06.06.2018 - Thomas Brand

Focus with generic adaptive monitoring

Advances in information technology allow capturing and processing vast and ever-increasing amounts of data. While technical constraints vanish, other aspects gain weight in the decision making about data. Economic aspects in particular eventually become significant when deciding how much and which data to capture and process.

Focusing on valuable data addresses this issue, but the value of data can change over time. Large scales as well as high speeds and frequencies of change suggest automating the finer-grained adaptation of data capturing and processing mechanisms.

This talk provides further insights into our current work on generic adaptive monitoring, which deals with capturing and processing data about software systems. The monitoring configuration is adjusted automatically during system runtime. Adaptations can be triggered by detected changes in data demands, but also by observed changes in the behavior or structure of the monitored system.
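
A minimal sketch of such a runtime-adaptable monitoring configuration; all class names, probes, and adaptation rules below are illustrative assumptions, not part of the presented framework.

```python
# Sketch of a monitoring configuration adapted at runtime: the set of
# active probes changes when data demands or system observations change.
import time

def read_probe(name):
    """Placeholder for an actual measurement of the monitored system."""
    return time.time() % 1.0

class Monitor:
    def __init__(self):
        self.active_probes = {"response_time"}   # initial configuration

    def adapt(self, demands, observations):
        """Re-plan which probes run, triggered by demands or system changes."""
        if demands.get("diagnose_anomaly"):
            self.active_probes |= {"cpu_load", "queue_length"}  # finer grain
        elif observations.get("load") == "low":
            self.active_probes -= {"queue_length"}              # save cost

    def collect(self):
        return {p: read_probe(p) for p in self.active_probes}

m = Monitor()
m.adapt(demands={"diagnose_anomaly": True}, observations={})
print(m.collect())   # now includes the finer-grained probes
```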

Existing work on adaptive monitoring is specific to particular monitoring purposes, such as service level agreement monitoring or anomaly detection. In order to reduce the effort of applying adaptive monitoring for other purposes, the presented approach aims to be more generic.

13.06.2018 - Stefan Ramson

A Push-based Reactive Implementation of Implicit Layer Activation

Context-oriented programming directly addresses context variability by providing dedicated language concepts: layers, as units of modularity, store context-dependent behavior. At runtime, layers can be applied dynamically depending on the current context of the program.

Various activation means for layers have been proposed. Most of them require developers to model context switches explicitly. In contrast, implicit layer activation allows developers to bind the activation status of a layer to a Boolean predicate. The associated layer automatically stays active as long as the given predicate evaluates to true.

Despite its declarative semantics, implicit layer activation is usually implemented in an imperative fashion. In this talk, we present and compare two implementation variants for implicit layer activation in ContextJS: an imperative and a reactive implementation. Furthermore, we discuss their trade-offs regarding code complexity as well as runtime overhead.
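
As a language-agnostic illustration, the sketch below transliterates the push-based variant into Python: the layer's activation status is bound to a predicate and re-evaluated whenever an observed state cell changes. The actual implementations discussed in the talk are in ContextJS.

```python
# Push-based implicit layer activation, sketched in Python: writes to an
# observable cell push an update that re-evaluates the layer's predicate.
class Cell:
    """An observable piece of state; writes push updates to observers."""
    def __init__(self, value):
        self.value, self.observers = value, []
    def set(self, value):
        self.value = value
        for notify in self.observers:   # push-based propagation
            notify()

class Layer:
    def __init__(self, name, predicate, cells):
        self.name, self.predicate, self.active = name, predicate, False
        for c in cells:
            c.observers.append(self.update)
        self.update()
    def update(self):
        self.active = self.predicate()  # stays active while predicate holds

battery = Cell(80)
low_power = Layer("LowPower", lambda: battery.value < 20, [battery])
battery.set(10)
print(low_power.active)   # True: the layer was activated implicitly
```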

20.06.2018 - Julian Risch

Online Comment Analysis

Online comment sections have enabled millions of users to share and discuss their opinions at any time. However, these comment sections are under attack: insults, threats, and hate speech posted by individuals or interest groups poison the discussions. A manual moderation process to filter toxic comments is very expensive and many online news providers have shut down their comment sections. Our research aims to foster engaging, respectful, and informative online discussions. Therefore, we analyze news articles, comments, and users and leverage deep neural networks and topic models to capture the semantics of texts. In this talk, we present our approaches for the prediction of moderation effort and the detection of toxic comments.
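
To make the task concrete, the sketch below trains a simple bag-of-words baseline on made-up example comments; it only illustrates the setup of toxic comment detection and is not the deep neural network or topic model approach described in the talk.

```python
# A minimal baseline sketch for toxic-comment detection. The example
# comments and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = ["Great article, thanks!",
            "You are an idiot.",
            "Interesting point about the election.",
            "Go away, nobody wants you here."]
labels = [0, 1, 0, 1]                     # 1 = toxic, 0 = acceptable

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)
print(clf.predict(["What a thoughtful analysis."]))
```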

27.06.2018 - Ralf Rothenberger

Why is SAT easy in practice?

Propositional satisfiability (SAT) is one of the most fundamental problems in computer science.
The worst-case hardness of SAT lies at the core of computational complexity theory, for example in the form of NP-completeness or the (Strong) Exponential Time Hypothesis.
In contrast to its assumed hardness, real-world instances with millions of Boolean variables can often be solved efficiently.
In this talk I present some approaches that try to explain this discrepancy, as well as my own contributions to this field of research.
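
To make the problem concrete, here is a toy DPLL-style solver; industrial SAT solvers add clause learning, branching heuristics, and restarts on top of this backbone.

```python
# A minimal DPLL-style SAT solver sketch. Clauses are tuples of nonzero
# ints, where -v denotes the negation of variable v.
def dpll(clauses, assignment=()):
    # Drop clauses already satisfied by the current partial assignment.
    clauses = [c for c in clauses if not any(l in assignment for l in c)]
    if not clauses:
        return assignment                     # all clauses satisfied
    # Remove falsified literals from the remaining clauses.
    clauses = [tuple(l for l in c if -l not in assignment) for c in clauses]
    if any(len(c) == 0 for c in clauses):
        return None                           # conflict: empty clause
    var = abs(clauses[0][0])                  # branch on the first variable
    for lit in (var, -var):
        result = dpll(clauses, assignment + (lit,))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([(1, 2), (-1, 3), (-2, -3)]))      # e.g. (1, 3, -2)
```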

04.07.2018 - Martin Krejca

<cancelled talk: Shekar Arvind Kumar>

Drift Theory

A major focus in the theoretical analysis of randomized search heuristics is on determining the expected time until an optimal solution is found. Over the last decade, the essential tool used to derive such results was drift theory. It is easy to use, yet yields very strong results. In this talk, we present the most basic drift theorem, sketch its proof, and give an example of an application.
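
For reference, a common textbook statement of this most basic drift theorem, the additive drift theorem:

```latex
% Additive drift theorem (He and Yao, 2001), in its common textbook form.
\textbf{Theorem (additive drift).}
Let $(X_t)_{t \ge 0}$ be a stochastic process over $\mathbb{R}_{\ge 0}$,
and let $T = \min\{t \mid X_t = 0\}$ be its hitting time of $0$.
If there is a $\delta > 0$ such that, for all $t$ and all $x > 0$,
\[
  \mathbb{E}[X_t - X_{t+1} \mid X_t = x] \ge \delta ,
\]
then
\[
  \mathbb{E}[T \mid X_0] \le \frac{X_0}{\delta}.
\]
```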

11.07.2018 - Vladeta Stojanovic

Interactive Visualization for Facility Management 4.0

The ability to capture and visualize Building Information Modelling (BIM) data is becoming increasingly important in the field of Facility Management (FM). Facility managers can make use of as-is BIM data in order to create and enhance planning and operational documentation for a building throughout its lifetime. Generating as-is BIM data from point cloud data poses a particular challenge. The visualization-based analytical output of combined as-designed and as-is BIM, point cloud, and sensor data allows for the representation of a Digital Twin (DT) for lifecycle management topics and for stakeholder engagement within the emerging Real Estate 4.0 realm. The use of service-based interactive visualization extends the outputs of the visualization scenario to thin clients. This could potentially be beneficial for increasing stakeholder engagement by allowing complex visualization results to be transmitted to clients in real time on mobile and portable computing devices and accessed through a simple web-based portal. This research investigates new methods and techniques for developing semantically rich 3D point cloud representations for use as basis data for DT representation, along with combined as-is BIM and sensor analytics visualization.

18.07.2018 - Pedro Lopes

Interactive Systems based on Electrical Muscle Stimulation

How can interactive devices connect with users in the most immediate and intimate way? This question has driven interactive computing for decades. In recent years, wearables brought computing into constant physical contact with the user's skin. By moving closer to users, devices started to perceive more of the user, allowing them to act more personally. The main question that drives my research is: what is the next logical step? How can computing devices become even more personal?

My approach is to create devices that intentionally borrow parts of the user’s body for input and output, rather than adding more technology to the body. I call this concept “devices that overlap with the user’s body”. I’ll demonstrate my work in which I explored one specific flavor of such devices, i.e., devices that borrow the user’s muscles.

In this line of work, I created computing devices that interact with the user by reading and controlling muscle activity. My devices are based on medical-grade signal generators and electrodes attached to the user's skin that send electrical impulses to the user's muscles; these impulses then cause the user's muscles to contract. While electrical muscle stimulation (EMS) devices have been used to regenerate lost motor functions in rehabilitation medicine since the 1960s, during my PhD I explored EMS as a means of creating interactive systems. My devices form two main categories: (1) devices that allow users eyes-free access to information by means of their proprioceptive sense, such as a variable, a tool, or a plot; (2) devices that increase immersion in virtual reality by simulating large forces, such as wind, physical impact, or walls and heavy objects.