Hasso-Plattner-Institut | 25 Jahre HPI

18.10.2017 - no meeting

25.10.2017 - Retreat Feedback + Introduction to Semester

In the first meeting, we will hear some feedback on the fall retreat first.

In addition, starting this semester we want to have more intensive feedback rounds for every speaker. In this meeting, we will discuss your ideas on how you can benefit most from these feedback rounds and introduce our ideas for a new feedback system.

01.11.2017 - Andreas Grapentin

MemSpaces: Evaluating the Tuple Space Paradigm in the Context of Memory-Centric Architectures

Memory-centric architectures are appearing on the horizon as potential candidates for future computing architectures. I will discuss the properties of these systems and outline why traditional operating system APIs are insufficient to manage their resources. I will then introduce MemSpaces, an implementation of the tuple space API, to address these issues. To demonstrate the feasibility of this approach, I will outline the integration of MemSpaces into the operating system and describe how the API could be used by application developers to efficiently allocate and share resources.
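
As background, the tuple space paradigm the talk builds on goes back to Linda's `out`/`in`/`rd` coordination primitives. The following is a minimal illustrative sketch of such a space, not the MemSpaces API itself (all names here are invented for illustration):

```python
import threading

class TupleSpace:
    """A minimal Linda-style tuple space: processes coordinate by
    depositing and withdrawing tuples matched by pattern."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, tup):
        # 'out' in Linda terms: deposit a tuple and wake up waiters.
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern, tup):
        # None in the pattern acts as a wildcard field.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def take(self, pattern):
        # 'in' in Linda terms: block until a matching tuple exists,
        # then remove and return it.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def read(self, pattern):
        # 'rd' in Linda terms: like take, but leaves the tuple in place.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        return tup
                self._cond.wait()

space = TupleSpace()
space.put(("buffer", 42))
print(space.read(("buffer", None)))  # ('buffer', 42)
print(space.take(("buffer", None)))  # removes the tuple
```

The appeal for memory-centric systems is that producers and consumers are decoupled: no process needs to know which other process owns a region of memory, only which tuples it is interested in.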

08.11.2017 - Sona Ghahremani

On The Importance and Impact of Failure Profiles on the Performance of Self-Healing Approaches

The evaluation of self-adaptive software, and in particular self-healing software, is complicated by the adaptation feedback loop and the interaction of the system with its environment. The community therefore often employs simulators to mimic the environment's behavior, including fault injection, when evaluating self-healing software. However, a simulator only ever approximates the behavior of the real system; for example, the failures injected in the simulator may differ considerably from the failures that occur in the real system. In my talk, I will report on our study of the impact failure profiles can have on the performance of self-healing applications and on the cost of ignoring them.

15.11.2017 - Sebastian Marwecki

Overloading Multiple Virtual Reality Users Into The Same Physical Space

Although virtual reality hardware is now widely available, the uptake of real walking is hindered by the fact that it often requires impractically large amounts of physical space. To address this, we present SpaceEvader, a novel system that allows overloading multiple users immersed in different VR experiences into the same physical space. SpaceEvader accomplishes this by containing each user in a subset of the physical space at all times, which we call tiles; app-invoked maneuvers then shuffle tiles and users across the entire physical space. This allows apps to move their users to where their narrative requires them to be while hiding from users that they are confined to a tile. We show how this enables SpaceEvader to pack four users into 16 m². In our study, participants experienced more freedom to roam than in a control condition with static, pre-allocated space.

22.11.2017 - Anja Perlich (external)

Redesigning the Treatment Documentation in Psychotherapy with Tele-Board MED

In psychotherapy, treatment outcomes can be decisively improved by enhancing the relationship between patient and therapist. We developed the interactive documentation system Tele-Board MED (TBM) as an adjunct to talk-based mental health interventions. The system offers a whiteboard-inspired graphical user interface that allows doctor and patient to take notes jointly during the treatment session. I will present the results of TBM evaluation studies comparing different prototypes and testing scenarios, ranging from early simulations to attempts at real-life deployment in clinical routines. While patient feedback was thoroughly positive, therapists seem to need more incentives to change their documentation behavior towards a digital and cooperative approach. We therefore propose a feature for automatic medical report generation, intended to support therapists by speeding up administrative documentation tasks.

29.11.2017 - Zhe Zuo (external)

From unstructured to structured: Context-based Named Entity Mining from Text

With recent advances in information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task; capturing all of this information manually is infeasible for humans. In this talk, we will present the entire pipeline of a named entity mining system and focus on introducing our relation extraction approach, which extracts a particularly difficult group of relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution to the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process with a holistic pattern identification method that classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relationship from as few as five labeled seed pairs.
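
The core bootstrapping idea behind seed-based relation extraction can be sketched in a few lines. This is a deliberately simplified toy (invented corpus, naive string patterns), not the system described in the talk:

```python
import re

# Toy corpus: each sentence mentions two companies.
sentences = [
    "Alphabet is the parent company of Google.",
    "Berkshire is the parent company of GEICO.",
    "Alphabet competes with Apple.",
]
companies = {"Alphabet", "Google", "Berkshire", "GEICO", "Apple"}
seeds = {("Alphabet", "Google")}  # user-provided ownership_of seed pairs

def pairs_with_context(sentence):
    """Yield (e1, e2, middle_text) for adjacent company mentions."""
    found = [(m.start(), c) for c in companies
             for m in re.finditer(re.escape(c), sentence)]
    found.sort()
    for (i1, c1), (i2, c2) in zip(found, found[1:]):
        middle = sentence[i1 + len(c1):i2].strip()
        yield c1, c2, middle

# 1. Induce textual patterns from sentences containing a seed pair.
patterns = {mid for s in sentences
            for e1, e2, mid in pairs_with_context(s)
            if (e1, e2) in seeds}

# 2. Apply the patterns to discover new pairs. Because each pattern is
#    directional (e1 before e2), matching it also fixes the direction
#    of the asymmetric relation.
extracted = {(e1, e2) for s in sentences
             for e1, e2, mid in pairs_with_context(s)
             if mid in patterns}

print(extracted)  # includes the newly found pair ('Berkshire', 'GEICO')
```

Real systems iterate these two steps and, as the abstract notes, must classify the induced patterns to keep unreliable ones from polluting later rounds.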

06.12.2017 - Thijs Roumen

Literature Supporting Grafter: Remixing 3D Printed Machines

In the previous months, I have been sharing the idea of reusing mechanisms and remixing 3D printed machines. A question that frequently came up after the talk was whether I could explain more of the related work backing this research agenda. Since there is a lot of related content to discuss, I decided to use this weekly talk slot to present some of the underlying basics and the literature background of my project. This should give you a chance to better grasp the context of Grafter, to understand how it is novel, and to see where it makes its key contributions. The discussed material will also help reveal where I think I should be heading in terms of a research agenda.

13.12.2017 - Toni Grütze (external)

Adding Value to Text with User-Generated Content

In recent years, the amount of documents on the Web, as well as in closed systems for private or business contexts, has grown significantly. The research field of text mining comprises various application areas that share the goal of extracting high-quality information from textual data. Harvesting entity knowledge from these large text collections is one of the major challenges.

In this talk, we will present CohEEL, a method for linking textual mentions of entities in documents with their representations in user-generated knowledge bases such as Wikipedia and YAGO. Solutions to this entity linking problem typically aim at balancing the linking correctness rate (precision) and the linking coverage rate (recall). While entity links in texts can be used to improve various information retrieval tasks, such as text summarization, document classification, or topic-based clustering, linking precision is the decisive factor.

Our algorithm CohEEL is an efficient linking method that uses a random walk strategy to combine a precision-oriented and a recall-oriented classifier in such a way that high precision is maintained while recall is raised to the highest possible level. CohEEL, along with further algorithms that solve other text mining problems based on user-generated content, is discussed in my PhD thesis.
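
The general idea of combining the two classifiers can be illustrated with a toy sketch. All data and the coherence scoring below are invented stand-ins; in particular, the hard-coded relatedness table replaces the actual random walk over the knowledge base:

```python
# Mention -> candidate entity, as proposed by each classifier.
precise = {"Berlin": "Berlin_(city)"}          # high-precision links (trusted)
recallful = {"Berlin": "Berlin_(city)",        # high-recall links (noisy)
             "Spree": "Spree_(river)",
             "Mitte": "Mitte_(band)"}

# Relatedness of each candidate to the trusted links, e.g. derived from
# a knowledge base; hard-coded here for illustration.
related_to_trusted = {"Spree_(river)": 0.9, "Mitte_(band)": 0.1}

links = dict(precise)  # always keep the precision-oriented links
for mention, entity in recallful.items():
    if mention in links:
        continue
    # Accept a recall-oriented candidate only if it is sufficiently
    # coherent with the trusted links (stand-in for the random walk).
    if related_to_trusted.get(entity, 0.0) >= 0.5:
        links[mention] = entity

print(links)  # keeps 'Berlin', adds 'Spree', rejects 'Mitte'
```

The trusted links thus anchor the decision, which is how precision is preserved while recall grows beyond what the precision-oriented classifier alone would deliver.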

20.12.2017 - Sebastian Kruse (external)

Scalable Data Profiling

In our technology-driven world, datasets are becoming more and more complex, which considerably hampers their management. As a result, selecting, understanding, and cleaning datasets have become the most time-consuming activities in data-related projects. Data profiling, the automatic extraction of structural metadata from datasets, can effectively support data management; however, most data profiling problems are computationally very hard.

To address this, we propose efficient and scalable algorithms for core data profiling problems that incorporate novel algorithmic strategies and are designed for distributed execution. This dual strategy significantly outperforms state-of-the-art methods. To demonstrate the value of data profiling for data management, we furthermore introduce Metacrate, a system to organize, analyze, and visualize structural metadata.
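
One such core profiling problem is discovering minimal unique column combinations (candidate keys). The naive lattice traversal below, on an invented toy relation, shows why the problem is hard and scalable algorithms matter: the search space grows exponentially with the number of columns. It is for illustration only, not one of the algorithms from the thesis:

```python
from itertools import combinations

# Toy relation: rows as tuples, columns by name.
columns = ["first", "last", "city"]
rows = [
    ("Ada", "Lovelace", "London"),
    ("Alan", "Turing", "London"),
    ("Ada", "Turing", "Manchester"),
]

def is_unique(cols):
    """A column combination is unique if no two rows agree on it."""
    idx = [columns.index(c) for c in cols]
    projected = [tuple(r[i] for i in idx) for r in rows]
    return len(set(projected)) == len(projected)

# Traverse the lattice smallest-first, keeping only minimal uniques
# (combinations with no unique proper subset).
minimal_uniques = []
for size in range(1, len(columns) + 1):
    for combo in combinations(columns, size):
        if any(set(u) <= set(combo) for u in minimal_uniques):
            continue  # a subset is already unique -> combo not minimal
        if is_unique(combo):
            minimal_uniques.append(combo)

print(minimal_uniques)
# [('first', 'last'), ('first', 'city'), ('last', 'city')]
```

Checking every node of this lattice is infeasible on wide tables, which is what motivates the pruning strategies and distributed execution the talk describes.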

27.12.2017 - Christmas Break

03.01.2018 - Christmas Break

10.01.2018 - Alexandra Ion

Metamaterial Textures

We present metamaterial textures—3D printed surface geometries that can perform a controlled transition between two or more textures. Metamaterial textures are integrated into 3D printed objects and allow designing how the object interacts with the environment and the user’s tactile sense. Inspired by foldable paper sheets (“origami”) and surface wrinkling, our 3D printed metamaterial textures consist of a grid of cells that fold when compressed by an external global force. Unlike origami, however, metamaterial textures offer full control over the transformation, such as intermediate states and the sequence of actuation. This allows for integrating multiple textures and makes them useful, e.g., for exploring parameters in the rapid prototyping of textures. Metamaterial textures are also robust enough to allow the resulting objects to be grasped, pushed, or stood on. This allows us to make objects such as a shoe sole that transforms from flat to treaded, a textured door handle that provides tactile feedback to visually impaired users, and a configurable bicycle grip. We present an editor that assists users in creating metamaterial textures interactively by arranging cells, applying forces, and previewing their deformation.

17.01.2018 - Christoph Matthies

Data-Driven Software Development Processes (in Teaching and Industry)

While software development teams work, they create development artifacts such as commits, tickets, or test runs. The primary purpose of this data is to communicate to other team members what changed and why.
However, it also represents a treasure trove of information on how teams work and collaborate. Insights into teams can be gained especially by combining and connecting different sources of development artifacts.
This approach is useful in a teaching context, where knowledge of the inner workings of teams is necessary to give feedback and help student teams improve. With large numbers of students, teams, and created data, manual oversight does not scale well.
Furthermore, the collected development data can be used to evaluate new teaching concepts. The perceptions of students towards a certain software development practice can be compared to the implementation of said practice, as evidenced in data.
While we have shown that analyses of development data can be successfully employed in teaching contexts, we want to explore how these ideas translate to professional software development.

24.01.2018 - Fabio Niephaus

Polyglot Programming: Exploring Opportunities of Language Implementation Frameworks

When building software, developers usually have to choose a programming language depending on their use case. This decision, however, determines the set of software frameworks, libraries, tools, and deployment environments that can be used in a project. Common cross-language integration techniques, such as foreign function interfaces or inter-process communication, allow developers to reuse software written in other languages, but are often complicated and inconvenient to use. Language implementation frameworks such as RPython and Truffle, on the other hand, allow languages to be composed at the runtime level. In this talk, we will explain how this kind of language composition allows for better software reuse and how we plan to explore new opportunities for polyglot programming and tooling.
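
Even the smallest foreign function interface example hints at the friction the abstract mentions. Calling a single C function from Python via `ctypes` requires locating the library and declaring argument and return types by hand (this sketch assumes a Unix-like system with a C math library available):

```python
import ctypes
import ctypes.util

# Reuse the C math library's sqrt from Python through an FFI.
# The library must be located and the signature declared manually;
# getting argtypes/restype wrong silently corrupts values.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # 1.4142135623730951
```

Runtime-level composition as in Truffle aims to remove exactly this boilerplate: objects and functions from one language become directly usable from another, without manual marshalling at each boundary.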

31.01.2018 - Research School Event Planning

07.02.2018 - Toni Mattis

Semantic Code Models for Concept-aware Programming Environments

To design and implement a program, programmers choose analogies and metaphors to explain and understand programmatic concepts. In source code, these manifest themselves as a particular choice of names. The apparent imprecision that comes with names drawn from natural language is greatly outweighed by their role in program comprehension: reading such names is a starting point for understanding the role of each software module in the domain and for building testable hypotheses about what the code at hand is doing.

On the one hand, understanding a program by looking for names that suggest a particular analogy can be a time-consuming process. On the other hand, a lack of awareness of which concepts are present can lead to modularity issues, such as redundancy and architectural drift, if concepts are misaligned with the current module decomposition. If programming environments were aware of the meaning of names, tools could not only help programmers understand a code base and its evolution in terms of high-level concepts, but also offer a wide range of concept-aware assistance while editing and debugging.

The challenge we are addressing is to automatically detect and relate high-level semantic concepts or metaphors in code. Recent approaches have made use of topic models from the field of text mining; however, topic models are designed to mine concepts from collections of natural-language documents, not from programs. In this talk, we discuss graph-based concept models designed to explain static, dynamic, and evolutionary aspects of programs and to address common information needs in programming environments.
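
A first step shared by most such approaches is splitting identifiers into natural-language terms and relating terms that occur together. The toy sketch below (invented code base, naive co-occurrence counting rather than the graph-based models of the talk) shows the basic intuition:

```python
import re
from collections import defaultdict

# Toy code base: function name -> identifiers used in its body.
functions = {
    "render_shopping_cart": ["cart_items", "render_item", "total_price"],
    "checkout_cart": ["cart_items", "payment_gateway", "total_price"],
    "send_invoice": ["payment_gateway", "invoice_template"],
}

def terms(identifier):
    """Split snake_case / camelCase identifiers into lowercase terms."""
    parts = re.split(r"_|(?<=[a-z])(?=[A-Z])", identifier)
    return [p.lower() for p in parts if p]

# Build a term co-occurrence graph: terms appearing in the same
# function are taken as hints at a shared concept.
cooccur = defaultdict(int)
for name, idents in functions.items():
    vocab = {t for ident in [name, *idents] for t in terms(ident)}
    for a in vocab:
        for b in vocab:
            if a < b:
                cooccur[(a, b)] += 1

# Term pairs recurring across functions suggest a high-level concept,
# e.g. ('gateway', 'payment') or ('cart', 'items').
concepts = [pair for pair, n in cooccur.items() if n >= 2]
```

Unlike natural-language documents, programs also offer structure (call graphs, execution traces, version history), which is what the graph-based models in the talk exploit beyond plain co-occurrence.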