07.10.2015 - No weekly meeting

14.10.2015 - Opportunity to practice talks for the offsite

21.10.2015 - Anne Kayem

Cases of Adversarial Manipulations in Computational Resource Constrained Networks

Computational resource constrained or lossy networks (LNs) are increasingly being used as a low-cost alternative for supporting community-based service-oriented architectures (SOAs) in economically challenged and/or technologically underserved areas. LNs are typically characterized by embedded devices with limited power, memory, and processing resources. Application areas range from building automation to wireless healthcare and energy metering on smart grids and microgrids. Services provided over LNs are to a large degree supported by user participation, so service providers need to give users firm guarantees of network reliability and dependability. Failure to do so results in a loss of trust on the part of the users and a refusal to participate in or support the network. Lack of user cooperation can ultimately result in a breakdown of the network, which is undesirable. This talk considers security and privacy from the perspective of LN adversarial manipulations that provoke disruptions to negatively affect LN stability. Examples of why and how such adversarial situations are provoked, and some preliminary solutions, are discussed.

28.10.2015 - Dr. Noga

Guest lecture by Dr. Noga from SAP. Please attend this special meeting!

04.11.2015 - Tim Felgentreff

Lively Groups: Shared Behavior in a World of Objects without Classes or Prototypes

Development environments which aim to provide short feedback loops to developers must strike a balance between immediacy and the ability to abstract and reuse behavioral modules. The Lively Kernel, a self-supporting, browser-based environment for explorative development, supports standard object-oriented programming with classes or prototypes, but also a more immediate, object-centric approach for modifying and programming visible objects directly. This allows users to quickly create graphical prototypes with concrete objects. However, when developing with the object-centric approach, sharing behavior between similar objects becomes cumbersome. Developers must choose either to abstract behavior into classes, to scatter code across collaborating objects, or to manually copy code between multiple objects. That is, they must choose between less concrete development, reduced maintainability, and code duplication. In this paper, we propose an extension to the object-centric development tools of Lively to work on multiple concrete objects. In our approach, developers may dynamically group live objects that share behavior using tags. They can then modify and program such groups as if they were single objects. Our approach scales the Lively Kernel’s explorative development approach from one to many objects, while preserving the maintainability of abstractions and the immediacy of concrete objects.
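
Although the Lively tools themselves are graphical, the tagging idea can be sketched in a few lines. The following TypeScript is a minimal illustration of the concept only; all names (LiveObject, ObjectGroup, addMember) are hypothetical and not the actual Lively Kernel API.

    // Minimal sketch of tag-based object grouping; names are hypothetical,
    // not the real Lively Kernel API.
    interface LiveObject {
      tags: Set<string>;
      [member: string]: unknown;
    }

    class ObjectGroup {
      constructor(private world: LiveObject[], private tag: string) {}

      // All live objects currently carrying this group's tag.
      get members(): LiveObject[] {
        return this.world.filter(o => o.tags.has(this.tag));
      }

      // Programming the group "as if it were a single object":
      // a member added to the group is added to every tagged object.
      addMember(name: string, value: unknown): void {
        for (const obj of this.members) {
          obj[name] = value;
        }
      }
    }

    // Usage: tag two objects, then give the whole group shared behavior at once.
    const world: LiveObject[] = [
      { tags: new Set(["button"]) },
      { tags: new Set(["button"]) },
    ];
    const buttons = new ObjectGroup(world, "button");
    buttons.addMember("onClick", () => console.log("clicked"));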

11.11.2015 - Sören Discher

Preserving Cultural Heritage using Terrestrial Laser Scanning

In-situ and remote sensing technology is commonly used to preserve and document cultural heritage. For example, terrestrial laser scanning allows for the creation of highly detailed digital models of arbitrarily sized historical sites (ranging from small caves up to large temple complexes or complete landscapes) in a time- and cost-efficient manner. Processing and visualizing the resulting 3D point clouds poses high requirements on the underlying hardware and software systems due to the massive amount of data that has to be handled (e.g., several billion points or terabytes of data for a complete scan). The focus of this talk is on the scalable visualization, editing, and collaborative exploration of arbitrarily large terrestrial laser scans on a broad range of hardware devices, including mobile platforms without dedicated high-performance graphics processing units (GPUs). We will look into the advantages and disadvantages of different so-called out-of-core rendering techniques and discuss how these techniques can be combined with existing client-server-based rendering approaches as well as interaction and editing techniques.
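
To make "out-of-core" concrete: one common family of techniques keeps a coarse level-of-detail hierarchy (e.g., an octree) in memory and only renders node subsets up to a fixed point budget, so the working set stays bounded regardless of scan size. The sketch below illustrates that general idea under assumed data structures; it is not the specific technique from the talk.

    // Budget-driven traversal over an LOD octree; node layout and the
    // screenSpaceError heuristic are illustrative assumptions.
    interface OctreeNode {
      pointCount: number;          // points stored at this refinement level
      distanceToCamera: number;    // precomputed here for brevity
      diameter: number;            // world-space extent of the node
      children: OctreeNode[];
      loaded: boolean;             // is the node resident in memory?
    }

    // Projected size on screen, used to decide whether refining is worthwhile.
    function screenSpaceError(node: OctreeNode): number {
      return node.diameter / Math.max(node.distanceToCamera, 1e-6);
    }

    // Collect nodes until the point budget is exhausted; everything else
    // stays on disk, which is what keeps the working set bounded.
    function selectVisibleNodes(root: OctreeNode, pointBudget: number): OctreeNode[] {
      const selected: OctreeNode[] = [];
      const queue: OctreeNode[] = [root];
      let used = 0;
      while (queue.length > 0) {
        // Refine the node with the largest screen-space error first.
        queue.sort((a, b) => screenSpaceError(b) - screenSpaceError(a));
        const node = queue.shift()!;
        // (A real system would schedule an asynchronous fetch here.)
        if (!node.loaded || used + node.pointCount > pointBudget) continue;
        used += node.pointCount;
        selected.push(node);
        queue.push(...node.children);
      }
      return selected;
    }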

18.11.2015 - Martin Lorenz

Deciding on mapping strategies for databases

Class inheritance is a powerful concept in object-oriented modeling. Persisting objects from an inheritance hierarchy into a relational database is not straightforward, because the concept of inheritance is not supported by relational data stores. An accepted solution is object-relational mapping strategies. The problem is that each strategy varies in terms of its non-functional characteristics, e.g., usability, maintainability, and efficiency. Software developers base the decision which mapping strategy to choose on experience and best practices. Most of these best practices can be found in programming guides for object-relational mapping frameworks or in books and publications by experienced software architects. However, these best practices are based on experiences with row-oriented database systems. With the advent of new database technologies, such as column stores, these best practices become obsolete. In my Ph.D. thesis, I am investigating the influence of a database's data layout (row- vs. column-oriented) on the non-functional characteristics of object-relational mapping strategies.
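
For readers unfamiliar with the strategies in question, the sketch below lists the three textbook object-relational inheritance mappings for a small hierarchy. The hierarchy and the table layouts in the comments are illustrative; the thesis's own terminology and examples may differ.

    // The three textbook object-relational mapping strategies for inheritance.
    abstract class Account { id!: number; owner!: string; }
    class SavingsAccount extends Account { interestRate!: number; }
    class CheckingAccount extends Account { overdraftLimit!: number; }

    // 1. Single table ("table per hierarchy"): one wide table with NULLable
    //    subclass columns and a discriminator. Fast reads, sparse rows.
    //    account(id, owner, type, interest_rate, overdraft_limit)

    // 2. Joined ("table per class"): one table per class, subclass rows
    //    joined to the base table by id. Normalized, but queries need JOINs.
    //    account(id, owner)
    //    savings_account(id, interest_rate)
    //    checking_account(id, overdraft_limit)

    // 3. Table per concrete class: base columns duplicated in each subclass
    //    table. No JOINs, but polymorphic queries need a UNION.
    //    savings_account(id, owner, interest_rate)
    //    checking_account(id, owner, overdraft_limit)

Which of these performs best plausibly shifts with the storage layout: a column store, for instance, pays little for the many NULL columns of a single-table mapping, which is part of why row-store best practices may not carry over.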

25.11.2015 - Aragats Amirkhanyan

Real-Time Clustering of Massive Geodata for Online Maps to Improve Visual Analysis

Nowadays, social media services produce vast amounts of data, and more and more often these data contain location information, which opens up a wide range of possibilities for analysis. We can therefore be interested not only in the content, but also in the location where the content was produced. To analyze massive geospatial data well, we need to provide visual analysis of such data on online maps, and to improve visual analysis, we need to find the best approaches to geo clustering: real-time clustering of massive geodata that accurately places each cluster's center in the densest area. We therefore propose a new approach to clustering geodata for online maps such as Google Maps, OpenStreetMap, and others. The proposed approach is a server-side online algorithm that clusters massive geodata in two steps and does not need the entire dataset to start clustering.
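
The abstract does not spell out the two steps, but the general flavor of server-side online geo clustering can be illustrated with a zoom-dependent grid scheme: bucket points into cells sized for the current zoom level, then place each cluster's center at the mean of its members. The sketch below is an assumption-laden illustration, not the algorithm proposed in the talk.

    // Illustrative grid-based geo clustering of the kind used for online maps.
    interface Point { lat: number; lng: number; }
    interface Cluster { center: Point; size: number; }

    function cellKey(p: Point, cellSizeDeg: number): string {
      const row = Math.floor(p.lat / cellSizeDeg);
      const col = Math.floor(p.lng / cellSizeDeg);
      return `${row}:${col}`;
    }

    // Online: points can be fed in one by one; the whole dataset is never needed.
    class GridClusterer {
      private cells = new Map<string, { sumLat: number; sumLng: number; n: number }>();

      constructor(private cellSizeDeg: number) {}

      add(p: Point): void {
        const key = cellKey(p, this.cellSizeDeg);
        const c = this.cells.get(key) ?? { sumLat: 0, sumLng: 0, n: 0 };
        c.sumLat += p.lat; c.sumLng += p.lng; c.n += 1;
        this.cells.set(key, c);
      }

      // Each cluster center is the running mean of its member coordinates,
      // which pulls it toward the denser part of the cell.
      clusters(): Cluster[] {
        return [...this.cells.values()].map(c => ({
          center: { lat: c.sumLat / c.n, lng: c.sumLng / c.n },
          size: c.n,
        }));
      }
    }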

02.12.2015 - Thomas Brand

Runtime data-driven software evolution in enterprise software ecosystems

Often, new ideas for optimization and additional functional requirements emerge shortly after a software system is put into service. To what extent the software system and its underlying software products can maintain or extend their relevance depends significantly on how they evolve and are adapted to changing conditions and feedback.

Focusing here on the manufacturers of software products, the following tasks are thus crucial:

Understand customers’ change requests and requirements.

Generalize customer requests, prioritize them, and integrate them into existing software products. Additionally, foster the maintainability and adaptability of the products.

Offer and provide the resulting changes to the customers.

With our research project, we want to investigate how feedback from running software systems can help to adapt and enhance the underlying products incrementally over time. The whole feedback cycle shall be considered: how to gain and utilize feedback from running software systems, and how to roll out the resulting software product changes to the customers' systems?

In this talk, the research goal will be further explicated, for example by linking it to two related works. An idea for a first experimental prototype, which is intended to contribute to a proof of concept, will also be discussed.

09.12.2015 - Ekaterina Bazhenova

Optimization of decision making in business processes

Organizations use business process and decision management techniques to run their businesses more efficiently. A firm’s value chain is directly affected by how well it designs and coordinates its decision making. Following the separation-of-concerns principle, recognized both by industry and academia, the decision logic should be separated from the process logic. While the business process modeling and execution perspective is well understood, there are still no mature approaches for modeling and executing decisions complementary to business processes. To close this gap, in our previous work we proposed a semi-automatic approach to derive decision models from process models and their execution logs. The applicability of the approach was demonstrated on a business process from the banking domain.

Within the context of the work mentioned above, I have also just finished a visiting internship at the Leuven Institute for Research on Information Systems at KU Leuven (Belgium), and I would like to share what I learned with my HPI Research School colleagues. The main outcomes of the internship are:

Validation of the approach to decision model mining from process event logs through interviews with a local bank;

Obtaining a data set on the credit application process of a bank, which could be used for identifying further possibilities for business process optimization;

Preparation of a collaborative journal paper on the Challenges Relating to the Integration of Decision and Business Process Management.
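
As a toy illustration of what "decision model mining from event logs" can mean, the sketch below recovers a single threshold rule from logged credit decisions. The log schema and rule form are invented for illustration; real decision mining induces full decision tables or trees from far richer logs.

    // Toy decision mining: learn one threshold rule from observed cases.
    interface LogEntry { amount: number; decision: "approve" | "review"; }

    function mineThreshold(log: LogEntry[]): number {
      const approved = log.filter(e => e.decision === "approve").map(e => e.amount);
      const reviewed = log.filter(e => e.decision === "review").map(e => e.amount);
      // Place the split between the highest approved and lowest reviewed amount.
      return (Math.max(...approved) + Math.min(...reviewed)) / 2;
    }

    const log: LogEntry[] = [
      { amount: 1000, decision: "approve" },
      { amount: 2000, decision: "approve" },
      { amount: 9000, decision: "review" },
    ];
    console.log(mineThreshold(log)); // 5500, i.e., "review if amount > 5500"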

16.12.2015 - Prof. Naumann on Writing and Submitting a Research Paper – Technical Aspects

23.12.2015 - Christmas Break

30.12.2015 - Christmas Break

06.01.2016 - Anja Jentzsch

Exploring Linked Data Graph Structures

The true value of Linked Data becomes apparent when datasets are analyzed and understood already at the basic level of data types, constraints, value patterns, etc. Such data profiling is especially challenging for RDF, the underlying data model of the Web of Data. In particular, graph analysis can be used to gain more insight into the data, induce schemas, or build indices. Graph patterns are of interest to many communities, e.g., for protein structures, network traffic, crime detection, and modeling object-oriented data.

We extended our RDF profiling framework ProLod++ to allow exploring the graph structures of Linked Datasets by visualizing the connected components and the frequent graph patterns mined from them. Given the underlying graph of a Linked Dataset, containing all entities as nodes and the object properties between them as links, we detect graph patterns for its directed as well as its undirected version. Larger graph components are mined for subgraph patterns using three different approaches: gSpan, GRAMI, and a new approach that mines for predefined patterns. Our goal is to define a set of graph patterns that can be considered the core of most Linked Datasets. We identify graph patterns such as paths, cycles, stars, siamese stars, antennas, caterpillars, and lobsters.

Patterns are grouped when isomorphic, first based on their underlying structure and then based on class membership (color). This allows for finding not only common, recurring patterns but also patterns that are dominant for certain class combinations. For example, astronomers in DBpedia are often found in star patterns, surrounded by the astronomical objects they discovered. Based on the profiled graph features and patterns, an overall model for Linked Datasets can be derived.
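
Some of the simpler named patterns can be recognized from a connected component's degree sequence alone, as the sketch below illustrates. The more involved patterns (siamese stars, antennas, caterpillars, lobsters) require structural checks beyond this illustration, and the actual ProLod++ implementation may differ.

    // Classify an undirected, connected component by its degree sequence.
    type Graph = Map<number, Set<number>>; // adjacency: node -> neighbors

    function classify(g: Graph): "path" | "cycle" | "star" | "other" {
      const degrees = [...g.values()].map(nb => nb.size);
      const n = degrees.length;
      const edgeCount = degrees.reduce((a, d) => a + d, 0) / 2;
      const leaves = degrees.filter(d => d === 1).length;

      // Path: a tree (n-1 edges) with exactly two endpoints, all others degree 2.
      if (edgeCount === n - 1 && leaves === 2 && degrees.every(d => d <= 2)) {
        return "path";
      }
      // Cycle: every node in the connected component has degree exactly 2.
      if (degrees.every(d => d === 2)) return "cycle";
      // Star: one hub connected to n-1 leaves.
      if (n >= 3 && leaves === n - 1 && degrees.includes(n - 1)) return "star";
      return "other";
    }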

13.01.2016 - Andreas Grapentin

Previous work and plans for the Research School

In my talk, I will present an excerpt of my previous work as background for my research interests. I will talk briefly about my master's thesis on embedded systems instrumentation, and I will outline some of my earlier projects in the areas of software profiling, infrastructure-as-a-service providers, and operating systems.

Afterwards, I will talk about my current work, which involves looking at future hardware architectures based on non-volatile memory in rack-scale computing. An example of these upcoming architectures is the 'The Machine' project by HP. These shared-something machines with limited cache coherence pose new challenges to operating system resource management and memory access APIs. We are investigating these challenges using an emulator created by HP.

20.01.2016 - Stefanie Müller

Interactive Fabrication

In anticipation of 3D printing hardware reaching millions of users, I am investigating how to allow future users to interact with the new hardware. I present a series of interactive software+hardware systems that I created to answer this question. They are characterized by two main properties. First, they produce physical output quickly, allowing users not only to see their results, but also to touch them and test their mechanical properties as they work towards a solution. Second, the systems allow users to interact directly with the workpiece, i.e., rather than interacting through a PC or tablet computer, users manipulate the workpiece located inside the 3D printer by pointing at it, causing the machine to then modify the workpiece accordingly. I put my research into perspective by drawing analogies to the evolution of interactive computing from batch processing, to time-sharing systems that support turn-taking, to direct manipulation.

27.01.2016 - Daniel Richter

Trading Something In for Increased Availability: Implementation Methods for Flexible, Imprecise, and Resilient Computation

One way to increase the availability - the readiness for and continuity of correct service - of an IT system is to trade something in for it. "Something" could be properties such as consistency guarantees, service quality, preciseness, or even correctness. I want to give an overview of exemplary implementation methods, such as flexible, imprecise, and resilient computing, so that an application can deal with faults, failures, and resource shortages without relying on highly available infrastructure and fault-free components.
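
One concrete flavor of imprecise computing is an anytime algorithm: it always holds a usable intermediate result and simply stops when its time budget expires, trading precision for guaranteed responsiveness. A minimal sketch (illustrative only, not an example from the talk):

    // Approximate pi via the Leibniz series until the time budget is spent.
    function approximatePi(budgetMs: number): { value: number; terms: number } {
      const deadline = Date.now() + budgetMs;
      let sum = 0;
      let k = 0;
      while (Date.now() < deadline) {
        sum += (k % 2 === 0 ? 1 : -1) / (2 * k + 1);
        k++;
      }
      // The result is imprecise but always available when the deadline hits.
      return { value: 4 * sum, terms: k };
    }

    // A 1 ms budget yields a rough answer, 100 ms a much better one; the
    // caller chooses the trade-off between precision and availability.
    console.log(approximatePi(1), approximatePi(100));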

03.02.2016 - Mirela Alistar

Fault-tolerant biochips

Biochips are electromechanical devices that use arrays of electrodes to manipulate small amounts of liquid, so-called droplets. They can dispense such droplets; transport, mix, split, and dilute them; and execute sequences of such actions, known as a "bioprotocol". Biochips have huge future potential as a means of replacing existing laboratory equipment by integrating all the necessary functions for biochemical analysis. In this talk, I will focus on how to make biochips work reliably. During the execution of a bioprotocol, operations can experience transient faults, leading to the failure of the application. I will present several recovery strategies, suitable for offline and online use, including time redundancy and space redundancy.
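
The two recovery strategies can be contrasted in a few lines of code. The sketch below is a schematic illustration with an invented API; it is not a model of actual biochip control software.

    // A droplet operation that may fail transiently; null models the fault.
    type Operation<T> = () => T | null;

    // Time redundancy: re-execute the same operation on the same electrodes,
    // paying extra time but no extra chip area.
    function recoverInTime<T>(op: Operation<T>, maxRetries: number): T | null {
      for (let i = 0; i <= maxRetries; i++) {
        const result = op();
        if (result !== null) return result;
      }
      return null;
    }

    // Space redundancy: run redundant copies on spare electrodes in parallel,
    // paying extra area and reagents but no extra time. (Parallelism is
    // modeled here simply by taking the first successful copy.)
    function recoverInSpace<T>(copies: Operation<T>[]): T | null {
      for (const copy of copies) {
        const result = copy();
        if (result !== null) return result;
      }
      return null;
    }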