Hasso-Plattner-Institut: 25 Years of HPI

Winter Semester 2019/2020

16.10.2019 - No meeting (Fall Retreat)

23.10.2019 - Organizational meeting

 

  • Afterthoughts on the retreat
  • Talks / program for Cape Town
  • Service-Oriented Computing – joint projects
  • HPI 20-year anniversary: Who is responsible for what

30.10.2019 - No Meeting (HPI Anniversary)

06.11.2019 - Prof. Dr. Patrick Baudisch

How to apply and interview for faculty jobs

13.11.2019 - Dorina Bano

Discovering Handovers in Business Processes Based on Event Logs

With the large amount of data generated in healthcare, it becomes increasingly important to clearly capture the communication between different departments and their respective systems. Several mining techniques have been proposed to discover and represent the complexity of this communication from a static perspective. However, there is a need to describe the order in which the different departments within one hospital interact with each other. We investigate process mining techniques to derive behavioural models that represent the departments' interactions in pursuit of their goal.
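As a purely illustrative sketch (not necessarily the talk's actual method), handovers between departments can be read off an event log by counting consecutive department changes within each case trace. The field names and sample events below are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical hospital event log: (case_id, department, timestamp)
events = [
    ("case1", "Admission", 1), ("case1", "Radiology", 2), ("case1", "Surgery", 3),
    ("case2", "Admission", 1), ("case2", "Surgery", 2),
]

def discover_handovers(events):
    """Count direct handovers between departments across all case traces."""
    # Group events into per-case traces, ordered by timestamp.
    traces = defaultdict(list)
    for case_id, dept, ts in sorted(events, key=lambda e: (e[0], e[2])):
        traces[case_id].append(dept)
    # Every change of department between consecutive events is a handover.
    handovers = Counter()
    for trace in traces.values():
        for src, dst in zip(trace, trace[1:]):
            if src != dst:
                handovers[(src, dst)] += 1
    return handovers

handovers = discover_handovers(events)
# e.g., handovers[("Admission", "Radiology")] == 1
```

A directly-follows count like this is only the static starting point; behavioural process models would be discovered on top of such relations.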

20.11.2019 - Tobias Pape

An Introduction to Writing Research School Reports

27.11.2019 - Christoph Matthies

Counteracting Agile Retrospective Problems with Retrospective Activities

Retrospective meetings are a fundamental part of Agile software development methods. Effective retrospectives can positively impact teamwork, productivity, and work satisfaction. We focus on problems that commonly occur during these meetings. To address them, we suggest retrospective activities that are already used in practice. These activities provide structure and guide the team through the meeting. We created a mapping between a selection of these activities and the problems they attempt to solve. We evaluated the created mapping through multiple case studies with software development teams in educational and professional contexts. Our results verify the existence of a relationship between specific retrospective activities and connected retrospective problems. Furthermore, using observational studies we could verify parts of the created problem-activity mapping.

04.12.2019 - Andreas Fricke

Mapping the status quo of a Middle Eastern urban agglomeration and turning it into something visible - but how?

Almost all data has a direct or indirect spatial reference. Simplified access to increasingly ubiquitous data poses new challenges to societies. Domains blend, and the role of the professional user becomes less and less important. The handling of geodata in existing complex geoinformation systems is usually not designed to amalgamate and process heterogeneous, multidimensional geodata from different sources. This characterises a major challenge: effectively and universally adding value to data in order to turn it into usable and visible information. In this talk, I will present a service-oriented approach to a novel interpretation of geographic information systems (GIS), coupled with an insight into the complex work involved in an international and interdisciplinary R&D project focusing on the infrastructural and statistical capture and utilisation of an entire region in the Middle East.

11.12.2019 - Christian Adriano

Building Causal Inference Models to Predict Fault Understanding and Intervene in Code Inspection Tasks

In this talk, I will present theory and results on two methodological issues that I face in my research: (1) how to build models for counterfactual thinking (with respect to the accuracy of fault understanding), and (2) how to evaluate modeling assumptions (through sensitivity analysis, choices of Bayesian priors, and instances of class imbalance).

Structured abstract

[Context] Software programmers can spend up to 40 percent of their time searching for the causes of software failures. To alleviate that, programmers use debugging tools and techniques to reduce the search space for software faults (bugs).

[Research Problem] However, these techniques assume "perfect fault understanding", meaning that the programmer will always correctly identify the bug when presented with the source code lines containing it. Since this is not true, programmers waste time on invalid bug fixes and, hence, lose confidence in debugging tools.

[Approach] I study how to mitigate this problem by predicting when programmers are accurate in their identification of the software fault.

[Data] I performed two large field experiments with real bugs from popular open source software projects and more than one thousand programmers recruited on the Amazon Mechanical Turk platform. I used the data from one of these experiments to validate assumptions via a third controlled experiment with programmers recruited among industry practitioners and graduate students.

[Results] My results are promising in the sense that I could uncover a set of prediction factors based on task attributes and programmers' profiles. These factors were combined into distinct models that can predict, with different levels of precision and recall, when a fault is correctly identified.

[Application of Results] I used two of these prediction models to develop a voting and sequencing algorithm that parallelizes the search for the software fault. Simulations showed that the algorithm can coordinate the work of hundreds of programmers to identify and explain different types of real software bugs. The outcomes in terms of speed and cost were competitive when compared with a single professional programmer.
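The voting step might be sketched as follows; weighting each vote by the model's predicted accuracy is one plausible design, and all names and numbers here are hypothetical rather than the author's actual algorithm:

```python
from collections import defaultdict

def vote_fault_location(answers, predicted_accuracy):
    """Aggregate programmers' suspected fault lines into one candidate.
    answers: list of (programmer, suspected_line) pairs.
    predicted_accuracy: per-programmer probability (from a prediction
    model) that their fault identification is correct; used as a vote
    weight. Unknown programmers default to an uninformative 0.5."""
    scores = defaultdict(float)
    for programmer, line in answers:
        scores[line] += predicted_accuracy.get(programmer, 0.5)
    # The line with the highest accumulated weight wins the vote.
    return max(scores, key=scores.get)

answers = [("p1", 42), ("p2", 42), ("p3", 17)]
accuracy = {"p1": 0.9, "p2": 0.6, "p3": 0.7}
winner = vote_fault_location(answers, accuracy)  # line 42 (1.5 > 0.7)
```

A sequencing component would then decide which code fragments to show to which programmers next, which this sketch omits.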

18.12.2019 - Lukas Wenzel

DFACCTO - Improving Accessibility of FPGA Accelerators using Dataflow Abstractions

Their flexibility to define efficient, application-specific microarchitectures and their superior power efficiency make FPGA accelerators a promising platform. Nevertheless, application developers target them only reluctantly. The lack of convenient and portable runtime environments is a major factor inhibiting their widespread adoption. The DataFlow ACCelerator TOolkit (DFACCTO) addresses these shortcomings by providing a flexible and lightweight environment for building FPGA designs according to the dataflow model. DFACCTO provides a set of reusable components compatible with the industry-standard AMBA interconnect architecture, as well as a simplified development workflow. In this talk, I will introduce DFACCTO by means of an example design workflow building an accelerator for database operations.
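In the dataflow model, components consume and produce streams and are composed by connecting outputs to inputs. As a loose software analogy only (DFACCTO itself generates hardware designs, not Python), generator-based stages can illustrate the idea for a database-style filter-and-aggregate accelerator:

```python
def source(data):
    """Produce a stream of records (stand-in for a memory reader)."""
    yield from data

def filter_stage(stream, predicate):
    """Pass through records satisfying a predicate (a WHERE clause)."""
    for record in stream:
        if predicate(record):
            yield record

def aggregate_stage(stream):
    """Reduce the stream to a single sum (an aggregation sink)."""
    return sum(stream)

# Stages composed like hardware components connected by FIFO streams:
# records flow through the pipeline one at a time, no global buffer.
result = aggregate_stage(filter_stage(source(range(10)), lambda x: x % 2 == 0))
# 0 + 2 + 4 + 6 + 8 == 20
```

In hardware, each stage would be a reusable component with AMBA-style stream interfaces; the composition, not the per-stage logic, is what the toolkit makes convenient.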

08.01.2020 - Muhammad Abdullah

infill: Automatic Reinforcement of Closed Box Structures

Laser-cut closed-box structures have been shown to allow creating sturdy objects, such as chairs or tables. However, while earlier work suggested that such structures withstand hundreds of kilograms of load, we found many objects created in this vein to break under a fraction of that load. The underlying problem is that certain types of closed-box structures tend to disassemble under load, as the underlying finger joints are subjected to tension. We present infill, a software tool that identifies such points of potential disassembly and prevents disassembly by extending selected plates into adjacent volumes and anchoring them there. infill requires very few computational resources. Unlike finite element analysis, infill can thus run continuously in the background, reinforcing the user's model automatically during editing.

15.01.2020 - Vladeta Stojanovic

Practical Applications of Deep Learning for Semantic Enrichment of Indoor Point Clouds

Point clouds capture the physical state of the built environment and are thus useful for assessing its current state and for further analysis and decision making. However, point clouds are ambiguous by nature and as such require the introduction of semantics into the specific point sets that compose the complete point cloud. Deep-learning approaches can be utilized for this task, specifically Convolutional Neural Networks (CNNs) trained to classify either 3D points or images of point clusters.

This presentation describes a practical approach to classifying point cloud representations of indoor environments, particularly ones featuring the clutter, noise, and missing segments common in point clouds captured using commodity mobile devices with photogrammetric or depth-sensing capabilities. Two deep-learning approaches for classifying point clouds are described and evaluated: multiview and object-based classification. A case study is presented and discussed that deals with classifying furniture items using two different CNN architectures (Inception V3 and PointNet++).

The classification accuracy and practicality of both approaches are reported and discussed, along with an experimental approach for generating Industry Foundation Classes (IFC) data from semantically enriched point clouds. The experimental results of the case study indicate that multiview-based classification offers the better alternative in terms of computational requirements and practicality, and that semantically enriched point clusters can be reconstructed as Building Information Model (BIM) data at Level of Detail (LOD) 300.
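Multiview classification renders a point cluster from several viewpoints, classifies each rendered image, and fuses the per-view predictions into one label. A minimal sketch of such score fusion by averaging, with hypothetical class labels and softmax scores:

```python
def fuse_multiview_scores(view_scores):
    """Average per-class softmax scores across views and pick the
    best class. view_scores: one dict (label -> score) per rendered
    viewpoint; values here are hypothetical, not real CNN outputs."""
    totals = {}
    for scores in view_scores:
        for label, score in scores.items():
            totals[label] = totals.get(label, 0.0) + score
    n = len(view_scores)
    averaged = {label: total / n for label, total in totals.items()}
    return max(averaged, key=averaged.get), averaged

# Three hypothetical views of the same furniture cluster.
views = [
    {"chair": 0.7, "table": 0.3},
    {"chair": 0.4, "table": 0.6},  # an ambiguous viewpoint
    {"chair": 0.8, "table": 0.2},
]
label, averaged = fuse_multiview_scores(views)  # "chair" wins overall
```

Averaging is only one fusion strategy; max-pooling over views or majority voting on per-view argmax labels are common alternatives.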

22.01.2020 - Sumit Shekhar

Inverse Rendering in Your Hands

Inverse rendering refers to the problem of estimating intrinsic scene characteristics, such as illumination and reflectance, given a single image of the scene. The inverse problem setting in this case is highly ill-constrained, with multiple unknowns per single known pixel value. When provided with an image and corresponding depth data, intrinsic scene decomposition can be enhanced using depth-based priors. Nowadays, the acquisition of image and depth data is readily possible using high-end smartphones with depth sensors. To this end, we make use of the iPhone's front-facing camera and TrueDepth sensor to acquire RGB and depth information, respectively. Our idea is to use this information together with iOS augmented-reality APIs for real-time intrinsic decomposition on mobile devices. The intrinsic layers obtained via the above methodology can be used for various applications such as recoloring, relighting, and appearance editing. In my talk, I will present the current state of the project and discuss the immediate next steps.
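Intrinsic decomposition commonly factors each pixel value into reflectance and shading, I = R * S, which also shows why the problem is ill-constrained: two unknowns per observed value. A toy per-pixel grayscale sketch under that standard model (hypothetical values, not the project's actual pipeline):

```python
def decompose_pixel(intensity, shading):
    """Recover reflectance R from I = R * S, given a shading estimate
    (e.g., obtained from depth-based priors). Toy grayscale version."""
    return intensity / shading if shading > 0 else 0.0

def relight(reflectance, new_shading):
    """Re-render the pixel under new shading: I' = R * S'."""
    return reflectance * new_shading

observed, shading = 0.6, 0.8     # hypothetical pixel and shading estimate
reflectance = decompose_pixel(observed, shading)  # 0.75
relit = relight(reflectance, 0.4)                 # darker illumination
```

Applications like relighting and recoloring follow directly: edit one factor, keep the other, and recompose.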

29.01.2020 - Sven Köhler

Power Consumption Models for Energy-Aware Workload Execution on Heterogeneous Hardware

The recent restructuring of the electricity grid (i.e., the smart grid) introduces challenges for today's large-scale computing systems. To operate reliably and efficiently, computing systems must not only adhere to technical limits (e.g., thermal constraints) but must also reduce operating costs, for example by increasing their energy efficiency. Efforts to improve energy efficiency, however, are often hampered by inflexible software components that hardly adapt to the underlying hardware characteristics. We propose a set of utilities for the uniform assessment of energy characteristics and integrate them into the libpapi library. Based on our measurements, we can derive energy models for different workloads that allow us to design adaptive software components that dynamically adapt to heterogeneous processing units (e.g., accelerators) at runtime to improve the energy efficiency of computing systems.
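A measurement-derived energy model can be as simple as a linear combination of runtime and an operation count, E ≈ p_static * t + e_op * ops. A toy least-squares fit of such a two-parameter model via the normal equations (synthetic data; this is not the libpapi interface, and real models would use more counters):

```python
def fit_energy_model(samples):
    """Fit E ≈ p_static * t + e_op * ops by least squares, solving
    the 2x2 normal equations directly. samples: (t, ops, energy)."""
    stt = sto = soo = ste = soe = 0.0
    for t, ops, energy in samples:
        stt += t * t
        sto += t * ops
        soo += ops * ops
        ste += t * energy
        soe += ops * energy
    det = stt * soo - sto * sto  # assumed nonzero (features not collinear)
    p_static = (ste * soo - soe * sto) / det
    e_op = (soe * stt - ste * sto) / det
    return p_static, e_op

# Synthetic measurements generated with 2.0 W static power, 0.5 J/op.
samples = [(1.0, 4.0, 4.0), (2.0, 2.0, 5.0), (3.0, 6.0, 9.0)]
p_static, e_op = fit_energy_model(samples)  # recovers ~2.0 and ~0.5
```

Once fitted per processing unit, such models let a runtime pick the unit with the lowest predicted energy for a given workload.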

05.02.2020 - Johannes Wolf

Road Marking Analysis in Mobile Mapping 3D Point Clouds

Up-to-date geospatial information is of great interest in many use cases, such as the creation of 3D city and landscape models for urban planning by local authorities, companies, or individuals. Cadastral data can be combined with point clouds to create interactive visualization tools for analysis and exploration. Autonomously driving cars use a regularly updated base map in which current information about the environment is integrated. The detection and classification of road markings is essential for creating a precise base map that captures the number of lanes, turn permissions, and pedestrian crossing locations.

Point clouds are widely used for storing geospatial information. They have proven to be a valuable data source for analysis, as they are easy to handle and capture the environment in great detail. Technically, they are stored as an unordered collection of measurement points, each featuring three-dimensional coordinates and additional attributes, e.g., intensity values when measured via LIDAR. The unordered and unstructured points of a point cloud usually require semantic classification before further use. Once road markings and their respective types have been identified, they can be put to use in many applications.
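A common first step in extracting road markings, which the talk does not necessarily follow, exploits the fact that markings are retroreflective and therefore return high LIDAR intensity while lying on the road surface. A minimal filtering sketch with assumed thresholds and hypothetical sample points:

```python
def extract_marking_candidates(points, ground_z, z_tol=0.05, intensity_min=0.7):
    """Filter a point cloud for road-marking candidates: points close
    to the road surface whose normalized LIDAR intensity is high.
    points: iterable of (x, y, z, intensity); both thresholds are
    illustrative assumptions, not values from the talk."""
    return [
        (x, y, z, i)
        for x, y, z, i in points
        if abs(z - ground_z) <= z_tol and i >= intensity_min
    ]

cloud = [
    (1.0, 2.0, 0.01, 0.9),  # bright point on the road: kept
    (1.1, 2.0, 0.02, 0.3),  # dark asphalt: rejected by intensity
    (5.0, 3.0, 1.50, 0.8),  # bright point above the road: rejected by height
]
candidates = extract_marking_candidates(cloud, ground_z=0.0)
```

Real pipelines follow this up with clustering of the candidate points and shape classification to recover marking types such as lane lines or crossings.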

This talk presents the process and a number of challenges for extracting information about road markings from large 3D mobile mapping point clouds.