Hasso-Plattner-Institut
Prof. Dr. h.c. Hasso Plattner
  
 

Master Thesis Topic Areas

Please find our list of available master's thesis topics below. If you are interested in any of these topics, please feel free to contact the responsible research assistant for further information.

Comparison of Index Selection Algorithms

The index selection problem is to find the set of indexes that minimizes the overall workload cost of a given set of SQL queries on a database. Existing work proposes different index selection algorithms (for example, [1, 2]). We offer to compare these algorithms with regard to their assumptions, their performance (the time needed to calculate solutions), and, of course, the quality of their solutions. A simple baseline is sketched after the references.

[1] S. Chaudhuri and V. R. Narasayya. An Efficient Cost-Driven Index Selection Tool for Microsoft SQL Server. In VLDB 1997, pp. 146-155.

[2] D. Dash, N. Polyzotis, and A. Ailamaki. CoPhy: A Scalable, Portable, and Interactive Index Advisor for Large Workloads. PVLDB, vol. 4, no. 6, pp. 362-372, 2011.
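
As a starting point for such a comparison, here is a minimal sketch of a greedy, cost-driven selection loop in the spirit of [1]. The cost model workload_cost and the query and candidate objects (with cost and size attributes) are hypothetical stand-ins for a real what-if optimizer interface:

    # Greedy baseline: repeatedly add the candidate index with the largest
    # workload-cost reduction until the storage budget is exhausted.

    def workload_cost(queries, indexes):
        # Hypothetical what-if cost model: sum of per-query costs under `indexes`.
        return sum(q.cost(indexes) for q in queries)

    def greedy_index_selection(queries, candidates, budget):
        chosen, used = set(), 0
        current = workload_cost(queries, chosen)
        while True:
            best, best_cost = None, current
            for cand in candidates - chosen:
                if used + cand.size > budget:
                    continue  # candidate does not fit into the remaining budget
                cost = workload_cost(queries, chosen | {cand})
                if cost < best_cost:
                    best, best_cost = cand, cost
            if best is None:
                return chosen  # no candidate improves the workload cost anymore
            chosen.add(best)
            used += best.size
            current = best_cost

Even this baseline exposes the comparison criteria above: its runtime is dominated by the number of what-if cost calls, while its solution quality depends on the greedy order.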

Contact: Stefan Klauck

Workload-driven Replication

In replication schemes, replica nodes process queries on snapshots of the master. By analyzing the workload, we can identify query access patterns and replicate data depending on its access frequency. We offer to investigate how to optimize individual replication nodes in scale-out scenarios (a small planning sketch follows the list),

  • e.g., to lower the overall memory footprint by partial replication,
  • or to increase the analytical throughput by specialized indexes.
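
A minimal sketch of such a workload-driven plan, assuming a hypothetical query log of accessed partitions and a per-replica memory budget (all names are illustrative):

    from collections import Counter

    def plan_partial_replica(query_log, partition_sizes, memory_budget):
        # Greedy partial replication: replicate the most frequently accessed
        # partitions first, until the replica's memory budget is spent.
        freq = Counter()
        for accessed in query_log:  # each entry: set of partitions a query touched
            freq.update(accessed)
        plan, used = [], 0
        # Hot partitions first; ties broken in favor of smaller partitions.
        ranked = sorted(freq, key=lambda p: (-freq[p], partition_sizes[p]))
        for part in ranked:
            if used + partition_sizes[part] <= memory_budget:
                plan.append(part)
                used += partition_sizes[part]
        return plan

    # Example: three analytical queries, mostly touching the 2017 partition.
    log = [{"orders_2017"}, {"orders_2017", "orders_2016"}, {"orders_2017"}]
    sizes = {"orders_2017": 40, "orders_2016": 60, "orders_2015": 60}
    print(plan_partial_replica(log, sizes, memory_budget=50))  # ['orders_2017']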

Contact: Stefan Klauck

Combining Machine Learning and External Knowledge for Analyzing Gene Expression Profiles

Gene expression is the cellular process by which information from specific sections of the DNA, i.e., genes, is used to synthesize functional products such as proteins, which catalyze the metabolic processes in our cells. Analyzing gene expression profiles is of particular interest for researchers, as these profiles provide insights into cell processes and gene functions and can thus improve disease diagnosis and treatment.

Nowadays, gene expression profiles covering several thousand genes of several hundred tissue samples can be generated. These data sets require computational tools applying machine learning techniques for a meaningful analysis. On the other hand, many publicly available databases contain curated biomedical information, e.g., on protein-disease interactions. Combining both sources of information is the focus of the topic areas below.

Contact: Cindy Perscheid

Topic Area: Association Rule Mining on Gene Expression Data (Contact: Cindy Perscheid)

Association Rule Mining, or Itemset Mining, is applied to gene expression data to identify correlations between the expression levels of different genes. A derived rule has the form GeneA (up) -> GeneB (up), meaning that if GeneA is upregulated, then typically GeneB is upregulated as well. This information helps researchers to derive unknown gene functions and to better understand regulatory processes in cells for different disease types. The number of rules resulting from such analyses is typically large, so the rules are filtered with standard interestingness measures, e.g., support and confidence (a minimal mining sketch follows the list below). These measures are driven by statistical analyses of the data sets. However, the interestingness of a gene or of its resulting rule should also take into account its biological relevance, which can only be derived from external sources. Possible topics for a master's thesis are:

  • Application of association rule mining to gene expression data, considering computational feasibility, e.g., the high data dimensionality combined with comparably low numbers of transactions
  • Definition of a subjective interestingness measure for association rules with a special focus on their biological relevance, e.g., by incorporating external knowledge
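
To make the statistical baseline concrete, here is a minimal sketch of one-to-one rule mining with support and confidence, assuming expression levels have already been discretized into items such as "GeneA_up" (toy data; real miners such as Apriori handle larger itemsets and data volumes):

    from itertools import permutations

    def mine_rules(samples, min_support=0.4, min_confidence=0.6):
        # samples: list of sets of items such as "GeneA_up" or "GeneB_down".
        n = len(samples)
        items = set().union(*samples)
        support = {i: sum(i in s for s in samples) / n for i in items}
        rules = []
        for a, b in permutations(items, 2):
            pair = sum(a in s and b in s for s in samples) / n
            if pair >= min_support and pair / support[a] >= min_confidence:
                rules.append((a, b, pair, pair / support[a]))
        return rules

    # Toy profiles: GeneA up usually coincides with GeneB up.
    profiles = [{"GeneA_up", "GeneB_up"}, {"GeneA_up", "GeneB_up"},
                {"GeneA_up", "GeneB_down"}, {"GeneB_up"}]
    for a, b, sup, conf in mine_rules(profiles):
        print(f"{a} -> {b}  support={sup:.2f} confidence={conf:.2f}")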

Topic Area: Biclustering on Gene Expression Data (Contact: Cindy Perscheid)

Currently, clustering and classification are applied to gene expression data to identify specific expression profiles, e.g., for a particular cancer type. Traditional clustering assigns each gene to a single cluster. A gene, however, participates on average in 10 processes of a cell. Consequently, traditional clustering cannot appropriately reflect the correlations of genes, as it shows only one specific view on the data. Biclustering makes it possible to identify overlapping clusters and subspaces in gene expression data, which reflects the underlying cell processes much better. The number of resulting biclusters must be filtered with interestingness measures (a common statistical score is sketched after the list). These measures are driven by statistical analyses of the data sets. However, the interestingness of a bicluster should also take into account its biological relevance, which can only be derived from external sources. Possible topics for a master's thesis are:

  • Visualization of biclustering results
  • Definition of a subjective ranking measure for biclusters with a special focus on their biological relevance, e.g., corresponding to known cell processes
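
For illustration, a minimal sketch of one widely used statistical score, the Cheng-Church mean squared residue, assuming a NumPy expression matrix with genes as rows and samples as columns (toy data):

    import numpy as np

    def mean_squared_residue(expr, rows, cols):
        # Cheng-Church score of a candidate bicluster: low values indicate
        # coherent (additive) expression across the selected genes and samples.
        sub = expr[np.ix_(rows, cols)]
        residue = (sub - sub.mean(axis=1, keepdims=True)
                       - sub.mean(axis=0, keepdims=True) + sub.mean())
        return float((residue ** 2).mean())

    # Toy matrix: genes 0-2 shift coherently over samples 0-2.
    expr = np.array([[1.0, 2.0, 3.0, 9.0],
                     [2.0, 3.0, 4.0, 1.0],
                     [3.0, 4.0, 5.0, 7.0],
                     [8.0, 1.0, 6.0, 2.0]])
    print(mean_squared_residue(expr, [0, 1, 2], [0, 1, 2]))  # 0.0, coherent
    print(mean_squared_residue(expr, [0, 1, 3], [0, 2, 3]))  # large, incoherent

Ranking biclusters by such a residue alone is exactly the purely statistical view that a biologically informed measure would refine.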

Tracing and Sampling Memory Accesses and the Conflict between Accuracy and Performance

High-capacity NVRAM will soon enter the storage pyramid between DRAM and SSDs. It allows for cheaper main memory, but will initially be slower than DRAM. We expect data structures to be placed either in DRAM or in NVRAM, depending on how they are used and with the goal of minimizing the impact of NVRAM’s higher latency. In our research group, we developed a system that automatically migrates data between DRAM and NVRAM. To do so efficiently, we need to understand how data is accessed. This includes the frequency and recency of accesses as well as their type, such as sequential versus random accesses.

Many approaches exist to trace memory accesses at runtime. They vary in their accuracy and in the overhead they impose on the execution. For instance, interrupting the program on every load and store captures all memory accesses, but comes with a runtime cost that is prohibitive for live applications. Other approaches use hardware counters, modifications to the page management, or code hot patching.

The goal of this work is to (1) compare and evaluate the different approaches and (2) build a library that unifies them behind a common frontend; a sketch of what such a frontend could look like follows.
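
A minimal sketch of such a common frontend, with the concrete backends (hardware counters, page-management tricks, hot patching) hidden behind one interface; all names are illustrative:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class AccessSample:
        address: int       # virtual address that was accessed
        is_write: bool     # load or store
        timestamp_ns: int  # when the access was observed

    class MemoryTracer(ABC):
        # Common frontend; each backend trades accuracy against overhead.
        @abstractmethod
        def start(self) -> None: ...

        @abstractmethod
        def stop(self) -> None: ...

        @abstractmethod
        def samples(self) -> list[AccessSample]:
            # Sampling backends return a subset of all accesses;
            # full instrumentation returns everything.
            ...

    class NullTracer(MemoryTracer):
        # Records nothing; useful to measure the frontend's own overhead.
        def start(self) -> None: pass
        def stop(self) -> None: pass
        def samples(self) -> list[AccessSample]: return []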

Contact: Markus Dreseler

Data Management for Non-Volatile Memories

Storage Class Memories (SCM) are a new class of byte-addressable persistent storage media that blur the line between memory and storage due to their memory-like latency (~100 ns). They are expected to lead to revolutionary new programming paradigms that give memory-like, byte-level access to non-volatile storage. On the memory side, sharing data across processes and ensuring consistent address spaces across server reboots become important issues to be addressed. On the storage side, atomicity of updates, controlling the visibility of in-flight updates, versioning, and failure/disaster recovery become key data management challenges (see the sketch below).
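
To illustrate the visibility challenge on the storage side, a minimal sketch of an atomic, durable append: the record only becomes visible once its payload is durable, because the commit flag is flipped last. File writes and fsync stand in for SCM cache-line flushes and fences (e.g., CLWB plus SFENCE); the record format is made up for this example:

    import os
    import struct

    HEADER = struct.Struct("<IB")  # payload length, commit flag

    def append_atomic(path, payload: bytes):
        with open(path, "ab") as f:
            offset = f.tell()
            f.write(HEADER.pack(len(payload), 0))  # flag 0: in-flight
            f.write(payload)
            f.flush(); os.fsync(f.fileno())        # payload durable, still invisible
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(HEADER.pack(len(payload), 1))  # small write flips visibility
            f.flush(); os.fsync(f.fileno())

    def recover(path):
        # Replay the log, skipping in-flight (uncommitted or torn) records.
        records = []
        with open(path, "rb") as f:
            while True:
                header = f.read(HEADER.size)
                if len(header) < HEADER.size:
                    break
                length, committed = HEADER.unpack(header)
                payload = f.read(length)
                if committed and len(payload) == length:
                    records.append(payload)
        return records

    append_atomic("log.bin", b"row-42")
    print(recover("log.bin"))  # [b'row-42']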

  • Data Structures for In-Memory Column Stores using Non-Volatile Memories
    The goal of this master's thesis is to investigate the applications of SCM in the context of in-memory column stores: How can in-memory databases profit from large amounts of SCM, and what kinds of data structures are needed to address the scale and possible distribution of data in such systems, especially in the context of transaction processing, logging, and recovery? Contact: David Schwalb
  • Distributed In-Memory Column Stores using Non-Volatile Memories
    Distributed database systems that leverage fast interconnects and keep all data in DRAM scale well, but as memory is volatile, such systems typically achieve durability by replicating data across multiple machines. This thesis will investigate the potential of distributed systems using non-volatile memories, as well as how concepts and data structures can be adapted to exploit the durability of SCM. Contact: David Schwalb

Optimized Data Structures for In-Memory Trajectory Data Management

In recent years, rapid advances in location-acquisition technologies have led to large amounts of time-stamped location data. Positioning technologies such as Global Positioning System (GPS)-based, communication-network-based (e.g., 4G or Wi-Fi), and proximity-based (e.g., Radio Frequency Identification) systems enable the tracking of various moving objects, such as vehicles and people. A trajectory is represented by a series of chronologically ordered sampling points. Each sampling point contains spatial information, represented by a multidimensional coordinate in a geographical space, and temporal information, represented by a timestamp. Trajectory data is the foundation for a wide spectrum of services driven and improved by trajectory data mining. By analyzing the movement behavior of individuals or groups of moving objects in large-scale trajectory data, improvements can be achieved in various fields of application.

However, managing, storing, and processing trajectory data is a challenging task. The characteristics of spatio-temporal trajectory data give rise to four key challenges: the data volume, the high update rate (data velocity), the query latency of analytical queries, and the inherent inaccuracy of the data. For these reasons, it is a nontrivial task to manage and store the vast amounts of data that accumulate rapidly, especially under hybrid transactional and analytical workloads (so-called HTAP or mixed workloads), which are demanding with regard to both space and time complexity.

  • Compression
    The scope of this topic is the analysis, implementation, and evaluation of different trajectory compression techniques for columnar in-memory databases (a classic baseline is sketched after the list).
  • Partitioning
    This topic aims to evaluate the effects of different partitioning strategies for trajectory data and to develop a corresponding cost model for columnar in-memory databases.
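
For the compression topic, a minimal sketch of a classic lossy baseline, Ramer-Douglas-Peucker line simplification over 2-D sampling points (timestamps are ignored here; time-aware trajectory variants follow the same recursive scheme):

    import math

    def point_line_distance(p, a, b):
        # Perpendicular distance from point p to the line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

    def douglas_peucker(points, epsilon):
        # Keep a point only if it deviates more than epsilon from the line
        # between the current segment endpoints; recurse at the worst point.
        if len(points) < 3:
            return points
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_line_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax > epsilon:
            left = douglas_peucker(points[:index + 1], epsilon)
            return left[:-1] + douglas_peucker(points[index:], epsilon)
        return [points[0], points[-1]]

    # A slightly noisy, nearly straight track collapses to its endpoints.
    track = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
    print(douglas_peucker(track, epsilon=0.5))  # [(0, 0), (4, 0)]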

Contact: Keven Richly

Indices for In-Memory Databases

Indices are the only way to achieve competitive throughput for transactional applications. Since modern databases keep all data in main memory, the focus of indices has shifted: a complete scan of the raw data is no longer prohibitively slow, yet indices remain crucial for high throughput. The presented topics aim to quantify the impact of indices and to develop efficient maintenance strategies for in-memory indices.

  • Composite Indices
    The scope of this topic is the implementation and evaluation of composite indices (multi-column indices) in our open-source in-memory database Hyrise (a toy sketch of the idea follows the list). Contact: Martin Faust
  • Primary Key Clustered Indices
    This topic aims to evaluate the costs of a clustered index (e.g., sorting the relation by its index key) for main-memory databases with a main/delta architecture. Contact: Martin Faust
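
To make the composite-index idea concrete, a minimal, illustrative sketch (not Hyrise code) of a multi-column index over tuple keys, where a binary search on a key prefix answers equality predicates on the leading columns:

    from bisect import bisect_left

    class CompositeIndex:
        # Toy multi-column index: (key tuple, row id) pairs kept sorted, so an
        # equality lookup on a prefix of the columns becomes a range scan.
        def __init__(self, rows, columns):
            self.entries = sorted(
                (tuple(row[c] for c in columns), rid) for rid, row in enumerate(rows)
            )
            self.keys = [key for key, _ in self.entries]

        def lookup(self, *prefix):
            # Row ids of all entries whose key starts with the given prefix.
            out = []
            for key, rid in self.entries[bisect_left(self.keys, prefix):]:
                if key[:len(prefix)] != prefix:
                    break
                out.append(rid)
            return out

    rows = [{"last": "Faust", "first": "Martin"},
            {"last": "Klauck", "first": "Stefan"},
            {"last": "Faust", "first": "Anna"}]
    idx = CompositeIndex(rows, columns=("last", "first"))
    print(idx.lookup("Faust"))            # [2, 0]: both rows, ordered by first name
    print(idx.lookup("Faust", "Martin"))  # [0]: full-key match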