Hasso-Plattner-Institut
Prof. Dr. Felix Naumann

Stratosphere

Stratosphere is a joint DFG project conducted by the Technische Universität Berlin, the Humboldt-Universität zu Berlin, and the Hasso-Plattner-Institut. It explores how the elasticity of clouds can be exploited for processing analytic queries massively in parallel. Unlike most traditional DBMSs, Stratosphere inherently supports text-based and semi-structured data.

Official Project Site

The sub-projects at HPI focus on data quality improvement for linked open data, efficient and scalable data profiling, and knowledge discovery.

Data Cleansing

We define the declarative data cleansing language Meteor, implement the underlying basic operations, and develop cost estimations for these operations. Furthermore, we provide test data sets and example queries to evaluate the efficiency and effectiveness of the data cleansing process.
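
To give a flavor of such a basic operation, the following is a minimal, self-contained Java sketch of pairwise duplicate detection with an edit-distance similarity threshold. It is purely illustrative: the class name, record values, and threshold are our own assumptions, not actual Meteor or Stratosphere code.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Meteor/Stratosphere code) of a basic cleansing
// operator: pairwise duplicate detection with a normalized edit-distance
// similarity threshold. All names and values here are hypothetical.
public class DuplicateDetectionSketch {

    // Levenshtein distance between two strings via dynamic programming.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(
                    Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                    d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // Returns all record pairs whose normalized similarity meets the threshold.
    static List<String[]> findDuplicates(List<String> records, double threshold) {
        List<String[]> pairs = new ArrayList<>();
        for (int i = 0; i < records.size(); i++) {
            for (int j = i + 1; j < records.size(); j++) {
                String a = records.get(i), b = records.get(j);
                int maxLen = Math.max(a.length(), b.length());
                double sim = maxLen == 0 ? 1.0 : 1.0 - (double) editDistance(a, b) / maxLen;
                if (sim >= threshold) pairs.add(new String[] { a, b });
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<String> names = List.of("Felix Naumann", "Felix Nauman", "Anja Jentzsch");
        findDuplicates(names, 0.9)
            .forEach(p -> System.out.println(p[0] + "  <->  " + p[1]));
    }
}

In a declarative language like Meteor, an operator of this kind would be invoked from a query; the quadratic pairwise comparison shown here is exactly the sort of cost that the cost estimations above would account for.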

Data Profiling

Detecting dependencies in ever-growing amounts of data has a high computational complexity. One way to cope with this complexity is to distribute the computational work among multiple interconnected computers. However, most existing data profiling algorithms are not designed for parallel execution on computer clusters, but rather to run on a single machine. Therefore, we research distributed adaptations of existing algorithms as well as new algorithms that can be executed efficiently on computer clusters and that scale out with the number of cluster nodes.
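
The following Java sketch illustrates the general idea of distributing such checks: candidate column combinations are partitioned across workers (a thread pool standing in for cluster nodes), each of which verifies uniqueness independently. The data, the candidate set, and the partitioning scheme are hypothetical simplifications, not code from our algorithms.

import java.util.*;
import java.util.concurrent.*;

// Minimal sketch of distributing dependency checks: each candidate column
// combination is verified for uniqueness by a worker, with a thread pool
// simulating cluster nodes. Data and candidates are hypothetical.
public class DistributedProfilingSketch {

    // A column combination is unique if no two rows agree on all its columns.
    static boolean isUnique(List<String[]> rows, int[] columns) {
        Set<String> seen = new HashSet<>();
        for (String[] row : rows) {
            StringBuilder key = new StringBuilder();
            for (int c : columns) key.append(row[c]).append('\u0001');
            if (!seen.add(key.toString())) return false; // duplicate projection
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        List<String[]> rows = List.of(
            new String[] { "1", "Anna", "Berlin" },
            new String[] { "2", "Ben", "Potsdam" },
            new String[] { "3", "Anna", "Potsdam" });

        // Candidate combinations: all single columns plus one pair.
        List<int[]> candidates = List.of(
            new int[] { 0 }, new int[] { 1 }, new int[] { 2 }, new int[] { 1, 2 });

        ExecutorService cluster = Executors.newFixedThreadPool(4); // 4 "nodes"
        Map<int[], Future<Boolean>> results = new LinkedHashMap<>();
        for (int[] cand : candidates)
            results.put(cand, cluster.submit(() -> isUnique(rows, cand)));

        for (Map.Entry<int[], Future<Boolean>> e : results.entrySet())
            System.out.println(Arrays.toString(e.getKey()) + " unique: " + e.getValue().get());
        cluster.shutdown();
    }
}

On a real cluster the rows themselves would also be partitioned and the checks coordinated across nodes; the sketch only conveys why independent candidates make the problem amenable to scale-out.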

Knowledge Discovery

Driven by applications such as social media analytics, Web search, advertising, recommendation, mobile sensing, genomic sequencing, and astronomical observations, the need for scalable learning, mining, and knowledge discovery methods is steadily growing. Often the challenge is to automatically process and analyze terabytes of evolving data. Extracting value from such data (e.g., understanding its underlying structure and making predictions) before it becomes outdated is a major concern. Therefore, our goal is to enable the scalability of such applications on top of Stratosphere.

Please contact Felix Naumann, Toni Grütze (Knowledge Discovery on Stratosphere), or Sebastian Kruse (Data Profiling on Stratosphere) with any further questions.

Former members

Publications

Scalable Discovery of Unique Column Combinations

Arvid Heise, Jorge-Arnulfo Quiane-Ruiz, Ziawasch Abedjan, Anja Jentzsch, Felix Naumann
In Proceedings of the VLDB Endowment (PVLDB), Hangzhou, China, 2013.
Jorge's presentation at VLDB 2014 was awarded the "Excellent Presentation Award".

Abstract:

The discovery of all unique (and non-unique) column combinations in a given dataset is at the core of any data profiling effort. The results are useful for a large number of areas of data management, such as anomaly detection, data integration, data modeling, duplicate detection, indexing, and query optimization. However, discovering all unique and non-unique column combinations is an NP-hard problem, which in principle requires verifying an exponential number of column combinations for uniqueness on all data values. Thus, achieving efficiency and scalability in this context is a tremendous challenge by itself. In this paper, we devise DUCC, a scalable and efficient approach to the problem of finding all unique and non-unique column combinations in big datasets. We first model the problem as a graph coloring problem and analyze the pruning effect of individual combinations. We then present our hybrid column-based pruning technique, which traverses the lattice using a combination of depth-first and random walk strategies. This strategy allows DUCC to typically depend on the solution set size and hence to prune large swaths of the lattice. DUCC also incorporates row-based pruning to run uniqueness checks in just a few milliseconds. To achieve even higher scalability, DUCC runs on several CPU cores (scale-up) and compute nodes (scale-out) with very low overhead. We exhaustively evaluate DUCC using three datasets (two real and one synthetic) with several million rows and hundreds of attributes. We compare DUCC with related work: Gordian and HCA. The results show that DUCC is up to more than two orders of magnitude faster than Gordian and HCA (631x faster than Gordian and 398x faster than HCA). Finally, a series of scalability experiments demonstrates DUCC's ability to scale both up and out.
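
To make the pruning idea concrete, here is a small, self-contained Java sketch of the monotonicity rule that algorithms such as DUCC exploit: every superset of a unique column combination is itself unique (and every subset of a non-unique one is non-unique), so such candidates can be skipped. This is not the authors' implementation; the sketch uses a simple level-wise traversal rather than DUCC's hybrid depth-first/random-walk strategy, and the toy dataset is invented for illustration.

import java.util.*;

// Illustrative sketch of lattice pruning for unique column combination
// discovery: once a combination is known to be unique, all of its supersets
// are unique and need not be checked. Dataset and names are hypothetical.
public class LatticePruningSketch {

    static List<String[]> rows = List.of(
        new String[] { "1", "Anna", "Berlin" },
        new String[] { "2", "Ben", "Potsdam" },
        new String[] { "3", "Anna", "Potsdam" });

    // True if no two rows agree on all columns of the combination.
    static boolean isUnique(Set<Integer> cols) {
        Set<String> seen = new HashSet<>();
        for (String[] r : rows) {
            StringBuilder k = new StringBuilder();
            for (int c : cols) k.append(r[c]).append('\u0001');
            if (!seen.add(k.toString())) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int numCols = 3;
        List<Set<Integer>> minimalUniques = new ArrayList<>();

        // Level-wise traversal of the lattice, smallest combinations first.
        // (DUCC instead mixes depth-first and random-walk steps.)
        for (int size = 1; size <= numCols; size++) {
            for (Set<Integer> cand : combinationsOfSize(numCols, size)) {
                // Pruning: skip any superset of an already found unique.
                boolean pruned = minimalUniques.stream().anyMatch(cand::containsAll);
                if (pruned) continue;
                if (isUnique(cand)) {
                    minimalUniques.add(cand);
                    System.out.println("minimal unique: " + cand);
                }
            }
        }
    }

    // All column index sets of the given size, enumerated via bitmasks.
    static List<Set<Integer>> combinationsOfSize(int n, int size) {
        List<Set<Integer>> out = new ArrayList<>();
        for (int mask = 0; mask < (1 << n); mask++) {
            if (Integer.bitCount(mask) != size) continue;
            Set<Integer> s = new TreeSet<>();
            for (int i = 0; i < n; i++) if ((mask & (1 << i)) != 0) s.add(i);
            out.add(s);
        }
        return out;
    }
}

On the toy data, column {0} and the pair {1, 2} are reported as minimal uniques, and all of their supersets are pruned without a check; the reported speedups of DUCC stem from making such pruning effective on lattices far too large to enumerate.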

BibTeX file

@inproceedings{heise_ducc_2014,
  author    = {Arvid Heise and Jorge-Arnulfo Quiane-Ruiz and Ziawasch Abedjan and Anja Jentzsch and Felix Naumann},
  title     = {Scalable Discovery of Unique Column Combinations},
  booktitle = {Proceedings of the VLDB Endowment (PVLDB)},
  address   = {Hangzhou, China},
  year      = {2013},
  abstract  = {The discovery of all unique (and non-unique) column combinations in a given dataset is at the core of any data profiling effort. The results are useful for a large number of areas of data management, such as anomaly detection, data integration, data modeling, duplicate detection, indexing, and query optimization. However, discovering all unique and non-unique column combinations is an NP-hard problem, which in principle requires verifying an exponential number of column combinations for uniqueness on all data values. Thus, achieving efficiency and scalability in this context is a tremendous challenge by itself. In this paper, we devise DUCC, a scalable and efficient approach to the problem of finding all unique and non-unique column combinations in big datasets. We first model the problem as a graph coloring problem and analyze the pruning effect of individual combinations. We then present our hybrid column-based pruning technique, which traverses the lattice using a combination of depth-first and random walk strategies. This strategy allows DUCC to typically depend on the solution set size and hence to prune large swaths of the lattice. DUCC also incorporates row-based pruning to run uniqueness checks in just a few milliseconds. To achieve even higher scalability, DUCC runs on several CPU cores (scale-up) and compute nodes (scale-out) with very low overhead. We exhaustively evaluate DUCC using three datasets (two real and one synthetic) with several million rows and hundreds of attributes. We compare DUCC with related work: Gordian and HCA. The results show that DUCC is up to more than two orders of magnitude faster than Gordian and HCA (631x faster than Gordian and 398x faster than HCA). Finally, a series of scalability experiments demonstrates DUCC's ability to scale both up and out.},
  note      = {Jorge's presentation at VLDB 2014 was awarded the ``Excellent Presentation Award''.}
}
