Hasso-Plattner-Institut
Prof. Dr. Felix Naumann
  
 

Metanome - Data Profiling

Data profiling comprises a broad range of methods to efficiently analyze a given data set. In a typical scenario, which mirrors the capabilities of commercial data profiling tools, tables of a relational database are scanned to derive metadata, such as data types and value patterns, completeness and uniqueness of columns, keys and foreign keys, and occasionally functional dependencies and association rules. Individual research projects have proposed several additional profiling tasks, such as the discovery of inclusion dependencies or conditional functional dependencies.

Metanome is a project at HPI in cooperation with the Qatar Computing Research Institute (QCRI). Metanome provides a fresh view on data profiling by developing and integrating efficient algorithms into a common tool, expanding the functionality of data profiling, and addressing performance and scalability issues for Big Data. A vision of the Metanome project appeared in SIGMOD Record as "Data Profiling Revisited", and a demo of the Metanome profiling tool was given at VLDB 2015 as "Data Profiling with Metanome".

Algorithm Research

Active:

Past:

Tool Development

Active:

  • Tanja Bergmann (Backend, Frontend, and Architecture)
  • Vincent Schwarzer (Backend and Architecture)
  • Maxi Fischer (Backend and Frontend)

Past:

  • Moritz Finke (Backend and Architecture)
  • Carl Ambroselli (Frontend)
  • Jakob Zwiener (Backend and Architecture)
  • Claudia Exeler (Frontend)

Projects within Metanome

  • Unique column combination discovery
    As a prerequisite for unique constraints and keys, UCCs are a basic piece of metadata for any table. The problem is particularly complex because the number of column combinations grows exponentially with the number of columns. We address the problem with parallelization and pruning strategies (see the UCC sketch after this list).
    This work is in collaboration with QCRI.
  • Inclusion dependency discovery
    As a prerequisite for foreign keys, INDs can tell us how tables within a schema can be connected. When considering tables from different data sources, conditional IND discovery is of particular relevance (see the IND sketch after this list).
    See also the completed Aladin project and publications by Jana Bauckmann et al., in particular our Spider algorithm.
  • Incremental dependency discovery
    We are extending our work on UCC and IND discovery to tables that receive incremental updates. The goal is to avoid a complete re-computation and restrict processing to relevant columns, records, and dependencies.
  • Profiling and Mining RDF data
    The <subject, predicate, object> data model of RDF necessitates new approaches to basic profiling and data mining methods. 
    See also: ProLOD++ demo
  • Functional dependency discovery
    Functional dependencies express relationships between attributes of a database relation and are extensively used in data analysis and database design, especially schema normalization. We contribute to research in this area by evaluating current state-of-the-art algorithms and by developing faster and more scalable approaches (see the FD validation sketch after this list).
    See also: FD algorithms
  • Order dependency discovery
    Order dependencies (ODs) describe a relationship of order between lists of attributes in a relational table. ODs can help to understand the semantics of datasets and the applications producing them. The existence of an OD in a table can provide hints on which integrity constraints are valid for the domain of the data at hand. Moreover, order dependencies have applications in the field of query optimization by suggesting query rewrites (see the OD validation sketch after this list).
    See also: OD algorithms
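
The following sketch illustrates the level-wise UCC discovery idea referenced above: a column combination is unique iff no two rows share the same projection onto it, and supersets of discovered UCCs are pruned from the search lattice. This is an illustrative Python sketch under the assumption that a table is a list of row tuples; it is not Metanome's actual (Java) implementation.

    # Minimal sketch of level-wise UCC discovery with superset
    # pruning; illustrative only, not Metanome's Java code.
    from itertools import combinations

    def is_unique(rows, cols):
        # A column combination is unique iff no two rows share
        # the same projection onto these columns.
        seen = set()
        for row in rows:
            key = tuple(row[c] for c in cols)
            if key in seen:
                return False
            seen.add(key)
        return True

    def discover_uccs(rows, num_cols):
        minimal_uccs = []
        level = [frozenset([c]) for c in range(num_cols)]  # singletons
        while level:
            non_unique = []
            for combo in level:
                # Prune: supersets of a known UCC are unique but
                # not minimal, so they need no check.
                if any(ucc <= combo for ucc in minimal_uccs):
                    continue
                if is_unique(rows, sorted(combo)):
                    minimal_uccs.append(combo)
                else:
                    non_unique.append(combo)
            # Apriori-style candidate generation: expand only
            # non-unique combinations to the next lattice level.
            level = list({a | b for a, b in combinations(non_unique, 2)
                          if len(a | b) == len(a) + 1})
        return minimal_uccs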
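
A unary IND A ⊆ B holds iff every distinct value of the dependent column also occurs in the referenced column. The sketch below shows this naive subset check; the Spider algorithm mentioned above reaches the same result more efficiently by merging sorted streams of distinct values. Function and column names here are hypothetical.

    # Minimal sketch of a unary IND check: A is included in B iff
    # every distinct value of A also appears in B. NULLs are
    # skipped, following the common convention that they satisfy
    # any IND.

    def holds_ind(dependent, referenced):
        ref_values = {v for v in referenced if v is not None}
        return all(v in ref_values for v in dependent if v is not None)

    # Example: does orders.customer_id reference customers.id?
    print(holds_ind([1, 2, 2, 3], [1, 2, 3, 4]))  # True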
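
Validating a single FD candidate X → Y reduces to checking that rows agreeing on X also agree on Y. A minimal sketch, with hypothetical attribute names and rows modeled as dictionaries:

    # Minimal sketch of FD validation: X -> Y holds iff rows that
    # agree on the attributes X also agree on the attributes Y.

    def holds_fd(rows, lhs, rhs):
        seen = {}
        for row in rows:
            x = tuple(row[a] for a in lhs)
            y = tuple(row[a] for a in rhs)
            if x in seen and seen[x] != y:
                return False  # same LHS values, different RHS: violated
            seen[x] = y
        return True

    rows = [{"zip": "14482", "city": "Potsdam"},
            {"zip": "14482", "city": "Potsdam"},
            {"zip": "10115", "city": "Berlin"}]
    print(holds_fd(rows, ["zip"], ["city"]))  # True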
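
For the list-based OD semantics described above (for all tuples s, t: s[X] ≤ t[X] implies s[Y] ≤ t[Y]), a candidate can be validated with a single sort, as in this illustrative sketch. Note that tuples tying on X must agree on Y, since ties can be ordered either way.

    # Minimal sketch of OD validation: [X] orders [Y] iff sorting
    # by X also orders Y, and tuples that tie on X agree on Y.

    def holds_od(rows, lhs, rhs):
        key_x = lambda r: tuple(r[a] for a in lhs)
        key_y = lambda r: tuple(r[a] for a in rhs)
        ordered = sorted(rows, key=key_x)
        for s, t in zip(ordered, ordered[1:]):
            if key_y(s) > key_y(t):
                return False  # order on Y is broken
            if key_x(s) == key_x(t) and key_y(s) != key_y(t):
                return False  # tie on X with disagreeing Y
        return True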

Teaching Data Profiling

Student projects

  • Master's project "Profiling Dynamic Data" (4 students, winter 16/17)
  • Master's project "Approximate Data Profiling" (10 students, summer 2015)
  • Master's project "Metadata Trawling" (4 students, winter 14/15)
  • Master's project "Joint Data Profiling" (4 students, winter 13/14)
  • Master's project "Piggy-back Profiling" (6 students, winter 13/14)
  • Bachelor's project "ProCSIA: Profiling column stores with IBM's Information Analyzer" (8 students, summer 2011)

Current and past master's theses

  • Please see these links for ongoing and completed master's theses, many of which are in the data profiling area. All theses are available as PDF; just contact Felix Naumann.

Courses

Publications

Data-driven Schema Normalization

Papenbrock, Thorsten; Naumann, Felix in Proceedings of the International Conference on Extending Database Technology (EDBT), 2017.

Ensuring Boyce-Codd Normal Form (BCNF) is the most popular way to remove redundancy and anomalies from datasets. Normalization to BCNF forces functional dependencies (FDs) into keys and foreign keys, which eliminates duplicate values and makes data constraints explicit. Despite being well researched in theory, converting the schema of an existing dataset into BCNF is still a complex, manual task, especially because the number of functional dependencies is huge and deriving keys and foreign keys is NP-hard. In this paper, we present a novel normalization algorithm called Normalize, which uses discovered functional dependencies to normalize relational datasets into BCNF. Normalize runs entirely data-driven, which means that redundancy is removed only where it can be observed, and it is (semi-)automatic, which means that a user may or may not interfere with the normalization process. The algorithm introduces an efficient method for calculating the closure over sets of functional dependencies and novel features for choosing appropriate constraints. Our evaluation shows that Normalize can process millions of FDs within a few minutes and that the constraint selection techniques support the construction of meaningful relations during normalization.
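
As background for the abstract above, the following sketch shows the classic textbook attribute-set closure computation that underlies key derivation from FDs. It is only the baseline idea, not the optimized closure method the paper contributes.

    # Classic attribute-set closure under a set of FDs:
    # repeatedly add the right-hand side of every FD whose
    # left-hand side is already contained in the closure.

    def closure(attributes, fds):
        # fds: list of (lhs, rhs) pairs of frozensets.
        closed = set(attributes)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= closed and not rhs <= closed:
                    closed |= rhs
                    changed = True
        return frozenset(closed)

    # X is a key candidate iff its closure covers all attributes.
    fds = [(frozenset({"zip"}), frozenset({"city"})),
           (frozenset({"id"}), frozenset({"zip", "name"}))]
    print(closure(frozenset({"id"}), fds))
    # frozenset({'id', 'zip', 'name', 'city'})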