Prof. Dr. Felix Naumann
Youri Kaminsky (exercises)
According to Wikipedia, data profiling is "the process of examining the data available in an existing data source [...] and collecting statistics and information about that data". It encompasses a vast array of methods to examine datasets and produce metadata. Among the simpler results are statistics such as the number of null values and distinct values in a column, its data type, or the most frequent patterns of its data values. Metadata that are more difficult to compute usually involve multiple columns, such as inclusion dependencies or functional dependencies between columns. More advanced techniques detect approximate or conditional properties of the dataset at hand.
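To make the simpler single-column results concrete, here is a minimal Python sketch that computes a null count, a distinct count, and the most frequent value patterns for one column. The function name, the pattern encoding (letters as `L`, digits as `D`), and the sample values are illustrative choices, not part of the lecture material.

```python
from collections import Counter
import re

def profile_column(values):
    """Compute simple single-column profiling statistics.
    `values` is a list of cell values; None marks a null.
    (Illustrative sketch, not the lecture's reference implementation.)
    """
    non_null = [v for v in values if v is not None]

    def pattern(v):
        # Abstract each value into a pattern: letters -> L, digits -> D,
        # e.g. "AB-12" becomes "LL-DD".
        s = re.sub(r"[A-Za-z]", "L", str(v))
        return re.sub(r"[0-9]", "D", s)

    return {
        "num_nulls": len(values) - len(non_null),
        "num_distinct": len(set(non_null)),
        "top_patterns": Counter(pattern(v) for v in non_null).most_common(3),
    }

stats = profile_column(["AB-12", "CD-34", None, "EF-56", "AB-12"])
```

For the sample column, this yields one null, three distinct values, and the dominant pattern `LL-DD`; such pattern histograms are a common way to spot format violations in a column.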
Data profiling is an important preparatory step for many use cases, such as query optimization, data mining, data integration, and data cleansing.
In this lecture, we will study efficient algorithms and data structures to handle the typically vast search spaces of data profiling tasks. The techniques are not only applicable to data profiling but also convey general ideas for handling large-scale data algorithmically. The topics include:
- Foundations: data structures and basic algorithms
- Single-column profiling
- Discovery of unique column combinations
- Functional dependency discovery
- Order dependency discovery
- Inclusion dependency discovery
- Query optimization using dependencies
- Semantic data profiling
- Denial constraint discovery
- Violation detection
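To give an intuition for the dependency classes listed above, here is a small Python sketch of naive checks for a functional dependency, a unique column combination, and a unary inclusion dependency. All function names and the toy tables are illustrative assumptions, not from the lecture; each individual check is cheap, and the discovery problems are hard because the number of candidate column combinations grows exponentially.

```python
def holds_fd(rows, lhs, rhs):
    """Functional dependency lhs -> rhs: every lhs value combination
    must map to exactly one rhs value combination."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[b] for b in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

def is_unique(rows, cols):
    """Unique column combination: no two rows agree on all of `cols`."""
    seen = set()
    for row in rows:
        key = tuple(row[c] for c in cols)
        if key in seen:
            return False
        seen.add(key)
    return True

def holds_ind(rows_r, col_r, rows_s, col_s):
    """Unary inclusion dependency R[col_r] subset-of S[col_s]."""
    return {r[col_r] for r in rows_r} <= {s[col_s] for s in rows_s}

# Toy data (illustrative): zip determines city, but zip alone is not unique.
addresses = [
    {"zip": "14482", "city": "Potsdam", "name": "A"},
    {"zip": "14482", "city": "Potsdam", "name": "B"},
    {"zip": "10115", "city": "Berlin",  "name": "C"},
]
cities = [{"city": "Potsdam"}, {"city": "Berlin"}, {"city": "Hamburg"}]
```

On this toy table, `zip -> city` holds, `{zip}` is not unique while `{zip, name}` is, and `addresses[city]` is included in `cities[city]`.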