Thorsten Papenbrock

Research Assistant, PhD Candidate

Hasso-Plattner-Institut
für Softwaresystemtechnik
Prof.-Dr.-Helmert-Straße 2-3
D-14482 Potsdam
Room: G-3.1.09

 

Phone: +49 331 5509 294
Email: thorsten.papenbrock@hpi.de
Profiles: Xing
Research: Google Scholar, DBLP, ResearchGate


Projects

Metanome

Research Interests

Data Profiling:

Solving computationally complex tasks is a central challenge in data profiling. The field is primarily concerned with discovering metadata in datasets that are often many gigabytes in size, which is why the algorithms developed for this purpose need to be efficient and robust. Because data profiling offers such a plethora of challenging, still unsolved tasks, I have chosen it as my primary research area. I am particularly interested in the discovery of data dependencies, such as inclusion dependencies, unique column combinations, functional dependencies, order dependencies, matching dependencies, and many more.
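
To make this concrete: a functional dependency X -> Y holds in a relation if any two records that agree on the attributes in X also agree on the attributes in Y. The following minimal Python sketch (the records and attribute names are invented for illustration) verifies a single given dependency in one pass over the data; the hard research problem is discovering all such dependencies, which requires searching an exponentially large space of attribute combinations.

    def holds_fd(rows, lhs, rhs):
        # The FD lhs -> rhs holds iff no two rows agree on all lhs
        # attributes but differ on some rhs attribute; remember the
        # first rhs value seen for every distinct lhs value.
        seen = {}
        for row in rows:
            key = tuple(row[a] for a in lhs)
            val = tuple(row[a] for a in rhs)
            if seen.setdefault(key, val) != val:
                return False
        return True

    # Invented example records: zip -> city holds, city -> zip does not.
    rows = [
        {"zip": "14482", "city": "Potsdam"},
        {"zip": "14482", "city": "Potsdam"},
        {"zip": "14469", "city": "Potsdam"},
    ]
    print(holds_fd(rows, ["zip"], ["city"]))  # True
    print(holds_fd(rows, ["city"], ["zip"]))  # False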

Data Cleansing:

Data is one of the most important assets of any company, so it is crucial to ensure its quality and reliability. Data cleansing and data profiling are two essential tasks that, if performed correctly and regularly, help to guarantee data fitness. In this area, I am particularly interested in (semi-)automatic duplicate detection methods and normalization techniques as well as their efficient implementation.
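
As a minimal sketch of similarity-based duplicate detection (the records and the 0.75 threshold are invented for illustration), one can flag record pairs whose string similarity exceeds a threshold:

    from difflib import SequenceMatcher
    from itertools import combinations

    def similarity(a, b):
        # Fraction of matching characters; 1.0 means identical strings.
        return SequenceMatcher(None, a, b).ratio()

    # Invented customer records containing one likely duplicate pair.
    records = ["Mueller, Anna, Potsdam", "Müller, Anna, Potsdam", "Schmidt, Jan, Berlin"]
    for r1, r2 in combinations(records, 2):
        if similarity(r1, r2) >= 0.75:  # assumed similarity threshold
            print("possible duplicate:", r1, "|", r2)

Because the pairwise comparison is quadratic in the number of records, scalable duplicate detection combines such measures with blocking or sorted-neighborhood strategies that prune the candidate pairs.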

Parallel and Distributed Systems:

Many tasks in IT are so complex that a clever algorithm alone often cannot deliver a solution in time; in these cases, parallel and distributed systems are needed. Especially when facing ever larger datasets, i.e., big data, we need to consider technologies such as map-reduce (e.g., Spark and Flink), actors (e.g., Akka), and GPUs (e.g., CUDA and OpenCL) to build scalability into our solutions.
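
As a small sketch of the map-reduce style in Spark (the input path is a placeholder and the local master is used only for demonstration), a word count distributes the same map and reduce functions over all partitions of the data:

    from pyspark import SparkContext

    # Local context for demonstration; on a cluster, the master URL
    # would point to the cluster manager instead of "local[*]".
    sc = SparkContext("local[*]", "WordCount")

    counts = (sc.textFile("input.txt")              # placeholder input path
                .flatMap(lambda line: line.split()) # map: emit every word
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))   # reduce: sum counts per word

    for word, count in counts.take(10):
        print(word, count)
    sc.stop()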

Teaching

Lectures:

  • Database Systems I (2013, 2014, 2015, 2016, 2017)
  • Database Systems II (2013)
  • Data Profiling and Data Cleansing (2014)
  • Information Integration (2015)
  • Data Profiling (2017)
  • Distributed Data Analytics (2017)

Seminars:

  • Advanced Data Profiling (2013)
  • Proseminar Information Systems (2014)

Bachelor Projects:

  • Data Refinery - Scalable Offer Processing with Apache Spark (2015/2016)

Master Projects:

  • Joint Data Profiling - Holistic Discovery of INDs, FDs, and UCCs (2013)
  • Metadata Trawling - Interpreting Data Profiling Results (2014)
  • Approximate Data Profiling - Efficient Discovery of Approximate INDs and FDs (2015)
  • Profiling Dynamic Data - Maintaining Metadata under Inserts, Updates, and Deletes (2016)

Master Theses:

  • Discovering Matching Dependencies (Andrina Mascher, 2013)
  • Discovery of Conditional Unique Column Combinations (Jens Ehrlich, 2014)
  • Spinning a Web of Tables through Inclusion Dependencies (Fabian Tschirschnitz, 2014)
  • Multivalued Dependency Detection (Tim Draeger, 2016)

Online Courses:

  • Datenmanagement mit SQL (openHPI, 2013)

Publications

Data Anamnesis: Admitting Raw Data into an Organization

Kruse, Sebastian; Papenbrock, Thorsten; Harmouch, Hazar; Naumann, Felix in IEEE Data Engineering Bulletin, 2016.

Today’s internet offers a plethora of openly available datasets, bearing great potential for novel applications and research. Likewise, rich datasets slumber within organizations. However, all too often those datasets are available only as raw dumps and lack proper documentation or even a schema. Data anamnesis is the first step of any effort to work with such datasets: It determines fundamental properties regarding the datasets’ content, structure, and quality to assess their utility and to put them to use appropriately. Detecting such properties is a key concern of the research area of data profiling, which has developed several viable instruments, such as data type recognition and foreign key discovery. In this article, we perform an anamnesis of the MusicBrainz dataset, an openly available and complex discographic database. In particular, we employ data profiling methods to create data summaries and then further analyze those summaries to reverse-engineer the database schema, to understand the data semantics, and to point out tangible schema quality issues. We propose two bottom-up schema quality dimensions, namely conciseness and normality, that measure the fit of the schema with its data, in contrast to a top-down approach that compares a schema with its application requirements.
Data_Anamnesis-_Admitting_Raw_Data_into_an_Organization.pdf
Tags: data_anamnesis, data_profiling, isg, schema_discovery