Hasso-Plattner-Institut
Prof. Dr. Felix Naumann

Authors

Melanie Weis, Felix Naumann

Abstract

Duplicate detection is the problem of detecting different entries in a data source that represent the same real-world entity. While research abounds in the realm of duplicate detection in relational data, there is as yet little work on duplicates in other, more complex data models, such as XML. In this paper, we present a generalized framework for duplicate detection, dividing the problem into three components: candidate definition, which specifies which objects are to be compared; duplicate definition, which specifies when two duplicate candidates are in fact duplicates; and duplicate detection, which specifies how to find those duplicates efficiently.
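
To make these three components concrete, the following is a minimal Python sketch of one way they could be wired together. The function names and the naive all-pairs candidate strategy are illustrative assumptions, not an interface defined in the paper.

    # Illustrative sketch only; names and the all-pairs strategy are hypothetical.
    from itertools import combinations

    def candidate_definition(objects):
        """Which objects are to be compared: here, naively, every pair."""
        return combinations(objects, 2)

    def duplicate_definition(a, b, similarity, threshold=0.8):
        """When two candidates are in fact duplicates: similarity above a threshold."""
        return similarity(a, b) >= threshold

    def duplicate_detection(objects, similarity):
        """How to find duplicates: filter the candidate pairs."""
        return [(a, b)
                for a, b in candidate_definition(objects)
                if duplicate_definition(a, b, similarity)]

In practice, the candidate definition prunes the quadratic pair space rather than enumerating it; the naive version above is only meant to show how the three components interlock.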

Using this framework, we propose an XML duplicate detection method, DogmatiX, which compares XML elements based not only on their direct data values, but also on the similarity of their parents, children, structure, and so on. We propose heuristics to determine which of these related elements to compare, as well as a similarity measure specifically geared towards the XML data model. An evaluation of our algorithm using several heuristics validates our approach.
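
As a rough illustration of comparing XML elements by more than their direct values, the sketch below tokenizes an element together with its descendants and scores pairs with Jaccard similarity. It is a simplified stand-in over made-up CD data, not the DogmatiX measure itself.

    # Simplified illustration, not the DogmatiX similarity measure.
    import xml.etree.ElementTree as ET

    def token_set(elem):
        """Collect lowercase word tokens from an element and its descendants."""
        tokens = set()
        for node in elem.iter():  # the element itself plus all descendants
            if node.text:
                tokens.update(node.text.lower().split())
        return tokens

    def element_similarity(e1, e2):
        """Jaccard similarity of the two elements' token sets."""
        t1, t2 = token_set(e1), token_set(e2)
        if not t1 and not t2:
            return 1.0
        return len(t1 & t2) / len(t1 | t2)

    # Hypothetical FreeDB-style CD entries; the second contains a typo.
    cd1 = ET.fromstring("<cd><title>Abbey Road</title><artist>The Beatles</artist></cd>")
    cd2 = ET.fromstring("<cd><title>Abby Road</title><artist>Beatles</artist></cd>")
    print(element_similarity(cd1, cd2))  # 0.4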

Test data

  • Dataset 1
    500 non-duplicate CD objects extracted from the FreeDB dataset + 500 artificially generated duplicates (one for each CD)
  • Dataset 2
    500 non-duplicate movies extracted from IMDB + the same 500 movies from Film-Dienst
  • Dataset 3
    10,000 CDs randomly extracted from FreeDB
