Hasso-Plattner-Institut
Prof. Dr. Felix Naumann

Abstract

The integration of multiple data sources is a common problem in a wide variety of applications. Traditionally, hand-crafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity in large data collections. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, such measures must be created manually by domain experts, which is time-consuming and requires extensive domain expertise.

We propose a deep Siamese neural network that learns a similarity measure tailored to the characteristics of a particular dataset. Thanks to the properties of deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluate our method on multiple datasets and compare our approach to state-of-the-art methods. Our approach outperforms competitors by up to +26 percent, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible, but in our experiments led to an improvement in F-measure of up to +4.7 percent.
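The core idea of a Siamese similarity measure — two identical encoder "towers" with shared weights whose outputs are compared by a similarity function — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the input size, the single-layer encoder, the random weights, and the cosine comparison are all assumptions for demonstration purposes.

```python
import math
import random

random.seed(0)
# Shared encoder weights (hypothetical sizes: 8 input features -> 4 dims).
# In a Siamese network, BOTH inputs are passed through this same encoder.
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def encode(x):
    """Shared 'tower': one linear layer followed by a tanh non-linearity."""
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
            for col in zip(*W)]

def similarity(a, b):
    """Cosine similarity between the two shared-encoder embeddings."""
    ea, eb = encode(a), encode(b)
    dot = sum(p * q for p, q in zip(ea, eb))
    return dot / (math.hypot(*ea) * math.hypot(*eb))

record = [random.gauss(0, 1) for _ in range(8)]
# Identical records map to identical embeddings, so similarity is 1.0.
print(round(similarity(record, record), 6))  # 1.0
```

In a trained model, the shared weights would be learned from labeled duplicate/non-duplicate record pairs (e.g. with a contrastive loss), so that duplicates receive high similarity scores and non-duplicates low ones; weight sharing is what guarantees the measure is symmetric.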

Datasets

Dataset        All    Duplicates    Non-Duplicates
CD
Census
Cora
Restaurants

Remarks

Due to copyright restrictions, we regret that we cannot release the Movies dataset.