Prof. Dr. Felix Naumann

The Effects of Data Quality on Machine Learning Performance

Lukas Budach, Moritz Feuerpfeil, Nina Ihde, Andrea Nathansen, Nele Sina Noack, Hendrik Patzlaff, Felix Naumann, and Hazar Harmouch.

Modern artificial intelligence (AI) applications require large quantities of training and test data. This need creates critical challenges not only concerning the availability of such data, but also regarding its quality. For example, incomplete, erroneous, or inappropriate training data can lead to unreliable models that ultimately produce poor decisions. Trustworthy AI applications require high-quality training and test data along many dimensions, such as accuracy, completeness, consistency, and uniformity.

We empirically explore the relationship between six traditional data quality dimensions, namely consistent representation, completeness, feature accuracy, target accuracy, uniqueness, and target class balance, and the performance of fifteen widely used machine learning (ML) algorithms covering the tasks of classification, regression, and clustering, with the goal of explaining their performance in terms of data quality. Our experiments distinguish three scenarios based on which AI pipeline steps are fed with polluted data: polluted training data, polluted test data, or both. We conclude the paper with an extensive discussion of our observations.
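The actual polluters used in the study are provided in the repository linked below. Purely as an illustration of the idea, a minimal completeness polluter might look like the following sketch; the function name, interface, and fixed seed are our own assumptions, not the paper's implementation:

```python
import random

def pollute_completeness(rows, fraction, seed=0):
    """Return a copy of the dataset in which `fraction` of all cell
    values are replaced with None, simulating missing values.
    Illustrative sketch only; not the study's actual polluter."""
    rng = random.Random(seed)  # fixed seed for reproducible pollution
    polluted = [list(r) for r in rows]  # deep-ish copy, original stays intact
    # Enumerate all cell coordinates, then sample the ones to pollute.
    cells = [(i, j) for i in range(len(polluted))
                    for j in range(len(polluted[0]))]
    for i, j in rng.sample(cells, int(fraction * len(cells))):
        polluted[i][j] = None
    return polluted

# Example: pollute half of the cells of a tiny 3x2 dataset.
data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
polluted = pollute_completeness(data, 0.5)
missing = sum(v is None for row in polluted for v in row)
print(missing)  # 3 of the 6 cells are now missing
```

Applying such a polluter to only the training set, only the test set, or both yields the three experimental scenarios described above.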

Supporting Material

Results as a Technical Report

All results are available in our technical report, linked here.

Source Code

The code and documentation describing how to reproduce the results can be found in the following GitHub repository.