AI applications have long been part of our everyday lives and are making more and more decisions for humans. How good and unbiased these decisions are, however, depends largely on the quality of the data with which AI technologies work and are trained. But how can high data quality be consistently ensured?
The Hasso Plattner Institute (HPI) is working as a partner in the new collaborative research project "AI Test and Training Data Quality in the Digital Work Society" (KITQAR) to develop standards and measurement criteria for high data quality. KITQAR addresses the quality requirements for AI test, validation, and training data in the digital work and knowledge society, examining these data from informatics, ethical, legal, standardization, and practical perspectives within a scientific-technical consortium. The goal of the project is to develop an exemplary, practically applicable, and scientifically grounded model for the quality of AI test, validation, and training data. The project, funded by the German Federal Ministry of Labor and Social Affairs (BMAS), started on December 1, 2021, and will run for 20 months.
"The quality of the data used is a growing concern among artificial intelligence developers and users. In addition to developing classic data cleaning methods, it is important to define the quality of data more generally and thus also take ethical and legal limits into account," said Professor Felix Naumann, who is leading the project on the HPI side at his Department of Information Systems.
The KITQAR research project is funded by the Denkfabrik Digitale Arbeitsgesellschaft at the German Federal Ministry of Labor and Social Affairs (BMAS). The consortium leader of the joint research project is the German Association for Electrical, Electronic & Information Technologies (VDE). In addition to the Hasso Plattner Institute (HPI), the European University Viadrina and the International Center for Ethics in Science (IZEW) are also partners in the project.