ARTI 20

Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges

Abstract

This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make three novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use ML models. Second, however, we argue that the importance of explainability reaches far beyond data protection law and crucially influences questions of contractual and tort liability for the use of ML models. To this end, we conduct two legal case studies, in medical and corporate merger applications of ML. Third, in a technical case study, we implement a variety of machine learning models to empirically evaluate the (legally required) trade-off between accuracy and explainability in the context of spam classification.
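The full technical case study is in the paper itself; purely as a rough illustration of the kind of comparison the abstract describes, the following minimal Python sketch (not the authors' code) trains an interpretable model alongside a less transparent one on a toy spam-classification task using scikit-learn. The corpus, labels, and model choices below are invented for illustration; a real study would use a benchmark dataset such as the SMS Spam Collection.

    # Minimal sketch, not the authors' implementation: contrasting an
    # interpretable model with a less transparent one on a toy spam task.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Invented toy corpus; 1 = spam, 0 = ham.
    messages = [
        "win a free prize now", "claim your cash reward today",
        "cheap meds no prescription", "urgent verify your account",
        "lunch at noon tomorrow", "meeting moved to three pm",
        "see you at the gym later", "draft attached for review",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.25, random_state=0, stratify=labels)

    models = {
        "logistic regression (interpretable)": LogisticRegression(),
        "random forest (harder to explain)": RandomForestClassifier(random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: accuracy = {acc:.2f}")

    # For the linear model, explanation is direct: each learned weight
    # attaches to a single word, so spam-indicative terms can be read off.
    weights = models["logistic regression (interpretable)"].coef_[0]
    ranked = sorted(zip(vectorizer.get_feature_names_out(), weights),
                    key=lambda pair: pair[1], reverse=True)
    print("most spam-indicative words:", ranked[:3])

The design point of the contrast is that the linear model's per-word weights are themselves an explanation, whereas the ensemble's predictions require post-hoc tools to interpret, which is the accuracy-versus-explainability tension the paper evaluates.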

Full Paper

ARTI20.pdf

BibTeX Entry