Abstract

Explainable AI (XAI) is not only relevant from the perspective of developers who want to understand how their system or model works in order to debug or improve it. XAI is also a LEGAL ISSUE: for those affected by an algorithmic decision, it is important to comprehend why the system arrived at that decision, in order to develop trust in the technology and, if the algorithmic decision-making process is unlawful, to initiate appropriate remedies against it. Last but not least, XAI enables experts (and regulators) to audit decisions and verify whether legal and regulatory standards have been complied with. All these arguments weigh in favor of OPENING THE BLACK BOX. On the other hand, there are a number of legal arguments against full transparency of AI systems, especially the interest in protecting trade secrets, national security, and privacy.

Against this background, my short talk explores the European legal framework for XAI.