Abstract

Explainability of an AI system is needed to build users' trust. However, explainability is not a feature that can simply be added to an existing black-box AI system. We claim that AI systems have to be built explainable by design. To achieve this goal, they should be designed as hybrid systems, in which the machine learning component is integrated with a knowledge-based component. We demonstrate how we achieved this in the area of context-aware systems, where we proposed a knowledge-driven human-computer interaction process of context mediation. Furthermore, trust and explainability cannot be addressed on the technical level alone. In our interdisciplinary work at the intersection of AI and law, we consider the legal notion of liability. We claim that an analysis of legal liability is needed for building trust in AI systems. We analyze how this notion can be applied to AI systems, as it plays a crucial role in certain application areas. Moreover, we emphasize that explainability of AI systems should in fact be a requirement from the legal point of view.