Abstract

The growing impact of artificial intelligence on our everyday lives, combined with the opacity of deep learning technology, has pushed the problem of reliability and explainability of predictive systems to the top of the AI community's agenda. However, most research on explainable learning focuses on post-hoc explanation, where the goal is to explain the reasons behind the predictions of an already-learned model. The case of Alice and Bob, two chatbots that Facebook shut down after discovering that they had developed their own “secret language” to communicate, clearly illustrates the limitations of this approach to explainability. We argue that online explainability, in which the user is involved in the learning loop and interactively provides feedback to steer the learning process in the desired direction, is crucial for developing truly reliable and trustworthy AI.