Abstract

Explainability has been investigated in several ways in the field of machine learning: some models are inherently more interpretable (e.g., decision trees), others are more accurate (e.g., deep networks), and one can try to explain the behavior of even complex models in a more understandable way.

However, when models are embedded in large AI systems, explainability is much less well studied. Even if we can explain the behavior of a predictive model, we may fail to explain the actions that a system takes or recommends. Yet understanding actions forms a large part of what humans expect from explanations by AI, e.g., when researchers in collaborative projects need to decide on a next action, when patients want to understand possible treatments, or when data subjects want to understand the effects of privacy agreements.

I will suggest a number of ideas for research towards AI-based explainability of the actions (or, more generally, the policies) of systems that exploit both artificial and human intelligence.