A right to obtain an explanation of a decision reached by a machine learning (ML) model is now enshrined in EU regulation. Different stakeholders (e.g. patients, clinicians, developers, auditors) may have different background knowledge, competencies and goals, and thus require different kinds of explanation. Fortunately, there is a growing armoury of ways of interpreting ML models and explaining their decisions. We use the phrase ‘explanation strategy’ to refer collectively to interpretable models, visualisation methods, and algorithms for explaining the predictions of ML models. As these explanation strategies mature, practitioners will gain experience of which strategies to use in which circumstances.
Whilst existing XAI libraries provide interfaces to a limited number of explanation strategies, these efforts remain disconnected and offer no easy route to reuse at scale. Our ambition goes well beyond the development of another library: we aim to transform the XAI landscape through an open platform that assists a spectrum of users (knowledge engineers, domain experts, novice users) in selecting and applying appropriate explanation strategies for a given AI problem-solving task.
The iSee Project will show how end-users of AI can capture their explanation experiences and share and re-use them with other users who have similar explanation needs. We hypothesise that episodes of explanation strategy experience can be captured and reused in similar future task settings.
Our idea is to create a unifying platform, underpinned by case-based reasoning (CBR), in which successful experiences of applying an explanation strategy to an ML task are captured as cases and retained in a case base for future reuse. Each case will encode the decisions made by a user and the effectiveness of the chosen strategy, so that the CBR system can recommend how best to explain ML predictions to other users in similar circumstances. Foundational explanation strategies, of the kind found in the research literature, will seed the case base. However, user needs are often multi-faceted, so we will show how new cases capturing composite strategies can be composed from foundational ones by extending the CBR technique of constructive reuse.
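To make the idea concrete, the following is a minimal illustrative sketch in Python of how an explanation experience might be represented as a case and retrieved by similarity. The class ExplanationCase, its fields, the attribute weights and the retrieve function are all assumptions made for illustration, not the iSee platform's actual design.

    # Illustrative sketch only: a toy case structure and similarity-based
    # retrieval for explanation-strategy experiences. Field names, weights
    # and the similarity measure are assumptions, not the project's design.
    from dataclasses import dataclass

    @dataclass
    class ExplanationCase:
        task: str                 # e.g. "image classification"
        model_type: str           # e.g. "CNN", "gradient boosting"
        user_role: str            # e.g. "clinician", "developer", "auditor"
        strategy: list[str]       # foundational strategies applied, e.g. ["LIME"]
        effectiveness: float      # recorded outcome score in [0, 1]

    def similarity(query: ExplanationCase, case: ExplanationCase) -> float:
        """Toy weighted match over the descriptive attributes."""
        weights = {"task": 0.4, "model_type": 0.3, "user_role": 0.3}
        return sum(w for attr, w in weights.items()
                   if getattr(query, attr) == getattr(case, attr))

    def retrieve(query: ExplanationCase, case_base: list[ExplanationCase], k: int = 3):
        """Return the k most similar past experiences for reuse."""
        return sorted(case_base, key=lambda c: similarity(query, c), reverse=True)[:k]

Constructive reuse would go further than this sketch: rather than copying the single best-matching case, a new composite case could be assembled from the strategies of several retrieved cases.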
Our proposal describes how we will develop an ontology for describing a library of explanation strategies; define metrics to evaluate their acceptability and suitability; and use both in a case representation that captures experiences of applying explanation strategies. Cases record the objective and subjective experiences of different users with different ML explanation strategies, so that these experiences can be shared and re-used.
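As a rough, purely illustrative indication of how acceptability and suitability might be scored (the actual metrics will be co-designed with users; the measure names and the equal weighting below are assumptions), a case's experience record could combine objective measures with subjective user ratings:

    # Illustrative only: combining objective and subjective measures into a
    # single suitability score for a recorded explanation experience. The
    # metric names and the equal weighting are assumptions for this sketch.
    def suitability_score(fidelity: float, stability: float,
                          user_satisfaction: float, perceived_clarity: float) -> float:
        """Average of objective (fidelity, stability) and subjective
        (satisfaction, clarity) measures, each already scaled to [0, 1]."""
        objective = (fidelity + stability) / 2
        subjective = (user_satisfaction + perceived_clarity) / 2
        return 0.5 * objective + 0.5 * subjective

    # Example: suitability_score(0.8, 0.7, 0.9, 0.85) returns 0.8125.

In practice the weighting could differ per stakeholder role, reflecting, for example, that auditors and patients may value different properties of an explanation.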
We include a number of high-impact use cases, where we work with real-world users to co-design the representations and algorithms described above, and to evaluate and validate our approach. These use cases will also seed the case base.
The target outcome of the CHIST-ERA call that our proposal most directly addresses is the following: “Developing a means to measure the effectiveness of explainable systems for different stakeholders (objective benchmarks and evaluation strategies for research in this domain).” It goes further, however, by providing a platform that records these measures of effectiveness and guides stakeholders in the future deployment of explanation strategies.
Drawing further from the CHIST-ERA call, we will argue that our proposal:
- fosters explanation strategy performance evaluation and experiment reproducibility;
- exhibits international collaboration;
- is based on co-creation of representations and evaluation criteria with our partners;
- develops a framework that promotes explanation strategy re-use; and
- meets the best research standards in terms of open access to software and published results.
Duration: 36 months
Funding support: 846 318 €