
Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies aiming to explain machine learning (ML) models and to enable humans to understand, trust, and effectively act on the outcomes produced by artificial intelligence systems. Although these initiatives have advanced the state of the art, several challenges still need to be addressed before XAI can be adequately applied in real-life scenarios. In particular, two key aspects must be addressed: the personalization of XAI, and the ability to provide explanations in decentralized environments where heterogeneous knowledge is prevalent. Firstly, personalization of XAI is particularly relevant because of the diversity of backgrounds, contexts, and abilities of the subjects receiving the explanations generated by AI systems (e.g., patients and healthcare professionals). Hence, the need for personalization must be reconciled with the imperative of providing trusted, transparent, interpretable, and understandable outcomes from ML processing. Secondly, the emergence of diverse AI systems collaborating on a given set of tasks while relying on heterogeneous datasets raises the question of how explanations can be combined or integrated, considering that they emerge from different knowledge assumptions and processing pipelines.

In this project, we want to address those two challenges by leveraging the multi-agent systems (MAS) paradigm, in which decentralized AI agents extract symbolic knowledge from, and inject it into, ML predictors; this knowledge is then dynamically shared to compose custom explanations. The proposed approach combines inter-agent, intra-agent, and human-agent interactions to benefit both from the specialization of ML agents and from the establishment of agent collaboration mechanisms, which will integrate heterogeneous knowledge and explanations extracted from efficient black-box AI agents.
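As a rough illustration of the extract-and-share idea, the toy sketch below (all names, thresholds, and the rule format are illustrative assumptions, not the project's actual method) has two specialized "agents", each wrapping a black-box predictor; each agent extracts a simple symbolic threshold rule by probing its predictor, and the resulting rule sets are then collected into one combined explanation:

```python
def extract_rule(predict, feature_name, lo, hi, steps=100):
    """Probe a one-dimensional black-box predictor over [lo, hi] to find
    the point where its output flips, returning a symbolic rule string.
    This is a naive pedagogical rule-extraction stand-in."""
    prev = predict(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        cur = predict(x)
        if cur != prev:
            return f"IF {feature_name} > {x:.2f} THEN class={cur}"
        prev = cur
    return f"IF always THEN class={prev}"

# Two hypothetical black-box agents, each specialized on different data.
sugar_agent = lambda g: "high_risk" if g > 5.5 else "low_risk"   # glucose, mmol/L
bmi_agent   = lambda b: "high_risk" if b > 30.0 else "low_risk"  # body-mass index

# Each agent extracts its own symbolic rule from its black-box predictor...
rules = [
    extract_rule(sugar_agent, "glucose", 0.0, 10.0),
    extract_rule(bmi_agent, "bmi", 10.0, 40.0),
]

# ...and the rules are shared and assembled into one combined explanation.
combined = "\n".join(rules)
print(combined)
```

A real MAS setting would of course involve negotiation among agents and reconciliation of conflicting rules rather than simple concatenation; the sketch only shows the extract-then-share pipeline in miniature.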

The project includes validating the personalization and heterogeneous-knowledge-integration approach through a prototype application in the domain of food and nutrition monitoring and recommendation, including the evaluation of agent-human explainability and of the performance of the employed techniques in a collaborative AI environment.

Call Topic: Explainable Machine Learning-based Artificial Intelligence (XAI), Call 2019
Start date: (36 months)
Funding support: 888 456 €