Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions. The understandability of such explanations, and their suitability to particular users and application domains, have so far received very little attention. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods, towards more understandable, reconfigurable and personalisable explanations.
Knowledge Graphs offer significant potential to better structure the core of AI models, and to use semantic representations when producing explanations for their decisions. By capturing the context and the application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches.
Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users’ trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
Duration: 36 months
Funding support: 1 080 590 €