Developing and testing methodologies for interpreting the predictions of AI algorithms, in terms of transparency, interpretability, and explainability, has become one of the most important open questions in AI today.
In this proposal we bring together researchers from different fields with complementary skills, which are essential for understanding the behaviour of AI algorithms. This behaviour will be studied through a multidisciplinary set of use-cases in which explainable AI can play a crucial role; the use-cases will be used to quantify the strengths and to highlight, and possibly resolve, the weaknesses of available explainable-AI methods in different application contexts. One aspect that has so far hindered substantial progress towards explainability is that several proposed explainable-AI solutions proved effective only after being tailored to specific applications, and are frequently not easily transferred to other domains. In this project, we will apply the same array of explainability techniques to use-cases intentionally chosen to be heterogeneous with respect to data types, learning tasks, and scientific questions.
The proposed use-cases range from AI applications in High Energy Physics, to applied AI in medical imaging, to applied AI for diagnosis involving the pulmonary, tracheal, and nasal airways, to machine-learning explainability techniques used to improve analysis and modelling in neuroscience.
For each use-case, the research project will consist of three phases. In the first phase, we will apply state-of-the-art explainability techniques, chosen according to the requirements of the case under consideration. In the second phase, shortcomings of these techniques will be identified; most notably, we will consider issues of scalability to high-dimensional and raw data, where noise can be prevalent compared to the signal of interest, as well as the level of certifiability afforded by each algorithm. In the final phase, new algorithmic methodologies adequate for the HEP, medical, and neuroscientific use-cases will be designed on the basis of these considerations.
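As a minimal illustration of the kind of model-agnostic technique the first phase could draw on, the sketch below implements permutation feature importance on a hypothetical toy model: each feature is shuffled in turn and the resulting increase in prediction error is taken as that feature's importance. The model, data, and function names are illustrative assumptions, not part of the proposal.

```python
import random

def model(x):
    # Hypothetical toy model: depends strongly on feature 0,
    # weakly on feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(X, y):
    # Mean squared error of the model on dataset (X, y).
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_features, seed=0):
    # For each feature, shuffle its column and measure how much
    # the error grows relative to the unpermuted baseline.
    rng = random.Random(seed)
    base = mse(X, y)
    scores = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        scores.append(mse(X_perm, y) - base)  # error increase = importance
    return scores

# Synthetic data; labels are generated by the model itself,
# so the baseline error is zero and all scores are non-negative.
rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

scores = permutation_importance(X, y, 3)
```

By construction the irrelevant third feature receives an importance of exactly zero, while the dominant first feature scores highest; on real high-dimensional, noisy data (the second-phase concern above), such rankings become far less stable.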
Duration: 36 months
Funding support: 890 250 €