Abstract

The focus of this talk is today's challenges for Artificial Intelligence in Medicine (AIM) and the need for explainability to support the global strategies recently defined by international healthcare authorities.

From a machine learning perspective, supporting multidisciplinary medical teams in such global healthcare problems implies the integration of: (1) a myriad of clinical data sources; and (2) knowledge from multiple levels of healthcare administration.

We claim that trust in AIM is the foundation of successful decision support systems in real clinical settings. Indeed, learned AIM models can be trustworthy when they are validated by a clinical team. However, given the complexity of these scenarios and the variety of clinicians involved, we believe that formal research on explainable AIM is required to build trust mechanisms from a technical point of view.

In particular, following the WHO's recommendations, the EU is implementing the European One Health action plan, focusing attention on global antimicrobial resistance. We present our experience in developing a clinical decision support system for antimicrobial stewardship medical teams and its evaluation in nine hospitals. We identify current needs, the technical requirements for scaling AIM systems, and the need for explainability.