Abstract

European regulations require that products marketed within the territory be safe and reliable. In this context, AI developers must be able to design systems that meet sets of criteria ensuring compliance with existing and forthcoming regulatory requirements. From the perspective of lawyers and evaluators, it is also essential that system inspection be possible, in particular for liability determination or conformity assessment.

However, AI stakeholders face two main issues. First, the scarcity of reference frameworks (optimal characteristics, test methods, performance thresholds, etc.), such as normative and regulatory standards, limits the development and deployment of intelligent systems. Second, system analysis requires methods for querying and testing AI algorithms, which may be provided by explainability solutions (whether by design or through software overlays).
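
To illustrate what a software-overlay explainability solution can look like in practice, the sketch below applies permutation feature importance to a trained classifier. This is only a minimal, model-agnostic example; the dataset, model and metric (scikit-learn's breast cancer data, a random forest, accuracy) are illustrative assumptions and are not prescribed by the abstract.

```python
# Minimal sketch of a model-agnostic explainability "overlay":
# permutation feature importance on a trained classifier.
# Dataset, model and metric are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

# Importance of a feature = drop in accuracy when its values are shuffled,
# i.e. how much the model relies on that feature for its performance.
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - accuracy_score(y_test, model.predict(X_perm)))

# Report the five features the model appears to rely on most.
for j in np.argsort(importances)[::-1][:5]:
    print(f"feature {j}: importance {importances[j]:.3f}")
```

Such an overlay queries the model only through its prediction interface, which is precisely what makes it a candidate mechanism for third-party inspection.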

In the spirit of the European New Approach Directives, the technical characteristics of explainability solutions should not be constrained by regulation, but guidance may be provided through standards. As an independent evaluation and certification body, LNE contributes to the development of standards and test methods for the qualification of AI systems. LNE wishes to explore the design of a reference framework for explainability, covering the type of information to be extracted for compliance assessment, the conditions for applying explainability solutions, and the assessment of the performance of these solutions themselves.
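
One possible ingredient of such a reference framework is a quantitative test of explanation quality. The sketch below illustrates a "deletion" faithfulness check: features ranked most important by an explanation are replaced with a neutral baseline, and a faithful explanation should cause a clear drop in the model's confidence. The model, the toy attribution method and the thresholds of interest are assumptions for illustration only, not elements of LNE's framework.

```python
# Minimal sketch of one way to score an explainability solution:
# a "deletion" faithfulness test. Model and attributions are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
baseline_values = X_train.mean(axis=0)   # "neutral" replacement values
pred = model.predict(X_test)             # originally predicted classes

# Toy attribution: coefficient * (x - mean), a crude local explanation.
attributions = model.coef_[0] * (X_test - baseline_values)

def mean_confidence(X):
    # Average confidence assigned to the class originally predicted on X_test.
    proba = model.predict_proba(X)
    return proba[np.arange(len(X)), pred].mean()

# Delete the k features ranked most important by the explanation and
# measure how much the model's confidence in its original class degrades.
original = mean_confidence(X_test)
for k in (1, 3, 5, 10):
    X_del = X_test.copy()
    top = np.argsort(np.abs(attributions), axis=1)[:, ::-1][:, :k]
    for i, cols in enumerate(top):
        X_del[i, cols] = baseline_values[cols]
    print(f"top-{k} deleted: confidence {original:.3f} -> {mean_confidence(X_del):.3f}")
```

A reference framework could standardise such metrics, their test conditions and acceptable thresholds, so that different explainability solutions can be compared on a common basis.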

LNE expects to lead a proposal bringing together partners offering different implementation methods for explainable AI, applied to at least one application that may be subject to specific regulations (such as medical applications).