Abstract

The certification of data-processing systems (including artificial intelligence systems) is of paramount importance to guarantee their regulatory compliance in Europe. The definition of certification strategies is a topical issue, and one can expect that part of the certification procedures will rely on evaluation on representative data. However, the data on which compliance tests are to be performed remain an open issue for public authorities, and research has not yet provided a sufficient answer. These data need to be conceived in an “open data” spirit, since the process must be reasonably transparent: industry must be aware of the nature of the compliance tests (in the case of certification by an accredited third party) or even be able to perform these tests itself (for self-certification in non-critical domains). Evaluators must therefore have test datasets at their disposal. However, creating evaluation datasets for certification faces obvious limitations: how can one collect datasets that are sufficiently representative of the variety of systems to be evaluated, unknown to the developers (to prevent training on them), that do not hinder innovation by imposing too many constraints on developers, and that are based on data collected from sources sufficiently different from those used for learning, yet sufficiently similar to ensure comparability? Building and storing pools of test datasets for certification therefore appears to be a very limited option. We envision one possible solution: designing open-access models of test data that would allow the collection of adapted datasets for evaluation in the context of certification, re-certification (for evolving systems), and self-certification. Europe needs to organize itself to keep pace with the fast evolution of data technologies; a governance model should also be designed to enhance the interactions between industry, public authorities, and research.