
Abstract

Example-based explanation methods select particular instances of the dataset to explain the behavior of machine learning models or to explain the underlying data distribution. That is, once the model is built, it is explained in terms of the instances whose information was used to build it. For instance, a training instance is called influential when its deletion from the training data significantly changes the parameters or predictions of the model.
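The following is a minimal sketch of how influence could be estimated by leave-one-out retraining; the dataset, model, and influence score (mean absolute change in predicted probabilities) are illustrative assumptions, not a prescribed method.

```python
# Sketch: estimate the influence of each training instance via leave-one-out
# retraining, assuming a scikit-learn-style classifier on small tabular data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Reference model trained on the full training set.
full_model = LogisticRegression(max_iter=1000).fit(X, y)
full_probs = full_model.predict_proba(X)

influence = np.zeros(len(X))
for i in range(len(X)):
    # Remove instance i, retrain, and compare predictions with the full model.
    mask = np.arange(len(X)) != i
    loo_model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    loo_probs = loo_model.predict_proba(X)
    # Influence of instance i: mean absolute change in predicted probabilities.
    influence[i] = np.abs(full_probs - loo_probs).mean()

# The most influential training instances are candidate example-based explanations.
top_influential = np.argsort(influence)[::-1][:5]
print("Most influential training instances:", top_influential)
```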

Some machine learning methods are implicitly example-based. Support Vector Machines look for the instances (support vectors) that define the frontier (hyperplane) between two classes. Given a new, unlabeled instance, k-NN methods locate the k closest labeled instances in the training set to predict its class. It is therefore possible to explain these machine learning approaches using the relevant instances. In fact, it has been shown that example-based explanations perform significantly better than feature-based explanations at helping users understand the reasons behind a prediction, providing them with relevant information, and increasing their confidence in the model.
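As a minimal sketch of the k-NN case, assuming scikit-learn, the explanation for a prediction can simply be the k nearest labeled training instances that determined it; the dataset and parameters below are illustrative.

```python
# Sketch: example-based explanation of a k-NN prediction, where the
# explanation is the set of k closest labeled training instances.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Predict the class of a new, unlabeled instance.
new_instance = X[0:1]  # stand-in for an unseen example
prediction = knn.predict(new_instance)[0]

# Explanation: the k closest training instances, their labels, and distances.
distances, neighbor_ids = knn.kneighbors(new_instance)
for dist, idx in zip(distances[0], neighbor_ids[0]):
    print(f"neighbor {idx}: label={y[idx]}, distance={dist:.3f}")
print("predicted class:", prediction)
```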

Here we argue for the need to devote more resources to building new explainable machine learning methods based on instances, where the development of the method focuses on interpretability through examples.