Abstract

On the one hand, there is growing interest in applying AI-based techniques to self-adaptation under uncertainty. On the other hand, self-explanation is one of the self-* properties that has been neglected. This is paradoxical, as self-explanation is inevitably needed when such techniques are used. We argue that a self-adaptive system (SAS) needs an infrastructure and capabilities to look at its own history in order to explain and reason about why it has reached its current state. Such an infrastructure and capabilities need to be built on the right conceptual models, so that the system's history can be stored and queried for use in the context of decision-making algorithms. We frame explanation capabilities in four incremental architectural levels, from forensic self-explanation to automated history-aware (HA) systems. Incremental capabilities imply that the capabilities at level n should be available to the capabilities at level n+1. The poster shows results for the first two levels using temporal graph-based models in the domain of Bayesian learning. Future work is also outlined.
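
To illustrate the kind of infrastructure the abstract refers to, the sketch below stores adaptation decisions as a small timestamped (temporal) graph and walks it backwards to produce a forensic-style explanation of how the current state was reached. This is a minimal, assumption-laden example, not the authors' implementation: the names `TemporalGraph`, `StateNode`, `record_decision`, and `explain` are hypothetical, and the rationale strings stand in for whatever the Bayesian decision-making component would actually record.

```python
# Minimal sketch (hypothetical names, not the poster's implementation) of keeping a
# self-adaptive system's decision history in a temporal graph and querying it to
# explain how the current state was reached (forensic self-explanation).
from dataclasses import dataclass, field
from typing import Optional, List


@dataclass
class StateNode:
    name: str                              # e.g. "CautiousConfig"
    timestamp: float                       # when the system entered this state
    cause: Optional["Transition"] = None   # edge that led to this state, if any


@dataclass
class Transition:
    source: StateNode
    target: StateNode
    rationale: str                         # why the adaptation was made
    timestamp: float


@dataclass
class TemporalGraph:
    nodes: List[StateNode] = field(default_factory=list)

    def record_decision(self, source: StateNode, target: StateNode,
                        rationale: str, timestamp: float) -> None:
        """Store one adaptation decision as a timestamped edge between states."""
        target.cause = Transition(source, target, rationale, timestamp)
        if source not in self.nodes:
            self.nodes.append(source)
        self.nodes.append(target)

    def explain(self, state: StateNode) -> List[str]:
        """Walk the causal chain backwards in time to explain how `state` was reached."""
        trace = []
        current = state
        while current.cause is not None:
            t = current.cause
            trace.append(f"[t={t.timestamp}] {t.source.name} -> {t.target.name}: {t.rationale}")
            current = t.source
        return list(reversed(trace))


# Usage sketch: record one adaptation, then query the history for an explanation.
g = TemporalGraph()
s0 = StateNode("DefaultConfig", timestamp=0.0)
s1 = StateNode("CautiousConfig", timestamp=5.0)
g.record_decision(s0, s1, "estimated failure probability exceeded threshold", 5.0)
print("\n".join(g.explain(s1)))
```

In this reading, the graph answers the level-1 (forensic) question offline; the higher levels described in the abstract would additionally make such queries available to the running decision-making algorithms themselves.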