Abstract
Physical and technical systems pose peculiar challenges for machine learning.
Solving a variety of engineering problems requires identifying an accurate system model to serve as the basis for supervisory control. While physical laws offer a route to generalizable modeling, the main danger lies in the incompleteness of the factors taken into account.
Observation-based identification learns a phenomenological model that is good in terms of numerical approximation. However, it is hard to relate such a model to a priori knowledge, which is frequently formulated as complex engineering models that describe parts of the system only qualitatively. This discrepancy is a major obstacle to explainability.
The proposed approach originates in our experience with the performability analysis of critical infrastructures. Here a qualitative model summarizes the interactions in the system under evaluation, as extracted from operation logs and the outcomes of benchmark experiments. (This model-building process can be well supported by Inductive Logic Programming for Answer Set Programming, a special kind of machine learning.) Such models are highly interpretable, as they enrich an a priori engineering model with the newly extracted knowledge, thus delivering a representation directly consumable by the domain expert.
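To make the qualitative-model idea concrete, the following minimal sketch encodes one a priori engineering rule plus one ILP-style learned rule in Answer Set Programming via the clingo Python API. All predicates, facts, and the learned rule are hypothetical illustrations, not taken from the case study:

```python
import clingo

program = """
% A priori engineering knowledge: an overloaded node degrades.
degraded(N) :- node(N), overload(N).

% Illustrative rule of the kind ILP could extract from operation logs.
overload(N) :- node(N), high_latency(N), queue_full(N).

% Hypothetical observations from a benchmark run.
node(n1). node(n2).
high_latency(n1). queue_full(n1).
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
# Each stable model lists the qualitative consequences entailed by the rules.
ctl.solve(on_model=lambda m: print("Model:", m))
```

Running this prints a stable model containing degraded(n1), i.e., the qualitative consequence a domain expert can inspect directly alongside the rules that produced it.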
Emerging technologies, such as physically constrained neural networks, integrate this model into the learning process to ensure that the phenomenological model complies with the engineering one. Moreover, the qualitative model may serve as a runtime checking automaton in critical applications.
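As an illustration of this integration step, the sketch below uses a generic physics-informed loss formulation; the dynamics (Newton cooling with made-up constants) and the network are assumptions for the example, not the specific architecture of this work. The learner is penalized both for misfitting the observations and for violating the engineering model:

```python
import torch

# Hypothetical phenomenological model: a small network approximating T(t).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def physics_residual(t):
    # Assumed engineering model (illustrative): dT/dt = -k * (T - T_env).
    k, T_env = 0.5, 20.0
    t = t.detach().requires_grad_(True)
    T = net(t)
    dT = torch.autograd.grad(T, t, grad_outputs=torch.ones_like(T),
                             create_graph=True)[0]
    return dT + k * (T - T_env)

def loss(t_obs, T_obs, t_col):
    data = torch.mean((net(t_obs) - T_obs) ** 2)     # phenomenological fit to logs
    phys = torch.mean(physics_residual(t_col) ** 2)  # compliance with the physics
    return data + phys                               # weighted sum in practice
```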