Abstract

Interpretability of artificial intelligence (AI) models is one of the most discussed topics in contemporary AI research (Holm, 2019). Leading architects of AI, such as Turing Award winner Judea Pearl, are highly critical of the current machine learning (ML) concentration on (purely data-driven) deep learning and its non-transparent structures (Ford, 2018). "These and other critical views regarding different aspects of the machine learning toolbox, however, are not a matter of speculation or personal taste, but a product of mathematical analyses concerning the intrinsic limitations of data-centric systems that are not guided by explicit models of reality" (AAAI-WHY 2019). To achieve human-like AI, it is necessary to tell the AI how humans reach decisions, how they plan, and how they imagine things. Humans do all of this through causal reasoning (Pearl & Mackenzie, 2018). In this talk (and project proposal), we will therefore focus on aspects of integrating causal inference with machine learning, stimulated, among other sources, by Pearl's New Science of Cause and Effect, in order to build know-how that is complementary to current deep learning expertise.
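To make the observational/interventional distinction concrete, the following minimal sketch (our own illustration, not taken from the cited works; the model, variable names, and coefficients are invented) simulates a small structural causal model with a confounder Z and contrasts "seeing" (conditioning on X = 1) with "doing" (the intervention do(X = 1)):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Toy structural causal model (invented for illustration):
#   Z ~ N(0, 1)                 confounder
#   X = Z + N(0, 1)             "treatment"
#   Y = 2*X + 3*Z + N(0, 1)     outcome; the causal effect of X on Y is 2
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 2 * x + 3 * z + rng.normal(size=n)

# "Seeing": conditioning on X near 1 mixes the causal effect with confounding,
# because a high X also signals a high Z. Analytically, E[Y | X=1] = 3.5.
near_one = np.abs(x - 1.0) < 0.05
print("observational  E[Y | X=1]     ~", round(y[near_one].mean(), 2))

# "Doing": simulate do(X=1) by overriding X's structural equation while
# leaving Z's distribution untouched. Analytically, E[Y | do(X=1)] = 2.
y_do = 2 * np.ones(n) + 3 * z + rng.normal(size=n)
print("interventional E[Y | do(X=1)] ~", round(y_do.mean(), 2))
```

A purely data-driven predictor reproduces the observational value (about 3.5), whereas the causal effect of X on Y is 2; only an explicit model of the data-generating mechanism recovers the latter.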

Specifically, based on the experience of the Software Competence Center Hagenberg (SCCH) in carrying out AI-related research projects together with industry partners, the following research topics are relevant from an industrial point of view:

  • Learning causal models from industrial data sets, with applications such as the imputation of missing data based on causal inference (see the first sketch after this list)
  • Extraction and generation of causal models from knowledge graphs and from large heterogeneous, unstructured data sets, e.g., for identifying cause-effect relationships of system failures from system logs and development artifacts such as code and architecture/requirements/test specifications (second sketch below)
  • Research on the potential integration of several causal models into comprehensive domain knowledge models (third sketch below)
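Regarding the first topic, here is a minimal sketch of causal-model-based imputation under a strong simplifying assumption: the causal graph (temperature -> viscosity) and a linear-Gaussian structural equation are taken as given, and the variable names and data are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic plant data from an assumed linear-Gaussian SCM: temperature -> viscosity.
temp = rng.normal(60.0, 5.0, size=500)
visc = 0.8 * temp + rng.normal(0.0, 1.0, size=500)
df = pd.DataFrame({"temperature": temp, "viscosity": visc})

# Simulate sensor dropout: knock out 10% of the viscosity readings.
missing = rng.choice(len(df), size=50, replace=False)
truth = df.loc[missing, "viscosity"].copy()
df.loc[missing, "viscosity"] = np.nan

# Fit the structural equation of the incomplete variable on its causal parent,
# using the complete rows only, then impute along the causal direction.
complete = df.dropna()
eq = LinearRegression().fit(complete[["temperature"]], complete["viscosity"])
df.loc[missing, "viscosity"] = eq.predict(df.loc[missing, ["temperature"]])

rmse = float(np.sqrt(((df.loc[missing, "viscosity"] - truth) ** 2).mean()))
print("imputation RMSE:", round(rmse, 3))
```

The point of the causal framing is that imputation follows the assumed data-generating mechanism (parent to child), rather than whichever correlated column happens to predict best.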
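For the second topic, a deliberately simplistic sketch: it assumes failure logs contain explicit "caused by" phrases (the log format and component names are hypothetical) and collects the extracted cause-effect pairs into a directed graph using networkx. In practice, this step would build on NLP and knowledge-graph techniques rather than a single regular expression.

```python
import re
import networkx as nx

# Hypothetical log lines; the "caused by" pattern is an assumption for illustration.
log_lines = [
    "2021-03-01 12:00:01 ERROR PumpController: shutdown caused by SensorTimeout",
    "2021-03-01 12:00:05 ERROR Conveyor: halt caused by PumpController.shutdown",
]

pattern = re.compile(r"ERROR (\w+): (\S+) caused by (\S+)")

g = nx.DiGraph()
for line in log_lines:
    m = pattern.search(line)
    if m:
        component, effect, cause = m.groups()
        # Edge direction encodes causality: cause -> effect.
        g.add_edge(cause, f"{component}.{effect}")

print("extracted cause-effect edges:", list(g.edges))
```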
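For the third topic, a sketch of the most naive integration strategy, again with invented variable names: two causal graphs, e.g. learned from different plants or elicited from different domain experts, are merged as a graph union with networkx, and edges whose direction is disputed between the models are flagged for expert review.

```python
import networkx as nx

# Two hypothetical causal models over partially overlapping variables.
g1 = nx.DiGraph([("Temperature", "Pressure"), ("Pressure", "ValveFailure")])
g2 = nx.DiGraph([("Humidity", "SensorDrift"), ("Pressure", "Temperature")])

# Naive integration: union of nodes and edges.
merged = nx.compose(g1, g2)

# Edges asserted in opposite directions by the two models need expert resolution.
conflicts = {frozenset((u, v)) for u, v in merged.edges if merged.has_edge(v, u)}
print("merged edges:  ", sorted(merged.edges))
print("disputed edges:", [tuple(c) for c in conflicts])
```

Real integration would also have to reconcile variable naming, granularity, and latent confounders across models; the union-plus-conflict-check above only illustrates the starting point.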