Conference 2019 Abstracts

Abstracts will be added to this page on a continuous basis until June.

Explainable AI with Knowledge Graphs and Semantic Explanations

Semantic technologies, such as knowledge graphs, ontologies and reasoning, have been developed as a bridge between human and machine conceptualizations of a domain of interest.

They may provide a human-centric, semantic interpretation and be used to produce semantic explanations, i.e., explanations based on semantic concepts coming from knowledge graphs and ontologies.

I will show several ideas and proposals on how knowledge graphs, ontologies and schemas, both from arbitrary domains and from the machine learning domain in particular, may be combined with machine learning for: i) providing semantic explanations, and ii) facilitating the generation of other types of explanations (textual or visual explanations).
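
As a minimal illustration of the first idea, the sketch below maps a model's most influential features onto ontology concepts and phrases the result as a semantic explanation; the tiny feature-to-concept table, feature names and importances are hypothetical placeholders, not an ontology or model from this work:

    # Minimal sketch: turn feature importances into concept-level statements.
    # The mini "ontology" and feature names below are hypothetical placeholders.
    FEATURE_TO_CONCEPT = {
        "resting_heart_rate": ("CardiacFunction", "is a measure of"),
        "hours_of_sleep":     ("SleepHygiene", "is an indicator of"),
        "daily_steps":        ("PhysicalActivity", "is an indicator of"),
    }

    def semantic_explanation(feature_importances, top_k=2):
        """Turn (feature, weight) pairs into concept-level statements."""
        ranked = sorted(feature_importances.items(), key=lambda kv: -abs(kv[1]))
        lines = []
        for feature, weight in ranked[:top_k]:
            concept, relation = FEATURE_TO_CONCEPT.get(
                feature, ("UnknownConcept", "relates to"))
            direction = "increased" if weight > 0 else "decreased"
            lines.append(f"The prediction is driven by {concept}: "
                         f"'{feature}' {relation} {concept} and {direction} the score.")
        return "\n".join(lines)

    # Example usage with made-up importances (e.g. from SHAP or a linear model):
    print(semantic_explanation({"resting_heart_rate": 0.41,
                                "hours_of_sleep": -0.28,
                                "daily_steps": -0.05}))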

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Green Intelligence

It is essential to understand and precisely model our urban environment if it is to serve the ever-deteriorating mental health of urban dwellers. Researchers agree that exposure to nature has tremendous benefits for mental health and well-being: contact with natural environments reduces stress and fatigue and triggers positive emotions. Nevertheless, not all green/blue urban spaces have the same potential, as confirmed both by the inconsistency of comparative studies relating the quantity of green to positive health outcomes and by neuroscience research showing that different types of landscapes generate different levels of restorative response. It is therefore important to create tools able to distinguish the landscapes most beneficial to our health.

One such tool is the CLQ, tested in electroencephalography (EEG) studies by the NeuroLandscape team, in which the appropriate aggregation of contemplative landscape components within a view induced brain activity associated with stress reduction and attention restoration significantly more strongly than other green views. However, the CLQ is an expert-based approach, which has been deemed inefficient because of its limited precision and reliability and its high cost. Due to these limitations, there is a clear need for a reliable assessment tool for the quality of green/blue urban spaces, one which considers the complexity of the types and components of different landscapes as perceived from the human point of view. Moreover, there is a need to automate this assessment tool, making it useful for a broad spectrum of contexts.

Our goal is to develop an A.I. tool, the Contemplative Landscape Automated Scoring System (CLASS), based on the CLQ, trained first on data from a panel of experts and later capable of learning directly from public data: an innovative, non-pharmacological, cost-effective way to promote mental well-being in cities through exposure to healthy environments. CLASS will be as credible as a panel of experts, user-friendly and free to the public. It will be calibrated to different urban morphologies across Europe and able to analyze and map results from both individual photos and sets of photos. It will continuously learn, adjusting to the changing environment. We will share CLASS with various stakeholders through publications and social campaigns and incorporate a platform into future NeuroLandscape activities.

Short talk or poster (to be defined)

Novel Computational Approaches for Environmental Sustainability (June 12)

Green Intelligence

This is a research project that aims to transform how we benefit from our urban environment. The goal is to develop an artificial intelligence tool, CLASS (Contemplative Landscape Automated Scoring System), to digitally score all the environmental exposures that we may encounter in our cities. This is directly linked to our mental health because, as the science shows, there are certain landscape types and components which can positively influence our mental health and well-being. There are also ones which contribute to the ever-growing burden of mental health disease in our built-up world...

Poster

Novel Computational Approaches for Environmental Sustainability (June 12)

Explainable artificial intelligence for physical and technical systems

Physical and technical systems pose peculiar challenges for machine learning.

The solution of a variety of engineering problems necessitates the identification of an accurate system model serving as the basis of its supervisory control. While physical laws provide a way of generalizable modeling, the main danger lies in the incompleteness of the factors taken into account.

Observation-based identification learns a good phenomenological model in terms of numerical approximation. However, it is tough to associate such a model with a priori knowledge. This background knowledge is frequently formulated as complex engineering models, partly describing the system only qualitatively. This discrepancy is a significant obstacle to explainability.

The proposed approach originates in our experience in the performability analysis of critical infrastructures. Here a qualitative model summarizes the interactions in the system under evaluation, as extracted from operation logs and the outcomes of benchmark experiments. (This model-building process can be well supported by Inductive Logic Programming for Answer Set Programming, a special kind of machine learning.) These models are highly interpretable, as they enrich an a priori engineering model with the newly extracted knowledge, thus delivering a representation directly consumable by the domain expert.

Evolving technologies, like physically constrained neural networks, integrate this model into the learning model to ensure that the phenomenological model complies with the engineering one. Moreover, the qualitative model may serve as a checking automaton at runtime in critical applications.
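
As a minimal sketch of what a physically constrained network can look like (a toy exponential-decay system is assumed here; this is an illustration, not the specific models mentioned above), the training loss below combines a data-fit term with a penalty on the residual of the known engineering equation:

    # Minimal sketch: a "physically constrained" network whose loss penalises
    # violations of an assumed engineering relation dx/dt = -k*x (toy example).
    import torch

    k = 0.5                                   # known physical parameter
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    t_obs = torch.rand(64, 1) * 5.0           # observation times
    x_obs = torch.exp(-k * t_obs) + 0.01 * torch.randn_like(t_obs)  # noisy data

    for step in range(2000):
        opt.zero_grad()
        # Data-fit term: the phenomenological model must match observations.
        data_loss = ((net(t_obs) - x_obs) ** 2).mean()
        # Physics term: residual of dx/dt + k*x = 0 at collocation points.
        t_col = (torch.rand(128, 1) * 5.0).requires_grad_(True)
        x_col = net(t_col)
        dx_dt = torch.autograd.grad(x_col.sum(), t_col, create_graph=True)[0]
        physics_loss = ((dx_dt + k * x_col) ** 2).mean()
        (data_loss + 1.0 * physics_loss).backward()
        opt.step()

The same pattern extends to richer engineering models: any relation that can be written as a differentiable residual can be added as an extra penalty term.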

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Online explainability

The growing impact that artificial intelligence is having on our everyday lives, combined with the opaqueness of deep learning technology, has brought the problem of reliability and explainability of predictive systems to the top of the AI community agenda. However, most research on explainable learning focuses on post-hoc explanation, where one aims at explaining the reasons for the predictions made by a learned model. The case of Alice and Bob, two chatbots that Facebook shut down after discovering that they ended up developing their own “secret language” to communicate, is a clear example of the limitations of this approach to explainability. We argue that online explainability, in which the user is involved in the learning loop and interactively provides feedback to guide the learning process in the desired direction, is crucial in order to develop truly reliable and trustworthy AI.
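
A minimal sketch of such a loop is given below: after each incremental update the model exposes its most influential features and a (here simulated) user vetoes spurious ones before the next update. The data, feature names and feedback rule are hypothetical placeholders, not a system described in this abstract:

    # Minimal sketch of a user-in-the-loop learning cycle with feature vetoes.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    feature_names = ["age", "income", "zip_code", "clicks"]   # hypothetical
    model = SGDClassifier(random_state=0)
    mask = np.ones(4)                       # 1 = feature allowed, 0 = vetoed

    def user_feedback(top_feature):
        # Stand-in for a real interactive step: this user rejects "zip_code".
        return top_feature != "zip_code"

    for batch in range(5):
        X = rng.normal(size=(50, 4))
        y = (X[:, 0] + 0.1 * rng.normal(size=50) > 0).astype(int)
        model.partial_fit(X * mask, y, classes=[0, 1])        # incremental update
        top = int(np.argmax(np.abs(model.coef_[0])))
        print(f"batch {batch}: most influential feature -> {feature_names[top]}")
        if not user_feedback(feature_names[top]):
            mask[top] = 0.0                 # user rejects the feature; mask it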

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

On Combining Deep Learning and Classical Control-Theoretic Approaches

End-to-end deep learning of control policies has gathered much attention in recent times. Its attractiveness stems from the fact that, in principle, such methods can work with minimal assumptions about the problem structure (robot dynamics, environment model, etc.). At the same time, control policies learned through end-to-end approaches have a certain opaqueness: if a policy does not work, it is difficult to pinpoint the exact reason.

On the other hand, conventional control-theoretic motion planning and control come with certain guarantees and explainability. For example, we can answer questions such as whether the robot state converges from a certain set of initial conditions under a given feedback control policy. However, classical approaches require more information and assumptions about the problem structure.

In our research group, we are developing ways to optimally integrate deep learning-based approaches with classical control-theoretic algorithms for safety-critical applications such as autonomous driving and human-robot collaborative manufacturing. The key focus is on understanding which parameters of control-theoretic algorithms need to be learned in order to make them reliable in real-world settings. The explainability of our approach stems from the fact that the learned parameters have a physical meaning, and thus performance shortcomings can be clearly explained and analyzed.
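
As a minimal, hypothetical illustration of learning parameters that keep a physical meaning (a toy example, not the group's actual pipeline), the sketch below "learns" the gains of a classical PD controller for a simulated point mass by searching the gain space for the lowest tracking cost:

    # Minimal sketch: the learned quantities are PD gains with clear physical
    # meaning (stiffness and damping), not opaque network weights.
    import numpy as np

    def tracking_cost(kp, kd, x0=1.0, dt=0.01, steps=500):
        """Simulate a double integrator under u = -kp*x - kd*v and score it."""
        x, v, cost = x0, 0.0, 0.0
        for _ in range(steps):
            u = -kp * x - kd * v
            v += u * dt
            x += v * dt
            cost += (x ** 2 + 0.01 * u ** 2) * dt
        return cost

    # "Learning" here is a plain random search over the interpretable gain
    # space; any optimiser (CMA-ES, Bayesian optimisation, ...) could be used.
    rng = np.random.default_rng(1)
    candidates = rng.uniform(low=[0.1, 0.1], high=[20.0, 20.0], size=(200, 2))
    best_kp, best_kd = min(candidates, key=lambda g: tracking_cost(*g))
    print(f"learned gains: kp={best_kp:.2f}, kd={best_kd:.2f}")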

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Exploring Internal Representations and Extracting Rules from Deep Neural Networks

Artificial deep neural networks are a powerful tool, able to extract information from large datasets and, using this acquired knowledge, make accurate predictions on previously unseen data. As a result, they are being applied in a wide variety of domains ranging from genomics to autonomous driving, from speech recognition to gaming. Many areas where neural network-based solutions can be applied require a validation, or at least some explanation, of how the system makes its decisions. This is especially true in the medical domain, where such decisions can contribute to the survival or death of a patient. Unfortunately, the very large number of parameters required by deep neural networks is extremely challenging for explanation methods to cope with, and these networks remain for the most part black boxes. This demonstrates the real need for accurate explanation methods able to scale with this large quantity of parameters and to provide useful information to a potential user. Our research aims at providing tools and methods to improve the interpretability of deep neural networks.

In this context, we developed a method allowing a user to interrogate a trained neural network and reproduce internal representations at various depths within the network. This allows for the discovery of biases that might have been overlooked in the training dataset and enables the user to verify, and potentially discover, new features that the network has captured from the data.

Another tool, based on rule extraction, emphasizes the regions of an image that are relevant to a certain class through a local approximation of the neural net. This method is of particular interest when the detection of a certain feature or characteristic is particularly complex and where artificial neural nets exceed human performance. This is especially the case in some medical diagnosis tasks.

To understand how features extracted by the network are combined to produce specific predictions, a third approach aims at extracting logical rules that reflect the behavior of the network’s fully connected layers. This approach consists of (1) using a trained network to extract features from a set of images, (2) training a Random Forest to create a set of rules, based on those features, that behave in the same manner as the network, and (3) ranking those rules according to their contribution to the prediction. An analyst can then select the top-N rules, allowing for an interpretation.
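
A minimal sketch of steps (2) and (3) is given below, with synthetic features standing in for the output of a trained network in step (1); the dataset, forest size and the support-based ranking heuristic are placeholders rather than the exact implementation:

    # Minimal sketch of the feature -> Random Forest -> ranked rules pipeline.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import _tree

    # Step 1 (simulated): features that a trained network would have extracted.
    F, y = make_classification(n_samples=500, n_features=8, random_state=0)

    # Step 2: train a Random Forest surrogate on those features.
    forest = RandomForestClassifier(n_estimators=20, max_depth=3, random_state=0)
    forest.fit(F, y)

    def extract_rules(tree, feature_names):
        """Walk one decision tree and return (rule, n_samples) pairs."""
        t = tree.tree_
        rules = []
        def recurse(node, conditions):
            if t.feature[node] == _tree.TREE_UNDEFINED:        # leaf node
                rules.append((" AND ".join(conditions) or "TRUE",
                              int(t.n_node_samples[node])))
                return
            name, thr = feature_names[t.feature[node]], t.threshold[node]
            recurse(t.children_left[node],  conditions + [f"{name} <= {thr:.2f}"])
            recurse(t.children_right[node], conditions + [f"{name} > {thr:.2f}"])
        recurse(0, [])
        return rules

    # Step 3: rank rules; here simply by how many samples they cover.
    names = [f"feat_{i}" for i in range(F.shape[1])]
    all_rules = [r for est in forest.estimators_ for r in extract_rules(est, names)]
    for rule, support in sorted(all_rules, key=lambda r: -r[1])[:5]:
        print(f"[{support:4d} samples] IF {rule}")

In a real pipeline the synthetic features would be replaced by activations of the trained network, and the ranking would use the contribution measure described above.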

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

From shallow to deep learning for inverse imaging problems: Some recent approaches

In this talk we discuss the idea of data-driven regularisers for inverse imaging problems. We are in particular interested in the combination of model-based and purely data-driven image processing approaches. In this context we will make a journey from “shallow” learning, for computing optimal parameters of variational regularisation models by bilevel optimisation, to the investigation of different approaches that use deep neural networks for solving inverse imaging problems. For all approaches discussed, their numerical solution and the available solution guarantees will be stated.
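
In standard notation (with A the forward operator, y the measured data and R_θ a parametrised regulariser), the generic objects referred to here can be written as follows; this is the textbook formulation, not necessarily the exact models discussed in the talk:

    \hat{x}(\theta) \in \arg\min_{x} \tfrac{1}{2}\,\| A x - y \|_2^2 + R_\theta(x)
    \qquad \text{(variational reconstruction)}

    \min_{\theta} \sum_{i=1}^{N} \big\| \hat{x}_i(\theta) - x_i^{\dagger} \big\|_2^2
    \quad \text{s.t.} \quad
    \hat{x}_i(\theta) \in \arg\min_{x} \tfrac{1}{2}\,\| A x - y_i \|_2^2 + R_\theta(x)
    \qquad \text{(bilevel parameter learning)}

Roughly, the "shallow" bilevel setting learns θ (for example regularisation parameters) from pairs of measurements and ground-truth images, while the deep approaches replace R_θ, or the whole reconstruction map, by a neural network.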

Keynote talk

Explainable Machine Learning-based Artificial Intelligence (June 11)

Towards explainable and convivial AI-based tools: Illustration on medical applications

Since 2010, numerical Artificial Intelligence (AI) based on Machine Learning (ML) has produced impressive results, mainly in the fields of pattern recognition and natural language processing, succeeding the previous dominance of symbolic AI, centered on logical reasoning. The integration of ML methods into industrial processes gives hope for new growth drivers. At first glance, these impressive results could be considered the end of mathematical models, since statistical analysis is able to reproduce phenomena. In truth, Machine Learning is based on inductive models, theorized by Francis Bacon in 1620. The use of inductive models requires explaining the predictions obtained on data, which is currently not often the case for industrial Machine Learning applications.

Consequently, the operational benefit of using Machine Learning methods is recognized but is hampered by the lack of understanding of their mechanisms, which is at the origin of operational, legal and ethical problems. This strongly affects the operational acceptability of AI tools, which is largely dependent on the ability of engineers, decision-makers and users to understand the meaning and the properties of the results produced by these tools. In addition, the increasing delegation of decision-making offered by AI tools competes with tried and tested business rules, sometimes constituting certified expert systems. Machine Learning could thus now be considered a colossus with feet of clay. It is important to note that this difficult problem will not be solved by mathematicians and computer scientists alone. Indeed, it requires broad scientific collaboration, for example with philosophers of science to investigate the properties of the inductive model, cognitive psychologists to evaluate the quality of an explanation, and anthropologists to study the relation and communication between humans and these AI tools.

The first part of the talk presents the challenges and the benefits coming from Artificial Intelligence for Industry and Services, in particular for medicine. Medicine is profoundly changing its paradigm, moving from a reactive to a proactive discipline to reduce costs while improving healthcare quality. It is useful to remember that, before the success of Machine Learning, some automatic healthcare tools had already been developed. For example, the MYCIN healthcare program, developed in the seventies at Stanford University, was designed to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight. It was based on Good Old-Fashioned AI (an expert system). It is relevant to see that MYCIN was never actually used in practice, not because of any weakness in its performance but largely because of ethical and legal issues related to the use of computers in medicine. It was also already difficult to explain the logic of its operation, and even more so to detect contradictions.

The second part of the talk summarizes our research activities conducted with Frank Varenne, philosopher of science, and Judith Nicogossian, anthropobiologist. The main objective is to provide and evaluate explanations of ML-based tools considered as black boxes. The first step of this project, presented in this talk, is to show that the validation of such a black box differs epistemologically from the one set up in the framework of mathematical and causal modeling of physical phenomena. The form of the explanation has to be evaluated and chosen to minimize the cognitive bias of the user. This also raises an ethical problem about the possible drift towards producing explanations that are more persuasive than transparent. The evaluation must therefore take into account the management of the compromise between the need for transparency and the need for intelligibility. Another important point concerns the conviviality of the AI-based tool, that is to say, the user's capability to work with independent efficiency. A philosophical and anthropological approach is required to define the conviviality of an AI tool, which will then be translated into rules guiding its design. Last but not least, an anthropological standpoint will be summarized, in particular concerning the definition of the nature and properties of "phygital" communication between AI and users.

Finally, the last part of the talk proposes some future research directions that, in our opinion, should be included in the CHIST-ERA program.

Keynote talk

Explainable Machine Learning-based Artificial Intelligence (June 11)

Explainable and Interpretable Models

This talk discusses recent progress made by the author in explainable and interpretable deep learning.

It will touch on methods for generating counterfactual explanations for deep learning systems (as cited by the GDPR), on interpretable deep learning methods for econometrics and mapping, and finally on recent work on fusing inverse problem formulations with deep learning systems.
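
As a minimal sketch of Wachter-style counterfactual search (a generic formulation, not necessarily the author's method), the snippet below looks for the smallest change to an input that pushes a toy model's prediction across the decision threshold; the model and instance are placeholders:

    # Minimal sketch: gradient-based counterfactual search on a toy model.
    import torch

    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.ReLU(),
                                torch.nn.Linear(8, 1), torch.nn.Sigmoid())

    x = torch.tensor([[0.2, -1.0, 0.5]])           # instance to explain
    x_cf = x.clone().requires_grad_(True)          # counterfactual candidate
    opt = torch.optim.Adam([x_cf], lr=0.05)

    for _ in range(300):
        opt.zero_grad()
        pred = model(x_cf)
        # Push the prediction towards the desired class while staying close to x.
        loss = (pred - 0.9) ** 2 + 0.1 * torch.norm(x_cf - x, p=1)
        loss.backward()
        opt.step()

    print("original prediction:", model(x).item())
    print("counterfactual:", x_cf.detach(), "prediction:", model(x_cf).item())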

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Explanation of Smart 5G Network Intrusion Detection using Attack Trees

Successful Intrusion Detection systems heavily rely on machine learning to detect anomalies. However, particularly in 5G networks, detected attacks contain complex information representing technical details about the network components (e.g., virtual BBU (vBBU), virtual RRH (vRRH), controllers, the NFV orchestrator, involved EPC functions, etc.), its heterogeneous structure, security policies, and the involved actors and their capabilities. This heterogeneous 5G infrastructure makes it hard for users to interpret machine-generated attack data. Explanation is needed to clarify the attacks to users. This can happen using visualization techniques, for example interactive tree graphs that improve user interaction by allowing zooming in and out of the details of attacks. In addition, explanation is needed to highlight which parts of attacks target which parts of the 5G network infrastructure and which parts of the security policies are violated. The challenge is to link up the Intrusion Detection intelligence, analyse it, explain it, and feed incident response decisions back to users as well as to different levels of the 5G network infrastructure to enforce security policies in response to detected attacks. An important backbone for this process is to have models in which these heterogeneous scenarios can be encoded adequately yet concisely. A possibility is to use logical representation. However, the logics need to be powerful enough to represent entities, structures, and policies, and yet rigorous and sufficiently supported with analysis and verification capabilities. Candidates are higher-order temporal logics extended with attack trees and other security notions. A demonstrator platform will be provided using a cloud-native 5G set-up and Software Defined Network controllers.
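
As a minimal illustration of how an attack tree can support such explanations (the tree below is hypothetical, although the component names follow the abstract), an AND/OR evaluation links detected low-level events to the attack goal they jointly complete:

    # Minimal sketch: an AND/OR attack tree evaluated against IDS-detected events.
    ATTACK_TREE = ("OR",
                   ("AND", "compromise_vRRH", "intercept_fronthaul"),
                   ("AND", "compromise_vBBU", "escalate_to_orchestrator"))

    def attack_achieved(node, detected):
        """Evaluate the tree against the set of events flagged by the IDS."""
        if isinstance(node, str):
            return node in detected
        op, *children = node
        results = [attack_achieved(c, detected) for c in children]
        return all(results) if op == "AND" else any(results)

    detected_events = {"compromise_vBBU", "escalate_to_orchestrator"}
    if attack_achieved(ATTACK_TREE, detected_events):
        print("Explanation: detected events", detected_events,
              "complete an attack path against the orchestration layer.")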

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Trust through explainability: Technical and legal perspective

Explainability of an AI system is needed to build the user's trust. However, explainability is not a feature that can simply be added to an existing black-box AI system. We claim that AI systems have to be built explainable by design. To achieve this goal, they should be designed as hybrid systems, where the machine learning component is integrated with a knowledge-based component. We demonstrate how we achieved this in the area of context-aware systems, where we proposed a knowledge-driven human-computer interaction process of context mediation. Furthermore, trust and explainability cannot be addressed only on the technical level. In our interdisciplinary work at the intersection of AI and law, we consider the legal notion of liability. We claim that the analysis of legal liability is needed for building trust in AI systems. We analyze how it can be applied to AI systems, as it plays a crucial role in certain application areas. Moreover, we emphasize that explainability of an AI system should in fact be a requirement from the legal point of view.

Poster

Explainable Machine Learning-based Artificial Intelligence (June 11)

Explainable Machine Learning based on Instances

Example-based explanation methods select particular instances of the dataset to explain the behavior of machine learning models or to explain the underlying data distribution. That is, once the model is built, it is intended to be explained based on instances whose information has been used to build it. For instance, a training instance is called influential when its deletion from the training data significantly changes the parameters or predictions of the model.

Implicitly, some machine learning methods work example-based. Support Vector Machines look for those instances (support vectors) that define the frontier (hyperplane) between two different classes. Given a new unlabeled instance, k-NN methods locate the k closest labeled instances in the training set to predict the class of the new instance. Thus, it is possible to explain these machine learning approaches using the relevant instances. In fact, it has been shown that example-based explanations perform significantly better than feature-based explanations at helping the user understand the reasons behind a prediction, providing the user with relevant information, increasing the user's confidence, etc.
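
A minimal sketch of such an instance-based explanation is given below: alongside a k-NN prediction, the training instances that support it are returned to the user (the Iris dataset is used purely as an illustration):

    # Minimal sketch: explain a k-NN prediction by its supporting training instances.
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

    query = X[[50]]                                   # one instance to explain
    pred = knn.predict(query)[0]
    dist, idx = knn.kneighbors(query)                 # the supporting examples

    print(f"predicted class: {pred}")
    for d, i in zip(dist[0], idx[0]):
        print(f"  supported by training instance #{i} "
              f"(class {y[i]}, distance {d:.2f}): {X[i]}")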

Here we discuss the need to increase the resources devoted to building new instance-based Explainable Machine Learning methods, where the focus of the method's development is on interpretability based on examples.

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Experts-based Recommendation System for Explainable Machine Learning methods in Data Science projects

In this poster, we discuss an interdisciplinary, open educational resource to provide help for Data Science researchers and practitioners looking for Explainable Machine Learning methods.

A significant set of success case studies of Explainable Machine Learning will be collected and organized. Related publications, corresponding code, etc. will be included. A free resource for researchers and practitioners to find and follow the latest state-of-the-art Explainable Machine Learning methods will be created.

In addition, a recommendation system based on experts' opinions will be developed using the variety of information previously collected. Given input information regarding the problem under consideration (as complete as possible), the system will look for the most similar explainable solutions and provide a guide for the researcher.

In our opinion, to develop this project, an online community of data scientists, machine learners and experts on a number of application domains should be involved as part of a CHIST-ERA project.

Poster

Explainable Machine Learning-based Artificial Intelligence (June 11)

Explaining personalisation for a happier life: Recommender systems for wellbeing and leisure

This talk introduces the fundamentals of recommender systems as a data-driven AI tool for driving personalised user experiences. We then move on to 'not-so-conventional' personalisation domains beyond e-commerce, namely health, wellbeing, leisure and tourism in cities, highlighting the state of the art and ongoing challenges. We conclude with a discussion of how explainability can be used to motivate personalised and more convincing recommendations for the end user.

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

From explaining models to explaining decisions and systems

Explainability has been investigated in several ways in the field of machine learning: there are more interpretable models (e.g., decision trees) and more accurate models (e.g., deep networks), and one can try to explain the behavior of even complex models in a more understandable way.

However, when embedded in large AI systems, explainability is much less well studied. Even if we can explain the behavior of a predictive model, we may fail to explain the actions that a system takes or recommends. Still, understanding actions forms a large part of what humans expect from explanations by AI, e.g., when researchers in collaborative projects need to decide on a next action, when patients want to understand the possible treatments, or when data subjects want to understand the effects of privacy agreements.

I will suggest a number of ideas for research towards AI-based explainability of actions (or more generally policies) of systems exploiting artificial and human intelligence.

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Needs of explainable AI in global healthcare challenges

The focus of this talk is today’s challenges of Artificial Intelligence in Medicine (AIM) and the need for explainability to support the global strategies recently defined by international healthcare authorities.

From a machine learning perspective, the support of multidisciplinary medical teams in such global healthcare problems implies the integration of (1) a myriad of clinical data sources and (2) knowledge from multiple levels of the healthcare administration.

We claim that trust in AIM is the baseline of successful decision support systems in real clinical settings. Indeed, learned AIM models can be trustworthy when they have the validation of a clinical team. However, due to the complexity and the variety of clinicians involved in these scenarios, we believe that formal research on explainable AIM is required to build trust mechanisms from a technical point of view.

In particular, following the WHO’s recommendations, the EU is implementing the European One Health action plan, drawing attention to global antimicrobial resistance. We present our experience in developing a clinical decision support system for antimicrobial stewardship medical teams and its evaluation in 9 hospitals. We identify current needs, the technical requirements to scale AIM systems, and the need for explainability.

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Bias and Discrimination in AI – Towards more transparent and explainable attribute-sensitive decisions

With the widespread and pervasive use of AI for automated decision-making systems, AI bias is becoming more apparent and problematic. One of its negative consequences is discrimination: the unfair, unequal, or simply different treatment of individuals based on certain characteristics. However, the relationship between bias and discrimination is still unclear. In this talk, I will discuss current research we are conducting under the frame of an EPSRC-funded project about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions. I will show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations that will advance on the task of making AI more transparent and explainable to help assess whether AI systems discriminate against users and how to mitigate that.

Keynote talk

Explainable Machine Learning-based Artificial Intelligence (June 11)

The Need to Empirically Evaluate Explanation Quality

Organisations face growing legal and social responsibilities to be able to explain decisions they have made using autonomous systems. Though there is much focus on how these decisions impact the public, there is also a need for these decisions to be clear and interpretable internally for employees. In many sectors, this means provisioning textual explanations around decisions made with technical or expertise-driven information in such a way that non-expert users can understand them, thus supporting problem-solving in real time. As an example, our current work with a telecommunications organisation is centred on empowering desk-based agents to better understand autonomous decision-making using specialist field-engineer notes. In this domain we have implemented various low-level (word-matching between problem and solution, confidence metrics), high-level (summarisation of similarities/differences) and co-created (hazard identification) textual explanation methods.

Increasingly we face difficulties in empirically evaluating the quality of these explanations, a problem which becomes even more challenging as the complexity of the provisioned explanation grows. Though we can easily examine whether an explanation contains the necessary content, it is more difficult to determine whether this content is placed in a suitable context to answer the user’s need for an explanation (e.g. its subjective quality). In this talk we will discuss our current work on eXplainable AI (XAI) and position it within the state of the art by examining the output of several national and international workshops on the subject. In particular, we will highlight an important gap in current XAI research: the ability to empirically evaluate the quality of an explanation. We will present our findings in this domain and explain why we believe that empirical evaluation of explanation quality is key for the growth of XAI methods in future.

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Causal-AI: Explainability of AI Models through Cause and Effect Reasoning

Interpretability of artificial intelligence (AI) models is one of the most discussed topics in contemporary AI research (Holm, 2019). Leading architects of AI, like Turing Award winner Judea Pearl, are very critical of the current machine learning (ML) concentration on (purely data-driven) deep learning and its non-transparent structures (Ford, 2018). "These and other critical views regarding different aspects of the machine learning toolbox, however, are not a matter of speculation or personal taste, but a product of mathematical analyses concerning the intrinsic limitations of data-centric systems that are not guided by explicit models of reality" (AAAI-WHY 2019). In order to achieve human-like AI, it is necessary to tell the AI how humans come up with decisions, how they plan and how they imagine things. Humans do that through causal reasoning (Pearl & Mackenzie, 2018). Therefore, in this talk (and project proposal), we will focus on aspects of integrating causal inference with machine learning, stimulated, among others, by Pearl's New Science of Cause and Effect, in order to come up with know-how that is complementary to the current deep learning expertise.

Specifically, based on the Software Competence Center Hagenberg's (SCCH) experience of carrying out AI-related research projects together with industry partners, the following research topics are relevant from an industrial point of view:

  • Learning causal models from industrial data sets with applications for, e.g., imputation of missing data based on causal inference
  • Extraction and generation of causal models from knowledge graphs and large heterogeneous and unstructured data sets, e.g. for identifying cause-effect relationships of system failures from system logs and development artifacts (code, architecture/requirements/test specifications)
  • Research on potential integration of several causal models to create comprehensive domain knowledge models
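
As a minimal, textbook-style illustration of why interventional reasoning matters in such settings (the maintenance/failure scenario below is invented, not an SCCH project result), the sketch simulates a structural causal model in which a confounder reverses the naively estimated effect:

    # Minimal sketch: observational vs interventional queries on a toy SCM.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def simulate(do_maintenance=None):
        load = rng.binomial(1, 0.5, n)                       # confounder
        maintenance = (rng.binomial(1, 0.8, n) * load        # heavy load triggers
                       if do_maintenance is None             # maintenance ...
                       else np.full(n, do_maintenance))      # ... unless we intervene
        failure_p = 0.05 + 0.4 * load - 0.2 * maintenance * load
        return maintenance, rng.binomial(1, np.clip(failure_p, 0, 1))

    m, f = simulate()
    print("observational P(failure | maintenance=1):", f[m == 1].mean())
    print("observational P(failure | maintenance=0):", f[m == 0].mean())
    _, f_do1 = simulate(do_maintenance=1)
    _, f_do0 = simulate(do_maintenance=0)
    print("interventional P(failure | do(maintenance=1)):", f_do1.mean())
    print("interventional P(failure | do(maintenance=0)):", f_do0.mean())

Conditioning on the observed maintenance variable makes maintenance look associated with more failures, whereas simulating do(maintenance=1) shows that it actually reduces them; a purely data-driven predictor fitted to the observational data would pick up the misleading association.
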
Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Opening the Black Box? - The European Legal Framework

Explainable AI (XAI) is not only relevant from the perspective of developers who want to understand how their system or model is working in order to debug or improve it. XAI is also a LEGAL ISSUE: for those affected by an algorithmic decision, it is important to comprehend why the system arrived at this decision in order to understand the decision, develop trust in the technology and, if the algorithmic decision-making process is illegal, initiate appropriate remedies against it. Last but not least, XAI enables experts (and regulators) to audit decisions and verify whether legal regulatory standards have been complied with. All these arguments strike in favor of OPENING THE BLACK BOX. On the other hand, there are a number of legal arguments against full transparency of AI systems, especially the interest in protecting trade secrets, national security, and privacy.

Against this background, I will try to explore the European legal framework for XAI in my short talk.

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Quality of data for computer vision algorithms

Focus on cognitive multimedia processing, open challenges and standards:

  1. Collecting "good" data for AI using AI
  2. Qualifying AI-based computer vision in real-life scenarios

Short talk or poster (to be defined)

Explainable Machine Learning-based Artificial Intelligence (June 11)

Title to be defined

Abstract to be defined

Keynote talk

Novel Computational Approaches for Environmental Sustainability (June 12)

Integrated modelling approaches to advance the assessment of the impacts of plant protection products

Plant protection is a vital part of current agricultural and horticultural practices, assuring yield and quality. The application of agrochemicals for plant protection requires dedicated practices such as spraying and seed treatments.

Sustainable plant protection requires minimizing the environmental risks associated with the drift of agrochemicals during field operations. Mitigation measures to reduce the risk to the environment include buffer zones and drift-reduction technologies. The acceptance and range of measures vary widely, with limited harmonization. To assist the development of more uniform measures, implement effective practices and assess new application technologies, computational modelling provides comprehensive and objective insight into the drift process as affected by operational, environmental and field factors.

Challenges to overcome in order to achieve more reliable and effective modelling frameworks for drift include improved models of the interactions of particles/droplets with canopy and soil structures as affected by environmental conditions, the temporal and spatial scales affecting the dispersion of particles, droplets and vapor, and the integration into the models of the properties and dynamics of application technology and operations, and of the impact on plants, humans, animals and ecosystems. Computational Fluid Dynamics provides a means to implement such a framework. Still, the multiscale nature of the drift process requires building dedicated, more efficient and user-friendly simulation platforms that solve and integrate the models into predictive tools to support drift risk assessment.
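
As a deliberately simplified, illustrative sketch of the kind of physics such frameworks integrate (a single droplet under Stokes drag and a uniform cross-wind, with made-up parameter values; real drift assessment requires the full CFD treatment argued for above):

    # Toy sketch: downwind drift of a single droplet, explicit Euler integration.
    import numpy as np

    def drift_distance(diameter_um, wind_ms, release_height_m=0.5, dt=1e-3):
        rho_drop, g, mu = 1000.0, 9.81, 1.8e-5        # water droplet, air viscosity
        d = diameter_um * 1e-6
        m = rho_drop * np.pi * d**3 / 6.0
        pos = np.array([0.0, release_height_m])        # x (downwind), z (height)
        vel = np.array([0.0, 0.0])
        while pos[1] > 0.0:
            rel = np.array([wind_ms, 0.0]) - vel       # air velocity relative to drop
            drag = 3.0 * np.pi * mu * d * rel          # Stokes drag (small droplets)
            acc = drag / m + np.array([0.0, -g])
            vel = vel + acc * dt
            pos = pos + vel * dt
        return pos[0]                                   # downwind drift at ground level

    for diameter in (100, 200, 400):                    # micrometres
        print(f"{diameter} um droplet, 3 m/s wind: "
              f"drift approx. {drift_distance(diameter, 3.0):.2f} m")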

Short talk or poster (to be defined)

Novel Computational Approaches for Environmental Sustainability (June 12)

Scalable Constraint-based Optimisation

Declarative methods for combinatorial optimisation (such as modelling a problem as a CSP) can form the basis of highly scalable solvers. These may be used in several application contexts, some of which may be combined with machine-learning techniques. Domain applications include natural resource management.
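
A minimal sketch of such declarative modelling is given below: a toy land-use allocation problem stated as constraints and solved by plain backtracking search (the parcels, uses and constraints are invented for illustration and stand in for a real, scalable solver):

    # Minimal sketch: a toy natural-resource allocation modelled as a CSP.
    PARCELS = ["p1", "p2", "p3", "p4"]
    USES = ["forest", "crops", "reserve"]

    def consistent(assignment):
        # Declarative constraints: adjacent parcels p1/p2 must differ, and at
        # least one parcel must remain a reserve in any complete assignment.
        if ("p1" in assignment and "p2" in assignment
                and assignment["p1"] == assignment["p2"]):
            return False
        if len(assignment) == len(PARCELS) and "reserve" not in assignment.values():
            return False
        return True

    def backtrack(assignment=None):
        assignment = {} if assignment is None else assignment
        if len(assignment) == len(PARCELS):
            return assignment
        parcel = next(p for p in PARCELS if p not in assignment)
        for use in USES:
            candidate = {**assignment, parcel: use}
            if consistent(candidate):
                solution = backtrack(candidate)
                if solution:
                    return solution
        return None

    print(backtrack())   # e.g. {'p1': 'forest', 'p2': 'crops', ...}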

Short talk or poster (to be defined)

Novel Computational Approaches for Environmental Sustainability (June 12)