Digitalisation is rapidly transforming our societies, the dynamics of our interactions and the culture of our debates. Trust plays a critical role in establishing intellectual humility and interpersonal civility in argumentation and discourse: without it, credibility collapses, reputations are endangered and cooperation is compromised. The major threats associated with digitalisation, hate speech and fake news, violate the basic conditions for trusting and being trustworthy, which are key to constructive, reasonable and responsible communication as well as to the collaborative and ethical organisation of societies. These behaviours eventually lead to polarisation, in which users repeatedly attack each other in highly emotional terms, focusing on what divides people rather than what unites them.

Focusing on three timely domains of interest – public health, gender equality and global warming – iTRUST will deliver (i) the largest ever dataset of online text, annotated with features relevant for ethos, pathos and reframing; (ii) a new methodology of large-scale comparative trust analytics to detect implicit patterns and trends in hate speech and fake news; (iii) a novel empirical account of how these patterns affect polarisation in online communication and in society at large; and (iv) AI-based applications that will transfer these insights into interventions against hate speech, fake news and polarisation. Given its relevance to the knowledge-based society, the project places great emphasis on outreach activities and user awareness, in collaboration with media, museums and other partners.

The consortium consists of five experienced PIs with expertise in rhetoric, comparative political science, corpus linguistics, natural language processing, multi-agent systems and computational argumentation. The group is complemented by senior experts (ACP and KEP) in fields that provide valuable extensions, such as media studies and AI-based technologies. Our long-term ambition is to establish a pan-European network and foundations for trustworthy AI in response to the EC priority of “Europe fit for the Digital Age”.

Call Topic: Foundations for Misbehaviour Detection and Mitigation Strategies in Online Social Networks and Media (OSNEM), Call 2021
Start date: (36 months)
Funding support: 943 358 €

Project partners

  • Warsaw University of Technology - Poland (coordinator)
  • KU Leuven - Belgium
  • UC Louvain - Belgium
  • Artificial Intelligence Research Institute - Spain
  • Università della Svizzera italiana (Lugano) - Switzerland