
Artificial Neural Networks (ANNs) are at the core of a growing number of applications that are integrated with critical systems and process sensitive user data, making their security and privacy critical. Compromising an object-detection classifier in a robotic application can lead to safety failures, while leakage of sensitive data, such as medical records, raises privacy concerns for users and legal exposure for providers. Federated Learning (FL) has recently emerged as a promising distributed learning approach that enables learning from data belonging to multiple participants without compromising privacy, since raw user data is never directly exchanged. Yet while FL has been promoted as privacy-preserving, recent studies show that it is vulnerable to sophisticated attacks that can jeopardize both the integrity and the privacy of these systems, or otherwise disrupt their operation.
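To make the FL setting concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation rule: each client trains on its private data and only the resulting model weights are sent to the server for averaging. The linear local update and the function names are illustrative assumptions for this sketch, not TruBrain's actual protocol.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One step of local training on a client's private data
    (an illustrative least-squares gradient step; real clients
    would run several epochs of SGD on their own model)."""
    grad = data.T @ (data @ weights - labels) / len(labels)
    return weights - lr * grad

def fedavg_round(global_weights, client_datasets):
    """One FL round: clients train locally and send back model
    weights only; raw data never leaves the client. The server
    returns the size-weighted average of the received models."""
    updates = [local_update(global_weights.copy(), x, y)
               for x, y in client_datasets]
    sizes = np.array([len(y) for _, y in client_datasets], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)

# Toy usage: two clients with private linear-regression data
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (50, 80):
    x = rng.normal(size=(n, 2))
    clients.append((x, x @ w_true + 0.01 * rng.normal(size=n)))
w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true without any client sharing its data
```

The attacks mentioned above target exactly this pipeline: a malicious client can poison the weights it returns, and a curious server can attempt to reconstruct private data from the updates it receives.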
Existing defences fall short of covering the full range of threats facing FL systems, and in some cases defending against one class of attacks increases vulnerability to another. Moreover, state-of-the-art defences carry a power and compute overhead that is often impractical for the embedded and edge nodes of an FL system.
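As a concrete instance of this trade-off, Byzantine-robust aggregators such as the coordinate-wise median are a standard defence against update poisoning, but they require the server to inspect individual updates in the clear, which works against privacy defences such as secure aggregation. The sketch below is illustrative, not a TruBrain component; it shows the median rule as a drop-in replacement for the plain averaging above.

```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median of client model updates: a classical
    Byzantine-robust alternative to plain averaging. Note that the
    server needs the updates in plaintext, which conflicts with
    privacy mechanisms that hide individual contributions."""
    return np.median(np.stack(client_updates), axis=0)
```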
While ANNs are the de facto architecture for Machine Learning (ML), neuromorphic architectures such as Spiking Neural Networks (SNNs) have recently emerged as an attractive alternative, thanks to their biological plausibility and brain-inspired functionality. Moreover, neuromorphic hardware can exploit the asynchronous, event-driven behaviour of spiking neurons to achieve significantly higher energy efficiency.
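To illustrate the event-driven behaviour that neuromorphic hardware exploits, below is a minimal discrete-time leaky integrate-and-fire (LIF) neuron, the basic building block of SNNs. All parameter values are illustrative. The neuron communicates through sparse binary spikes rather than dense activations, which is what enables the energy savings.

```python
import numpy as np

def lif_neuron(input_current, decay=0.9, threshold=1.0):
    """Discrete-time leaky integrate-and-fire neuron.
    The membrane potential leaks at each step, accumulates input,
    and emits a binary spike (then resets) when it crosses the
    threshold, so downstream work only happens on spike events."""
    v, spikes = 0.0, []
    for i in input_current:
        v = decay * v + i          # leaky integration of input
        if v >= threshold:         # fire when threshold is crossed
            spikes.append(1)
            v = 0.0                # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Example: constant sub-threshold input yields sparse, periodic spikes
print(lif_neuron(np.full(20, 0.3)))
```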

In TruBrain, we propose a research effort towards privacy-preserving, secure, and low-power distributed intelligent systems. Our research objectives are as follows:
- Objective 1: Investigating the security and privacy threats to neuromorphic nodes and characterising their inherent security- and privacy-preserving properties.
- Objective 2: Building a secure brain-inspired FL architecture, leveraging brain-inspired architectures to develop provably secure, practical neuromorphic FL systems.
- Objective 3: Bridging the gap between theory and practice in the security of distributed neuromorphic learning systems through a hardware-aware theoretical study.
- Objective 4: Designing and implementing a hardware platform for neuromorphic FL nodes on FPGA and integrating it into a RISC-V architecture.
- Objective 5: Demonstrating our neuromorphic FL paradigm in a medical application use case and validating its trustworthiness from a security and privacy perspective.

Call Topic: Security and Privacy in Decentralised and Distributed Systems (SPiDDS), Call 2022
Duration: 36 months
Funding support: 1 286 863 €

Project partners

  • Queen's University Belfast - United Kingdom (coordinator)
  • Sorbonne Université - France
  • EPFL - Switzerland
  • TUBITAK - Turkey