  • Expression of Interest

    EuroMov Digital Health in Motion, Univ Montpellier, IMT Mines Ales, Ales, France

    EuroMov Digital Health in Motion

    “EuroMov Digital Health in Motion” is a research unit officially inaugurated in January 2021. It brings together the French institutions IMT Mines Alès and the University of Montpellier, in partnership with the university hospitals of Montpellier and Nîmes. Its research promotes cross-fertilization across three domains: artificial intelligence, movement sciences, and health. The aim is to understand human behavioral plasticity in order to devise new therapeutic approaches and improve sensorimotor recovery, while providing a platform for innovation in new digital approaches.

    The main object of study of EuroMov Digital Health in Motion is human and digital plasticity seen through the prism of human movement. Human plasticity, or neuroplasticity, refers to the brain's ability to evolve and adapt throughout life and under specific conditions. In addition to genetic factors and the environment in which a person develops, a subject's actions and movements play a determining role in brain plasticity. Understanding the dynamic brain-movement relationship at different levels and scales will make it possible to promote brain plasticity and, in turn, improve sensorimotor recovery. Analysis of the mechanisms underlying neuroplasticity will also inform, by analogy or mimicry, the development of new machine learning models, the adaptive control of complex systems, better management of human-machine interaction, and the design of sensitive software systems.

    Dynamic neurophysiological models and deep learning for the study of cerebral connectivity of brain damaged subjects

    Progress in the development of devices for capturing human physiological signals, in terms of spatial and/or temporal resolution, portability, ergonomics, autonomy, and cost, opens up hitherto unexplored uses. In this context, we are interested in neurophysiological signals acquired via a brain-computer interface (BCI), focusing on electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals to address neuroscientific questions. Neurophysiological signals have a random component, and their acquisition is disturbed by various types of noise and artefacts. Processing them effectively requires integrating multidisciplinary knowledge from physics, biology, neuroscience, and medicine into the analysis pipeline, alongside the disciplinary foundations of signal processing, digital modeling, and machine learning.
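    To make the noise problem concrete, here is a minimal sketch in Python of a typical first denoising step for such signals: a zero-phase band-pass filter applied to a single EEG channel. The sampling rate, frequency band, and synthetic signal below are illustrative assumptions, not parameters from the project.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(signal, fs, low, high, order=4):
        """Zero-phase Butterworth band-pass filter, a common first
        denoising step for EEG channels."""
        nyq = fs / 2.0
        b, a = butter(order, [low / nyq, high / nyq], btype="band")
        return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

    # Hypothetical single-channel EEG: an 8 Hz alpha rhythm plus 50 Hz mains noise.
    fs = 256
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

    clean = bandpass(eeg, fs, 1.0, 30.0)  # keep 1-30 Hz, suppress mains hum
    ```

    In a real pipeline this would be one stage among many (artefact rejection, re-referencing, epoching), but it illustrates why domain knowledge about which frequency bands carry physiological information must enter the analysis.
    
    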

    For clinicians, evaluating patients with disorders of consciousness (DOC) following severe brain injury, or conscious quadriplegic patients with locked-in syndrome, is a fundamental challenge when responding to requests from patients' families and adapting treatment. The standardized Coma Recovery Scale-Revised (CRS-R) assessment protocol consists of differentiating intentional from spontaneous behaviors. These patients, who suffer from a severe deficit of voluntary motor control of the limbs and facial muscles, are often very limited in expressing their self-awareness or their perception of their environment. Behavioral assessments are thus susceptible to diagnostic errors: a 2009 clinical study found that up to 40% of cases are misdiagnosed. Precise assessment of DOC patients is therefore essential for patients, their relatives, and clinicians, with prognostic, therapeutic, and ethical implications.

    We wish to develop an original, multidisciplinary approach to processing neurophysiological EEG and fNIRS signals to help clinicians in their assessment of DOC patients. The planned research will focus on multi-channel, multi-modal, multi-acquisition, multi-scale spatial and temporal signal processing, integrating digital propagation and connectivity models with deep learning approaches. The interest of this coupling is to identify digital-model parameters and patterns representative of the neurophysiological signals.

    Currently, EEG and fNIRS signals are analyzed primarily with signal processing techniques that extract signals of interest, which are then classified in a supervised or unsupervised manner. Extracting informative features and precisely classifying these combined signals is considerably difficult due to physiological non-stationarity, low signal-to-noise ratio, and interference from various noise sources. Furthermore, the temporal aspect is poorly acknowledged in the literature, under the assumption that the signals of interest share the same patterns, which may not hold in reality. The temporal, or spatio-temporal, networks that form could carry information that the conventional approach filters out. The objective of our approach is to improve the multi-scale analysis and spatio-temporal synchronization of EEG and hemodynamic (fNIRS) signals by jointly setting up digital propagation models and deep learning approaches.
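    The conventional pipeline described above (feature extraction followed by supervised classification) can be sketched in a few lines of Python. The band-power feature, Welch parameters, classifier choice, and synthetic two-class data below are illustrative assumptions only; they stand in for the real extraction and classification stages of an EEG analysis.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    fs = 128  # hypothetical sampling rate

    def band_power(epoch, fs, low, high):
        """Mean spectral power in [low, high] Hz, estimated with Welch's method."""
        freqs, psd = welch(epoch, fs=fs, nperseg=fs)
        mask = (freqs >= low) & (freqs <= high)
        return psd[mask].mean()

    # Synthetic 2 s epochs: class 1 carries a stronger 10 Hz (alpha-band)
    # oscillation than class 0, buried in unit-variance noise.
    def make_epoch(alpha_amp):
        t = np.arange(0, 2, 1 / fs)
        return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

    X = np.array([[band_power(make_epoch(a), fs, 8, 12)]
                  for a in [0.2] * 40 + [1.5] * 40])  # one feature per epoch
    y = np.array([0] * 40 + [1] * 40)

    clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])  # train on half
    acc = clf.score(X[1::2], y[1::2])                       # test on the rest
    ```

    This sketch also shows the pipeline's weakness noted above: collapsing each epoch to a static band-power value discards exactly the temporal structure that the proposed spatio-temporal approach aims to exploit.
    
    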

    Our multimodal EEG-fNIRS BCI implements a deep learning algorithm that allows multi-scale analysis of cortical (and, to a lesser extent, subcortical) electroencephalographic and hemodynamic signals during cognitive mental imagery tasks (motor imagery and mental calculation). Analysis of the spatio-temporal synchronization of EEG and fNIRS signals based on digital propagation models should allow clinicians to better understand the impact of brain injuries on DOC.
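    One elementary building block of EEG-fNIRS synchronization analysis is estimating the delay between a neural activation and the hemodynamic response it evokes, since fNIRS lags EEG by several seconds. A minimal Python sketch, using cross-correlation on toy signals (the sampling rate, burst shapes, and 5 s lag are illustrative assumptions, not measured values):

    ```python
    import numpy as np
    from scipy.signal import correlate

    fs = 10  # hypothetical common rate after down-sampling both modalities
    t = np.arange(0, 60, 1 / fs)

    # Toy "neural activation": an EEG band-power burst around t = 20 s.
    neural = np.exp(-((t - 20.0) ** 2) / 2.0)
    # Toy hemodynamic response: the same burst delayed ~5 s, plus sensor noise.
    rng = np.random.default_rng(1)
    fnirs = np.exp(-((t - 25.0) ** 2) / 2.0) + 0.05 * rng.normal(0, 1, t.size)

    # The lag of maximum cross-correlation estimates the coupling delay.
    xcorr = correlate(fnirs - fnirs.mean(), neural - neural.mean(), mode="full")
    lags = np.arange(-t.size + 1, t.size) / fs
    est_delay = lags[np.argmax(xcorr)]  # expected near 5 s
    ```

    The project's planned models go well beyond this pairwise lag estimate, toward propagation models and learned representations across many channels, but the same coupling-delay question sits at the core of relating the two modalities.
    
    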

    In clinically challenging situations, where up to 40% of patients diagnosed as being in a vegetative state are in reality in a minimally conscious state or even conscious, we hope to help clinicians improve their diagnostic capabilities, provide better-informed responses to families' questions, and adapt patient care.