Neuroscience and technology come together to support people with disabilities

Scientists at the Centre for Genomic Regulation (CRG), the research company Starlab and the group BR::AC (Barcelona Research Art & Creation) of the University of Barcelona have developed a device that produces sounds from brain signals. This highly interdisciplinary team is led by Mara Dierssen, head of the Cellular & Systems Neurobiology group at CRG. The team's ultimate goal is to develop an alternative communication system that allows people with cerebral palsy to communicate, and more specifically in this pilot phase, to communicate their emotions. The scientists are carrying out the project with volunteers who are either healthy or who have physical and/or mental disabilities, working together with the association Pro-Personas con Discapacidades Físicas y Psíquicas (ASDI) from Sant Cugat del Vallès.

“At the neuroscientific level, our challenge with Brain Polyphony is to be able to correctly identify the EEG signals–that is, the brain activity–that correspond to certain emotions. The idea is to translate this activity into sound and then to use this system to allow people with disabilities to communicate with the people around them. This alternative communication system based on sonification could be useful not only for patient rehabilitation but also for additional applications, such as diagnosis,” stated Mara Dierssen. She added, “Of course, the technological and computational aspects are also challenging. We have to ensure that both the device and the software that translates the signals can give us robust and reproducible signals, so that we can provide this communication system to any patient.”
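The sonification idea described above can be illustrated with a minimal sketch: estimate the power of standard EEG frequency bands and map it onto sound parameters. Everything here is an assumption for illustration only; the band choices (alpha → pitch, beta → loudness), the ranges, and the mapping are not the project's actual scheme, which has not been detailed in this article.

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz), typical for consumer EEG headsets

def band_power(eeg, fs, low, high):
    """Average spectral power of `eeg` within the [low, high) Hz band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return power[mask].mean()

def sonify(eeg, fs=FS):
    """Map alpha/beta band power to a tone's pitch and loudness.

    Hypothetical mapping: relative alpha power sets pitch (220-660 Hz),
    relative beta power sets loudness (0-1).
    """
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    total = alpha + beta
    pitch = 220.0 + 440.0 * (alpha / total)
    loudness = beta / total
    return pitch, loudness

# Synthetic one-second "EEG": a strong 10 Hz (alpha) rhythm plus noise,
# which should drive the pitch toward the top of its range.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(FS)
pitch, loudness = sonify(eeg)
```

A real system would feed `pitch` and `loudness` to a synthesizer continuously; the sketch only shows the signal-to-parameter step.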

Other signal-translation systems based on brain-computer interfaces are currently being tested for people with disabilities. However, most of these systems require a certain level of motor control, for example through eye movement. This is a major constraint for people with cerebral palsy, who often suffer from spasticity or are unable to control any aspect of their motor system, making such systems impossible for them to use. A further limitation is that most of these other devices do not analyze the signals in real time but instead require post-processing of the recorded data. The system proposed by the Brain Polyphony researchers allows real-time analysis from the moment the user puts on the interface device.
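The real-time property described above amounts to processing the signal in sliding windows as samples arrive, rather than recording everything and analyzing it afterwards. A minimal sketch of that pattern, with assumed window and update rates and a placeholder feature (RMS amplitude) standing in for whatever the actual system computes:

```python
from collections import deque

import numpy as np

FS = 256           # assumed sampling rate (Hz)
WINDOW = FS        # analyze one second of signal at a time
STEP = FS // 4     # emit an updated value four times per second

def process_stream(samples, window=WINDOW, step=STEP):
    """Yield a feature (here: RMS amplitude) for each sliding window,
    as soon as enough samples have arrived -- no post-processing pass."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` samples
    for i, s in enumerate(samples):
        buf.append(s)
        if len(buf) == window and (i + 1) % step == 0:
            chunk = np.asarray(buf)
            yield float(np.sqrt(np.mean(chunk ** 2)))

# Simulated 2-second feed: features become available while data is
# still arriving, which is what distinguishes this from batch analysis.
rng = np.random.default_rng(1)
feed = rng.standard_normal(2 * FS)
features = list(process_stream(feed))
```

In a deployed device the loop would read from the EEG hardware instead of an array, but the windowing logic is the same.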

The full version of this article was published in Science Blog.