Understanding speech comprehension by grounding cortical responses in subcortical processing

Supervisors:

Dr Christoph Daube, School of Psychology & Neuroscience
Dr Fani Deligianni, School of Computing Science
Prof Marios Philiastides, School of Psychology & Neuroscience
Prof Lauren Hadley, University of Nottingham

Summary:

This project aims to create a more biologically realistic computational model of how the brain responds to speech. Current models struggle to disentangle responses to low-level acoustics from higher-level language processing, largely because they lack constraints from the brain’s early auditory pathway. Leveraging a recent breakthrough in recording subcortical auditory signals with electroencephalography (EEG), this research will combine simultaneous EEG and magnetoencephalography (MEG) recordings during ecologically valid natural story listening. This pairing combines the sensitivity of EEG to subcortical responses with the superior specificity of MEG for cortical responses, allowing these responses to be studied and interrelated in the same participants for the first time.

The study will first test whether deep subcortical signals are detectable with MEG alone, thereby contributing to a longstanding and increasingly controversial fundamental question. Irrespective of the outcome, the core goal is then to use models of the subcortical activity as a biologically plausible "frontend" for a new, superior benchmark model of cortical processing. This end-to-end model, refined with deep learning, will map the acoustic waveform to cortical activity via the subcortical stage. The outcome will be a central new tool for the field, providing a more rigorous, acoustically grounded yardstick against which to test theories of the neural basis of language comprehension.
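
To illustrate the envisaged two-stage mapping, the sketch below shows a minimal end-to-end encoding model in PyTorch: a "subcortical" convolutional frontend transforms the raw waveform, and a temporal readout predicts multi-channel cortical (MEG-like) activity. All module names, layer sizes, kernel widths and sensor counts here are illustrative assumptions, not the architecture the project will actually develop.

```python
# Minimal sketch of a two-stage encoding model: a "subcortical" frontend
# transforms the raw audio waveform, and a temporal-convolution readout
# predicts multi-channel cortical (MEG-like) activity. All architectural
# choices are illustrative assumptions, not the project's actual model.
import torch
import torch.nn as nn


class SubcorticalFrontend(nn.Module):
    """Convolutional stand-in for early auditory (brainstem-like) processing."""
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, n_features, kernel_size=64, stride=16, padding=24),
            nn.ReLU(),
            nn.Conv1d(n_features, n_features, kernel_size=16, stride=4, padding=6),
            nn.ReLU(),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples) -> (batch, n_features, frames)
        return self.net(waveform)


class CorticalReadout(nn.Module):
    """Temporal convolution mapping frontend features to sensor channels."""
    def __init__(self, n_features: int = 32, n_sensors: int = 248):
        super().__init__()
        self.readout = nn.Conv1d(n_features, n_sensors, kernel_size=9, padding=4)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.readout(features)


class EndToEndEncoder(nn.Module):
    """Acoustic waveform -> subcortical features -> predicted cortical responses."""
    def __init__(self):
        super().__init__()
        self.frontend = SubcorticalFrontend()
        self.cortex = CorticalReadout()

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        return self.cortex(self.frontend(waveform))


if __name__ == "__main__":
    model = EndToEndEncoder()
    audio = torch.randn(2, 1, 16000)   # two 1-second clips at an assumed 16 kHz
    predicted_meg = model(audio)       # shape: (2, 248, frames)
    print(predicted_meg.shape)
```

In the project's framing, such a frontend would first be constrained by the measured subcortical responses before the cortical stage is fitted; in this sketch both stages are left untrained purely for illustration.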