
Research themes

Hearing research at the MRC Cognition and Brain Sciences Unit


Listening in noisy situations by normal-hearing listeners and cochlear implant users

When you are trying to listen to someone in a crowded room, your brain has to perform several demanding tasks.

  • The outputs of the early frequency analyses performed in the two inner ears must be sorted, so that the frequency components arising from the target voice are grouped together.

  • The target voice must be tracked over time.

  • Decisions must be made on how to interpret missing data, such as when part of the speech is masked by an extraneous noise.

  • Attentional mechanisms must select the target voice for further processing.

  • Linguistic analyses must be performed on the selected voice.

Our CBU research programme studies all of these processes, focusing on identifying their neural basis and determining how they interact with each other. To do so, we combine traditional behavioural methods, such as those derived from psychophysics, with electrophysiological, computational, and neuroimaging techniques.

We are particularly interested in patients whose hearing has been restored surgically by either a cochlear implant (CI) or an auditory brainstem implant (ABI). Because these devices stimulate the auditory system directly with electrical pulses, they allow us both to gain new insights into how the auditory system works and to develop new methods for improving hearing in their users.
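To give a flavour of what "stimulating the auditory system with electrical pulses" involves: a CI processor typically splits sound into a small number of frequency channels and extracts each channel's slowly varying amplitude envelope, which then modulates the pulses delivered to the corresponding electrode. The sketch below is purely illustrative, not the Unit's or any manufacturer's actual processing strategy; the channel edges and smoothing cutoff are made-up values.

```python
import numpy as np

def ci_envelopes(signal, fs, edges):
    """Split `signal` into frequency channels and extract each channel's
    slowly varying amplitude envelope -- roughly the information a cochlear
    implant encodes as electrical pulse amplitudes (illustrative sketch)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum * ((freqs >= lo) & (freqs < hi))  # crude FFT bandpass
        channel = np.fft.irfft(band, n=len(signal))
        rectified = np.abs(channel)                       # full-wave rectify
        win = max(1, int(fs / 300))                       # ~300 Hz smoothing
        env = np.convolve(rectified, np.ones(win) / win, mode="same")
        envelopes.append(env)
    return np.array(envelopes)

# Demo: a 440 Hz tone should drive mainly the channel that contains 440 Hz.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
edges = [100, 300, 700, 1500, 3100, 6300]  # hypothetical channel edges (Hz)
envs = ci_envelopes(tone, fs, edges)
print(envs.mean(axis=1).argmax())  # -> 1, the 300-700 Hz channel
```

Real processors use causal filterbanks rather than a whole-signal FFT, but the channel-plus-envelope structure is the same, and it is this envelope code, rather than the original waveform, that CI users must listen through.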

Adaptive processes in speech and language

Human speech comprehension achieves unsurpassed accuracy and efficiency despite most of the speech that we hear every day being acoustically degraded or ambiguous, or indeed both. A key aim in our research is to characterize the computational and neural processes that underlie successful comprehension in the face of these challenges. Mechanisms of prediction, ambiguity resolution, learning and consolidation play a critical role in explaining the success of human communication. Understanding these processes in healthy adults will provide a critical foundation for understanding and ameliorating developmental disorders and acquired deficits that impact on speech, language and literacy.

Hearing research at the Computational Perception Group, Department of Engineering

The research carried out in the Computational Perception Group lies at the interface between three fields:

  1. computer hearing, which builds automatic systems for processing and understanding sounds;
  2. neuroscience, which is the scientific study of the nervous system;
  3. machine learning, which provides a theoretical framework for learning and making inferences from data.

The goal of the research is to develop systems that solve important problems, drawing inspiration from the brain. For example, we build systems that determine how many sound sources are present in an acoustic scene and what each individual source contributes. This work has medical and engineering applications, such as intelligent hearing aids and cochlear implants for deaf listeners. Importantly, the behaviour of these algorithms can also be compared with neural processing in the brain, in order to better understand what the brain is doing. We collaborate closely both with experimental neuroscience groups, who research the neural basis of hearing, hearing aids and cochlear implants, and with industrial partners such as Google.
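The source-separation problem described above can be illustrated with a toy example. The sketch below is not the group's actual method; it is a minimal FastICA-style blind source separation in numpy, with two made-up sources (a sine and a square wave, standing in for two "voices") mixed by a hypothetical mixing matrix into two "microphone" recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent toy sources mixed into two observed "microphone" signals.
n = 5000
t = np.linspace(0, 8, n)
S = np.vstack([np.sin(2 * t),              # source 1: sine
               np.sign(np.sin(3 * t))])    # source 2: square wave
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                 # hypothetical (unknown) mixing matrix
X = A @ S                                  # observed mixtures

# Whiten: decorrelate the mixtures and scale them to unit variance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA with a tanh nonlinearity and symmetric decorrelation.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    W_new = (G @ Xw.T) / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)        # W <- (W W^T)^(-1/2) W
    W = u @ vt

Y = W @ Xw                                 # recovered sources (up to sign/scale)
match = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
print(match.round(2))  # each true source should match one recovered
                       # component with correlation near 1
```

ICA assumes exactly as many microphones as statistically independent sources, which real acoustic scenes rarely satisfy; handling unknown source counts and overlapping spectra is part of what makes the group's research problem hard.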