Auditory Neuroscience & Speech (Miller)
The AUDITORY NEUROSCIENCE AND SPEECH RECOGNITION LAB (under the direction of Dr. Lee M. Miller) is dedicated to understanding the neural bases of auditory perception and speech recognition in human listeners. Our researchers use advanced non-invasive techniques to study attentive listening, including functional magnetic resonance imaging (fMRI), high-density electroencephalography (EEG), and neural network analysis. We study how different parts of the brain cooperate to achieve perception — especially in noisy environments or with hearing loss — and what happens when comprehension fails.
Most everyday environments (restaurants, meetings, sidewalks) are cluttered with distracting noise, making speech difficult for listeners to understand. Our research focuses on brain mechanisms that improve intelligibility, both in listeners with healthy hearing and in those who have experienced hearing loss. For instance, our brains use multisensory (auditory-visual) integration to combine information from a talker's voice with her mouth movements. Distracting echoes are continually and automatically suppressed so that we hear more clearly. Selective attention focuses on the desired talker, while our expectations help reduce ambiguities and "fill in the blanks." These studies help us understand the so-called "cocktail party effect": our extraordinary ability to communicate in noisy, real-world situations.
Hearing loss affects one out of ten people on the planet and constitutes a growing epidemic among older adults. Unfortunately, every individual's hearing loss is different, which makes it tremendously challenging to diagnose and treat effectively (e.g., with hearing aids). This variability stems not only from problems in the ear, but from how each individual's brain processes sound, especially complex sounds such as speech. Despite its importance, relatively little effort has been devoted to assessing the neural processing of speech from the ear to auditory cortex (the "higher" auditory brain); for this profound global health problem, no clinical tool exists. We are developing a new diagnostic approach to hearing loss, using electroencephalography (EEG) coupled with specialized speech sounds, that gives a "snapshot" of a listener's entire speech processing system. The goal is to guide treatment by rapidly identifying which parts of the system are working and which are not. Results from this research may lead to other practical solutions, including improved design of wearable and implanted hearing devices, better speech recovery after device fitting, and improved training in listening strategies.
Cochlear implants restore the sense of hearing when an individual's inner ear, or cochlea, does not function. This remarkable technology essentially bypasses the inner ear, conveying sound directly to the brain via electrical impulses. Hundreds of thousands of individuals now use cochlear implants, making them the most widespread and successful neural prosthetic to date. In a Center for Mind and Brain collaboration directed by Professor David Corina, we are studying how auditory (spoken language) and visual (sign language) exposure affects brain development in infants and preschool children with cochlear implants. This research will inform both clinical and educational practices with deaf children to optimize their language outcomes. Other projects in the Miller lab seek to "neuroengineer" systems that combine non-implanted technology, such as microphone arrays or handheld devices, with our lab's understanding of speech perception. The goal of this research is to improve comprehension through neurotechnology, especially in noise or with hearing loss.