Sound, Speech, and Music
Some of the most important social and cultural information enters the brain through the ears in the form of speech and music. Our research spans the breadth of auditory processing, from the neuroscience of sound perception through the emotional experience of music.
Basic Sound Processing
The rich auditory environment is condensed into a single signal at each ear. How do we recover information about individual auditory objects from this intermixed signal, localizing each object and preventing interference from echoes? Research in the Auditory Neuroscience and Speech Recognition Laboratory uses a combination of fMRI, EEG/ERP, and psychophysical methods to answer these fundamental questions, and combines this knowledge with state-of-the-art technologies to create new hearing assistance devices for individuals with hearing loss.
Integration Across Sensory Modalities
In the natural environment, sensory inputs are produced by a given object or person in multiple modalities. We may both see and hear a spoon fall to the floor. We may both see and hear a person speaking. The ability to integrate these very different types of information across sensory modalities plays a key role in our experience of the world. Research in the Saron Lab focuses on the basic mechanisms of multimodal integration, and research in the Auditory Neuroscience and Speech Recognition Laboratory extends this into the domain of speech perception.