Speech Perception & Word Recognition
Perceiving meaning in speech is one of the most challenging problems in cognitive science. Even something as simple as distinguishing “b” from “p” requires listeners to combine dozens of sources of information, and these cues are heavily context-dependent and noisy. All of this must be done five to six times a second, in real time, as the auditory signal rolls in.
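To make the idea of cue combination concrete, the sketch below shows one simple way two such cues, voice onset time (VOT) and onset pitch (F0), might be weighted and combined into a category decision. It is purely illustrative: the boundary values, the cue weights, and the `categorize` function are assumptions made for the example, not the lab's model or measured parameters.

```python
# Toy sketch of weighted cue combination for a /b/ vs. /p/ decision.
# All numbers (25 ms VOT boundary, cue weights, talker F0 expectation)
# are illustrative assumptions, not results from the MACLab.
import math

def categorize(vot_ms, f0_hz, talker_mean_f0=200.0):
    """Return an illustrative P(/p/) from a weighted combination of two cues.

    F0 is interpreted relative to an expectation for the talker:
    higher-than-expected onset pitch weakly signals voicelessness.
    """
    # Express each cue as evidence relative to an assumed reference point.
    vot_evidence = (vot_ms - 25.0) / 10.0          # assumed ~25 ms boundary
    f0_evidence = (f0_hz - talker_mean_f0) / 30.0  # pitch relative to the talker
    # Weighted sum: VOT is treated as the primary cue, F0 as secondary.
    z = 1.5 * vot_evidence + 0.4 * f0_evidence
    return 1.0 / (1.0 + math.exp(-z))              # squash to a probability

for vot, f0 in [(10, 190), (25, 230), (45, 210)]:
    print(f"VOT={vot:2d} ms, F0={f0} Hz -> P(/p/) = {categorize(vot, f0):.2f}")
```

Even this toy version shows why the problem is hard: an ambiguous VOT near the boundary leaves the decision hanging on secondary cues like F0, and those cues only make sense relative to expectations about the talker and context.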
In a nutshell, speech is an immensely complex acoustic input, yet most people (in contrast to machines) make sense of it remarkably well. Not everyone does this so easily, however: understanding speech perception may offer important insight into the problems faced by people with hearing or language difficulties.
At the MACLab, one of our primary research foci is how listeners perceive speech. Some of the major questions we ask are:
- What kinds of information do listeners use, and how are these cues combined?
- What kinds of units are used in speech sound processing?
- What role do high-level sources of information play in speech perception, and how are they integrated with low-level acoustic information?
- How do speech perception processes unfold in real time?
- What neural correlates of speech perception processes and units can we find evidence for?
- How do these abilities develop? And how do they change in populations facing language or hearing difficulties?
By using a variety of experimental designs and technological tools, we aim to approach the study of speech perception from multiple angles. From plain old button-pushing behavioral studies to eye-tracking, EEG, and intracranial recording, our ultimate goal is a comprehensive, mechanistic description of speech perception, backed by neural evidence.
Recent Publications:
Toscano, J., & McMurray, B. (in press). Online integration of acoustic cues to voicing: Natural vs. synthetic speech. Attention, Perception & Psychophysics.
McMurray, B., & Jongman, A. (2011). What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations. Psychological Review, 118(2), 219-246.
Apfelbaum, K., Blumstein, S., & McMurray, B. (2011). Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems. Psychonomic Bulletin & Review, 18(1), 141-149.
Toscano, J., McMurray, B., Dennhardt, J., & Luck, S. (2010). Continuous perception and graded categorization: Electrophysiological evidence for a linear relationship between the acoustic signal and perceptual encoding of speech. Psychological Science, 21(10), 1532-1540.
McMurray, B., Samelson, V., Lee, S., & Tomblin, J. B. (2010). Individual differences in online spoken word recognition: Implications for SLI. Cognitive Psychology, 60(1), 1-39.