Modeling Long-Distance Cue Integration in Spoken Word Recognition

Wednesday Bushong, T. Florian Jaeger


Abstract
Cues to linguistic categories are distributed across the speech signal. Optimal categorization thus requires that listeners maintain gradient representations of incoming input in order to integrate that information with later cues. There is now evidence that listeners can and do integrate cues that occur far apart in time. Computational models of this integration have, however, been lacking. We take a first step toward addressing this gap by mathematically formalizing four models of how listeners may maintain and use cue information during spoken language understanding, and we test them against two perception experiments. In one experiment, we find support for rational integration of cues at long distances. In a second, more memory- and attention-taxing experiment, we find evidence in favor of a switching model that avoids maintaining detailed representations of cues in memory. These results are a first step toward understanding what kinds of mechanisms listeners use for cue integration under different memory and attentional constraints.
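The "rational integration" the abstract refers to can be illustrated with a minimal sketch. Assuming conditionally independent cues (a common simplification in Bayesian models of perception, not a claim about the paper's exact formalization), the log-odds contributed by each cue simply sum, and the posterior probability of a category is the logistic of that sum. The function names and example values below are illustrative, not taken from the paper.

```python
import math

def logistic(x):
    """Standard logistic function, mapping log-odds to probability."""
    return 1.0 / (1.0 + math.exp(-x))

def integrate_cues(log_likelihood_ratios, prior_log_odds=0.0):
    """Rational (Bayesian) cue integration under conditional independence:
    each cue contributes a log-likelihood ratio for one category over the
    other, and these contributions add in log-odds space."""
    return logistic(prior_log_odds + sum(log_likelihood_ratios))

# Hypothetical example: an early acoustic cue weakly favoring one word
# (log-likelihood ratio 0.5) and a much later context cue strongly
# favoring it (1.5). Integration yields logistic(2.0) ≈ 0.88.
p_word = integrate_cues([0.5, 1.5])
```

Under this view, a "switching" model would instead commit to whichever category a single (e.g., the most recent or most reliable) cue supports, rather than summing evidence across all cues held in memory.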
Anthology ID:
W19-2907
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Venue:
CMCL
Publisher:
Association for Computational Linguistics
Pages:
62–70
URL:
https://aclanthology.org/W19-2907
DOI:
10.18653/v1/W19-2907
Cite (ACL):
Wednesday Bushong and T. Florian Jaeger. 2019. Modeling Long-Distance Cue Integration in Spoken Word Recognition. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 62–70, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Modeling Long-Distance Cue Integration in Spoken Word Recognition (Bushong & Jaeger, CMCL 2019)
PDF:
https://aclanthology.org/W19-2907.pdf