On the Difficulty of Segmenting Words with Attention

Ramon Sanabria, Hao Tang, Sharon Goldwater


Abstract
Word segmentation, the problem of finding word boundaries in speech, is of interest for a range of tasks. Previous papers have suggested that for sequence-to-sequence models trained on tasks such as speech translation or speech recognition, attention can be used to locate and segment the words. We show, however, that even on monolingual data this approach is brittle. In our experiments with different input types, data sizes, and segmentation algorithms, only models trained to predict phones from words succeed in the task. Models trained to predict words from either phones or speech (i.e., the opposite direction, which is needed to generalize to new data) yield much worse results, suggesting that attention-based segmentation is only useful in limited scenarios.
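
For readers unfamiliar with the approach being tested, attention-based segmentation derives word boundaries from the encoder-decoder attention matrix of a trained sequence-to-sequence model. The sketch below is a minimal, hypothetical illustration of one common heuristic (attach each output word to its most-attended input frame, then place a boundary midway between consecutive peaks); it is not the authors' exact procedure, and the function name and midpoint rule are assumptions for illustration only.

import numpy as np

def boundaries_from_attention(attn):
    # attn: shape (num_output_words, num_input_frames); each row is one
    # output word's attention distribution over the input frames.
    peaks = np.sort(attn.argmax(axis=1))      # most-attended frame per word
    # Place a boundary midway between consecutive attention peaks.
    return [(a + b) // 2 for a, b in zip(peaks[:-1], peaks[1:])]

# Toy example: 3 output words attending over 10 input frames.
rng = np.random.default_rng(0)
attn = rng.random((3, 10))
attn /= attn.sum(axis=1, keepdims=True)       # normalize rows to distributions
print(boundaries_from_attention(attn))
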
Anthology ID: 2021.insights-1.11
Volume: Proceedings of the Second Workshop on Insights from Negative Results in NLP
Month: November
Year: 2021
Address: Online and Punta Cana, Dominican Republic
Editors: João Sedoc, Anna Rogers, Anna Rumshisky, Shabnam Tafreshi
Venue: insights
Publisher: Association for Computational Linguistics
Pages: 67–73
URL: https://aclanthology.org/2021.insights-1.11
DOI: 10.18653/v1/2021.insights-1.11
Cite (ACL):
Ramon Sanabria, Hao Tang, and Sharon Goldwater. 2021. On the Difficulty of Segmenting Words with Attention. In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 67–73, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
On the Difficulty of Segmenting Words with Attention (Sanabria et al., insights 2021)
PDF: https://aclanthology.org/2021.insights-1.11.pdf
Video: https://aclanthology.org/2021.insights-1.11.mp4
Data: MuST-C