Textual Supervision for Visually Grounded Spoken Language Understanding

Bertrand Higy, Desmond Elliott, Grzegorz Chrupała


Abstract
Visually grounded models of spoken language understanding extract semantic information directly from speech, without relying on transcriptions. This is useful for low-resource languages, where transcriptions can be expensive or impossible to obtain. Recent work showed that these models can be improved if transcriptions are available at training time. However, it is not clear how an end-to-end approach compares to a traditional pipeline-based approach when one has access to transcriptions. Comparing different strategies, we find that the pipeline approach works better when enough text is available. With low-resource languages in mind, we also show that translations can be effectively used in place of transcriptions, but more data is needed to obtain similar results.
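
The abstract contrasts pipeline and end-to-end strategies for using text as extra supervision in a visually grounded speech model. The PyTorch sketch below illustrates the general end-to-end idea only: a shared speech encoder trained with a speech-image retrieval loss plus an auxiliary speech-text matching loss. The encoders, dimensions, and the simple max-margin objective are illustrative assumptions, not the authors' exact architecture or training setup (see the linked repository for that).

import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_loss(a, b, margin=0.2):
    """Bidirectional max-margin retrieval loss over a batch of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    scores = a @ b.t()                      # (batch, batch) similarity matrix
    diag = scores.diag().unsqueeze(1)       # similarities of matching pairs
    cost_a = (margin + scores - diag).clamp(min=0)      # a -> b direction
    cost_b = (margin + scores - diag.t()).clamp(min=0)  # b -> a direction
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_a.masked_fill(mask, 0).mean() + cost_b.masked_fill(mask, 0).mean()


class GroundedSpeechModel(nn.Module):
    """Toy encoders projecting speech, image, and text into one joint space."""

    def __init__(self, speech_dim=39, image_dim=2048, vocab_size=5000, joint_dim=512):
        super().__init__()
        self.speech_enc = nn.GRU(speech_dim, joint_dim, batch_first=True)
        self.image_proj = nn.Linear(image_dim, joint_dim)
        self.text_emb = nn.Embedding(vocab_size, joint_dim)
        self.text_enc = nn.GRU(joint_dim, joint_dim, batch_first=True)

    def forward(self, speech, image, text):
        _, h_s = self.speech_enc(speech)            # last hidden state as utterance embedding
        _, h_t = self.text_enc(self.text_emb(text))
        return h_s[-1], self.image_proj(image), h_t[-1]


# Usage with random stand-in data (MFCC-like speech frames, CNN image features, token ids).
model = GroundedSpeechModel()
speech = torch.randn(8, 200, 39)
image = torch.randn(8, 2048)
text = torch.randint(0, 5000, (8, 20))
s, v, t = model(speech, image, text)
loss = contrastive_loss(s, v) + contrastive_loss(s, t)  # image grounding + textual supervision
loss.backward()

In the pipeline alternative discussed in the abstract, the speech would instead first be transcribed by an ASR system and the resulting text fed to a text-image retrieval model; the trade-off the paper studies is how much transcribed (or translated) data each strategy needs.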
Anthology ID:
2020.findings-emnlp.244
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2698–2709
URL:
https://aclanthology.org/2020.findings-emnlp.244
DOI:
10.18653/v1/2020.findings-emnlp.244
Cite (ACL):
Bertrand Higy, Desmond Elliott, and Grzegorz Chrupała. 2020. Textual Supervision for Visually Grounded Spoken Language Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2698–2709, Online. Association for Computational Linguistics.
Cite (Informal):
Textual Supervision for Visually Grounded Spoken Language Understanding (Higy et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.244.pdf
Code
bhigy/textual-supervision
Data
Flickr Audio Caption Corpus, Flickr30k