Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering

Chenyu You, Nuo Chen, Yuexian Zou


Abstract
Spoken question answering (SQA) requires fine-grained understanding of both spoken documents and questions for optimal answer prediction. In this paper, we propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage. In the self-supervised stage, we propose three auxiliary self-supervised tasks, including utterance restoration, utterance insertion, and question discrimination, and jointly train the model to capture consistency and coherence among spoken documents without any additional data or annotations. We then propose to learn noise-invariant utterance representations via a contrastive objective, adopting multiple augmentation strategies including span deletion and span substitution. In addition, we design a Temporal-Alignment attention to semantically align speech and text clues in the learned common space, which benefits the SQA tasks. In this way, the training schemes more effectively guide the model to predict proper answers. Experimental results show that our model achieves state-of-the-art results on three SQA benchmarks. Our code will be publicly available after publication.
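
The sketch below illustrates, under assumptions, the contrastive stage described in the abstract: two augmented views of an utterance are produced via span deletion and span substitution, and their encoded representations are pulled together with an InfoNCE-style objective. All function names, signatures, and hyperparameters (e.g. `augment_span_delete`, `info_nce_loss`, the temperature value) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of contrastive utterance representation learning with
# span-deletion / span-substitution augmentations (assumed details).
import random
import torch
import torch.nn.functional as F

def augment_span_delete(token_ids, max_span=3):
    """Delete one random contiguous span of tokens (assumed augmentation)."""
    if len(token_ids) <= max_span:
        return list(token_ids)
    span = random.randint(1, max_span)
    start = random.randint(0, len(token_ids) - span)
    return token_ids[:start] + token_ids[start + span:]

def augment_span_substitute(token_ids, vocab_size, max_span=3):
    """Replace one random contiguous span with random tokens (assumed)."""
    out = list(token_ids)
    span = min(max_span, len(out))
    start = random.randint(0, len(out) - span)
    for i in range(start, start + span):
        out[i] = random.randrange(vocab_size)
    return out

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of embeddings.

    z1, z2: (batch, dim) representations of two augmented views of the same
    utterances; matched rows are positives, all other rows serve as negatives.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```

In practice, each augmented view would be encoded by the shared utterance encoder before the loss is computed; the noise-invariance described in the abstract comes from forcing both corrupted views of an utterance to map to nearby points in the representation space.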
Anthology ID:
2021.findings-emnlp.3
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
28–39
URL:
https://aclanthology.org/2021.findings-emnlp.3
DOI:
10.18653/v1/2021.findings-emnlp.3
Cite (ACL):
Chenyu You, Nuo Chen, and Yuexian Zou. 2021. Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 28–39, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering (You et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.3.pdf
Video:
 https://aclanthology.org/2021.findings-emnlp.3.mp4
Data
SQuAD, Spoken-SQuAD