What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study

Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa


Abstract
Language models (LMs) have been ubiquitously leveraged in various tasks, including spoken language understanding (SLU). Spoken language requires careful understanding of speaker interactions, dialog states, and speech-induced multimodal behaviors to generate a meaningful representation of the conversation. In this work, we propose to dissect SLU into three representative properties: conversational (disfluency, pause, overtalk), channel (speaker-type, turn-tasks), and ASR (insertion, deletion, substitution). We probe BERT-based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to understand these multifarious properties in the absence of any speech cues. Empirical results indicate that the LM is surprisingly good at capturing conversational properties such as pause prediction and overtalk detection from lexical tokens. On the downside, the LM scores low on turn-tasks and ASR error prediction. Additionally, pre-training the LM on spoken transcripts restrains its linguistic understanding. Finally, we establish the efficacy and transferability of the studied properties on two benchmark datasets: Switchboard Dialog Act and Disfluency.
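For readers unfamiliar with the probing paradigm the abstract refers to, the idea is to freeze the pre-trained LM, extract utterance representations, and train a lightweight classifier to predict a target property; if the probe scores well, the property is decodable from the frozen representations. The snippet below is a minimal sketch of that setup, not the authors' code: the model checkpoint, the toy disfluency labels, and the mean-pooling choice are all assumptions for illustration.

```python
# Minimal probing sketch on frozen BERT features (illustrative only).
# Toy data; the paper's actual tasks, labels, and datasets differ.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # the LM stays frozen; only the probe is trained

# Hypothetical utterances labeled for a binary property
# (e.g., 1 = contains a disfluency, 0 = fluent).
utterances = ["i uh i mean we could go", "we could go tomorrow"]
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(utterances, padding=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state          # (batch, seq, dim)
    mask = enc["attention_mask"].unsqueeze(-1)       # (batch, seq, 1)
    # Mean-pool over real tokens to get a fixed-size utterance embedding.
    feats = (hidden * mask).sum(1) / mask.sum(1)     # (batch, dim)

# A linear probe: high accuracy suggests the property is linearly
# decodable from the frozen LM representations.
probe = LogisticRegression(max_iter=1000).fit(feats.numpy(), labels)
print(probe.score(feats.numpy(), labels))
```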
Anthology ID:
2021.blackboxnlp-1.25
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
322–336
URL:
https://aclanthology.org/2021.blackboxnlp-1.25
DOI:
10.18653/v1/2021.blackboxnlp-1.25
Cite (ACL):
Ayush Kumar, Mukuntha Narayanan Sundararaman, and Jithendra Vepa. 2021. What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 322–336, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study (Kumar et al., BlackboxNLP 2021)
PDF:
https://aclanthology.org/2021.blackboxnlp-1.25.pdf