Improving Punctuation Restoration for Speech Transcripts via External Data

Xue-Yong Fu, Cheng Chen, Md Tahmid Rahman Laskar, Shashi Bhushan, Simon Corston-Oliver


Abstract
Automatic Speech Recognition (ASR) systems generally do not produce punctuated transcripts. To make transcripts more readable and to match the input format expected by downstream language models, punctuation marks must be restored. In this paper, we tackle the punctuation restoration problem specifically for noisy text (e.g., phone-conversation transcripts). To leverage available written-text datasets, we introduce a data sampling technique based on an n-gram language model that selects additional training examples similar to our in-domain data. Moreover, we propose a two-stage fine-tuning approach for BERT-based models that uses the sampled external data as well as our in-domain dataset. Extensive experiments show that the proposed approach outperforms the baseline by 1.12% in F1 score.
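The n-gram-based data sampling described in the abstract could be sketched as follows. This is only an illustrative assumption, not the authors' implementation: the paper does not specify the n-gram order, smoothing, or selection criterion here, so this sketch trains a simple add-one-smoothed bigram language model on in-domain sentences and keeps the external sentences with the lowest perplexity (i.e., most in-domain-like).

```python
import math
from collections import Counter

def train_bigram_lm(sentences):
    """Train an add-one-smoothed bigram LM on tokenized in-domain sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams, len(unigrams)

def perplexity(tokens, unigrams, bigrams, vocab_size):
    """Per-token perplexity under the bigram LM (lower = more in-domain-like)."""
    padded = ["<s>"] + tokens + ["</s>"]
    log_prob = 0.0
    for prev, cur in zip(padded, padded[1:]):
        # Add-one (Laplace) smoothing handles unseen words and bigrams.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(padded) - 1))

def sample_similar(external, in_domain, keep_ratio=0.5):
    """Keep the external sentences most similar to in-domain data by LM score."""
    lm = train_bigram_lm(in_domain)
    ranked = sorted(external, key=lambda s: perplexity(s, *lm))
    return ranked[: max(1, int(len(ranked) * keep_ratio))]
```

For example, with conversational in-domain data, a conversational external sentence would score a lower perplexity than a newswire one and therefore be retained for the first fine-tuning stage.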
Anthology ID: 2021.wnut-1.19
Volume: Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Month: November
Year: 2021
Address: Online
Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue: WNUT
Publisher: Association for Computational Linguistics
Pages: 168–174
URL: https://aclanthology.org/2021.wnut-1.19
DOI: 10.18653/v1/2021.wnut-1.19
Cite (ACL): Xue-Yong Fu, Cheng Chen, Md Tahmid Rahman Laskar, Shashi Bhushan, and Simon Corston-Oliver. 2021. Improving Punctuation Restoration for Speech Transcripts via External Data. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 168–174, Online. Association for Computational Linguistics.
Cite (Informal): Improving Punctuation Restoration for Speech Transcripts via External Data (Fu et al., WNUT 2021)
PDF: https://aclanthology.org/2021.wnut-1.19.pdf