End-to-end ASR to jointly predict transcriptions and linguistic annotations

Motoi Omachi, Yuya Fujita, Shinji Watanabe, Matthew Wiesner


Abstract
We propose a Transformer-based sequence-to-sequence model for automatic speech recognition (ASR) capable of simultaneously transcribing and annotating audio with linguistic information such as phonemic transcripts or part-of-speech (POS) tags. Since linguistic information is important in natural language processing (NLP), the proposed ASR system is especially useful for speech interface applications, including spoken dialogue systems and speech translation, which combine ASR and NLP. To produce linguistic annotations, we train the ASR system using modified training targets: each grapheme or multi-grapheme unit in the target transcript is followed by an aligned phoneme sequence and/or POS tag. Since our method has access to the underlying audio data, we can estimate linguistic annotations more accurately than pipeline approaches in which NLP-based methods are applied to a hypothesized ASR transcript. Experimental results on Japanese and English datasets show that the proposed ASR system is capable of simultaneously producing high-quality transcriptions and linguistic annotations.
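
As a rough illustration of the target-interleaving idea described in the abstract, the sketch below constructs a joint training target in which each word-level unit is followed by a POS tag and its aligned phoneme sequence. This is a minimal sketch under assumed conventions: the tag markers ("<pos:...>", "<ph:...>"), the build_joint_target helper, and the toy lexicon are hypothetical illustrations, not the authors' implementation.

# Minimal sketch (Python): interleave each word with a POS tag and its
# aligned phoneme sequence to form a joint ASR training target.
# Marker tokens and the lexicon below are hypothetical.

def build_joint_target(words, pos_tags, phoneme_lexicon):
    """Return a single target string: word, POS marker, then phoneme markers."""
    target = []
    for word, pos in zip(words, pos_tags):
        target.append(word)
        target.append(f"<pos:{pos}>")
        # Append the aligned phoneme sequence for this word, if known.
        for phone in phoneme_lexicon.get(word, []):
            target.append(f"<ph:{phone}>")
    return " ".join(target)

# Toy English example with a tiny pronunciation lexicon.
words = ["the", "cat", "sat"]
pos_tags = ["DET", "NOUN", "VERB"]
lexicon = {"the": ["DH", "AH"], "cat": ["K", "AE", "T"], "sat": ["S", "AE", "T"]}

print(build_joint_target(words, pos_tags, lexicon))
# -> the <pos:DET> <ph:DH> <ph:AH> cat <pos:NOUN> <ph:K> <ph:AE> <ph:T> sat <pos:VERB> <ph:S> <ph:AE> <ph:T>

In a setup like this, the ASR decoder is trained to emit the interleaved sequence directly, so transcription and annotation are predicted jointly rather than by a downstream NLP pipeline.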
Anthology ID:
2021.naacl-main.149
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1861–1871
URL:
https://aclanthology.org/2021.naacl-main.149
DOI:
10.18653/v1/2021.naacl-main.149
Cite (ACL):
Motoi Omachi, Yuya Fujita, Shinji Watanabe, and Matthew Wiesner. 2021. End-to-end ASR to jointly predict transcriptions and linguistic annotations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1861–1871, Online. Association for Computational Linguistics.
Cite (Informal):
End-to-end ASR to jointly predict transcriptions and linguistic annotations (Omachi et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.149.pdf
Video:
https://aclanthology.org/2021.naacl-main.149.mp4