How Might We Create Better Benchmarks for Speech Recognition?

Alëna Aksënova, Daan van Esch, James Flynn, Pavel Golik


Abstract
The applications of automatic speech recognition (ASR) systems are proliferating, in part due to recent significant quality improvements. However, as recent work indicates, even state-of-the-art speech recognition systems (some of which deliver impressive benchmark results) struggle to generalize across use cases. We review relevant work and, hoping to inform future benchmark development, outline a taxonomy of speech recognition use cases proposed for the next generation of ASR benchmarks. We also survey work on metrics that go beyond the de facto standard Word Error Rate (WER), and introduce a versatile framework designed to describe interactions between linguistic variation and ASR performance metrics.
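For readers unfamiliar with the WER metric named in the abstract, here is a minimal sketch of its standard computation: a word-level edit-distance alignment, where WER = (S + D + I) / N for substitution, deletion, and insertion counts S, D, and I over N reference words. The function name and example strings are illustrative, not taken from the paper.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER as word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] is the edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match or substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Example: one substitution over four reference words gives WER = 0.25.
print(word_error_rate("the cat sat down", "the cat sat dawn"))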
Anthology ID:
2021.bppf-1.4
Volume:
Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future
Month:
Aug
Year:
2021
Address:
Online
Editors:
Kenneth Church, Mark Liberman, Valia Kordoni
Venue:
BPPF
Publisher:
Association for Computational Linguistics
Pages:
22–34
URL:
https://aclanthology.org/2021.bppf-1.4
DOI:
10.18653/v1/2021.bppf-1.4
Cite (ACL):
Alëna Aksënova, Daan van Esch, James Flynn, and Pavel Golik. 2021. How Might We Create Better Benchmarks for Speech Recognition?. In Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future, pages 22–34, Online. Association for Computational Linguistics.
Cite (Informal):
How Might We Create Better Benchmarks for Speech Recognition? (Aksënova et al., BPPF 2021)
PDF:
https://aclanthology.org/2021.bppf-1.4.pdf
Data
Common Voice
LibriSpeech