2024
Massive End-to-end Speech Recognition Models with Time Reduction
Weiran Wang | Rohit Prabhavalkar | Haozhe Shan | Zhong Meng | Dongseong Hwang | Qiujia Li | Khe Chai Sim | Bo Li | James Qin | Xingyu Cai | Adam Stooke | Chengjian Zheng | Yanzhang He | Tara Sainath | Pedro Moreno Mengibar
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We investigate massive end-to-end automatic speech recognition (ASR) models with efficiency improvements achieved by time reduction. The encoders of our models use the neural architecture of Google’s universal speech model (USM), with additional funnel pooling layers to significantly reduce the frame rate and speed up training and inference. We also explore several practical methods to mitigate the potential accuracy loss due to time reduction while retaining most of the efficiency gain. Our methods are demonstrated to work with both Connectionist Temporal Classification (CTC) and RNN-Transducer (RNN-T), with up to 2B model parameters, across two domains. For a large-scale voice search recognition task, we perform extensive studies on vocabulary size, time reduction strategy, and generalization performance on long-form test sets, and show that a 900M RNN-T is very tolerant of severe time reduction, with an encoder output frame rate as low as 640ms. We also provide ablation studies on the Librispeech benchmark for important training hyperparameters and architecture designs, training 600M RNN-T models at a frame rate of 160ms.
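As a rough illustration of the time-reduction idea in this abstract, the sketch below mean-pools encoder frames along the time axis to cut the frame rate. The function name funnel_pool, the fixed stride, and the use of mean pooling are illustrative assumptions; the paper's actual USM encoder and funnel pooling configuration are not reproduced here.

# Illustrative sketch only: fixed-stride mean pooling stands in for the
# paper's funnel pooling layers inside the encoder stack.
import numpy as np

def funnel_pool(frames: np.ndarray, stride: int) -> np.ndarray:
    # frames: [time, dim] encoder activations; returns [ceil(time/stride), dim].
    t, d = frames.shape
    pad = (-t) % stride  # zero-pad so the time axis divides evenly by stride
    if pad:
        frames = np.concatenate([frames, np.zeros((pad, d))], axis=0)
    return frames.reshape(-1, stride, d).mean(axis=1)

# Example (hypothetical shapes): pooling 80ms frames with stride 8 yields one
# output every 640ms, the most aggressive frame rate the abstract reports.
x = np.random.randn(100, 512)   # 100 frames, 512-dim activations
y = funnel_pool(x, stride=8)    # 13 pooled frames
print(x.shape, "->", y.shape)   # (100, 512) -> (13, 512)

Each pooled frame costs the downstream layers and the decoder roughly one eighth as many steps, which is where the training and inference speedup comes from.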
Deferred NAM: Low-latency Top-K Context Injection via Deferred Context Encoding for Non-Streaming ASR
Zelin Wu | Gan Song | Christopher Li | Pat Rondon | Zhong Meng | Xavier Velez | Weiran Wang | Diamantino Caseiro | Golan Pundak | Tsendsuren Munkhdalai | Angad Chandorkar | Rohit Prabhavalkar
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)
Contextual biasing enables speech recognizers to transcribe important phrases in the speaker’s context, such as contact names, even if they are rare in, or absent from, the training data. Attention-based biasing is a leading approach that allows full end-to-end co-training of the recognizer and biasing system and requires no separate inference-time components. Such biasers typically consist of a context encoder; followed by a context filter, which narrows down the context to apply, improving per-step inference time; and, finally, context application via cross-attention. Though much work has gone into optimizing per-frame performance, the context encoder is at least as important: recognition cannot begin before context encoding ends. Here, we show that the lightweight phrase selection pass can be moved before context encoding, resulting in a speedup of up to 16.1 times and enabling biasing to scale to 20K phrases with a maximum pre-decoding delay under 33ms. With the addition of phrase- and wordpiece-level cross-entropy losses, our technique also achieves up to a 37.5% relative WER reduction over the baseline without the losses and lightweight phrase selection pass.
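A minimal sketch of the deferral idea described above: run a cheap selection pass over all candidate phrases first, then pay the cost of the heavyweight context encoder only for the top-K survivors. The dot-product scorer, embedding shapes, and encode_phrase callable are hypothetical stand-ins for the paper's learned components.

# Minimal sketch: cheap top-k phrase selection before expensive encoding.
# The scorer and encoder here are hypothetical, not the paper's modules.
import numpy as np

def select_then_encode(query, phrase_keys, phrases, encode_phrase, k=16):
    # query: [dim] summary vector; phrase_keys: [num_phrases, dim] cheap
    # per-phrase key embeddings used only for selection.
    scores = phrase_keys @ query                 # one dot product per phrase
    top = np.argsort(scores)[-k:][::-1]          # indices of the k best phrases
    # Heavyweight context encoding is deferred to just the k selected
    # phrases rather than all (possibly 20K) candidates.
    return [(phrases[i], encode_phrase(phrases[i])) for i in top]

Because only k encoder calls are made no matter how many candidate phrases are supplied, selection cost grows with the candidate set while encoding cost stays fixed, which is what lets biasing scale to large phrase lists with a small pre-decoding delay.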
2010
Investigations into the Crandem Approach to Word Recognition
Rohit Prabhavalkar | Preethi Jyothi | William Hartmann | Jeremy Morris | Eric Fosler-Lussier
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics