2021
FastSeq: Make Sequence Generation Faster
Yu Yan | Fei Hu | Jiusheng Chen | Nikhil Bhendawade | Ting Ye | Yeyun Gong | Nan Duan | Desheng Cui | Bingyu Chi | Ruofei Zhang
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
Transformer-based models have made tremendous impacts in natural language generation. However, the inference speed is a bottleneck due to the large model size and the intensive computation involved in the auto-regressive decoding process. We develop FastSeq, a framework to accelerate sequence generation without accuracy loss. The proposed optimization techniques include an attention cache optimization, an efficient algorithm for detecting repeated n-grams, and an asynchronous generation pipeline with parallel I/O. These optimizations are general enough to be applicable to Transformer-based models (e.g., T5, GPT2, and UniLM). Our benchmark results on a set of widely used and diverse models demonstrate a 4-9x inference speed gain. Additionally, FastSeq is easy to use with a simple one-line code change. The source code is available at https://github.com/microsoft/fastseq.
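To illustrate the kind of decoding-time check the abstract refers to, the sketch below shows a standard (unoptimized) repeated n-gram blocking routine of the sort used in beam search; it is not FastSeq's optimized algorithm, and the function and variable names are illustrative rather than taken from the FastSeq code base.

```python
# Illustrative sketch only: the conventional way decoders block repeated
# n-grams. FastSeq's contribution is a faster algorithm for this check;
# the names below are hypothetical, not FastSeq APIs.
from typing import List


def banned_next_tokens(generated: List[int], ngram_size: int) -> List[int]:
    """Return token ids that would complete an n-gram already present
    in `generated`, so the decoder can mask them out."""
    if len(generated) + 1 < ngram_size:
        return []
    # Map each (n-1)-token prefix to the tokens that have followed it so far.
    prefix_to_next = {}
    for i in range(len(generated) - ngram_size + 1):
        prefix = tuple(generated[i:i + ngram_size - 1])
        prefix_to_next.setdefault(prefix, set()).add(generated[i + ngram_size - 1])
    # Tokens following the current trailing prefix would repeat an n-gram.
    current_prefix = tuple(generated[len(generated) - ngram_size + 1:])
    return sorted(prefix_to_next.get(current_prefix, set()))


# Example: with ngram_size=3, the sequence below already contains (5, 6, 7),
# so 7 is banned after the trailing prefix (5, 6).
print(banned_next_tokens([5, 6, 7, 8, 5, 6], ngram_size=3))  # -> [7]
```

As for the "one-line code change" mentioned in the abstract, the repository linked above describes enabling FastSeq by importing it ahead of the underlying generation toolkit; see the repository for the exact usage.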