Haotang Deng
2020
FastBERT: a Self-distilling BERT with Adaptive Inference Time
Weijie Liu | Peng Zhou | Zhiruo Wang | Zhe Zhao | Haotang Deng | Qi Ju
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in practical scenarios, since such heavy models can hardly be deployed with limited resources. To improve their efficiency while preserving model performance, we propose FastBERT, a novel speed-tunable model with adaptive inference time. Its inference speed can be flexibly adjusted under varying demands, while redundant computation on easy samples is avoided. Moreover, the model adopts a unique self-distillation mechanism at fine-tuning, further improving computational efficiency with minimal loss in performance. Our model achieves promising results on twelve English and Chinese datasets, speeding up inference by a factor of 1 to 12 over BERT depending on the speedup threshold chosen for the speed-performance tradeoff.
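The adaptive inference described in the abstract can be pictured as an early-exit loop: after each transformer layer, a lightweight student classifier makes a prediction, and if its uncertainty (e.g., normalized entropy) falls below a user-chosen speed threshold, the sample exits without running the remaining layers. The following is a minimal illustrative sketch, not the authors' released implementation; `layers`, `classifiers`, and `speed_threshold` are hypothetical objects standing in for the model's components.

```python
import torch

def adaptive_inference(hidden, layers, classifiers, speed_threshold):
    """Illustrative early-exit loop in the spirit of FastBERT (hypothetical API).

    hidden:          input hidden states, shape (1, seq_len, dim)
    layers:          list of transformer layer modules
    classifiers:     one lightweight student classifier per layer,
                     mapping hidden states to class logits of shape (1, num_classes)
    speed_threshold: larger values exit earlier (faster, potentially less accurate)
    """
    for layer, clf in zip(layers, classifiers):
        hidden = layer(hidden)                       # run one transformer layer
        probs = torch.softmax(clf(hidden), dim=-1)   # student prediction
        # Normalized entropy as the uncertainty measure.
        entropy = -(probs * probs.log()).sum(-1) / torch.log(
            torch.tensor(float(probs.size(-1))))
        if entropy.item() < speed_threshold:         # confident enough: exit early
            return probs
    return probs                                     # fell through all layers
```

Raising the threshold trades accuracy for speed, which is how the 1x to 12x range in the abstract is obtained.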
2019
UER: An Open-Source Toolkit for Pre-training Models
Zhe Zhao | Hui Chen | Jinbin Zhang | Xin Zhao | Tao Liu | Wei Lu | Xi Chen | Haotang Deng | Qi Ju | Xiaoyong Du
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations
Existing works, including ELMo and BERT, have revealed the importance of pre-training for NLP tasks. Since no single pre-training model works best in all cases, it is necessary to develop a framework that can deploy various pre-training models efficiently. For this purpose, we propose an assemble-on-demand pre-training toolkit, Universal Encoder Representations (UER). UER is loosely coupled and encapsulates a rich set of modules. By assembling modules on demand, users can either reproduce a state-of-the-art pre-training model or develop a pre-training model that remains unexplored. With UER, we have built a model zoo containing pre-trained models based on different corpora, encoders, and targets (objectives). With suitable pre-trained models, we achieve new state-of-the-art results on a range of downstream datasets.
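The assemble-on-demand design can be illustrated with a small configuration-driven factory: a pre-training model is specified as a choice of embedding, encoder, and target module, and the framework wires the chosen parts together. The registry names and `build_model` function below are hypothetical and do not reflect UER's actual API; they only sketch the composition pattern the abstract describes.

```python
# Hypothetical sketch of an assemble-on-demand pattern (not UER's real API).
from dataclasses import dataclass
from typing import Callable, Dict

# Registries mapping configuration names to module constructors (placeholders).
EMBEDDINGS: Dict[str, Callable] = {"word_pos_seg": lambda: "WordPosSegEmbedding"}
ENCODERS:   Dict[str, Callable] = {"transformer": lambda: "TransformerEncoder",
                                   "lstm":        lambda: "LstmEncoder"}
TARGETS:    Dict[str, Callable] = {"mlm":         lambda: "MaskedLmTarget",
                                   "bert":        lambda: "MlmNspTarget"}

@dataclass
class PretrainModel:
    embedding: object
    encoder: object
    target: object

def build_model(config: Dict[str, str]) -> PretrainModel:
    """Assemble a pre-training model from independently chosen modules."""
    return PretrainModel(
        embedding=EMBEDDINGS[config["embedding"]](),
        encoder=ENCODERS[config["encoder"]](),
        target=TARGETS[config["target"]](),
    )

# A BERT-like configuration: transformer encoder with an MLM+NSP target.
model = build_model({"embedding": "word_pos_seg",
                     "encoder": "transformer",
                     "target": "bert"})
print(model)
```

Swapping a single registry entry in the configuration yields a different pre-training model, which is the "reproduce or explore" flexibility the abstract highlights.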