Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou


Abstract
Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, parameter sharing does not alleviate the computational burden of inference, which limits its practicality in situations with stringent latency requirements or limited computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings.
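
The sketch below illustrates, in PyTorch, the general neural-ODE view of parameter sharing that the abstract alludes to; it is not a reproduction of the paper's actual method. A single shared transformer layer is treated as the derivative function of an ODE, repeated applications of that layer as Euler solver steps, and inference is accelerated by taking fewer, larger steps. All names, hyperparameters (e.g., 12 steps, hidden size 768), and the Euler discretization are illustrative assumptions.

# Minimal sketch (assumptions only, not the paper's method): a parameter-shared
# encoder whose repeated layer applications are read as Euler steps of dh/dt = f(h).
import torch
import torch.nn as nn

class SharedLayerODEEncoder(nn.Module):
    def __init__(self, d_model=768, n_heads=12, num_steps=12):
        super().__init__()
        # One transformer layer whose parameters are reused at every "depth".
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.num_steps = num_steps  # number of shared-layer applications used in training

    def forward(self, h, num_steps=None):
        # Euler integration: h_{k+1} = h_k + dt * f(h_k), where f(h) is the
        # net update produced by the shared layer.
        steps = num_steps if num_steps is not None else self.num_steps
        dt = self.num_steps / steps  # larger step size when fewer steps are taken
        for _ in range(steps):
            residual = self.shared_layer(h) - h  # f(h): the layer's residual update
            h = h + dt * residual
        return h

# Usage: full-step inference for best quality, fewer steps for lower latency.
encoder = SharedLayerODEEncoder().eval()
x = torch.randn(2, 16, 768)  # (batch, sequence, hidden)
with torch.no_grad():
    full = encoder(x)               # 12 shared-layer applications
    fast = encoder(x, num_steps=6)  # roughly half the layer calls at inference time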
Anthology ID:
2023.findings-emnlp.738
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11052–11067
URL:
https://aclanthology.org/2023.findings-emnlp.738
DOI:
10.18653/v1/2023.findings-emnlp.738
Cite (ACL):
Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2023. Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11052–11067, Singapore. Association for Computational Linguistics.
Cite (Informal):
Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models (Chen et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.738.pdf