Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference

Wangchunshu Zhou, Ronan Le Bras, Yejin Choi


Abstract
Pre-trained Transformer models such as T5 and BART have advanced the state of the art on a wide range of text generation tasks. Compressing these models into smaller ones has become critically important for practical use. Common neural network compression techniques, such as knowledge distillation and quantization, are limited to static compression, in which the compression ratio is fixed. In this paper, we introduce Modular Transformers, a modularized encoder-decoder framework for flexible sequence-to-sequence model compression. Modular Transformers trains modularized layers that serve the same function as two or more consecutive layers in the original model, via module replacing and knowledge distillation. After training, the modularized layers can be flexibly assembled into sequence-to-sequence models that meet different performance-efficiency trade-offs. Experimental results show that, after a single training phase, simply varying the assembly strategy allows Modular Transformers to achieve flexible compression ratios from 1.1x to 6x with little to moderate relative performance drop.
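The two ingredients the abstract describes, training compact modules to stand in for groups of consecutive layers via random module replacing plus a distillation-style loss, and assembling trained modules at different ratios afterwards, can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical toy; the names (`compact`, `forward_with_replacing`, `assemble`), the encoder layers, the group size, and the loss are assumptions for illustration, not the authors' implementation.

```python
# Minimal, hypothetical sketch of module replacing + flexible assembly,
# in the spirit of the abstract above; not the authors' implementation.
import random
import torch
import torch.nn as nn

D_MODEL, N_HEADS, N_LAYERS, GROUP = 64, 4, 6, 2  # 6 teacher layers, grouped in pairs

def make_layer() -> nn.Module:
    return nn.TransformerEncoderLayer(
        d_model=D_MODEL, nhead=N_HEADS, dim_feedforward=128, batch_first=True
    )

# Frozen original layers, plus one trainable compact module per group of
# GROUP consecutive layers (each compact module mimics its whole group).
original = nn.ModuleList(make_layer() for _ in range(N_LAYERS))
for p in original.parameters():
    p.requires_grad_(False)
compact = nn.ModuleList(make_layer() for _ in range(N_LAYERS // GROUP))

def forward_with_replacing(x: torch.Tensor, replace_prob: float = 0.5) -> torch.Tensor:
    """Each group runs either as its GROUP original layers or as the
    single compact replacement, chosen at random per forward pass."""
    mask = [random.random() < replace_prob for _ in compact]
    if not any(mask):  # keep at least one trainable path for backprop
        mask[random.randrange(len(mask))] = True
    for g, use_compact in enumerate(mask):
        if use_compact:
            x = compact[g](x)
        else:
            for layer in original[g * GROUP:(g + 1) * GROUP]:
                x = layer(x)
    return x

def assemble(n_compact: int) -> nn.Sequential:
    """Mix compact and original groups into one inference model; more
    compact groups means fewer layers and a higher compression ratio."""
    layers: list[nn.Module] = []
    for g in range(N_LAYERS // GROUP):
        if g < n_compact:
            layers.append(compact[g])
        else:
            layers.extend(original[g * GROUP:(g + 1) * GROUP])
    return nn.Sequential(*layers)

# One toy training step: module replacing plus a distillation-style MSE
# loss against the full original stack's output.
opt = torch.optim.Adam(compact.parameters(), lr=1e-4)
x = torch.randn(8, 16, D_MODEL)
with torch.no_grad():
    teacher_out = x
    for layer in original:
        teacher_out = layer(teacher_out)
loss = nn.functional.mse_loss(forward_with_replacing(x), teacher_out)
loss.backward()
opt.step()

# After training, one set of modules yields many inference models:
fully_compact = assemble(n_compact=N_LAYERS // GROUP)  # all groups replaced (~2x here)
partly_compact = assemble(n_compact=1)                 # milder compression
```

Varying `n_compact` (and, more generally, the grouping granularity) is what yields a spectrum of performance-efficiency trade-offs from a single training run, which is the property the abstract highlights.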
Anthology ID: 2023.findings-acl.664
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 10452–10465
URL: https://aclanthology.org/2023.findings-acl.664
DOI: 10.18653/v1/2023.findings-acl.664
Cite (ACL): Wangchunshu Zhou, Ronan Le Bras, and Yejin Choi. 2023. Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10452–10465, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference (Zhou et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.664.pdf