Exploring Transformers as Compact, Data-efficient Language Models

Clayton Fields, Casey Kennington


Abstract
Large-scale transformer models, trained with massive datasets, have become the standard in natural language processing. The huge size of most transformers makes research with these models impossible for those with limited computational resources. Additionally, the enormous pretraining data requirements of transformers preclude pretraining them with many smaller datasets that might provide enlightening results. In this study, we show that transformers can be significantly reduced in size, with as few as 5.7 million parameters, and still retain most of their downstream capability. Further, we show that transformer models can achieve comparable results when trained on human-scale datasets, with as few as 5 million words of pretraining data. Overall, the results of our study suggest that transformers function well as compact, data-efficient language models and that complex model compression methods, such as model distillation, are not necessarily superior to pretraining reduced-size transformer models from scratch.
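
To illustrate what pretraining a reduced-size transformer from scratch might involve, the sketch below configures a compact BERT-style masked language model with the HuggingFace transformers library. The hyperparameter values are illustrative assumptions only, not the authors' configuration; they simply show how hidden size, layer count, and head count can be shrunk to bring the parameter count down to a few million.

# Minimal sketch: a compact BERT-style transformer built from scratch.
# Hyperparameters are illustrative assumptions, not the paper's settings.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=30522,             # standard WordPiece vocabulary size
    hidden_size=128,              # reduced from 768 in BERT-base
    num_hidden_layers=2,          # reduced from 12
    num_attention_heads=2,        # reduced from 12
    intermediate_size=512,        # reduced from 3072
    max_position_embeddings=512,
)

model = BertForMaskedLM(config)   # randomly initialized, ready for pretraining
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")  # on the order of a few million

A model like this can then be pretrained with a standard masked-language-modeling objective on a small corpus, which is the kind of from-scratch setup the abstract contrasts with distillation-based compression.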
Anthology ID:
2023.conll-1.35
Volume:
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Jing Jiang, David Reitter, Shumin Deng
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
521–531
URL:
https://aclanthology.org/2023.conll-1.35
DOI:
10.18653/v1/2023.conll-1.35
Cite (ACL):
Clayton Fields and Casey Kennington. 2023. Exploring Transformers as Compact, Data-efficient Language Models. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 521–531, Singapore. Association for Computational Linguistics.
Cite (Informal):
Exploring Transformers as Compact, Data-efficient Language Models (Fields & Kennington, CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-1.35.pdf