Clayton Fields
2023
Exploring Transformers as Compact, Data-efficient Language Models
Clayton Fields | Casey Kennington
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Large-scale transformer models, trained with massive datasets, have become the standard in natural language processing. The huge size of most transformers makes research with these models impossible for those with limited computational resources. Additionally, the enormous pretraining data requirements of transformers preclude pretraining them with many smaller datasets that might provide enlightening results. In this study, we show that transformers can be significantly reduced in size, to as few as 5.7 million parameters, and still retain most of their downstream capability. Further, we show that transformer models can achieve comparable results when trained on human-scale datasets, with as little as 5 million words of pretraining data. Overall, the results of our study suggest that transformers function well as compact, data-efficient language models and that complex model compression methods, such as model distillation, are not necessarily superior to pretraining reduced-size transformer models from scratch.
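To make the "compact transformer" scale concrete: a minimal sketch of instantiating a reduced-size BERT-style model with the Hugging Face transformers library is shown below. The specific hyperparameters (vocabulary size, hidden size, layer count) are illustrative assumptions chosen to land in the few-million-parameter range, not the paper's actual configuration.

```python
from transformers import BertConfig, BertForMaskedLM

# Hypothetical reduced-size configuration; values are illustrative only
# and are not taken from the paper. They target roughly the same scale
# (a few million parameters) as the models described in the abstract.
config = BertConfig(
    vocab_size=16384,            # much smaller vocabulary than BERT-base
    hidden_size=192,             # vs. 768 in BERT-base
    num_hidden_layers=6,         # vs. 12
    num_attention_heads=6,
    intermediate_size=768,       # vs. 3072
    max_position_embeddings=128,
)

# Pretraining such a model from scratch on a small corpus is the setup
# the abstract contrasts with distillation-based compression.
model = BertForMaskedLM(config)
print(f"Parameters: {model.num_parameters():,}")
```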
Tiny Language Models Enriched with Multimodal Knowledge from Multiplex Networks
Clayton Fields | Osama Natouf | Andrew McMains | Catherine Henry | Casey Kennington
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning