Leonidas Gee


2023

Are Compressed Language Models Less Subgroup Robust?
Leonidas Gee | Andrea Zugarini | Novi Quadrianto
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

To reduce the inference cost of large language models, model compression is increasingly used to create smaller scalable models. However, little is known about their robustness to minority subgroups defined by the labels and attributes of a dataset. In this paper, we investigate the effects of 18 different compression methods and settings on the subgroup robustness of BERT language models. We show that worst-group performance does not depend on model size alone, but also on the compression method used. Additionally, we find that model compression does not always worsen the performance on minority subgroups. Altogether, our analysis serves to further research into the subgroup robustness of model compression.
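
The worst-group metric mentioned above can be made concrete with a small sketch: subgroups are formed by crossing each task label with a dataset attribute, and the reported number is the accuracy of the weakest subgroup. The function and toy data below are illustrative, not the paper's evaluation code.

```python
# Minimal sketch of worst-group accuracy over (label, attribute) subgroups.
from collections import defaultdict

def worst_group_accuracy(labels, attributes, predictions):
    """Accuracy of the worst-performing (label, attribute) subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, a, p in zip(labels, attributes, predictions):
        group = (y, a)
        total[group] += 1
        correct[group] += int(p == y)
    return min(correct[g] / total[g] for g in total)

# Toy example: two labels crossed with two attributes.
labels      = [0, 0, 1, 1, 0, 1]
attributes  = [0, 1, 0, 1, 1, 0]
predictions = [0, 1, 1, 1, 0, 0]
print(worst_group_accuracy(labels, attributes, predictions))  # 0.5
```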

Multi-word Tokenization for Sequence Compression
Leonidas Gee | Leonardo Rigutini | Marco Ernandes | Andrea Zugarini
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track

Large Language Models have proven highly successful at modelling a variety of tasks. However, this comes at a steep computational cost that hinders wider industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that goes beyond word boundaries by representing frequent multi-word expressions as single tokens. MWTs produce a more compact and efficient tokenization that yields two benefits: (1) Increase in performance due to a greater coverage of input data given a fixed sequence length budget; (2) Faster and lighter inference due to the ability to reduce the sequence length with negligible drops in performance. Our results show that MWT is more robust across shorter sequence lengths, thus allowing for major speedups via early sequence truncation.
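
As a rough illustration of the idea, assuming that multi-word expressions are frequent word n-grams promoted to single tokens, a sketch might mine the most common bigrams from a corpus and greedily merge adjacent words before subword tokenization. The bigram count, merging strategy, and underscore marker below are illustrative choices, not the exact MWT procedure.

```python
# Minimal sketch: treat frequent word bigrams as single tokens so sequences shrink.
from collections import Counter

def top_bigrams(corpus, k):
    """Return the k most frequent word bigrams in the corpus."""
    counts = Counter()
    for text in corpus:
        words = text.split()
        counts.update(zip(words, words[1:]))
    return {bigram for bigram, _ in counts.most_common(k)}

def tokenize_with_mwes(text, mwes):
    """Greedily merge adjacent words that form a known multi-word expression."""
    words = text.split()
    tokens, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in mwes:
            tokens.append(words[i] + "_" + words[i + 1])  # one token, not two
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

corpus = ["net income rose", "net income fell", "income tax rose"]
mwes = top_bigrams(corpus, k=1)                     # {('net', 'income')}
print(tokenize_with_mwes("net income rose", mwes))  # ['net_income', 'rose']
```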

2022

Fast Vocabulary Transfer for Language Model Compression
Leonidas Gee | Andrea Zugarini | Leonardo Rigutini | Paolo Torroni
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track

Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.
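
A minimal sketch of one plausible form of vocabulary transfer follows, assuming each token of a new in-domain vocabulary is initialized from the original model's embedding matrix: tokens shared with the old vocabulary are copied directly, and unseen tokens receive the average embedding of the sub-tokens the original tokenizer splits them into. The helper names and toy vocabularies are hypothetical.

```python
# Minimal sketch of embedding initialization for a new vocabulary.
import numpy as np

def transfer_embeddings(new_vocab, old_vocab, old_embeddings, old_tokenize):
    """Build an embedding matrix for new_vocab from old_embeddings.

    old_vocab maps tokens to row indices of old_embeddings; old_tokenize is a
    stand-in for the original model's subword tokenizer.
    """
    dim = old_embeddings.shape[1]
    new_embeddings = np.zeros((len(new_vocab), dim))
    for i, token in enumerate(new_vocab):
        if token in old_vocab:
            # Shared token: copy the original embedding.
            new_embeddings[i] = old_embeddings[old_vocab[token]]
        else:
            # New token: average the embeddings of its old sub-tokens.
            pieces = [p for p in old_tokenize(token) if p in old_vocab]
            if pieces:
                new_embeddings[i] = np.mean(
                    [old_embeddings[old_vocab[p]] for p in pieces], axis=0
                )
    return new_embeddings

# Toy usage with a hypothetical three-token original vocabulary.
old_vocab = {"bank": 0, "##ing": 1, "loan": 2}
old_emb = np.random.rand(3, 4)
new_emb = transfer_embeddings(
    ["loan", "banking"], old_vocab, old_emb,
    old_tokenize=lambda t: ["bank", "##ing"] if t == "banking" else [t],
)
print(new_emb.shape)  # (2, 4)
```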