Yuta Nozaki
2025
Efficient Vocabulary Reduction for Small Language Models
Yuta Nozaki | Dai Nakashima | Ryo Sato | Naoki Asaba | Shintaro Kawamura
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
The increasing size of large language models (LLMs) poses significant challenges due to their high computational costs and energy consumption, making their deployment in industrial settings difficult. Small language models (SLMs) have been introduced to mitigate these challenges by reducing model size while preserving performance. However, the embedding layer, which occupies a significant portion of the model, remains a bottleneck in model compression efforts. In this paper, we validate vocabulary reduction as a solution for compressing the embedding layer and reducing model size without significant loss of performance. We conduct a series of experiments to investigate how vocabulary reduction affects GPU memory footprint, inference speed, and task performance. Our results show that while performance generally declines with vocabulary reduction, fine-tuning can recover much of the lost performance. Moreover, in some tasks, such as truthfulness and summarization, the vocabulary-reduced models outperform the baseline. Finally, we demonstrate that vocabulary reduction can be effectively applied in domain adaptation, particularly in the medical domain, and in multilingual adaptation, improving task efficiency and cross-lingual robustness.
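As an illustration of the general idea only (not the paper's code; the sizes and token IDs below are made up), the sketch shows how shrinking the vocabulary shrinks the embedding layer: keep only the rows for retained tokens and remap their IDs.

```python
# Toy sketch of vocabulary reduction: slice the embedding matrix down to the
# tokens that survive the reduction. Hypothetical sizes and token IDs.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim = 32_000, 256                    # assumed model dimensions
embeddings = rng.normal(size=(vocab_size, hidden_dim)).astype(np.float32)

# Token IDs observed in a (toy) target corpus; in practice these would come
# from re-tokenizing the domain- or language-specific data.
corpus_token_ids = [5, 17, 17, 42, 999, 31_999, 5]

kept_ids = sorted(set(corpus_token_ids))                # reduced vocabulary
old_to_new = {old: new for new, old in enumerate(kept_ids)}

reduced_embeddings = embeddings[kept_ids]               # smaller embedding layer

print(embeddings.shape, "->", reduced_embeddings.shape)
print("token 42 maps to new id", old_to_new[42])
```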
VRCP: Vocabulary Replacement Continued Pretraining for Efficient Multilingual Language Models
Yuta Nozaki | Dai Nakashima | Ryo Sato | Naoki Asaba
Proceedings of the Second Workshop on Scaling Up Multilingual & Multi-Cultural Evaluation
Building large language models (LLMs) for non-English languages typically involves leveraging extensively trained English models through continued pre-training on target-language corpora. This approach harnesses the rich semantic knowledge embedded in English models, enabling superior performance compared to training from scratch. However, tokenizers that are not optimized for the target language can introduce inefficiencies during training. We propose Vocabulary Replacement Continued Pretraining (VRCP), a method that optimizes the tokenizer for the target language by replacing vocabulary unique to the source tokenizer (tokens available only there) with target-language tokens while maintaining the overall vocabulary size. This approach preserves the semantic knowledge of the source model while enhancing token efficiency and performance for the target language. We evaluated VRCP using the Llama-2 model on Japanese and Chinese corpora. The results show that VRCP matches the performance of vocabulary expansion methods on benchmarks and achieves superior performance on summarization tasks. Additionally, VRCP provides an optimized tokenizer that balances token efficiency, task performance, and GPU memory footprint, making it particularly suitable for resource-constrained environments.
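As a rough illustration of the replacement step (a toy sketch under our own assumptions, not the authors' implementation), the snippet below swaps tokens that exist only in the source vocabulary for new target-language tokens, keeping the vocabulary size unchanged.

```python
# Toy sketch of vocabulary replacement: overwrite source-only vocabulary slots
# with target-language tokens, one for one, so the vocabulary size is preserved
# and the existing embedding rows can be reused or reinitialized in place.
source_vocab = ["the", "ing", "##tion", "über", "naïve", "<unk>"]      # toy source vocab
target_vocab_candidates = ["です", "ます", "する", "the", "<unk>"]       # toy target tokens

source_set = set(source_vocab)
target_set = set(target_vocab_candidates)

unique_to_source = [tok for tok in source_vocab if tok not in target_set]
new_target_tokens = [tok for tok in target_vocab_candidates if tok not in source_set]

# Replace as many source-only slots as there are new target tokens.
replacements = dict(zip(unique_to_source, new_target_tokens))
new_vocab = [replacements.get(tok, tok) for tok in source_vocab]

print(new_vocab)            # same length as source_vocab, target tokens swapped in
```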
2022
Tweet Review Mining focusing on Celebrities by Machine Reading Comprehension based on BERT
Yuta Nozaki | Kotoe Sugawara | Yuki Zenimoto | Takehito Utsuro
Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation
Co-authors
- Naoki Asaba 2
- Dai Nakashima 2
- Ryo Sato 2
- Shintaro Kawamura 1
- Kotoe Sugawara 1