Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models

Sander Land, Max Bartolo


Abstract
The disconnect between tokenizer creation and model training in language models allows for specific inputs, such as the infamous SolidGoldMagikarp token, to induce unwanted model behaviour. Although such ‘glitch tokens’, tokens that are present in the tokenizer vocabulary but nearly or entirely absent from model training, have been observed across various models, a reliable method to identify and address them has been missing. We present a comprehensive analysis of Large Language Model tokenizers, specifically targeting the detection of under-trained tokens. Through a combination of tokenizer analysis, model weight-based indicators, and prompting techniques, we develop novel and effective methods for automatically detecting these problematic tokens. Our findings demonstrate the prevalence of such tokens across a diverse set of models and provide insights into improving the efficiency and safety of language models.
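
As a rough illustration of what a weight-based indicator can look like in practice, the sketch below ranks a model's vocabulary by the L2 norm of each token's input-embedding row: tokens that were rarely or never updated during training tend to keep anomalously small, near-initialization embeddings. This is a minimal sketch assuming a Hugging Face `transformers` causal LM, not the authors' exact method; the model name "gpt2", the choice of the input-embedding matrix, and the norm heuristic are illustrative assumptions, and the paper's actual indicators are more involved.

# Minimal sketch of a weight-based indicator for under-trained tokens.
# Assumptions (illustrative, not the paper's exact method): a Hugging Face
# causal LM, and the heuristic that rarely-updated embedding rows keep
# small L2 norms.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # any causal LM; gpt2 is small enough for CPU

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

with torch.no_grad():
    # Input embedding matrix, shape (vocab_size, hidden_dim).
    embeddings = model.get_input_embeddings().weight
    # One L2 norm per vocabulary entry.
    norms = embeddings.norm(dim=-1)

# Report the k tokens with the smallest embedding norms as candidates.
k = 20
values, token_ids = torch.topk(norms, k, largest=False)
for token_id, norm in zip(token_ids.tolist(), values.tolist()):
    print(f"{token_id:6d}  norm={norm:.3f}  token={tokenizer.decode([token_id])!r}")

Such a pass only surfaces candidates; per the abstract, the paper then combines weight-based indicators with tokenizer analysis and prompting-based verification before declaring a token under-trained.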
Anthology ID: 2024.emnlp-main.649
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 11631–11646
URL: https://aclanthology.org/2024.emnlp-main.649
DOI: 10.18653/v1/2024.emnlp-main.649
Cite (ACL): Sander Land and Max Bartolo. 2024. Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11631–11646, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models (Land & Bartolo, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.649.pdf