Ivan Yamshchikov


2024

BPE Gets Picky: Efficient Vocabulary Refinement During Tokenizer Training
Pavel Chizhov | Catherine Arnett | Elizaveta Korotkova | Ivan Yamshchikov
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Language models can greatly benefit from efficient tokenization. However, most still rely on the classical Byte-Pair Encoding (BPE) algorithm, a simple and reliable method. BPE has been shown to cause issues such as under-trained tokens and sub-optimal compression, which may hurt downstream performance. We introduce PickyBPE, a modified BPE algorithm that refines the vocabulary during tokenizer training by removing merges that leave behind intermediate “junk” tokens. Our method improves vocabulary efficiency, eliminates under-trained tokens, and does not compromise text compression. Our experiments show that this method either improves downstream performance or does not harm it.
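The idea of pruning intermediate tokens during BPE training can be illustrated with a short sketch. This is not the authors' implementation: the `ios_threshold` parameter and the pruning rule below (drop a child token once it almost never occurs on its own after the merge) are a simplified stand-in for the paper's refinement criterion, assumed here for illustration.

```python
from collections import Counter

def train_picky_bpe(corpus_words, num_merges, ios_threshold=0.9):
    """Toy BPE trainer that prunes 'junk' intermediate tokens.

    Simplified sketch: after each merge, a multi-character child token
    that has been almost fully absorbed into the new token is removed
    from the vocabulary, freeing the slot for a more useful token.
    """
    # Represent each word as a tuple of symbols; counts weight the merges.
    words = Counter()
    for w in corpus_words:
        words[tuple(w)] += 1

    vocab = {ch for w in words for ch in w}
    for _ in range(num_merges):
        # Count adjacent symbol pairs across the corpus.
        pairs = Counter()
        for w, c in words.items():
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += c
        if not pairs:
            break
        (a, b), pair_count = pairs.most_common(1)[0]
        new_tok = a + b
        vocab.add(new_tok)

        # Apply the merge to every word.
        new_words = Counter()
        for w, c in words.items():
            merged, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and w[i] == a and w[i + 1] == b:
                    merged.append(new_tok)
                    i += 2
                else:
                    merged.append(w[i])
                    i += 1
            new_words[tuple(merged)] += c
        words = new_words

        # Picky step (simplified): if a multi-character child token now
        # occurs on its own only rarely relative to the merge frequency,
        # treat it as an intermediate "junk" token and drop it.
        standalone = Counter()
        for w, c in words.items():
            for t in w:
                standalone[t] += c
        for child in (a, b):
            if len(child) > 1 and standalone[child] / pair_count < (1 - ios_threshold):
                vocab.discard(child)
    return vocab, words
```

On a toy corpus of repeated `"aaab"`, two merges produce `aaa`, and the intermediate token `aa` is pruned because it no longer occurs standalone.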

Individuation in Neural Models with and without Visual Grounding
Alexey Tikhonov | Lisa Bylinina | Ivan Yamshchikov
Proceedings of the 1st Workshop on NLP for Science (NLP4Science)

We show differences between the language-and-vision model CLIP and two text-only models, FastText and SBERT, in how they encode individuation information. We study the latent representations that CLIP provides for substrates, granular aggregates, and varying numbers of objects. We demonstrate that CLIP embeddings capture quantitative differences in individuation better than models trained on text-only data. Moreover, the individuation hierarchy we deduce from the CLIP embeddings agrees with the hierarchies proposed in linguistics and cognitive science.