Lena Jurkschat
2024
Tokenizer Choice For LLM Training: Negligible or Crucial?
Mehdi Ali | Michael Fromm | Klaudia Thellmann | Richard Rutmann | Max Lübbering | Johannes Leveling | Katrin Klug | Jan Ebert | Niclas Doll | Jasper Buschhoff | Charvi Jain | Alexander Weber | Lena Jurkschat | Hammam Abdelwahab | Chelsea John | Pedro Ortiz Suarez | Malte Ostendorff | Samuel Weinbach | Rafet Sifa | Stefan Kesselheim | Nicolas Flores-Herr
Findings of the Association for Computational Linguistics: NAACL 2024
The recent success of large language models (LLMs) has been predominantly driven by curating the training dataset composition, scaling model architectures and dataset sizes, and advancements in pretraining objectives, leaving tokenizer influence as a blind spot. Shedding light on this underexplored area, we conduct a comprehensive study on the influence of tokenizer choice on LLM downstream performance by training 24 mono- and multilingual LLMs at a 2.6B parameter scale, ablating different tokenizer algorithms and parameterizations. Our studies highlight that the tokenizer choice can significantly impact the model’s downstream performance and training costs. In particular, we find that the common tokenizer evaluation metrics fertility and parity are not always predictive of model downstream performance, rendering them a questionable proxy. Furthermore, we show that multilingual tokenizers trained on the five most frequent European languages require a vocabulary size increase by a factor of three in comparison to English. While English-centric tokenizers have been applied to the training of multilingual LLMs in the past, we find that this approach results in severe downstream performance degradation and additional training costs of up to 68%, due to an inefficient tokenization vocabulary.
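The abstract refers to fertility and parity as the common tokenizer evaluation metrics. As a rough illustration only, the sketch below shows how these two quantities are typically computed; the whitespace-based word count, the example texts, and the GPT-2 tokenizer in the usage lines are simplifying assumptions, not the paper's exact evaluation protocol.

```python
# Minimal sketch of the two tokenizer evaluation metrics named in the abstract.
from transformers import AutoTokenizer

def fertility(tokenizer, texts):
    """Average number of subword tokens produced per whitespace-separated word."""
    n_tokens = sum(len(tokenizer.encode(t)) for t in texts)
    n_words = sum(len(t.split()) for t in texts)
    return n_tokens / n_words

def parity(tokenizer, parallel_texts_a, parallel_texts_b):
    """Token-count ratio on parallel texts of two languages; values near 1.0
    mean the tokenizer segments both languages about equally efficiently."""
    tokens_a = sum(len(tokenizer.encode(t)) for t in parallel_texts_a)
    tokens_b = sum(len(tokenizer.encode(t)) for t in parallel_texts_b)
    return tokens_a / tokens_b

# Illustrative usage with an English-centric tokenizer (GPT-2 as an assumption):
tok = AutoTokenizer.from_pretrained("gpt2")
print(fertility(tok, ["The tokenizer choice matters."]))
print(parity(tok, ["The weather is nice today."], ["Das Wetter ist heute schön."]))
```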
2022
Few-Shot Learning for Argument Aspects of the Nuclear Energy Debate
Lena Jurkschat | Gregor Wiedemann | Maximilian Heinrich | Mattes Ruckdeschel | Sunna Torge
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We approach aspect-based argument mining as a supervised machine learning task to classify arguments into semantically coherent groups referring to the same defined aspect categories. As an exemplary use case, we introduce the Argument Aspect Corpus - Nuclear Energy that separates arguments about the topic of nuclear energy into nine major aspects. Since the collection of training data for further aspects and topics is costly, we investigate the potential for current transformer-based few-shot learning approaches to accurately classify argument aspects. The best approach is applied to a British newspaper corpus covering the debate on nuclear energy over the past 21 years. Our evaluation shows that a stable prediction of shares of argument aspects in this debate is feasible with 50 to 100 training samples per aspect. Moreover, we see signals for a clear shift in the public discourse in favor of nuclear energy in recent years. This revelation of changing patterns of pro and contra arguments related to certain aspects over time demonstrates the potential of supervised argument aspect detection for tracking issue-specific media discourses.
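As a rough sketch of what supervised aspect classification from a small labelled sample can look like in practice, the snippet below embeds arguments with a pretrained sentence encoder and fits a linear classifier on 50 to 100 examples per aspect. The `all-MiniLM-L6-v2` encoder and the logistic-regression head are illustrative assumptions, not the specific transformer-based few-shot approaches compared in the paper.

```python
# Generic few-shot baseline for argument aspect classification (illustrative only).
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def train_aspect_classifier(train_texts, train_aspects):
    """Fit a linear probe on sentence embeddings of the few labelled arguments."""
    X = encoder.encode(train_texts)
    return LogisticRegression(max_iter=1000).fit(X, train_aspects)

def predict_aspects(clf, texts):
    """Assign each argument to one of the predefined aspect categories."""
    return clf.predict(encoder.encode(texts))
```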
Co-authors
- Mehdi Ali 1
- Michael Fromm 1
- Klaudia Thellmann 1
- Richard Rutmann 1
- Max Lübbering 1
- Johannes Leveling 1
- Katrin Klug 1
- Jan Ebert 1
- Niclas Doll 1
- Jasper Buschhoff 1
- Charvi Jain 1
- Alexander Weber 1
- Hammam Abdelwahab 1
- Chelsea John 1
- Pedro Ortiz Suarez 1
- Malte Ostendorff 1
- Samuel Weinbach 1
- Rafet Sifa 1
- Stefan Kesselheim 1
- Nicolas Flores-Herr 1
- Gregor Wiedemann 1
- Maximilian Heinrich 1
- Mattes Ruckdeschel 1
- Sunna Torge 1