NEO-BENCH: Evaluating Robustness of Large Language Models with Neologisms

Jonathan Zheng, Alan Ritter, Wei Xu


Abstract
The performance of Large Language Models (LLMs) degrades due to temporal drift between the data used for model training and newer text seen during inference. One understudied avenue of language change causing such drift is the emergence of neologisms – new word forms – over time. We create a diverse resource of recent English neologisms using several popular collection methods. We analyze temporal drift by comparing sentences containing new words with near-identical sentences in which the neologisms are replaced by existing substitute words. Model performance in machine translation is nearly halved when a single neologism is introduced in a sentence. Motivated by these results, we construct a benchmark to evaluate LLMs’ ability to generalize to neologisms across several natural language understanding tasks and model perplexity. Models with later knowledge cutoff dates yield lower perplexities and perform better on downstream tasks. LLMs are also affected differently depending on the linguistic origins of words, indicating that neologisms are complex for static LLMs to handle. We will release our benchmark and code for reproducing our experiments.
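
To make the perplexity comparison described in the abstract concrete, the sketch below scores a sentence containing a neologism against a near-identical sentence with an existing substitute word, using a causal language model. This is a minimal illustration, not the authors' released benchmark code: the choice of GPT-2 and the example sentence pair are assumptions for demonstration only.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumption: GPT-2 stands in for any causal LM with a known training cutoff.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    """Token-level perplexity of a sentence under the language model."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the returned loss is the mean negative log-likelihood.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Hypothetical minimal pair: a neologism vs. an existing substitute word.
print(perplexity("She spent the weekend doomscrolling on her phone."))
print(perplexity("She spent the weekend browsing news on her phone."))

Following the abstract's finding, a model with an earlier knowledge cutoff would be expected to assign a relatively higher perplexity to the neologism sentence than to its substituted counterpart.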
Anthology ID:
2024.acl-long.749
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
13885–13906
URL:
https://aclanthology.org/2024.acl-long.749
Cite (ACL):
Jonathan Zheng, Alan Ritter, and Wei Xu. 2024. NEO-BENCH: Evaluating Robustness of Large Language Models with Neologisms. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13885–13906, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
NEO-BENCH: Evaluating Robustness of Large Language Models with Neologisms (Zheng et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.749.pdf