Revisiting subword tokenization: A case study on affixal negation in large language models

Thinh Truong, Yulia Otmakhova, Karin Verspoor, Trevor Cohn, Timothy Baldwin


Abstract
In this work, we measure the impact of affixal negation on modern English large language models (LLMs). In affixal negation, the negated meaning is expressed through a negative morpheme, which is potentially challenging for LLMs as their tokenizers are often not morphologically plausible. We conduct extensive experiments using LLMs with different subword tokenization methods, which lead to several insights on the interaction between tokenization performance and negation sensitivity. Despite some interesting mismatches between tokenization accuracy and negation detection performance, we show that models can, on the whole, reliably recognize the meaning of affixal negation.
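The abstract's point that subword tokenizers are "often not morphologically plausible" can be illustrated with a toy example (not from the paper): a greedy longest-match segmenter, a simplified stand-in for WordPiece-style tokenization, may or may not split an affixally negated word at its morpheme boundary, depending on which subwords happen to be in the vocabulary. Both vocabularies below are hypothetical.

```python
def greedy_segment(word, vocab):
    """Left-to-right longest-match segmentation over a subword vocabulary,
    a simplified stand-in for WordPiece-style tokenization."""
    pieces = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            # No vocabulary entry matches: fall back to a single character.
            pieces.append(word[i])
            i += 1
    return pieces

# Hypothetical vocabularies for illustration only.
morph_vocab = {"un", "happy"}          # splits align with morphemes
freq_vocab = {"unh", "appy", "un"}     # frequency-driven, morpheme-blind

print(greedy_segment("unhappy", morph_vocab))  # ['un', 'happy'] -- negative morpheme preserved
print(greedy_segment("unhappy", freq_vocab))   # ['unh', 'appy'] -- negative morpheme split apart
```

In the second segmentation, the negative prefix *un-* is not a token of its own, so the model has no single subword carrying the negation; whether models nevertheless recover the negated meaning is what the paper's experiments probe.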
Anthology ID:
2024.naacl-long.284
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5082–5095
URL:
https://aclanthology.org/2024.naacl-long.284
DOI:
10.18653/v1/2024.naacl-long.284
Cite (ACL):
Thinh Truong, Yulia Otmakhova, Karin Verspoor, Trevor Cohn, and Timothy Baldwin. 2024. Revisiting subword tokenization: A case study on affixal negation in large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5082–5095, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Revisiting subword tokenization: A case study on affixal negation in large language models (Truong et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.284.pdf