Learning Mutually Informed Representations for Characters and Subwords

Yilin Wang, Xinyi Hu, Matthew Gormley


Abstract
Most pretrained language models rely on subword tokenization, which processes text as a sequence of subword tokens. However, different granularities of text, such as characters, subwords, and words, can contain different kinds of information. Previous studies have shown that incorporating multiple input granularities improves model generalization, yet very few of them output useful representations for each granularity. In this paper, we introduce the entanglement model, aiming to combine character and subword language models. Inspired by vision-language models, our model treats characters and subwords as separate modalities, and it generates mutually informed representations for both granularities as output. We evaluate our model on text classification, named entity recognition, POS tagging, and character-level sequence labeling (intraword code-switching). Notably, the entanglement model outperforms its backbone language models, particularly on noisy text and low-resource languages. Furthermore, the entanglement model even outperforms larger pretrained models on all English sequence labeling tasks and classification tasks. We make our code publicly available.
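The abstract does not spell out the architecture, but the vision-language framing suggests bidirectional cross-attention between the character and subword streams. Below is a minimal sketch of that idea; the module and parameter names (EntanglementLayer, char_hidden, subword_hidden) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class EntanglementLayer(nn.Module):
    """Hypothetical sketch: one block of bidirectional cross-attention that
    lets character and subword representations inform each other.
    The paper's exact architecture may differ."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Characters attend to subwords, and subwords attend to characters.
        self.char_to_sub = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.sub_to_char = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.char_norm = nn.LayerNorm(dim)
        self.sub_norm = nn.LayerNorm(dim)

    def forward(self, char_hidden, subword_hidden):
        # char_hidden:    (batch, num_chars, dim)    from a character-level encoder
        # subword_hidden: (batch, num_subwords, dim) from a subword encoder
        char_attn, _ = self.char_to_sub(char_hidden, subword_hidden, subword_hidden)
        sub_attn, _ = self.sub_to_char(subword_hidden, char_hidden, char_hidden)
        # Residual connections preserve each stream's own information.
        char_out = self.char_norm(char_hidden + char_attn)
        sub_out = self.sub_norm(subword_hidden + sub_attn)
        # Mutually informed representations for both granularities.
        return char_out, sub_out


# Toy usage with random tensors standing in for the two encoders' outputs.
layer = EntanglementLayer(dim=768)
chars = torch.randn(2, 64, 768)      # 64 characters per example
subwords = torch.randn(2, 16, 768)   # 16 subwords per example
char_repr, subword_repr = layer(chars, subwords)
```

Stacking such a layer on top of frozen or fine-tuned character and subword encoders would yield per-character and per-subword outputs, which is what makes both granularities usable for downstream sequence labeling.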
Anthology ID: 2024.findings-naacl.202
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3201–3213
URL: https://aclanthology.org/2024.findings-naacl.202
DOI: 10.18653/v1/2024.findings-naacl.202
Cite (ACL): Yilin Wang, Xinyi Hu, and Matthew Gormley. 2024. Learning Mutually Informed Representations for Characters and Subwords. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3201–3213, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Learning Mutually Informed Representations for Characters and Subwords (Wang et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.202.pdf