Logographic Information Aids Learning Better Representations for Natural Language Inference

Zijian Jin, Duygu Ataman


Abstract
Statistical language models conventionally implement representation learning based on the contextual distribution of words or other formal units, whereas any information related to the logographic features of written text is often ignored, under the assumption that it can be recovered from co-occurrence statistics. On the other hand, as language models become larger and require more data to learn reliable representations, such assumptions may start to break down, especially under conditions of data sparsity. Many languages, including Chinese and Vietnamese, use logographic writing systems where surface forms are represented as a visual organization of smaller graphemic units, which often contain many semantic cues. In this paper, we present a novel study which explores the benefits of providing language models with logographic information in learning better semantic representations. We test our hypothesis in the natural language inference (NLI) task by evaluating the benefit of computing multi-modal representations that combine contextual information with glyph information. Our evaluation results in six languages with different typologies and writing systems suggest significant benefits of using multi-modal embeddings in languages with logographic systems, especially for words with sparser occurrence statistics.
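The fusion step described in the abstract, combining a contextual word embedding with a glyph-derived embedding, can be sketched as below. This is a minimal toy illustration, not the paper's actual model: the dimensions, the linear glyph encoder, and concatenation as the combination strategy are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the paper's real architecture is not reproduced here.
GLYPH_DIM, CONTEXT_DIM = 16, 32

def glyph_embedding(bitmap: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Project a flattened character bitmap (a rendered glyph) into GLYPH_DIM.
    A real system would render the character image and encode it, e.g. with a CNN."""
    return bitmap.reshape(-1) @ proj

def multimodal_embedding(contextual: np.ndarray, glyph: np.ndarray) -> np.ndarray:
    """Fuse the contextual and glyph views; concatenation is one simple choice."""
    return np.concatenate([contextual, glyph])

# An 8x8 binary bitmap standing in for a rendered logographic character.
bitmap = rng.integers(0, 2, size=(8, 8)).astype(np.float32)
proj = rng.standard_normal((64, GLYPH_DIM)).astype(np.float32)

contextual = rng.standard_normal(CONTEXT_DIM).astype(np.float32)
fused = multimodal_embedding(contextual, glyph_embedding(bitmap, proj))
print(fused.shape)  # (48,) = CONTEXT_DIM + GLYPH_DIM
```

For rare words, whose contextual statistics are unreliable, the glyph component still carries visual-semantic cues, which is the intuition the paper tests.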
Anthology ID:
2022.findings-aacl.25
Volume:
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022
Month:
November
Year:
2022
Address:
Online only
Editors:
Yulan He, Heng Ji, Sujian Li, Yang Liu, Chia-Hui Chang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
268–273
URL:
https://aclanthology.org/2022.findings-aacl.25
Cite (ACL):
Zijian Jin and Duygu Ataman. 2022. Logographic Information Aids Learning Better Representations for Natural Language Inference. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 268–273, Online only. Association for Computational Linguistics.
Cite (Informal):
Logographic Information Aids Learning Better Representations for Natural Language Inference (Jin & Ataman, Findings 2022)
PDF:
https://aclanthology.org/2022.findings-aacl.25.pdf