Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on Japanese Honorific Conversion

Ryo Sekizawa, Hitomi Yanaka


Abstract
Using Japanese honorifics is challenging because it requires not only knowledge of the grammatical rules but also contextual information, such as social relationships. It remains unclear whether pre-trained large language models (LLMs) can handle Japanese honorifics as flexibly as humans do. To analyze this, we introduce an honorific conversion task that considers social relationships among the people mentioned in a conversation. We construct a Japanese honorifics dataset from problem templates covering various sentence structures to investigate the syntactic generalization capacity of GPT-3, one of the leading LLMs, on this task under two settings: fine-tuning and prompt learning. Our results show that the fine-tuned GPT-3 performed better on context-aware honorific conversion than the prompt-based model. The fine-tuned model also demonstrated overall syntactic generalizability to compound honorific sentences, except when tested on data involving direct speech.
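The abstract contrasts a fine-tuning setting with a prompt-learning setting for context-aware honorific conversion. As a rough illustration only (not taken from the paper; the field labels, example sentences, and prompt layout below are all assumptions), a few-shot prompt for the prompt-learning setting might pair a social-relationship context with a plain-form sentence and ask the model for the honorific rendering:

```python
# A minimal sketch (not the authors' code) of how a few-shot prompt for
# context-aware honorific conversion could be assembled before sending it
# to GPT-3. All examples and field names are hypothetical.

few_shot_examples = [
    # (context, plain form, honorific form)
    ("話し手は部長に対して話している。",   # the speaker is addressing their manager
     "私が資料を持っていく。",             # plain:  "I will bring the documents."
     "私が資料をお持ちします。"),           # humble: "I will (humbly) bring the documents."
    ("話し手は先生について話している。",   # the speaker is talking about their teacher
     "先生が来る。",                       # plain:      "The teacher will come."
     "先生がいらっしゃいます。"),           # respectful: "The teacher will (honorably) come."
]

def build_prompt(context: str, sentence: str) -> str:
    """Assemble a few-shot prompt: context, plain sentence, expected honorific."""
    lines = []
    for ctx, plain, honorific in few_shot_examples:
        lines.append(f"文脈: {ctx}\n平叙文: {plain}\n敬語: {honorific}\n")
    # Leave the final honorific slot empty for the model to complete.
    lines.append(f"文脈: {context}\n平叙文: {sentence}\n敬語:")
    return "\n".join(lines)

print(build_prompt("話し手は取引先の担当者に話している。",  # speaking to a client contact
                   "明日、私が説明する。"))                  # "Tomorrow, I will explain."
```

In the fine-tuning setting, the same context-sentence pairs would instead be used as supervised training examples rather than as in-context demonstrations.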
Anthology ID:
2023.starsem-1.5
Volume:
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Alexis Palmer, Jose Camacho-Collados
Venue:
*SEM
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
40–47
URL:
https://aclanthology.org/2023.starsem-1.5
DOI:
10.18653/v1/2023.starsem-1.5
Cite (ACL):
Ryo Sekizawa and Hitomi Yanaka. 2023. Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on Japanese Honorific Conversion. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 40–47, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on Japanese Honorific Conversion (Sekizawa & Yanaka, *SEM 2023)
PDF:
https://aclanthology.org/2023.starsem-1.5.pdf