Using Language Models to Improve Rule-based Linguistic Annotation of Modern Historical Japanese Corpora

Jerry Bonnell, Mitsunori Ogihara


Abstract
Annotation of unlabeled textual corpora with linguistic metadata is a fundamental technology in many scholarly workflows in the digital humanities (DH). Pretrained natural language processing pipelines offer simultaneous tokenization, tagging, and dependency parsing of raw text using an annotation scheme such as Universal Dependencies (UD). However, the accuracy of these UD tools remains unknown for historical texts, and current methods lack mechanisms that enable helpful evaluations by domain experts. To address both points for the case of Modern Historical Japanese text, this paper proposes using unsupervised domain adaptation to develop a domain-adapted language model (LM) that can flag instances of inaccurate UD output from a pretrained LM, and then using these flagged instances to form rules that, when applied, improve pretrained annotation accuracy. To test the efficacy of the proposed approach, the paper evaluates the domain-adapted LM against three baselines that are not adapted to the historical domain. The experiments demonstrate that the domain-adapted LM improves UD annotation in the Modern Historical Japanese domain and that rules produced using this LM are most indicative of the domain's characteristics in terms of out-of-vocabulary rate and candidate normalized-form discovery for “difficult” bigram terms.
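
As a rough illustration of the pretrained-pipeline setting described above, the sketch below annotates a short Japanese sentence with a UD pipeline and prints its CoNLL-U output. It assumes the esupar library (the toolkit that the linked adapt-esupar code builds on) with its default Japanese model; the example sentence is illustrative only and is not drawn from the paper's corpus.

    # Minimal sketch: annotating raw Japanese text with a pretrained UD pipeline.
    # Assumes the esupar package (pip install esupar) and its default "ja" model;
    # the sentence below is an arbitrary Modern-era example, not from the paper.
    import esupar

    nlp = esupar.load("ja")       # pretrained tokenizer + tagger + dependency parser
    doc = nlp("吾輩は猫である。")    # annotate one raw sentence
    print(doc)                    # CoNLL-U style rows: FORM, UPOS, HEAD, DEPREL, ...

In the paper's setting, output like this from a pretrained LM would be checked against a domain-adapted LM, and disagreements would be flagged as candidates for correction rules.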
Anthology ID:
2022.latechclfl-1.5
Volume:
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Stefania Degaetano, Anna Kazantseva, Nils Reiter, Stan Szpakowicz
Venue:
LaTeCHCLfL
SIG:
SIGHUM
Publisher:
International Conference on Computational Linguistics
Pages:
30–39
URL:
https://aclanthology.org/2022.latechclfl-1.5
Cite (ACL):
Jerry Bonnell and Mitsunori Ogihara. 2022. Using Language Models to Improve Rule-based Linguistic Annotation of Modern Historical Japanese Corpora. In Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 30–39, Gyeongju, Republic of Korea. International Conference on Computational Linguistics.
Cite (Informal):
Using Language Models to Improve Rule-based Linguistic Annotation of Modern Historical Japanese Corpora (Bonnell & Ogihara, LaTeCHCLfL 2022)
PDF:
https://aclanthology.org/2022.latechclfl-1.5.pdf
Code:
jerrybonnell/adapt-esupar