A Japanese Masked Language Model for Academic Domain

Hiroki Yamauchi, Tomoyuki Kajiwara, Marie Katsurai, Ikki Ohmukai, Takashi Ninomiya


Abstract
We release a pretrained Japanese masked language model for the academic domain. Pretrained masked language models have recently improved performance on a variety of natural language processing applications. In domains rich in technical terms, such as medicine and academia, domain-specific pretraining is effective. While domain-specific Japanese masked language models for the medical and SNS domains are widely used alongside domain-independent ones, no pretrained model specific to the academic domain is publicly available. In this study, we pretrained a RoBERTa-based Japanese masked language model on paper abstracts from the academic database CiNii Articles. Experimental results on Japanese text classification in the academic domain demonstrate that the proposed model outperforms existing pretrained models.
Anthology ID:
2022.sdp-1.16
Volume:
Proceedings of the Third Workshop on Scholarly Document Processing
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Arman Cohan, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Drahomira Herrmannova, Petr Knoth, Kyle Lo, Philipp Mayr, Michal Shmueli-Scheuer, Anita de Waard, Lucy Lu Wang
Venue:
sdp
Publisher:
Association for Computational Linguistics
Pages:
152–157
URL:
https://aclanthology.org/2022.sdp-1.16
Cite (ACL):
Hiroki Yamauchi, Tomoyuki Kajiwara, Marie Katsurai, Ikki Ohmukai, and Takashi Ninomiya. 2022. A Japanese Masked Language Model for Academic Domain. In Proceedings of the Third Workshop on Scholarly Document Processing, pages 152–157, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal):
A Japanese Masked Language Model for Academic Domain (Yamauchi et al., sdp 2022)
PDF:
https://aclanthology.org/2022.sdp-1.16.pdf
Code:
hirokiyamauch/academicroberta
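
As a usage note, the snippet below is a minimal sketch of querying a RoBERTa-based Japanese masked language model for fill-mask predictions with Hugging Face Transformers. The checkpoint path is a hypothetical placeholder, not an identifier confirmed by the paper; the actual model files and loading instructions are distributed via the hirokiyamauch/academicroberta repository.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical checkpoint location; substitute the files released in the
# hirokiyamauch/academicroberta repository.
MODEL_PATH = "path/to/academic-roberta"

# Note: some Japanese tokenizers expect pre-segmented input (e.g., via
# MeCab); consult the repository README for the required preprocessing.
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForMaskedLM.from_pretrained(MODEL_PATH)
model.eval()

# Japanese sentence with one masked token:
# "In this study, we propose a [MASK] model."
text = f"本研究では{tokenizer.mask_token}モデルを提案する。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the top-5 candidate tokens.
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))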