Quantifying Memorization and Detecting Training Data of Pre-trained Language Models using Japanese Newspaper

Shotaro Ishihara, Hiromu Takahashi

Abstract
Dominant pre-trained language models (PLMs) have demonstrated the potential risk of memorizing and outputting their training data. While this concern has been discussed mainly for English, it is also practically important to focus on domain-specific PLMs. In this study, we pre-trained domain-specific GPT-2 models on a limited corpus of Japanese newspaper articles and evaluated their behavior. Our experiments replicated the empirical finding that memorization in PLMs is related to duplication in the training data, model size, and prompt length, showing that the trend observed in previous English studies also holds for Japanese. Furthermore, we attempted membership inference attacks and demonstrated that training data can be detected in Japanese as well, again mirroring results reported for English. The study warns that domain-specific PLMs, sometimes trained on valuable private data, can "copy and paste" on a large scale.
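
As an illustration only (not the authors' exact setup), the sketch below shows a standard loss-based membership inference baseline of the kind the abstract alludes to, using HuggingFace Transformers: score each candidate text by the model's average per-token negative log-likelihood and predict "member" when the score falls below a tuned threshold. The model checkpoint name, threshold, and sample text are placeholder assumptions; the paper's newspaper-trained models are not public.

```python
# Minimal loss-based membership inference sketch (illustrative, not the
# paper's exact method). Assumptions: the model checkpoint, the decision
# threshold, and the candidate text are all placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "rinna/japanese-gpt2-small"  # assumed public Japanese GPT-2 checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def nll_score(text: str) -> float:
    """Average per-token negative log-likelihood; lower values suggest the
    text is more 'familiar' to the model, i.e., more likely memorized."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Predict membership by thresholding the score; in practice the threshold
# would be calibrated on held-out member/non-member examples.
THRESHOLD = 3.0  # placeholder value
candidate = "ここに判定したい記事本文を入れる。"  # placeholder article text
print("member" if nll_score(candidate) < THRESHOLD else "non-member")
```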
Anthology ID:
2024.inlg-main.14
Volume:
Proceedings of the 17th International Natural Language Generation Conference
Month:
September
Year:
2024
Address:
Tokyo, Japan
Editors:
Saad Mahamood, Nguyen Le Minh, Daphne Ippolito
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
165–179
URL:
https://aclanthology.org/2024.inlg-main.14
Cite (ACL):
Shotaro Ishihara and Hiromu Takahashi. 2024. Quantifying Memorization and Detecting Training Data of Pre-trained Language Models using Japanese Newspaper. In Proceedings of the 17th International Natural Language Generation Conference, pages 165–179, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Quantifying Memorization and Detecting Training Data of Pre-trained Language Models using Japanese Newspaper (Ishihara & Takahashi, INLG 2024)
PDF:
https://aclanthology.org/2024.inlg-main.14.pdf