PatchBERT: Just-in-Time, Out-of-Vocabulary Patching

Sangwhan Moon, Naoaki Okazaki


Abstract
Large-scale pre-trained language models have shown groundbreaking performance improvements for transfer learning in the domain of natural language processing. In this paper, we study a pre-trained multilingual BERT model and analyze the out-of-vocabulary (OOV) rate on downstream tasks, how it introduces information loss, and how, as a side effect, it obstructs the potential of the underlying model. We then propose multiple mitigation approaches and demonstrate that, when combined with fine-tuning, they improve performance at the same parameter count.
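As a rough illustration of the kind of analysis the abstract describes, the following is a minimal sketch (not the paper's code) of measuring the OOV rate of the multilingual BERT WordPiece vocabulary on a downstream corpus, using the Hugging Face tokenizer; the corpus below is a hypothetical placeholder.

from transformers import BertTokenizer

# Pre-trained multilingual BERT vocabulary (WordPiece).
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

# Hypothetical downstream-task sentences; in practice this would be the task corpus.
corpus = [
    "An example downstream-task sentence.",
    "Another sentence that may contain out-of-vocabulary characters.",
]

total_tokens = 0
unk_tokens = 0
for sentence in corpus:
    tokens = tokenizer.tokenize(sentence)
    total_tokens += len(tokens)
    # Tokens the vocabulary cannot represent are mapped to [UNK], losing information.
    unk_tokens += sum(1 for t in tokens if t == tokenizer.unk_token)

print(f"OOV ([UNK]) rate: {unk_tokens / max(total_tokens, 1):.4%}")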
Anthology ID: 2020.emnlp-main.631
Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month: November
Year: 2020
Address: Online
Editors: Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 7846–7852
URL: https://aclanthology.org/2020.emnlp-main.631
DOI: 10.18653/v1/2020.emnlp-main.631
Cite (ACL): Sangwhan Moon and Naoaki Okazaki. 2020. PatchBERT: Just-in-Time, Out-of-Vocabulary Patching. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7846–7852, Online. Association for Computational Linguistics.
Cite (Informal): PatchBERT: Just-in-Time, Out-of-Vocabulary Patching (Moon & Okazaki, EMNLP 2020)
PDF: https://aclanthology.org/2020.emnlp-main.631.pdf
Video: https://slideslive.com/38938869