Integrating Task Specific Information into Pretrained Language Models for Low Resource Fine Tuning

Rui Wang, Shijing Si, Guoyin Wang, Lei Zhang, Lawrence Carin, Ricardo Henao


Abstract
Pretrained Language Models (PLMs) have improved the performance of natural language understanding in recent years. Such models are pretrained on large corpora, which encode general prior knowledge of natural language but are agnostic to information characteristic of downstream tasks. This often results in overfitting when fine-tuning on low-resource datasets where task-specific information is limited. In this paper, we integrate label information as a task-specific prior into the self-attention component of pretrained BERT models. Experiments on several benchmarks and real-world datasets suggest that the proposed approach can substantially improve the performance of pretrained models when fine-tuning with small datasets.
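The abstract describes injecting label information into BERT's self-attention; the authors' implementation is in the repository linked at the bottom of this page. The snippet below is only a minimal PyTorch sketch of one plausible way such an idea could look, not the paper's exact method: a hypothetical LabelAwareSelfAttention module with one learned embedding per class, appended to the keys and values that tokens attend over.

import torch
import torch.nn as nn


class LabelAwareSelfAttention(nn.Module):
    """Illustrative single-head self-attention that mixes in label embeddings.

    This is a sketch under assumptions, not the authors' formulation:
    learned label embeddings (one vector per class) are treated as extra
    key/value entries so token representations can attend to task labels.
    """

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        # Hypothetical task-specific prior: a learned embedding per label.
        self.label_embeddings = nn.Parameter(
            torch.randn(num_labels, hidden_size) * 0.02
        )
        self.scale = hidden_size ** -0.5

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        batch_size = hidden_states.size(0)
        labels = self.label_embeddings.unsqueeze(0).expand(batch_size, -1, -1)
        # Append label embeddings to the sequence the tokens attend over.
        kv_input = torch.cat([hidden_states, labels], dim=1)
        q = self.query(hidden_states)
        k = self.key(kv_input)
        v = self.value(kv_input)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v


# Example: a 2-class task with 768-dimensional BERT hidden states.
layer = LabelAwareSelfAttention(hidden_size=768, num_labels=2)
tokens = torch.randn(4, 16, 768)   # (batch, seq_len, hidden)
out = layer(tokens)                # shape: (4, 16, 768)

The design choice illustrated here is that label embeddings act only as additional attention targets, so the output shape and the rest of the BERT stack are unchanged; how the paper actually parameterizes and trains the label prior should be taken from the linked code, not this sketch.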
Anthology ID: 2020.findings-emnlp.285
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Editors: Trevor Cohn, Yulan He, Yang Liu
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3181–3186
URL: https://aclanthology.org/2020.findings-emnlp.285
DOI: 10.18653/v1/2020.findings-emnlp.285
Cite (ACL): Rui Wang, Shijing Si, Guoyin Wang, Lei Zhang, Lawrence Carin, and Ricardo Henao. 2020. Integrating Task Specific Information into Pretrained Language Models for Low Resource Fine Tuning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3181–3186, Online. Association for Computational Linguistics.
Cite (Informal): Integrating Task Specific Information into Pretrained Language Models for Low Resource Fine Tuning (Wang et al., Findings 2020)
PDF: https://aclanthology.org/2020.findings-emnlp.285.pdf
Code: raywangwr/bert_label_embedding