Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance

Song Wang, Zhen Tan, Ruocheng Guo, Jundong Li

Abstract
Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advancements in the field of natural language processing. However, in real-world scenarios, data labels are often noisy due to the complex annotation process, making it essential to develop strategies for fine-tuning PLMs with such noisy labels. To this end, we introduce an innovative approach for fine-tuning PLMs using noisy labels, which incorporates the guidance of Large Language Models (LLMs) such as ChatGPT. This guidance helps to accurately distinguish clean samples from noisy ones and provides supplementary information beyond the noisy labels, thereby boosting the learning process when fine-tuning PLMs. Extensive experiments on synthetic and real-world noisy datasets further demonstrate the advantages of our framework over state-of-the-art baselines.
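The abstract does not spell out the mechanics, but the core idea of using an LLM's predictions to separate likely-clean from likely-noisy samples can be sketched roughly as below. This is a hypothetical illustration under stated assumptions, not the authors' algorithm: the names `Example` and `split_by_llm_agreement`, and the simple label-agreement heuristic, are introduced here purely for exposition.

```python
# Hypothetical sketch: use an external LLM's zero-shot predictions to split a
# noisy-labeled dataset into "likely clean" and "likely noisy" subsets, keeping
# the LLM label as auxiliary supervision for the disagreeing samples.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Example:
    text: str
    noisy_label: int      # label from the (possibly unreliable) annotation process
    llm_label: int = -1   # label predicted by an external LLM (e.g., via an API)


def split_by_llm_agreement(
    examples: List[Example],
    llm_predict: Callable[[str], int],
) -> Tuple[List[Example], List[Example]]:
    """Treat samples where the LLM agrees with the dataset label as clean;
    route disagreements to a 'noisy' pool with the LLM label attached."""
    clean, noisy = [], []
    for ex in examples:
        ex.llm_label = llm_predict(ex.text)
        if ex.llm_label == ex.noisy_label:
            clean.append(ex)
        else:
            noisy.append(ex)
    return clean, noisy


if __name__ == "__main__":
    # Toy stand-in for an LLM call: classify sentiment with a keyword rule.
    def fake_llm(text: str) -> int:
        return 1 if "great" in text.lower() else 0

    data = [
        Example("This movie was great!", noisy_label=1),
        Example("Terrible plot and acting.", noisy_label=1),  # mislabeled
    ]
    clean, noisy = split_by_llm_agreement(data, fake_llm)
    print(len(clean), "clean /", len(noisy), "flagged as noisy")
```

In practice the clean subset would be used for standard supervised fine-tuning of the PLM, while the flagged subset could be reweighted or supervised with the LLM-provided labels; the exact treatment in the paper is described in the full text linked below.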
Anthology ID: 2023.findings-emnlp.834
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12528–12540
URL: https://aclanthology.org/2023.findings-emnlp.834
DOI: 10.18653/v1/2023.findings-emnlp.834
Cite (ACL): Song Wang, Zhen Tan, Ruocheng Guo, and Jundong Li. 2023. Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12528–12540, Singapore. Association for Computational Linguistics.
Cite (Informal): Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance (Wang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.834.pdf