Is BERT Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification

Dawei Zhu, Michael A. Hedderich, Fangzhou Zhai, David Adelani, Dietrich Klakow


Abstract
Incorrect labels in training data occur when human annotators make mistakes or when the data is generated via weak or distant supervision. It has been shown that complex noise-handling techniques, which model, clean, or filter the noisy instances, are required to prevent models from fitting this label noise. However, we show in this work that, for text classification tasks with modern NLP models like BERT, over a variety of noise types, existing noise-handling methods do not always improve performance and may even degrade it, suggesting the need for further investigation. We back our observations with a comprehensive analysis.
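The noise settings studied in this line of work are typically simulated by corrupting clean labels at a chosen rate. As a minimal illustrative sketch (not the paper's exact protocol; the function name and the 20% rate are assumptions for the example), uniform label noise can be injected before fine-tuning BERT as follows:

    import random

    def inject_uniform_noise(labels, noise_rate, num_classes, seed=0):
        """Flip each label to a different class with probability `noise_rate`.

        Simulates the 'uniform' label noise commonly used in
        learning-with-noisy-labels experiments; illustrative only.
        """
        rng = random.Random(seed)
        noisy = []
        for y in labels:
            if rng.random() < noise_rate:
                # Pick a wrong class uniformly among the other classes.
                noisy.append(rng.choice([c for c in range(num_classes) if c != y]))
            else:
                noisy.append(y)
        return noisy

    # Example: corrupt 20% of AG News labels (4 classes) before fine-tuning.
    clean_labels = [0, 1, 2, 3, 0, 1]
    noisy_labels = inject_uniform_noise(clean_labels, noise_rate=0.2, num_classes=4)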
Anthology ID:
2022.insights-1.8
Volume:
Proceedings of the Third Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun Akula
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
62–67
URL:
https://aclanthology.org/2022.insights-1.8
DOI:
10.18653/v1/2022.insights-1.8
Cite (ACL):
Dawei Zhu, Michael A. Hedderich, Fangzhou Zhai, David Adelani, and Dietrich Klakow. 2022. Is BERT Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 62–67, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Is BERT Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification (Zhu et al., insights 2022)
PDF:
https://aclanthology.org/2022.insights-1.8.pdf
Video:
https://aclanthology.org/2022.insights-1.8.mp4
Code:
uds-lsv/bert-lnl
Data:
AG News, IMDb Movie Reviews