Federated Learning for Exploiting Annotators’ Disagreements in Natural Language Processing

Nuria Rodríguez-Barroso, Eugenio Martínez Cámara, Jose Camacho Collados, M. Victoria Luzón, Francisco Herrera


Abstract
The annotation of ambiguous or subjective NLP tasks typically involves multiple annotators. In most datasets, these annotations are aggregated into a single ground truth, which discards divergent annotator opinions and hence individual perspectives. We propose FLEAD (Federated Learning for Exploiting Annotators' Disagreements), a methodology built upon federated learning that learns independently from the opinions of all annotators, thereby leveraging all of their underlying information without relying on a single ground truth. We conduct an extensive experimental study and analysis on diverse text classification tasks, showing the contribution of our approach with respect to mainstream approaches based on majority voting and other recent methodologies that also learn from annotator disagreements.
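The core idea can be illustrated with a minimal federated-averaging loop in which each annotator is treated as a client that trains on its own labels, and a server averages the resulting models. This is only a hedged sketch of the general technique (FedAvg-style aggregation over per-annotator label sets); the function names, model, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def local_sgd(w, data, lr=0.1, epochs=5):
    """One annotator's local update: logistic regression on its own labels.

    `data` is a list of (feature_vector, label) pairs reflecting that
    annotator's individual labeling decisions (assumed format).
    """
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid prediction
            g = p - y                                # gradient of log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def fed_avg(clients, dim=2, rounds=10):
    """Server loop: broadcast weights, train locally per annotator, average.

    No annotator's labels are merged into a single ground truth; each
    client contributes a full model update, and disagreement is retained
    in the averaged parameters.
    """
    w = [0.0] * dim
    for _ in range(rounds):
        updates = [local_sgd(list(w), data) for data in clients]
        w = [sum(u[i] for u in updates) / len(updates) for i in range(dim)]
    return w

# Three annotators, each with their own (possibly conflicting) labels.
annotators = [
    [((1.0, 0.0), 1), ((0.0, 1.0), 0)],  # annotator A
    [((1.0, 0.0), 1), ((0.0, 1.0), 0)],  # annotator B
    [((1.0, 0.0), 1), ((0.0, 1.0), 1)],  # annotator C disagrees on one item
]
global_model = fed_avg(annotators)
```

In this sketch the dissenting vote of annotator C is not discarded by majority voting; it is diluted proportionally through the parameter average, which is the federated analogue of learning from all annotators' perspectives.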
Anthology ID:
2024.tacl-1.35
Volume:
Transactions of the Association for Computational Linguistics, Volume 12
Month:
Year:
2024
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
630–648
URL:
https://aclanthology.org/2024.tacl-1.35
DOI:
10.1162/tacl_a_00664
Cite (ACL):
Nuria Rodríguez-Barroso, Eugenio Martínez Cámara, Jose Camacho Collados, M. Victoria Luzón, and Francisco Herrera. 2024. Federated Learning for Exploiting Annotators’ Disagreements in Natural Language Processing. Transactions of the Association for Computational Linguistics, 12:630–648.
Cite (Informal):
Federated Learning for Exploiting Annotators’ Disagreements in Natural Language Processing (Rodríguez-Barroso et al., TACL 2024)
PDF:
https://aclanthology.org/2024.tacl-1.35.pdf