Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation

Flor Miriam Plaza-del-Arco, Debora Nozza, Dirk Hovy


Abstract
Large Language Models (LLMs) exhibit remarkable text classification capabilities, excelling in zero- and few-shot learning (ZSL and FSL) scenarios. However, since they are trained on different datasets, their performance varies widely across tasks and models. Recent studies emphasize the importance of considering human label variation in data annotation. However, whether this label variation also applies to LLMs remains unexplored. Given this likely model specialization, we ask: Do aggregated LLM labels improve over individual models (as they do for human annotators)? We evaluate four recent instruction-tuned LLMs as “annotators” on five subjective tasks across four languages, using ZSL and FSL setups and label aggregation methods from human annotation. Aggregated labels are indeed substantially better than those of any individual model, benefiting from the models’ specialization in different tasks or languages. Surprisingly, FSL does not surpass ZSL, since its performance depends on the quality of the selected in-context examples, and there seems to be no good information-theoretic strategy for selecting them. We find that no LLM-based method rivals even simple supervised models. We also discuss the tradeoffs in accuracy, cost, and moral/ethical considerations between LLM and human annotation.
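The simplest aggregation method carried over from human annotation research is majority voting over the labels produced by several model "annotators". Below is a minimal sketch of this idea in Python; the model names, task labels, and predictions are purely illustrative assumptions, not data or methods taken from the paper.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one item's labels from multiple LLM 'annotators'
    by majority vote; ties are broken by first-seen order."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical per-model predictions for three items on a binary
# hate-speech task (model names and labels are illustrative only).
predictions = {
    "model_a": ["hate", "not_hate", "hate"],
    "model_b": ["hate", "not_hate", "not_hate"],
    "model_c": ["not_hate", "not_hate", "hate"],
    "model_d": ["hate", "hate", "hate"],
}

# Transpose to per-item label lists, then aggregate each item.
per_item = zip(*predictions.values())
aggregated = [majority_vote(item_labels) for item_labels in per_item]
print(aggregated)  # ['hate', 'not_hate', 'hate']
```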
Anthology ID:
2024.nlperspectives-1.2
Volume:
Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Gavin Abercrombie, Valerio Basile, Davide Bernardi, Shiran Dudy, Simona Frenda, Lucy Havens, Sara Tonelli
Venues:
NLPerspectives | WS
Publisher:
ELRA and ICCL
Pages:
19–30
URL:
https://aclanthology.org/2024.nlperspectives-1.2
Cite (ACL):
Flor Miriam Plaza-del-Arco, Debora Nozza, and Dirk Hovy. 2024. Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation. In Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024, pages 19–30, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation (Plaza-del-Arco et al., NLPerspectives-WS 2024)
PDF:
https://aclanthology.org/2024.nlperspectives-1.2.pdf