Establishing Trustworthiness: Rethinking Tasks and Model Evaluation

Robert Litschko, Max Müller-Eberstein, Rob van der Goot, Leon Weber-Genzel, Barbara Plank


Abstract
Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades. Traditionally, facets of linguistic intelligence have been compartmentalized into tasks with specialized model architectures and corresponding evaluation protocols. With the advent of large language models (LLMs), the community has witnessed a dramatic shift towards general-purpose, task-agnostic approaches powered by generative models. As a consequence, the traditional compartmentalized notion of language tasks is breaking down, posing an increasing challenge for evaluation and analysis. At the same time, LLMs are being deployed in more real-world scenarios, including previously unforeseen zero-shot setups, increasing the need for trustworthy and reliable systems. We therefore argue that it is time to rethink what constitutes tasks and model evaluation in NLP, and to pursue a more holistic view of language, placing trustworthiness at the center. Towards this goal, we review existing compartmentalized approaches for understanding the origins of a model's functional capacity, and provide recommendations for more multi-faceted evaluation protocols.
Anthology ID: 2023.emnlp-main.14
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 193–203
URL: https://aclanthology.org/2023.emnlp-main.14
DOI: 10.18653/v1/2023.emnlp-main.14
Cite (ACL):
Robert Litschko, Max Müller-Eberstein, Rob van der Goot, Leon Weber-Genzel, and Barbara Plank. 2023. Establishing Trustworthiness: Rethinking Tasks and Model Evaluation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 193–203, Singapore. Association for Computational Linguistics.
Cite (Informal):
Establishing Trustworthiness: Rethinking Tasks and Model Evaluation (Litschko et al., EMNLP 2023)
PDF: https://aclanthology.org/2023.emnlp-main.14.pdf
Video: https://aclanthology.org/2023.emnlp-main.14.mp4