A Course Shared Task on Evaluating LLM Output for Clinical Questions

Yufang Hou, Thy Tran, Doan Vu, Yiwen Cao, Kai Li, Lukas Rohde, Iryna Gurevych


Abstract
This paper presents a shared task that we organized as part of the Foundations of Language Technology (FoLT) course in 2023/2024 at the Technical University of Darmstadt, which focuses on evaluating whether answers generated by Large Language Models (LLMs) to health-related clinical questions are harmful. We describe the task design considerations and report the feedback we received from the students. We expect the task and the findings reported in this paper to be relevant for instructors teaching natural language processing (NLP).
Anthology ID: 2024.teachingnlp-1.11
Volume: Proceedings of the Sixth Workshop on Teaching NLP
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Sana Al-azzawi, Laura Biester, György Kovács, Ana Marasović, Leena Mathur, Margot Mieskes, Leonie Weissweiler
Venues: TeachingNLP | WS
Publisher: Association for Computational Linguistics
Pages: 77–80
URL: https://aclanthology.org/2024.teachingnlp-1.11
Cite (ACL): Yufang Hou, Thy Tran, Doan Vu, Yiwen Cao, Kai Li, Lukas Rohde, and Iryna Gurevych. 2024. A Course Shared Task on Evaluating LLM Output for Clinical Questions. In Proceedings of the Sixth Workshop on Teaching NLP, pages 77–80, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): A Course Shared Task on Evaluating LLM Output for Clinical Questions (Hou et al., TeachingNLP-WS 2024)
PDF: https://aclanthology.org/2024.teachingnlp-1.11.pdf