Tutorial Proposal: Hallucination in Large Language Models

Vipula Rawte, Aman Chadha, Amit Sheth, Amitava Das


Abstract
In the fast-paced domain of Large Language Models (LLMs), hallucination is a prominent challenge. Despite continuous efforts to address this concern, it remains a highly active area of research within the LLM landscape. Grasping the intricacies of this problem can be daunting, especially for those new to the field. This tutorial aims to bridge this knowledge gap by introducing the emerging area of hallucination in LLMs. It will comprehensively explore the key aspects of hallucination, including benchmarking, detection, and mitigation techniques. Furthermore, we will delve into the specific constraints and shortcomings of current approaches, providing participants with valuable insights to guide future research efforts.
Anthology ID: 2024.lrec-tutorials.11
Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries
Month: May
Year: 2024
Address: Torino, Italia
Editors: Roman Klinger, Naoaki Okazaki, Nicoletta Calzolari, Min-Yen Kan
Venues: LREC | COLING
Publisher: ELRA and ICCL
Pages: 68–72
URL: https://aclanthology.org/2024.lrec-tutorials.11
Cite (ACL): Vipula Rawte, Aman Chadha, Amit Sheth, and Amitava Das. 2024. Tutorial Proposal: Hallucination in Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries, pages 68–72, Torino, Italia. ELRA and ICCL.
Cite (Informal): Tutorial Proposal: Hallucination in Large Language Models (Rawte et al., LREC-COLING 2024)
PDF: https://aclanthology.org/2024.lrec-tutorials.11.pdf