Prarit Lamba
2024
Mitigating Hallucination in Fictional Character Role-Play
Nafis Sadeq | Zhouhang Xie | Byungkyu Kang | Prarit Lamba | Xiang Gao | Julian McAuley
Findings of the Association for Computational Linguistics: EMNLP 2024
Role-playing has wide-ranging applications in customer support, embodied agents, and computational social science. The influence of parametric world knowledge of large language models (LLMs) often causes role-playing characters to act out of character and to hallucinate about things outside the scope of their knowledge. In this work, we focus on the evaluation and mitigation of hallucination in fictional character role-play. We introduce a dataset with over 2,000 characters and 72,000 interviews, including 18,000 adversarial questions. We propose RoleFact, a role-playing method that mitigates hallucination by modulating the influence of parametric knowledge using a pre-calibrated confidence threshold. Experiments show that the proposed method improves the factual precision of generated responses by 18% for adversarial questions with a 44% reduction in temporal hallucination for time-sensitive interviews. The code and the dataset are available at https://github.com/NafisSadeq/rolefact.git.
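The abstract's core mechanism, suppressing a response when the model's confidence in its parametric knowledge falls below a pre-calibrated threshold, can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function names, the threshold value, and the refusal text are all hypothetical.

```python
# Hypothetical sketch of confidence-threshold gating: keep a response only
# when the model's confidence clears a pre-calibrated threshold; otherwise
# the character declines, staying within its knowledge scope.

CONFIDENCE_THRESHOLD = 0.7  # assumed value; the paper pre-calibrates this

def gated_response(answer: str, confidence: float,
                   threshold: float = CONFIDENCE_THRESHOLD) -> str:
    """Suppress low-confidence parametric knowledge in a role-play reply."""
    if confidence >= threshold:
        return answer
    # Fall back to an in-character refusal instead of hallucinating.
    return "I'm afraid I don't know anything about that."

# A time-sensitive adversarial question (e.g., about events after the
# character's era) should yield low confidence and trigger the refusal.
print(gated_response("The treaty was signed in 1648.", 0.9))
print(gated_response("My favorite smartphone app is TikTok.", 0.2))
```

The gate turns temporal hallucination into an explicit, in-character refusal, which is what the reported 44% reduction for time-sensitive interviews targets.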
2023
Unsupervised Improvement of Factual Knowledge in Language Models
Nafis Sadeq | Byungkyu Kang | Prarit Lamba | Julian McAuley
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Masked language modeling (MLM) plays a key role in pretraining large language models. However, the MLM objective is often dominated by high-frequency words, which are suboptimal for learning factual knowledge. In this work, we propose an approach for influencing MLM pretraining in a way that can improve language model performance on a variety of knowledge-intensive tasks. We force the language model to prioritize informative words in a fully unsupervised way. Experiments demonstrate that the proposed approach can significantly improve the performance of pretrained language models on tasks such as factual recall, question answering, sentiment analysis, and natural language inference in a closed-book setting.
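One unsupervised way to realize the idea in the abstract, down-weighting high-frequency words so informative ones dominate the MLM loss, is to scale each masked token's loss by its negative log unigram probability. This is a minimal sketch under that assumption, not the paper's exact weighting scheme; all function names are illustrative.

```python
# Illustrative sketch: weight each token by -log of its corpus frequency,
# so rare, informative words contribute more to the MLM training signal
# than frequent function words. Fully unsupervised: only corpus counts.
import math
from collections import Counter

def token_weights(corpus_tokens):
    """Map each token to its negative log unigram probability."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {tok: -math.log(c / total) for tok, c in counts.items()}

def weighted_mlm_loss(per_token_losses, tokens, weights):
    """Average the masked tokens' losses, scaled by informativeness."""
    num = sum(weights[t] * loss for t, loss in zip(tokens, per_token_losses))
    den = sum(weights[t] for t in tokens)
    return num / den

corpus = "the cat sat on the mat the cat".split()
w = token_weights(corpus)
# "the" appears three times, "mat" once, so "mat" carries more weight.
assert w["mat"] > w["the"]
```

In an actual pretraining loop these weights would multiply the per-token cross-entropy before reduction, leaving the rest of the MLM pipeline unchanged.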