Georgi Nenkov Georgiev
2024
Factuality of Large Language Models: A Survey
Yuxia Wang | Minghan Wang | Muhammad Arslan Manzoor | Fei Liu | Georgi Nenkov Georgiev | Rocktim Jyoti Das | Preslav Nakov
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs), especially when instruction-tuned for chat, have become part of our daily lives, freeing people from the process of searching, extracting, and integrating information from multiple sources by offering a straightforward answer to a variety of questions in a single place. Unfortunately, in many cases, LLM responses are factually incorrect, which limits their applicability in real-world scenarios. As a result, evaluating and improving the factuality of LLMs has attracted a lot of research attention recently. In this survey, we critically analyze existing work with the aim of identifying the major challenges and their associated causes, pointing to potential solutions for improving the factuality of LLMs, and analyzing the obstacles to automated factuality evaluation for open-ended text generation. We further offer an outlook on where future research should go.
OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs
Hasan Iqbal | Yuxia Wang | Minghan Wang | Georgi Nenkov Georgiev | Jiahui Geng | Iryna Gurevych | Preslav Nakov
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
The increased use of large language models (LLMs) across a variety of real-world applications calls for automatic tools to check the factual accuracy of their outputs, as LLMs often hallucinate. This is difficult as it requires assessing the factuality of free-form open-domain responses. While there has been a lot of research on this topic, different papers use different evaluation benchmarks and measures, which makes them hard to compare and hampers future progress. To mitigate these issues, we developed OpenFactCheck, a unified framework with three modules: (i) RESPONSEEVAL, which allows users to easily customize an automatic fact-checking system and to assess the factuality of all claims in an input document using that system, (ii) LLMEVAL, which assesses the overall factuality of an LLM, and (iii) CHECKEREVAL, a module to evaluate automatic fact-checking systems. OpenFactCheck is open-sourced (https://github.com/mbzuai-nlp/openfactcheck) and publicly released as a Python library (https://pypi.org/project/openfactcheck/) and as a web service (http://app.openfactcheck.com). A video describing the system is available at https://youtu.be/-i9VKL0HleI.
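The three-module design described in the abstract lends itself to a simple pipeline: decompose a response into claims, verify each claim, aggregate per-LLM scores, and separately score the verifier itself. Below is a minimal, hypothetical sketch of how such a pipeline could be wired together. The class and method names (ResponseEvaluator, check_claims, and so on) are illustrative assumptions, not the actual OpenFactCheck API; consult the repository linked above for real usage.

```python
from dataclasses import dataclass

@dataclass
class ClaimVerdict:
    claim: str
    is_factual: bool

class ResponseEvaluator:
    """RESPONSEEVAL-style module: split a document into claims and check each one."""
    def extract_claims(self, document: str) -> list[str]:
        # Naive stand-in: treat each sentence as one claim.
        return [s.strip() for s in document.split(".") if s.strip()]

    def check_claims(self, claims: list[str]) -> list[ClaimVerdict]:
        # Placeholder verifier; a real system would retrieve evidence
        # and classify each claim as supported or unsupported.
        return [ClaimVerdict(c, True) for c in claims]

class LLMEvaluator:
    """LLMEVAL-style module: score an LLM's overall factuality over its outputs."""
    def __init__(self, response_eval: ResponseEvaluator):
        self.response_eval = response_eval

    def evaluate(self, llm_outputs: list[str]) -> float:
        verdicts = [v for out in llm_outputs
                    for v in self.response_eval.check_claims(
                        self.response_eval.extract_claims(out))]
        # Fraction of claims judged factual across all outputs.
        return sum(v.is_factual for v in verdicts) / max(len(verdicts), 1)

class CheckerEvaluator:
    """CHECKEREVAL-style module: measure a fact-checker against gold labels."""
    def evaluate(self, predicted: list[bool], gold: list[bool]) -> float:
        # Simple accuracy of the checker's verdicts against gold annotations.
        return sum(p == g for p, g in zip(predicted, gold)) / max(len(gold), 1)
```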
Co-authors
- Yuxia Wang 2
- Minghan Wang 2
- Preslav Nakov 2
- Muhammad Arslan Manzoor 1
- Fei Liu 1