Detecting Response Generation Not Requiring Factual Judgment

Ryohei Kamei, Daiki Shiono, Reina Akama, Jun Suzuki


Abstract
With the remarkable development of large language models (LLMs), ensuring the factuality of their output has become a challenge. However, grounding every part of a response in given knowledge or facts is not necessarily desirable in dialogue. This study aims to achieve both attractiveness and factuality in dialogue responses by setting the task of predicting sentences that do not require a factual correctness judgment, such as agreement or personal opinions/feelings. We created a dataset for this task via crowdsourcing, the dialogue dataset annotated with fact-check-needed labels (DDFC), and evaluated several classification models on it. The model with the highest classification accuracy achieved about 88% accuracy.
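The task described in the abstract is sentence-level binary classification: given a sentence from a dialogue response, decide whether it makes a verifiable claim (fact check needed) or expresses agreement, opinion, or feeling (no judgment required). The sketch below illustrates this setup with a simple TF-IDF and logistic-regression baseline; the example sentences and labels are invented for illustration and are not drawn from the DDFC dataset, nor is this the classification model used in the paper.

```python
# Minimal sketch of the fact-check-needed sentence classification task.
# Toy data only: these sentences and labels are hypothetical, not DDFC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = fact-check needed (verifiable claim)
# 0 = no factual judgment required (agreement, opinion, feeling)
sentences = [
    "The Eiffel Tower is 330 meters tall.",
    "Mount Fuji is the highest mountain in Japan.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The novel was published in 1997.",
    "I totally agree with you!",
    "That sounds like a lot of fun.",
    "I personally prefer tea over coffee.",
    "Wow, that's great to hear!",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# A single pipeline: vectorize sentences, then fit a linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(sentences, labels)

# Classify new response sentences, one label per sentence.
preds = clf.predict([
    "The population of Tokyo is about 14 million.",
    "I really enjoyed talking with you today.",
])
print(preds.tolist())
```

In practice, the paper fine-tunes stronger pretrained models on DDFC; this baseline only shows the input/output shape of the task, where each sentence of a generated response receives its own fact-check-needed label.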
Anthology ID:
2024.naacl-srw.13
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Yang (Trista) Cao, Isabel Papadimitriou, Anaelia Ovalle
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
116–123
URL:
https://aclanthology.org/2024.naacl-srw.13
Cite (ACL):
Ryohei Kamei, Daiki Shiono, Reina Akama, and Jun Suzuki. 2024. Detecting Response Generation Not Requiring Factual Judgment. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop), pages 116–123, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Detecting Response Generation Not Requiring Factual Judgment (Kamei et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-srw.13.pdf