Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks

Aditi Mishra, Sajjadur Rahman, Kushan Mitra, Hannah Kim, Estevam Hruschka


Abstract
Large language models (LLMs) are proficient at generating fluent text with minimal task-specific supervision. However, their ability to generate rationales for knowledge-intensive tasks (KITs) remains under-explored. Generating rationales for KIT solutions, such as commonsense multiple-choice QA, requires external knowledge to support predictions and refute alternate options. In this work, we consider the task of retrieval-augmented rationalization of KIT model predictions via external knowledge guidance in a few-shot setting. Surprisingly, crowd workers preferred LLM-generated rationales over existing crowd-sourced rationales, generated in a similar knowledge-guided setting, on aspects such as factuality, sufficiency, and convincingness. However, fine-grained evaluation of these rationales highlights the need for further improvements in conciseness, novelty, and domain invariance. Additionally, through an expert-sourced study evaluating the reliability of the rationales, we demonstrate that humans’ trust in LLM-generated rationales erodes when the rationales are communicated faithfully, i.e., without taking model prediction accuracy into account. We find that even adding simple guardrails can be effective for reliable rationalization.
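The abstract describes the setup only at a high level; the sketch below is one hypothetical way a knowledge-guided, few-shot rationalization prompt could be assembled for a commonsense multiple-choice QA prediction. It is not the authors' pipeline: the demonstration, the build_prompt helper, and the hard-coded retrieved facts are illustrative assumptions, and the retriever and the actual LLM call are left out.

# Illustrative sketch only (not the paper's code): assembling a knowledge-guided,
# few-shot prompt to rationalize a multiple-choice QA prediction.

from typing import Dict, List

# One hypothetical few-shot demonstration pairing retrieved facts with a rationale
# that supports the predicted answer and refutes the alternatives.
FEW_SHOT_EXAMPLES: List[Dict] = [
    {
        "question": "Where would you put a clean plate after washing it?",
        "options": ["cabinet", "oven", "garden"],
        "prediction": "cabinet",
        "knowledge": [
            "Plates are commonly stored in kitchen cabinets.",
            "Ovens are used for cooking, not for storing dishes.",
        ],
        "rationale": (
            "Clean plates are typically stored in a kitchen cabinet; an oven is "
            "for cooking and a garden has nothing to do with dishware."
        ),
    },
]


def build_prompt(question: str, options: List[str], prediction: str,
                 knowledge: List[str]) -> str:
    """Format demonstrations followed by the target instance; the LLM is asked
    to continue with a rationale grounded in the supplied knowledge."""
    target = {
        "question": question,
        "options": options,
        "knowledge": knowledge,
        "prediction": prediction,
        "rationale": "",  # left blank for the model to complete
    }
    blocks = []
    for ex in FEW_SHOT_EXAMPLES + [target]:
        blocks.append(
            f"Question: {ex['question']}\n"
            f"Options: {', '.join(ex['options'])}\n"
            f"Knowledge: {' '.join(ex['knowledge'])}\n"
            f"Predicted answer: {ex['prediction']}\n"
            f"Rationale: {ex['rationale']}".rstrip()
        )
    return "\n\n".join(blocks)


# `retrieved` stands in for facts returned by an external knowledge source
# (e.g., a retriever over a commonsense corpus); hard-coded here for illustration.
retrieved = ["Checked baggage is luggage handed over to an airline at the airport."]
prompt = build_prompt(
    question="The only baggage the woman checked was a drawstring bag; where was she heading?",
    options=["garbage can", "military base", "airport"],
    prediction="airport",
    knowledge=retrieved,
)
print(prompt)  # send this prompt to an LLM of choice to obtain the rationale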
Anthology ID:
2024.findings-acl.484
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8117–8139
URL:
https://aclanthology.org/2024.findings-acl.484
Cite (ACL):
Aditi Mishra, Sajjadur Rahman, Kushan Mitra, Hannah Kim, and Estevam Hruschka. 2024. Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks. In Findings of the Association for Computational Linguistics ACL 2024, pages 8117–8139, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks (Mishra et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.484.pdf