Towards Uncertainty-Aware Language Agent

Jiuzhou Han, Wray Buntine, Ehsan Shareghi


Abstract
While Language Agents have achieved promising success by placing Large Language Models at the core of a more versatile design that dynamically interacts with the external world, the existing approaches neglect the notion of uncertainty during these interactions. We present the Uncertainty-Aware Language Agent (UALA), a framework that orchestrates the interaction between the agent and the external world using uncertainty quantification. Compared with other well-known counterparts such as ReAct, our extensive experiments across three representative tasks (HotpotQA, StrategyQA, MMLU) and various LLM sizes demonstrate that UALA brings a significant improvement in performance while relying substantially less on the external world (i.e., fewer tool calls and tokens). Our analyses provide various insights, including the strong potential of UALA compared with agent fine-tuning, and underscore the unreliability of LLMs' verbalised confidence as a proxy for uncertainty.
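As a rough illustration of the idea described in the abstract, the sketch below gates tool use on an uncertainty estimate: the agent samples several answers from the LLM, measures their disagreement, and only falls back to external tools when uncertainty exceeds a threshold. This is a minimal sketch, not the paper's implementation; the entropy-based measure, the sample_llm and run_tool_agent callables, and the threshold value are illustrative assumptions.

# Hedged sketch of uncertainty-gated tool use, in the spirit of UALA's
# orchestration of agent/external-world interaction via uncertainty.
# NOT the authors' implementation; names, the entropy measure, and the
# threshold are illustrative assumptions.

from collections import Counter
from math import log
from typing import Callable, List


def answer_entropy(samples: List[str]) -> float:
    """Shannon entropy over sampled answers: higher means more uncertain."""
    counts = Counter(s.strip().lower() for s in samples)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())


def uncertainty_aware_answer(
    question: str,
    sample_llm: Callable[[str, int], List[str]],   # hypothetical: draws n answers from the LLM
    run_tool_agent: Callable[[str], str],          # hypothetical: e.g. a ReAct-style tool-using agent
    threshold: float = 0.5,
    n_samples: int = 5,
) -> str:
    samples = sample_llm(question, n_samples)
    if answer_entropy(samples) <= threshold:
        # Low uncertainty: trust the LLM's own answer, saving tool calls and tokens.
        return Counter(s.strip() for s in samples).most_common(1)[0][0]
    # High uncertainty: fall back to the external world (tool-using agent).
    return run_tool_agent(question)

The key design choice this sketch reflects is that external interaction is treated as a fallback rather than a default, which is how a reduced number of tool calls and tokens can be achieved when the model is already confident.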
Anthology ID: 2024.findings-acl.398
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6662–6685
URL: https://aclanthology.org/2024.findings-acl.398
Cite (ACL): Jiuzhou Han, Wray Buntine, and Ehsan Shareghi. 2024. Towards Uncertainty-Aware Language Agent. In Findings of the Association for Computational Linguistics ACL 2024, pages 6662–6685, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Towards Uncertainty-Aware Language Agent (Han et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.398.pdf