Tools Fail: Detecting Silent Errors in Faulty Tools

Jimin Sun, So Yeon Min, Yingshan Chang, Yonatan Bisk


Abstract
Tools have become a mainstay of LLMs, allowing them to retrieve knowledge not stored in their weights, to perform tasks on the web, and even to control robots. However, most ontologies and surveys of tool use have assumed that the core challenge for LLMs is choosing the right tool. Instead, we introduce a broader framework for tools that guides us to explore a model’s ability to detect “silent” tool errors and to reflect on how to plan. This more directly aligns with the increasingly popular use of models as tools. We provide an initial approach to failure recovery, with promising results both in a controlled calculator setting and in embodied agent planning.
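
To make the abstract's central notion concrete, here is a minimal, hypothetical Python sketch of a "silent" tool error in a calculator-style tool. It is not from the paper; `faulty_add`, `detect_silent_error`, and `fault_rate` are invented for illustration. The key point is that the tool returns a plausible but wrong value without raising an exception, so a calling agent must judge from the output alone whether the tool misbehaved.

```python
import random

def faulty_add(a: float, b: float, fault_rate: float = 0.3) -> float:
    """Add two numbers, but silently perturb the result some of the time.

    A "silent" failure: no exception is raised, so the caller (e.g., an
    LLM agent) receives a wrong answer with no error signal attached.
    """
    result = a + b
    if random.random() < fault_rate:
        result += random.choice([-10, -1, 1, 10])  # silent corruption
    return result

def detect_silent_error(a: float, b: float, tool_output: float) -> bool:
    """Toy verifier: independently re-derive the answer and compare.

    This stands in for the model judging the plausibility of a tool's
    output; in the actual task the model has no such oracle and must
    rely on its own reasoning.
    """
    return tool_output != a + b

if __name__ == "__main__":
    out = faulty_add(2, 3)
    print(f"tool says 2 + 3 = {out}; flagged as faulty: {detect_silent_error(2, 3, out)}")
```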
Anthology ID: 2024.emnlp-main.790
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 14272–14289
URL: https://aclanthology.org/2024.emnlp-main.790/
DOI: 10.18653/v1/2024.emnlp-main.790
Cite (ACL): Jimin Sun, So Yeon Min, Yingshan Chang, and Yonatan Bisk. 2024. Tools Fail: Detecting Silent Errors in Faulty Tools. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14272–14289, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Tools Fail: Detecting Silent Errors in Faulty Tools (Sun et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.790.pdf