Assessing and Verifying Task Utility in LLM-Powered Applications
Negar Arabzadeh | Siqing Huo | Nikhil Mehta | Qingyun Wu | Chi Wang | Ahmed Hassan Awadallah | Charles L. A. Clarke | Julia Kiseleva
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
The rapid development of Large Language Models (LLMs) has led to a surge in applications that facilitate collaboration among multiple agents, assisting humans in their daily tasks. However, a significant gap remains in assessing to what extent LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the need to verify the utility of LLM-powered applications, particularly by ensuring alignment between an application’s functionality and end-user needs. We introduce AgentEval, a novel framework designed to simplify the utility verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application. This allows for a comprehensive assessment, quantifying the utility of an application against the suggested criteria. We present a comprehensive analysis of the effectiveness and robustness of AgentEval on two open-source datasets: math problem solving and ALFWorld household-related tasks. For reproducibility purposes, we make the data, code, and all logs publicly available at https://github.com/Narabzad/AgentEval
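As a rough illustration of the two-step idea described in the abstract (first propose task-specific criteria, then score a solution against them), here is a minimal sketch. The function names (`propose_criteria`, `quantify`), prompts, and data shapes are hypothetical stand-ins for exposition and are not taken from the released AgentEval code.

```python
# Hypothetical sketch of a criteria-then-quantify evaluation loop:
# (1) a critic LLM proposes assessment criteria tailored to the task,
# (2) a quantifier LLM rates an execution log against each criterion.
# The `llm` callable is assumed to take a prompt string and return text.
import json
from typing import Callable

def propose_criteria(llm: Callable[[str], str], task_description: str) -> list[dict]:
    """Ask a critic model for evaluation criteria tailored to the task."""
    prompt = (
        "Propose evaluation criteria for the following task. "
        "Return a JSON list of objects with 'name', 'description', "
        f"and 'accepted_values'.\n\nTask: {task_description}"
    )
    return json.loads(llm(prompt))

def quantify(llm: Callable[[str], str], task_description: str,
             criteria: list[dict], execution_log: str) -> dict:
    """Ask a quantifier model to rate one solution against each criterion."""
    prompt = (
        f"Task: {task_description}\n"
        f"Criteria: {json.dumps(criteria)}\n"
        f"Solution log: {execution_log}\n"
        "For each criterion, return a JSON object mapping its name to one "
        "of its accepted values."
    )
    return json.loads(llm(prompt))
```

In such a setup, the quantified scores could then be aggregated across many execution logs to compare, for example, successful and failed runs of the same application, which is the kind of utility assessment the paper targets.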