Evaluating LLMs Efficiency Using Successive Attempts on Binary-Outcome Tasks

Mohamed Amine El Yagouby, Mehdi Zekroum, Abdelkader Lahmadi, Mounir Ghogho, Olivier Festor


Abstract
Evaluating Large Language Models (LLMs) using single-attempt metrics like Success Rate (SR) overlooks their capacity for iterative problem solving. In tasks with binary outcomes (success or failure), such as coding or planning, LLMs often benefit from multiple attempts. Existing multi-attempt metrics like pass@k and success@k account for eventual success but ignore how efficiently it is achieved, which also makes them more costly to apply. We propose a new evaluation method based on Successive Multiple Attempts, where a maximum number of retries is fixed, and introduce our Success Efficiency (SE) metric, which captures both success and efficiency in a single value by rewarding earlier successes and penalizing delays. Tested on the HumanEval dataset across six LLMs, SE captures how quickly an LLM solves tasks, which existing metrics do not offer. This work complements existing evaluation methods by measuring not only whether LLMs succeed but also how efficiently they do so.
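
For intuition, the Python sketch below shows how a successive-attempts score of this kind could be computed from the attempt at which each task is first solved. The linear reward (k_max - k + 1) / k_max and the name success_efficiency are illustrative assumptions; the paper's exact SE formula is not given in this abstract.

# Illustrative sketch of a "Success Efficiency"-style score under a
# successive-attempts protocol with a fixed retry budget.
# NOTE: the linear decay used here is an assumption for illustration,
# not the paper's actual SE definition.

from typing import Optional, Sequence


def success_efficiency(first_success: Sequence[Optional[int]], k_max: int) -> float:
    """Aggregate score over binary-outcome tasks.

    first_success[i] is the 1-indexed attempt at which task i was first
    solved, or None if it was never solved within k_max attempts.
    A task solved on attempt k earns (k_max - k + 1) / k_max, so earlier
    successes are rewarded and later ones penalized; failures earn 0.
    """
    scores = []
    for k in first_success:
        if k is None or k > k_max:
            scores.append(0.0)  # never solved within the retry budget
        else:
            scores.append((k_max - k + 1) / k_max)
    return sum(scores) / len(scores) if scores else 0.0


# Example: three tasks, budget of 5 attempts.
# Solved on attempt 1, solved on attempt 4, never solved.
print(success_efficiency([1, 4, None], k_max=5))  # (1.0 + 0.4 + 0.0) / 3 ≈ 0.467

Under this kind of scoring, two models with the same eventual success rate are separated by how early their successes occur, which is the distinction pass@k and success@k do not make.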
Anthology ID:
2025.jeptalnrecital-evalllm.10
Volume:
Actes de l'atelier Évaluation des modèles génératifs (LLM) et challenge 2025 (EvalLLM)
Month:
June
Year:
2025
Address:
Marseille, France
Editors:
Frédéric Bechet, Adrian-Gabriel Chifu, Karen Pinel-Sauvagnat, Benoit Favre, Eliot Maes, Diana Nurbakova
Venue:
JEP/TALN/RECITAL
Publisher:
ATALA & ARIA
Pages:
120–126
URL:
https://aclanthology.org/2025.jeptalnrecital-evalllm.10/
Cite (ACL):
Mohamed Amine El Yagouby, Mehdi Zekroum, Abdelkader Lahmadi, Mounir Ghogho, and Olivier Festor. 2025. Evaluating LLMs Efficiency Using Successive Attempts on Binary-Outcome Tasks. In Actes de l'atelier Évaluation des modèles génératifs (LLM) et challenge 2025 (EvalLLM), pages 120–126, Marseille, France. ATALA & ARIA.
Cite (Informal):
Evaluating LLMs Efficiency Using Successive Attempts on Binary-Outcome Tasks (El Yagouby et al., JEP/TALN/RECITAL 2025)
PDF:
https://aclanthology.org/2025.jeptalnrecital-evalllm.10.pdf