PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion

Yiduo Guo, Zekai Zhang, Yaobo Liang, Dongyan Zhao, Nan Duan


Abstract
Recent evaluations of Large Language Models (LLMs) have centered on testing their zero-shot/few-shot capabilities for basic natural language tasks and their ability to translate instructions into tool APIs. However, the evaluation of LLMs that use complex tools to complete multi-turn, multi-modal instructions in a complex multi-modal environment has not been investigated. To address this gap, we introduce the PowerPoint Task Completion (PPTC) benchmark to assess LLMs’ ability to create and edit PPT files based on user instructions. It contains 279 multi-turn sessions covering diverse topics and hundreds of instructions involving multi-modal operations. We also propose the PPTX-Match Evaluation System, which judges whether an LLM has completed an instruction based on the prediction file rather than a label API sequence, and therefore supports arbitrary LLM-generated API sequences. We evaluate 3 closed-source LLMs and 6 open-source LLMs. The results show that GPT-4 outperforms the other LLMs with 75.1% accuracy in single-turn dialogue testing, but it struggles to complete entire sessions, achieving only 6% session accuracy. We identify three main causes of error in our benchmark: error accumulation across the multi-turn session, long PPT template processing, and multi-modality perception. These pose great challenges for future LLM and agent systems.
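The core idea of file-based evaluation can be sketched as follows. This is a minimal illustrative sketch, not the benchmark's actual implementation: the slide representation, attribute names, and matching rule here are hypothetical. The point is that the final file's attributes are checked against a reference, so any API sequence that produces a correct file passes, regardless of how it differs from a label sequence.

```python
# Hedged sketch of content-based checking in the spirit of PPTX-Match:
# compare attributes extracted from the predicted file against a
# reference, instead of string-matching the model's API calls.
# Slides are modeled as plain dicts for illustration only.

def slide_attributes(slide):
    """Flatten a slide dict into (shape_index, attribute, value) triples."""
    triples = set()
    for i, shape in enumerate(slide.get("shapes", [])):
        for attr, value in shape.items():
            triples.add((i, attr, value))
    return triples

def file_matches(pred_slides, ref_slides):
    """A prediction passes if every reference attribute appears in it."""
    if len(pred_slides) != len(ref_slides):
        return False
    return all(
        slide_attributes(ref) <= slide_attributes(pred)  # subset check
        for pred, ref in zip(pred_slides, ref_slides)
    )

pred = [{"shapes": [{"text": "Q3 Review", "bold": True, "font_size": 40}]}]
ref = [{"shapes": [{"text": "Q3 Review", "bold": True}]}]
print(file_matches(pred, ref))  # True: extra attributes in the prediction are fine
```

In practice, attributes would be read out of the generated `.pptx` file itself (e.g. via a PPTX parsing library) rather than from hand-built dicts; only the reference attributes touched by the instruction need to match.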
Anthology ID:
2024.findings-acl.514
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8682–8701
URL:
https://aclanthology.org/2024.findings-acl.514
Cite (ACL):
Yiduo Guo, Zekai Zhang, Yaobo Liang, Dongyan Zhao, and Nan Duan. 2024. PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion. In Findings of the Association for Computational Linguistics: ACL 2024, pages 8682–8701, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion (Guo et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.514.pdf