Automatic Bug Detection in LLM-Powered Text-Based Games Using LLMs

Claire Jin, Sudha Rao, Xiangyu Peng, Portia Botchway, Jessica Quaye, Chris Brockett, Bill Dolan


Abstract
Advancements in large language models (LLMs) are revolutionizing interactive game design, enabling dynamic plotlines and interactions between players and non-player characters (NPCs). However, LLMs may exhibit flaws such as hallucinations, forgetfulness, or misinterpretations of prompts, causing logical inconsistencies and unexpected deviations from intended designs. Automated techniques for detecting such game bugs are still lacking. To address this, we propose a systematic LLM-based method for automatically identifying such bugs from player game logs, eliminating the need to collect additional data such as post-play surveys. Applied to the text-based game DejaBoom!, our approach effectively identifies bugs inherent in LLM-powered interactive games, surpassing unstructured LLM-powered bug-catching methods and filling the gap in automated detection of logical and design flaws.
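To make the general idea concrete, the sketch below shows one way an LLM could be prompted to flag candidate bugs in a player game log. This is a minimal illustration only, not the authors' pipeline: the model name, prompt wording, log format, and the flag_potential_bugs helper are assumptions for this example, and it uses the standard OpenAI Python client.

# Minimal illustrative sketch (not the paper's method): ask an LLM to scan a
# single play-through log for logical inconsistencies. Model name, prompt
# wording, and log format are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BUG_PROMPT = (
    "You are reviewing a log from an LLM-powered text-based game. "
    "List any logical inconsistencies, forgotten game state, or deviations "
    "from the intended design, one per line. "
    "If the log looks consistent, reply with 'NO BUGS FOUND'."
)

def flag_potential_bugs(game_log: str, model: str = "gpt-4o-mini") -> str:
    """Ask an LLM to flag candidate bugs in one play-through log."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": BUG_PROMPT},
            {"role": "user", "content": game_log},
        ],
        temperature=0,  # keep output as repeatable as possible for triage
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Toy log with an obvious state-tracking bug: the key is picked up but forgotten.
    sample_log = (
        "> take key\nYou pick up the rusty key.\n"
        "> unlock door\nYou have no key, so the door stays locked."
    )
    print(flag_potential_bugs(sample_log))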
Anthology ID: 2024.findings-acl.907
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15353–15368
URL: https://aclanthology.org/2024.findings-acl.907
Cite (ACL): Claire Jin, Sudha Rao, Xiangyu Peng, Portia Botchway, Jessica Quaye, Chris Brockett, and Bill Dolan. 2024. Automatic Bug Detection in LLM-Powered Text-Based Games Using LLMs. In Findings of the Association for Computational Linguistics ACL 2024, pages 15353–15368, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Automatic Bug Detection in LLM-Powered Text-Based Games Using LLMs (Jin et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.907.pdf