Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study

Yinghao Li, Haorui Wang, Chao Zhang


Abstract
Large Language Models (LLMs) have shown remarkable proficiency in language understanding and have been successfully applied to a variety of real-world tasks through task-specific fine-tuning or prompt engineering. Despite these advancements, it remains an open question whether LLMs are fundamentally capable of reasoning and planning, or if they primarily rely on recalling and synthesizing information from their training data. In our research, we introduce a novel task—Minesweeper—specifically designed in a format unfamiliar to LLMs and absent from their training datasets. This task challenges LLMs to identify the locations of mines based on numerical clues provided by adjacent opened cells. Successfully completing this task requires an understanding of each cell’s state, discerning spatial relationships between the clues and mines, and strategizing actions based on logical deductions drawn from the arrangement of the cells. Our experiments, including trials with the advanced GPT-4 model, indicate that while LLMs possess the foundational abilities required for this task, they struggle to integrate these into a coherent, multi-step logical reasoning process needed to solve Minesweeper. These findings highlight the need for further research to understand the nature of reasoning capabilities in LLMs under similar circumstances, and to explore pathways towards more sophisticated AI reasoning and planning models.
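The core deduction the abstract describes — inferring mine locations from numerical clues in adjacent opened cells — can be illustrated with a minimal sketch. This is not the paper's code or its prompt format; the board encoding (digit strings for clues, `"?"` for unopened cells) and the single "saturation" rule shown here are illustrative assumptions:

```python
# Illustrative sketch of one basic Minesweeper deduction rule:
# if an opened cell's clue equals the number of its unopened
# neighbors, every one of those neighbors must be a mine.
# Encoding (an assumption, not the paper's): digits = clues, "?" = unopened.
board = [
    ["1", "1"],
    ["?", "1"],
]

def neighbors(r, c, rows, cols):
    """Yield coordinates of the up-to-8 cells adjacent to (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

def certain_mines(board):
    """Return the set of cells that must be mines under the saturation rule."""
    rows, cols = len(board), len(board[0])
    mines = set()
    for r in range(rows):
        for c in range(cols):
            if board[r][c].isdigit():
                unopened = [p for p in neighbors(r, c, rows, cols)
                            if board[p[0]][p[1]] == "?"]
                # Clue fully accounted for by unopened neighbors: all are mines.
                if int(board[r][c]) == len(unopened):
                    mines.update(unopened)
    return mines
```

Here every clue cell has exactly one unopened neighbor, `(1, 0)`, so the rule marks it as a mine. Solving a full board requires chaining many such spatial deductions across cells — the multi-step reasoning the paper finds LLMs struggle to sustain.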
Anthology ID:
2024.naacl-long.4
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
59–81
URL:
https://aclanthology.org/2024.naacl-long.4
Cite (ACL):
Yinghao Li, Haorui Wang, and Chao Zhang. 2024. Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 59–81, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study (Li et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.4.pdf
Copyright:
2024.naacl-long.4.copyright.pdf