PARADISE: Evaluating Implicit Planning Skills of Language Models with Procedural Warnings and Tips Dataset

Arda Uzunoglu, Gözde Şahin, Abdulfattah Safa


Abstract
Recently, there has been growing interest within the community regarding whether large language models are capable of planning or executing plans. However, most prior studies use LLMs to generate high-level plans for simplified scenarios lacking linguistic complexity and domain diversity, limiting analysis of their planning abilities. These setups constrain evaluation methods (e.g., predefined action spaces), architectural choices (e.g., only generative models), and overlook the linguistic nuances essential for realistic analysis. To tackle this, we present PARADISE, an abductive reasoning task in a Q&A format over practical procedural text sourced from wikiHow. It comprises tip and warning inference tasks directly associated with goals, excluding intermediary steps, and tests a model's ability to infer implicit knowledge of a plan solely from the given goal. Our experiments, using fine-tuned language models and zero-shot prompting, reveal the effectiveness of task-specific small models over large language models in most scenarios. Despite advancements, all models fall short of human performance. Notably, our analysis uncovers intriguing insights, such as variations in model behavior with dropped keywords, the struggles of both the BERT family and GPT-4 with physical and abstract goals, and the proposed tasks offering valuable prior knowledge for other unseen procedural tasks. The PARADISE dataset and associated resources are publicly available for further research at https://anonymous.4open.science/r/paradise-53BD/README.md.
Anthology ID:
2024.findings-acl.599
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10085–10102
URL:
https://aclanthology.org/2024.findings-acl.599
Cite (ACL):
Arda Uzunoglu, Gözde Şahin, and Abdulfattah Safa. 2024. PARADISE: Evaluating Implicit Planning Skills of Language Models with Procedural Warnings and Tips Dataset. In Findings of the Association for Computational Linguistics: ACL 2024, pages 10085–10102, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
PARADISE: Evaluating Implicit Planning Skills of Language Models with Procedural Warnings and Tips Dataset (Uzunoglu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.599.pdf