Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction

Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang


Abstract
Analogical reasoning plays a vital role in human cognition, allowing us to grasp novel concepts by linking them to familiar ones through shared relational structures. Although previous research has focused on word analogies, this work suggests that Large Language Models (LLMs) often overlook the structures that underpin these analogies, raising questions about whether word analogies adequately measure analogical reasoning akin to human cognition. In response, this paper introduces the task of analogical structure abduction, grounded in cognitive psychology, which requires abducing the structures that form an analogy between two systems. To support this task, we establish SCAR, a benchmark of 400 scientific analogies from 13 distinct fields, tailored for evaluating analogical reasoning with structure abduction. Empirical results show that LLMs, including ChatGPT and GPT-4, continue to struggle with this task, underscoring the need for future work to enhance their abilities.
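To make the structure-abduction task concrete, the sketch below shows one way an analogy between two systems could be represented and a concept mapping scored. The systems, relations, and the `mapping_accuracy` metric are illustrative assumptions built around a classic scientific analogy (solar system vs. atom); they are not the actual SCAR schema or the paper's evaluation protocol.

```python
# Illustrative sketch only: the systems, relations, and metric below are
# assumptions for exposition, not the SCAR benchmark's actual schema.
from dataclasses import dataclass, field


@dataclass
class System:
    """A system: a set of concepts plus the relations holding between them."""
    name: str
    concepts: list[str]
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # (head, relation, tail)


# Base and target systems for the classic Rutherford analogy.
solar_system = System(
    name="solar system",
    concepts=["sun", "planet", "gravity"],
    relations=[("planet", "revolves around", "sun"),
               ("gravity", "attracts", "planet")],
)
atom = System(
    name="atom",
    concepts=["nucleus", "electron", "electromagnetic force"],
    relations=[("electron", "revolves around", "nucleus"),
               ("electromagnetic force", "attracts", "electron")],
)

# Structure abduction asks for a concept-to-concept mapping justified by the
# shared relational structure, not by surface similarity of the words.
gold_mapping = {"sun": "nucleus", "planet": "electron", "gravity": "electromagnetic force"}


def mapping_accuracy(predicted: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of gold concept pairs that a predicted mapping recovers."""
    correct = sum(1 for base, target in gold.items() if predicted.get(base) == target)
    return correct / len(gold)


if __name__ == "__main__":
    print(mapping_accuracy({"sun": "nucleus", "planet": "electron",
                            "gravity": "electromagnetic force"}, gold_mapping))  # 1.0
```

A mapper that attends to the shared "revolves around" and "attracts" relations recovers these correspondences, which is exactly what surface word similarity alone does not provide.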
Anthology ID: 2023.findings-emnlp.160
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2446–2460
URL: https://aclanthology.org/2023.findings-emnlp.160
DOI: 10.18653/v1/2023.findings-emnlp.160
Cite (ACL): Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, and Deqing Yang. 2023. Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2446–2460, Singapore. Association for Computational Linguistics.
Cite (Informal): Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction (Yuan et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.160.pdf