Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance

Molly Petersen, Lonneke van der Plas


Abstract
While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether analogical reasoning is itself a task that can be learned. In this paper, we test several ways to learn basic analogical reasoning, focusing specifically on analogies that are more typical of those used to evaluate analogical reasoning in humans than of those in commonly used NLP benchmarks. Our experiments show that models are able to learn analogical reasoning, even from a small amount of data. We additionally evaluate our models on a dataset with a human baseline, and find that, after training, the models approach human performance.
Anthology ID:
2023.emnlp-main.1022
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16414–16425
URL:
https://aclanthology.org/2023.emnlp-main.1022
DOI:
10.18653/v1/2023.emnlp-main.1022
Cite (ACL):
Molly Petersen and Lonneke van der Plas. 2023. Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16414–16425, Singapore. Association for Computational Linguistics.
Cite (Informal):
Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance (Petersen & van der Plas, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.1022.pdf
Video:
https://aclanthology.org/2023.emnlp-main.1022.mp4