Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?

Ning Bian, Xianpei Han, Hongyu Lin, Yaojie Lu, Ben He, Le Sun


Abstract
Building machines with commonsense has been a longstanding challenge in NLP due to the reporting bias of commonsense rules and the exposure bias of rule-based commonsense reasoning. In contrast, humans convey and pass down commonsense implicitly through stories. This paper investigates the inherent commonsense ability of large language models (LLMs) expressed through storytelling. We systematically compare stories and rules as expressions for retrieving and leveraging commonsense in LLMs. Experimental results on 28 commonsense QA datasets show that stories outperform rules as the expression for retrieving commonsense from LLMs, exhibiting higher generation confidence and commonsense accuracy. Moreover, stories are the more effective commonsense expression for answering questions about daily events, while rules are more effective for scientific questions, aligning with the reporting bias of commonsense in text corpora. We further show that the correctness and relevance of commonsense stories can be improved via iterative self-supervised fine-tuning. These findings emphasize the importance of using appropriate language to express, retrieve, and leverage commonsense for LLMs, highlighting a promising direction for better exploiting their commonsense abilities.
Anthology ID:
2024.acl-long.221
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4023–4043
URL:
https://aclanthology.org/2024.acl-long.221
Cite (ACL):
Ning Bian, Xianpei Han, Hongyu Lin, Yaojie Lu, Ben He, and Le Sun. 2024. Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4023–4043, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models? (Bian et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.221.pdf