Does ChatGPT Resemble Humans in Processing Implicatures?

Zhuang Qiu, Xufeng Duan, Zhenguang Cai


Abstract
Recent advances in large language models (LLMs) and LLM-driven chatbots, such as ChatGPT, have sparked interest in the extent to which these artificial systems possess human-like linguistic abilities. In this study, we assessed ChatGPT’s pragmatic capabilities by conducting three preregistered experiments focused on its ability to compute pragmatic implicatures. The first experiment tested whether ChatGPT inhibits the computation of generalized conversational implicatures (GCIs) when explicitly required to process the text’s truth-conditional meaning. The second and third experiments examined whether the communicative context affects ChatGPT’s ability to compute scalar implicatures (SIs). Our results showed that ChatGPT did not demonstrate human-like flexibility in switching between pragmatic and semantic processing. Additionally, ChatGPT’s judgments did not exhibit the well-established effect of communicative context on SI rates.
Anthology ID:
2023.naloma-1.3
Volume:
Proceedings of the 4th Natural Logic Meets Machine Learning Workshop
Month:
June
Year:
2023
Address:
Nancy, France
Editors:
Stergios Chatzikyriakidis, Valeria de Paiva
Venues:
NALOMA | WS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
25–34
URL:
https://aclanthology.org/2023.naloma-1.3
Cite (ACL):
Zhuang Qiu, Xufeng Duan, and Zhenguang Cai. 2023. Does ChatGPT Resemble Humans in Processing Implicatures?. In Proceedings of the 4th Natural Logic Meets Machine Learning Workshop, pages 25–34, Nancy, France. Association for Computational Linguistics.
Cite (Informal):
Does ChatGPT Resemble Humans in Processing Implicatures? (Qiu et al., NALOMA-WS 2023)
PDF:
https://aclanthology.org/2023.naloma-1.3.pdf