Consistency Analysis of ChatGPT

Myeongjun Jang, Thomas Lukasiewicz


Abstract
ChatGPT has gained huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, adding extra support to the claim that AI can now assist and even replace humans in industrial fields. Others, however, doubt its reliability and trustworthiness. This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour, focusing specifically on semantic consistency and the properties of negation, symmetric, and transitive consistency. Our findings suggest that, while both models appear to show enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions. We also ascertain via experiments that prompt design, few-shot learning, and employing larger language models (LLMs) are unlikely to be the ultimate solution for resolving the inconsistency issue of LLMs.
Anthology ID:
2023.emnlp-main.991
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15970–15985
URL:
https://aclanthology.org/2023.emnlp-main.991
DOI:
10.18653/v1/2023.emnlp-main.991
Cite (ACL):
Myeongjun Jang and Thomas Lukasiewicz. 2023. Consistency Analysis of ChatGPT. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15970–15985, Singapore. Association for Computational Linguistics.
Cite (Informal):
Consistency Analysis of ChatGPT (Jang & Lukasiewicz, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.991.pdf
Video:
https://aclanthology.org/2023.emnlp-main.991.mp4