Large Language Models are biased to overestimate profoundness

Eugenio Herrera-Berg, Tomás Browne, Pablo León-Villagrá, Marc-Lluís Vives, Cristian Calderon


Abstract
Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. Yet it remains disputed whether LLMs possess reasoning abilities similar to those of humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between LLM and human ratings, irrespective of the type of statement and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLM ratings closer to human ones. Furthermore, this work provides insights into the potential biases introduced by Reinforcement Learning from Human Feedback (RLHF), which appears to increase the tendency to overestimate the profoundness of statements.
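To illustrate the kind of evaluation the abstract describes, below is a minimal, hypothetical sketch of eliciting a profoundness rating from a model with a few-shot prompt via the OpenAI chat API. The exact prompts, rating scale, and example statements are assumptions for illustration only, not the authors' materials; the pseudo-profound example statement is in the style of the stimuli used in this line of research.

# Hypothetical sketch (not the paper's actual prompts): ask a model to rate
# a statement's profoundness on a 1-5 scale using a few-shot prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT = (
    "Rate how profound each statement is on a scale from 1 (not at all "
    "profound) to 5 (very profound). Answer with a single number.\n\n"
    "Statement: Most people enjoy some sort of music.\nRating: 1\n\n"
    "Statement: A river cuts through rock not because of its power, "
    "but because of its persistence.\nRating: 4\n\n"
)

def rate_profoundness(statement: str, model: str = "gpt-4") -> str:
    """Return the model's raw rating string for one statement."""
    prompt = FEW_SHOT + f"Statement: {statement}\nRating:"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # keep ratings as deterministic as possible
        max_tokens=2,    # only the numeric rating is needed
    )
    return response.choices[0].message.content.strip()

# Example: a pseudo-profound statement in the style of the study's stimuli
print(rate_profoundness("Hidden meaning transforms unparalleled abstract beauty."))

Comparing such per-statement model ratings with human ratings of the same statements is what yields the statement-to-statement correlations reported in the abstract.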
Anthology ID:
2023.emnlp-main.599
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9653–9661
URL:
https://aclanthology.org/2023.emnlp-main.599
DOI:
10.18653/v1/2023.emnlp-main.599
Cite (ACL):
Eugenio Herrera-Berg, Tomás Browne, Pablo León-Villagrá, Marc-Lluís Vives, and Cristian Calderon. 2023. Large Language Models are biased to overestimate profoundness. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9653–9661, Singapore. Association for Computational Linguistics.
Cite (Informal):
Large Language Models are biased to overestimate profoundness (Herrera-Berg et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.599.pdf
Video:
https://aclanthology.org/2023.emnlp-main.599.mp4