Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge

Jiahuan Li, Yiqing Cao, Shujian Huang, Jiajun Chen


Abstract
Having been trained on massive pretraining corpora, large language models (LLMs) have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting information, and it is intriguing to understand how LLMs handle such noisy data during training. In this study, we systematically analyze LLMs’ learning preferences for data with conflicting knowledge. We find that pretrained LLMs establish learning preferences similar to those of humans, i.e., preferences for formal texts and texts with fewer spelling errors, so that knowledge in data with such features is learned faster and treated more favorably when conflicts arise. This finding generalizes across models and languages and is more evident in larger models. An in-depth analysis reveals that LLMs tend to trust data whose features signal consistency with the majority of the data, and that it is possible to instill new preferences and erase old ones by manipulating the degree of consistency with the majority data.
Anthology ID:
2024.emnlp-main.304
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5307–5320
URL:
https://aclanthology.org/2024.emnlp-main.304
Cite (ACL):
Jiahuan Li, Yiqing Cao, Shujian Huang, and Jiajun Chen. 2024. Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5307–5320, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge (Li et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.304.pdf