Generic Overgeneralization in Pre-trained Language Models

Sello Ralethe, Jan Buys


Abstract
Generic statements such as “ducks lay eggs” make claims about kinds, e.g., ducks as a category. The generic overgeneralization effect refers to the inclination to accept false universal generalizations such as “all ducks lay eggs” or “all lions have manes” as true. In this paper, we investigate the generic overgeneralization effect in pre-trained language models experimentally. We show that pre-trained language models suffer from overgeneralization and tend to treat quantified generic statements such as “all ducks lay eggs” as if they were true generics. Furthermore, we demonstrate how knowledge embedding methods can lessen this effect by injecting factual knowledge about kinds into pre-trained language models. To this end, we source factual knowledge about two types of generics, minority characteristic generics and majority characteristic generics, and inject this knowledge using a knowledge embedding model. Our results show that knowledge injection reduces, but does not eliminate, generic overgeneralization, and that majority characteristic generics of kinds are more susceptible to overgeneralization bias.
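The masked-LM probing setup described in the abstract can be illustrated with a minimal sketch. This is not the authors' released code: the model choice (bert-base-uncased), the prompts, and the helper mask_probability are illustrative assumptions. The idea is that a model assigning similar [MASK] probabilities to a bare generic and its (false) universally quantified counterpart exhibits the overgeneralization pattern the paper studies.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # illustrative choice of pre-trained LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def mask_probability(prompt: str, target: str) -> float:
    """Probability the model assigns to `target` at the [MASK] position."""
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the [MASK] token in the input sequence.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[0, tokenizer.convert_tokens_to_ids(target)].item()

# Compare the bare generic with its universally quantified form:
# similar probabilities would suggest the model treats "all ducks"
# much like the generic "ducks".
for prompt in ("ducks lay [MASK].", "all ducks lay [MASK]."):
    print(f"{prompt:>25}  P(eggs) = {mask_probability(prompt, 'eggs'):.4f}")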
Anthology ID:
2022.coling-1.282
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3187–3196
URL:
https://aclanthology.org/2022.coling-1.282
Cite (ACL):
Sello Ralethe and Jan Buys. 2022. Generic Overgeneralization in Pre-trained Language Models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3187–3196, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Generic Overgeneralization in Pre-trained Language Models (Ralethe & Buys, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.282.pdf
Code
sello-ralethe/gog-in-plms
Data
Ascent KB, GenericsKB