Do Neural Language Models Overcome Reporting Bias?

Vered Shwartz, Yejin Choi


Abstract
Mining commonsense knowledge from corpora suffers from reporting bias: the over-representation of the rare at the expense of the trivial (Gordon and Van Durme, 2013). We study to what extent pre-trained language models overcome this issue. We find that while their generalization capacity allows them to better estimate the plausibility of frequent but unspoken-of actions, outcomes, and properties, they also tend to overestimate that of the very rare, amplifying the bias that already exists in their training corpus.
Anthology ID:
2020.coling-main.605
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
6863–6870
URL:
https://aclanthology.org/2020.coling-main.605
DOI:
10.18653/v1/2020.coling-main.605
Cite (ACL):
Vered Shwartz and Yejin Choi. 2020. Do Neural Language Models Overcome Reporting Bias?. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6863–6870, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Do Neural Language Models Overcome Reporting Bias? (Shwartz & Choi, COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.605.pdf
Code
vered1986/reporting_bias_lms
Data
COPA
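
The released code is in the repository linked above. As a minimal, self-contained sketch of the general idea only, assuming GPT-2 through the HuggingFace Transformers API rather than the paper's actual models, data, or evaluation protocol, one could compare the plausibility a language model assigns to a trivial versus a rare statement via its per-token log-likelihood:

```python
# Illustrative sketch only: scoring statement plausibility with a pre-trained LM.
# The model choice (gpt2) and the example sentences are assumptions for
# demonstration, not the authors' experimental setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(sentence: str) -> float:
    """Average per-token log-likelihood of a sentence under GPT-2."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean cross-entropy over the predicted tokens;
    # its negation is the average per-token log-likelihood.
    return -out.loss.item()

# Hypothetical trivial vs. rare statements: a model unaffected by reporting
# bias should score the trivial statement as more plausible.
for sentence in ["People blink their eyes every day.",
                 "People are struck by lightning every day."]:
    print(f"{avg_log_likelihood(sentence):.3f}  {sentence}")
```

Higher (less negative) scores indicate statements the model finds more plausible; contrasting such pairs is one simple way to probe whether the model's estimates track real-world frequency or the frequency of mentions in its training text.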