Richer Countries and Richer Representations

Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky


Abstract
We examine whether some countries are more richly represented in embedding space than others. We find that countries whose names occur with low frequency in training corpora are more likely to be tokenized into subwords, are less semantically distinct in embedding space, and are less likely to be correctly predicted: e.g., Ghana (the correct answer, and an in-vocabulary token) is not predicted for "The country producing the most cocoa is [MASK]." Although these performance discrepancies and representational harms are due to frequency, we find that frequency is highly correlated with a country's GDP, thus perpetuating historical power and wealth inequalities. We analyze the effectiveness of mitigation strategies; recommend that researchers report training word frequencies; and recommend future work for the community to define and design representational guarantees.
Anthology ID:
2022.findings-acl.164
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2074–2085
URL:
https://aclanthology.org/2022.findings-acl.164
DOI:
10.18653/v1/2022.findings-acl.164
Cite (ACL):
Kaitlyn Zhou, Kawin Ethayarajh, and Dan Jurafsky. 2022. Richer Countries and Richer Representations. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2074–2085, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Richer Countries and Richer Representations (Zhou et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.164.pdf
Code:
katezhou/country_distortions