Michele Donini
Geographical Erasure in Language Generation
Pola Schwöbel | Jacek Golebiowski | Michele Donini | Cedric Archambeau | Danish Pruthi
Findings of the Association for Computational Linguistics: EMNLP 2023

Large language models (LLMs) encode vast amounts of world knowledge. However, since these models are trained on large swaths of internet data, they are at risk of inordinately capturing information about dominant groups. This imbalance can propagate into generated language. In this work, we study and operationalise a form of geographical erasure wherein language models underpredict certain countries. We demonstrate consistent instances of erasure across a range of LLMs. We find that erasure strongly correlates with low frequencies of country mentions in the training corpus. Lastly, we mitigate erasure by fine-tuning with a custom objective.


On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar | Michele Donini | Dylan Slack | Cedric Archambeau | Sanjiv Das | Krishnaram Kenthapadi
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021