On Shortcuts and Biases: How Finetuned Language Models Distinguish Audience-Specific Instructions in Italian and English

Nicola Fanton, Michael Roth


Abstract
Instructional texts for different audience groups can help to address specific needs, but at the same time run the risk of perpetuating biases. In this paper, we extend previous findings on disparate social norms and subtle stereotypes in wikiHow in two directions: we explore the use of fine-tuned language models to determine how audience-specific instructional texts can be distinguished, and we transfer the methodology to another language, Italian, to identify cross-linguistic patterns. We find that language models mostly rely on group terms, gender markings, and attributes reinforcing stereotypes.
Anthology ID: 2024.gebnlp-1.6
Volume: Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Seraphina Goldfarb-Tarrant, Debora Nozza
Venues: GeBNLP | WS
Publisher: Association for Computational Linguistics
Pages: 78–93
URL: https://aclanthology.org/2024.gebnlp-1.6
Cite (ACL): Nicola Fanton and Michael Roth. 2024. On Shortcuts and Biases: How Finetuned Language Models Distinguish Audience-Specific Instructions in Italian and English. In Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 78–93, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): On Shortcuts and Biases: How Finetuned Language Models Distinguish Audience-Specific Instructions in Italian and English (Fanton & Roth, GeBNLP-WS 2024)
PDF: https://aclanthology.org/2024.gebnlp-1.6.pdf