Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation

Brooklyn Sheppard, Anna Richter, Allison Cohen, Elizabeth Smith, Tamara Kneese, Carolyne Pelletier, Ioana Baldini, Yue Dong

Abstract
Developed using novel approaches to dataset construction, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature. Built in collaboration with multi-disciplinary experts and the annotators themselves, the dataset contains annotations of movie subtitles, capturing colloquial expressions of misogyny in North American film. The open-source dataset can be used for a range of NLP tasks, including binary and multi-label classification, severity score regression, and text generation for rewrites. In this paper, we discuss the methodology used, analyze the annotations obtained, provide baselines for each task using common NLP algorithms, and furnish error analyses to give insight into the behaviour of models fine-tuned on the Biasly dataset.
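To make the supported tasks concrete, the sketch below shows a minimal baseline for the binary misogyny-detection task. It assumes, hypothetically, that the annotations are exported as a CSV file (biasly.csv) with a "text" column and a binary "misogynistic" label; the actual release format, column names, and the baseline models reported in the paper may differ.

    # Minimal sketch of a binary classification baseline on a Biasly-style CSV.
    # The file name and column names below are assumptions for illustration.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    df = pd.read_csv("biasly.csv")  # hypothetical export of the annotations
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["misogynistic"], test_size=0.2, random_state=42
    )

    # TF-IDF features with logistic regression: a common, lightweight
    # baseline for binary text classification.
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

The same loading-and-splitting pattern extends to the other tasks: swapping the label column and estimator gives a severity-score regression baseline, while the rewrite annotations can feed a sequence-to-sequence model for text generation.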
Anthology ID:
2024.findings-acl.24
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
427–452
URL:
https://aclanthology.org/2024.findings-acl.24
Cite (ACL):
Brooklyn Sheppard, Anna Richter, Allison Cohen, Elizabeth Smith, Tamara Kneese, Carolyne Pelletier, Ioana Baldini, and Yue Dong. 2024. Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation. In Findings of the Association for Computational Linguistics ACL 2024, pages 427–452, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation (Sheppard et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.24.pdf