Yuki M Asano
2022
Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements
Conrad Borchers | Dalia Gala | Benjamin Gilburt | Eduard Oravkin | Wilfried Bounsi | Yuki M Asano | Hannah Kirk
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
The growing capability and availability of generative language models have enabled a wide range of new downstream tasks. Academic research has identified, quantified, and mitigated biases present in language models, but it is rarely tailored to downstream tasks where the wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated advertisements and compare them to real-world advertisements. We then evaluate prompt-engineering and fine-tuning as debiasing methods. We find that prompt-engineering with diversity-encouraging prompts yields no significant improvement in either bias or realism. Conversely, fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias.
2021
Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset
Hannah Kirk | Yennie Jun | Paulius Rauba | Gal Wachtel | Ruining Li | Xingjian Bai | Noah Broestl | Martin Doff-Sotta | Aleksandar Shtedritski | Yuki M Asano
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
Hateful memes pose a unique challenge for current machine learning systems because their message is derived from both text and visual modalities. To this end, Facebook released the Hateful Memes Challenge, a dataset of memes with pre-extracted text captions, but it is unclear whether these synthetic examples generalize to ‘memes in the wild’. In this paper, we collect hateful and non-hateful memes from Pinterest to evaluate the out-of-sample performance of models pre-trained on the Facebook dataset. We find that ‘memes in the wild’ differ in two key aspects: 1) captions must be extracted via OCR, which injects noise and diminishes the performance of multimodal models, and 2) memes are more diverse than ‘traditional memes’, including screenshots of conversations or text on a plain background. This paper thus serves as a reality check for the current benchmark of hateful meme detection and its applicability to detecting real-world hate.