Yennie Jun
2021
Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset
Hannah Kirk | Yennie Jun | Paulius Rauba | Gal Wachtel | Ruining Li | Xingjian Bai | Noah Broestl | Martin Doff-Sotta | Aleksandar Shtedritski | Yuki M Asano
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
Hateful memes pose a unique challenge for current machine learning systems because their message is derived from both text and visual modalities. To this end, Facebook released the Hateful Memes Challenge, a dataset of memes with pre-extracted text captions, but it is unclear whether these synthetic examples generalize to ‘memes in the wild’. In this paper, we collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance of models pre-trained on the Facebook dataset. We find that ‘memes in the wild’ differ in two key aspects: 1) Captions must be extracted via OCR, injecting noise and diminishing the performance of multimodal models, and 2) Memes are more diverse than ‘traditional memes’, including screenshots of conversations or text on a plain background. This paper thus serves as a reality check for the current benchmark of hateful meme detection and its applicability to detecting real-world hate.
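The abstract notes that, unlike the Facebook dataset's pre-extracted captions, captions for memes collected in the wild must be recovered via OCR, which injects noise. A minimal sketch of such an extraction step, assuming Tesseract via the pytesseract wrapper (an illustrative choice, not necessarily the toolchain used in the paper):

```python
# Minimal OCR caption-extraction sketch (assumed tooling: Tesseract via pytesseract).
# OCR output is typically noisier than the pre-extracted captions shipped with the
# Facebook Hateful Memes dataset, which is one source of the performance gap.
from PIL import Image
import pytesseract


def extract_caption(image_path: str) -> str:
    """Return OCR-extracted caption text for a meme image."""
    image = Image.open(image_path).convert("RGB")
    text = pytesseract.image_to_string(image)
    # Collapse whitespace; raw OCR output often contains stray newlines and artifacts.
    return " ".join(text.split())


if __name__ == "__main__":
    print(extract_caption("meme.jpg"))  # hypothetical example file
```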